Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2021/03/03 10:32:35 UTC

[GitHub] [tvm] jtuyls opened a new pull request #7577: [TVMC][VitisAI] Enable Vitis AI target through TVMC

jtuyls opened a new pull request #7577:
URL: https://github.com/apache/tvm/pull/7577


   This adds Vitis AI as a composite target to TVMC. Because the partitioning function for the Vitis AI target needs access to the CLI options, this PR updates the TVMC compiler and tuner interfaces to pass those options through to the partitioning functions. The Arm Compute Library and Ethos-N partitioning functions are adjusted accordingly.
   
   **Documentation**
   I think we should document the use of these targets, but I am not sure what the best place is. For example, we could add documentation to https://tvm.apache.org/docs/tutorials/get_started/tvmc_command_line_driver.html or to the target deploy and integration documentation (for example here: https://tvm.apache.org/docs/deploy/vitis_ai.html). Or should we do both?
   
   **CI**
   The following PR should be merged first for tests to pass: https://github.com/apache/tvm/pull/7575
   
   @leandron @comaniac @mbaret 


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [tvm] leandron commented on a change in pull request #7577: [TVMC][VitisAI] Enable Vitis AI target through TVMC

Posted by GitBox <gi...@apache.org>.
leandron commented on a change in pull request #7577:
URL: https://github.com/apache/tvm/pull/7577#discussion_r589348925



##########
File path: python/tvm/driver/tvmc/composite_target.py
##########
@@ -22,7 +22,7 @@
 from tvm.relay.op.contrib.arm_compute_lib import partition_for_arm_compute_lib
 from tvm.relay.op.contrib.ethosn import partition_for_ethosn
 from tvm.relay.op.contrib.vitis_ai import partition_for_vitis_ai
-from tvm.contrib.target import vitis_ai
+from tvm.contrib.target import vitis_ai  # pylint: disable=unused-import

Review comment:
       Understood. So that makes `pyxir` a mandatory dependency for TVMC, which is not a problem. You just need to declare it here:
   https://github.com/apache/tvm/blob/760e9b2b0423f5b66f206d9a9195a1b2b5a007a3/python/gen_requirements.py#L128-L141
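For readers following along, `gen_requirements.py` groups requirements into `(piece, (description, [package, ...]))` tuples; a hedged sketch of what declaring `pyxir` under the `tvmc` piece could look like is below (the neighbouring package names and the description string are assumptions, not copied from the file):

```python
# Hedged sketch: only the shape of the entry matters; surrounding packages are placeholders.
REQUIREMENTS_BY_PIECE = [
    (
        "tvmc",
        (
            "Requirements for the tvmc command-line driver",
            [
                "onnx",
                "pyxir",  # newly declared dependency for the Vitis AI composite target
                "tflite",
            ],
        ),
    ),
]
```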







[GitHub] [tvm] comaniac merged pull request #7577: [TVMC][VitisAI] Enable Vitis AI target through TVMC

Posted by GitBox <gi...@apache.org>.
comaniac merged pull request #7577:
URL: https://github.com/apache/tvm/pull/7577


   





[GitHub] [tvm] jtuyls commented on pull request #7577: [TVMC][VitisAI] Enable Vitis AI target through TVMC

Posted by GitBox <gi...@apache.org>.
jtuyls commented on pull request #7577:
URL: https://github.com/apache/tvm/pull/7577#issuecomment-801240146


   @comaniac The CI Docker image will have to be updated first for the changes in PR https://github.com/apache/tvm/pull/7575 to take effect.





[GitHub] [tvm] comaniac commented on a change in pull request #7577: [TVMC][VitisAI] Enable Vitis AI target through TVMC

Posted by GitBox <gi...@apache.org>.
comaniac commented on a change in pull request #7577:
URL: https://github.com/apache/tvm/pull/7577#discussion_r586649354



##########
File path: python/tvm/driver/tvmc/composite_target.py
##########
@@ -21,6 +21,8 @@
 
 from tvm.relay.op.contrib.arm_compute_lib import partition_for_arm_compute_lib
 from tvm.relay.op.contrib.ethosn import partition_for_ethosn
+from tvm.relay.op.contrib.vitis_ai import partition_for_vitis_ai
+from tvm.contrib.target import vitis_ai

Review comment:
       We don't need this line?

##########
File path: python/tvm/contrib/target/vitis_ai.py
##########
@@ -71,37 +71,66 @@ def vitis_ai_compiler(ref):
 
     pass_context = tvm.get_global_func("transform.GetCurrentPassContext")()
 
-    # The target Vitis-AI accelerator device
-    target = (
-        str(pass_context.config["relay.ext.vitis_ai.options.target"])
-        if "relay.ext.vitis_ai.options.target" in pass_context.config
+    cfg = (
+        pass_context.config["relay.ext.vitis_ai.options"]
+        if "relay.ext.vitis_ai.options" in pass_context.config
         else None
     )
 
-    # (Optional configs) The build and work directories to be used by Vitis-AI
-    vai_build_dir = (
-        str(pass_context.config["relay.ext.vitis_ai.options.build_dir"])
-        if "relay.ext.vitis_ai.options.build_dir" in pass_context.config
-        else tvm.contrib.utils.tempdir().relpath("")
-    )
-    vai_work_dir = (
-        str(pass_context.config["relay.ext.vitis_ai.options.work_dir"])
-        if "relay.ext.vitis_ai.options.work_dir" in pass_context.config
-        else tvm.contrib.utils.tempdir().relpath("")
-    )
+    # Backward compatibility with old pass context configs
+    if cfg is None:
+        warnings.warn(
+            "You are using a deprecated way of passing build configs (e.g."
+            " `relay.ext.vitis_ai.options.target`). Check out the Vitis AI "
+            " documentation here: https://tvm.apache.org/docs/deploy/vitis_ai.html"
+            " to switch to recommended way for passing build configs."
+        )
 
-    # (Optional configs) Export and load PyXIR runtime module to file if provided. This is used to
-    #   compile and quantize a model on the host and deploy it at the edge
-    export_runtime_module = (
-        str(pass_context.config["relay.ext.vitis_ai.options.export_runtime_module"])
-        if "relay.ext.vitis_ai.options.export_runtime_module" in pass_context.config
-        else ""
-    )
-    load_runtime_module = (
-        str(pass_context.config["relay.ext.vitis_ai.options.load_runtime_module"])
-        if "relay.ext.vitis_ai.options.load_runtime_module" in pass_context.config
-        else ""
-    )
+        # The target Vitis-AI accelerator device
+        target = (
+            str(pass_context.config["relay.ext.vitis_ai.options.target"])
+            if "relay.ext.vitis_ai.options.target" in pass_context.config
+            else None
+        )
+
+        # (Optional configs) The build and work directories to be used by Vitis-AI
+        vai_build_dir = (
+            str(pass_context.config["relay.ext.vitis_ai.options.build_dir"])
+            if "relay.ext.vitis_ai.options.build_dir" in pass_context.config
+            else tvm.contrib.utils.tempdir().relpath("")
+        )
+        vai_work_dir = (
+            str(pass_context.config["relay.ext.vitis_ai.options.work_dir"])
+            if "relay.ext.vitis_ai.options.work_dir" in pass_context.config
+            else tvm.contrib.utils.tempdir().relpath("")
+        )
+
+        # (Optional configs) Export and load PyXIR runtime module to file if provided. This is used to
+        #   compile and quantize a model on the host and deploy it at the edge
+        export_runtime_module = (
+            str(pass_context.config["relay.ext.vitis_ai.options.export_runtime_module"])
+            if "relay.ext.vitis_ai.options.export_runtime_module" in pass_context.config
+            else ""
+        )
+        load_runtime_module = (
+            str(pass_context.config["relay.ext.vitis_ai.options.load_runtime_module"])
+            if "relay.ext.vitis_ai.options.load_runtime_module" in pass_context.config
+            else ""
+        )
+    else:
+        target = cfg.target if cfg.target else None
+        # (Optional configs) The build and work directories to be used by Vitis AI
+        vai_build_dir = (
+            cfg.build_dir if cfg.build_dir != "" else tvm.contrib.utils.tempdir().relpath("")

Review comment:
       ```suggestion
               cfg.build_dir if cfg.build_dir else tvm.contrib.utils.tempdir().relpath("")
   ```

##########
File path: python/tvm/relay/op/contrib/vitis_ai.py
##########
@@ -80,25 +82,61 @@ def visit_call(self, call):
                 else:
                     return super().visit_call(call)
 
+        xgraph = pyxir.frontend.tvm.from_relay(mod, self.params, postprocessing=None)
+        xgraph = pyxir.partition(xgraph, targets=[self.target])
+
+        layers = xgraph.get_layers()
+        relay_ids = [
+            list(np.array(layer.attrs["relay_id"]).flatten())
+            for layer in layers
+            if layer.target == self.target
+        ]
+        self.relay_ids = [item for sublist in relay_ids for item in sublist]
+
         return Annotator().visit(func)
 
 
 def annotation(mod, params, target):
-    """Annotate Relay expression for Vitis-AI DPU accelerators"""
+    """Annotate Relay expression for offloading operators to Vitis AI DPU accelerators
+    NOTE: This function does the same as the next one (`partition_for_vitis_ai`) but is
+    still here for backward compatibility"""

Review comment:
    For this case, we usually log a deprecation warning and remove the implementation in this or the next release cycle (a minimal sketch follows the links below).
    You could refer to the following cases:
   - https://github.com/apache/tvm/blob/main/python/tvm/auto_scheduler/search_task.py#L403
   - https://github.com/apache/tvm/blob/main/python/tvm/target/target.py#L481
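A minimal sketch of what logging such a deprecation warning in `annotation` could look like, using only the standard `warnings` module (the message wording is an assumption):

```python
import warnings


def annotation(mod, params, target):
    """Annotate Relay expression for offloading operators to Vitis AI DPU accelerators.

    DEPRECATED: use `partition_for_vitis_ai` instead.
    """
    warnings.warn(
        "tvm.relay.op.contrib.vitis_ai.annotation() is deprecated and will be removed "
        "in a future release; please use partition_for_vitis_ai() instead.",
        DeprecationWarning,
    )
    # ... existing annotation body stays unchanged until the function is removed ...
```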

##########
File path: python/tvm/relay/op/contrib/vitis_ai.py
##########
@@ -80,25 +82,61 @@ def visit_call(self, call):
                 else:
                     return super().visit_call(call)
 
+        xgraph = pyxir.frontend.tvm.from_relay(mod, self.params, postprocessing=None)
+        xgraph = pyxir.partition(xgraph, targets=[self.target])
+
+        layers = xgraph.get_layers()
+        relay_ids = [
+            list(np.array(layer.attrs["relay_id"]).flatten())
+            for layer in layers
+            if layer.target == self.target
+        ]
+        self.relay_ids = [item for sublist in relay_ids for item in sublist]
+
         return Annotator().visit(func)
 
 
 def annotation(mod, params, target):
-    """Annotate Relay expression for Vitis-AI DPU accelerators"""
+    """Annotate Relay expression for offloading operators to Vitis AI DPU accelerators
+    NOTE: This function does the same as the next one (`partition_for_vitis_ai`) but is
+    still here for backward compatibility"""
     # We need type information for supporting models that contain operations that don't
     #   have a Relay to XLayer translation
     mod = relay.transform.InferType()(mod)
+    mod = VitisAIAnnotationPass("vitis_ai", target, params)(mod)
+    return mod
 
-    xgraph = pyxir.frontend.tvm.from_relay(mod, params, postprocessing=None)
-    xgraph = pyxir.partition(xgraph, targets=[target])
 
-    layers = xgraph.get_layers()
-    relay_ids = [
-        list(np.array(layer.attrs["relay_id"]).flatten())
-        for layer in layers
-        if layer.target == target
-    ]
-    relay_ids_flatten = [item for sublist in relay_ids for item in sublist]
-    mod = VitisAIAnnotationPass("vitis_ai", relay_ids_flatten)(mod)
+def partition_for_vitis_ai(mod, params=None, target=None, **opts):
+    """Partition the Relay expression for offloading operators to Vitis AI DPU
 
-    return mod
+    Parameters
+    ----------
+    mod : Module
+        The module to run passes on.
+    params : Optional[Dict[str, NDArray]]
+        Constant input parameters.
+    target : str
+        The DPU identifier (e.g. DPUCZDX8G-zcu104, DPUCADX8G)
+
+    Returns
+    -------
+    ret : annotated and partitioned module.

Review comment:
       ```suggestion
       ret : Module
           annotated and partitioned module.
   ```

##########
File path: python/tvm/contrib/target/vitis_ai.py
##########
@@ -71,37 +71,66 @@ def vitis_ai_compiler(ref):
 
     pass_context = tvm.get_global_func("transform.GetCurrentPassContext")()
 
-    # The target Vitis-AI accelerator device
-    target = (
-        str(pass_context.config["relay.ext.vitis_ai.options.target"])
-        if "relay.ext.vitis_ai.options.target" in pass_context.config
+    cfg = (
+        pass_context.config["relay.ext.vitis_ai.options"]
+        if "relay.ext.vitis_ai.options" in pass_context.config
         else None
     )
 
-    # (Optional configs) The build and work directories to be used by Vitis-AI
-    vai_build_dir = (
-        str(pass_context.config["relay.ext.vitis_ai.options.build_dir"])
-        if "relay.ext.vitis_ai.options.build_dir" in pass_context.config
-        else tvm.contrib.utils.tempdir().relpath("")
-    )
-    vai_work_dir = (
-        str(pass_context.config["relay.ext.vitis_ai.options.work_dir"])
-        if "relay.ext.vitis_ai.options.work_dir" in pass_context.config
-        else tvm.contrib.utils.tempdir().relpath("")
-    )
+    # Backward compatibility with old pass context configs
+    if cfg is None:
+        warnings.warn(
+            "You are using a deprecated way of passing build configs (e.g."
+            " `relay.ext.vitis_ai.options.target`). Check out the Vitis AI "
+            " documentation here: https://tvm.apache.org/docs/deploy/vitis_ai.html"
+            " to switch to recommended way for passing build configs."
+        )
 
-    # (Optional configs) Export and load PyXIR runtime module to file if provided. This is used to
-    #   compile and quantize a model on the host and deploy it at the edge
-    export_runtime_module = (
-        str(pass_context.config["relay.ext.vitis_ai.options.export_runtime_module"])
-        if "relay.ext.vitis_ai.options.export_runtime_module" in pass_context.config
-        else ""
-    )
-    load_runtime_module = (
-        str(pass_context.config["relay.ext.vitis_ai.options.load_runtime_module"])
-        if "relay.ext.vitis_ai.options.load_runtime_module" in pass_context.config
-        else ""
-    )
+        # The target Vitis-AI accelerator device
+        target = (
+            str(pass_context.config["relay.ext.vitis_ai.options.target"])
+            if "relay.ext.vitis_ai.options.target" in pass_context.config
+            else None
+        )
+
+        # (Optional configs) The build and work directories to be used by Vitis-AI
+        vai_build_dir = (
+            str(pass_context.config["relay.ext.vitis_ai.options.build_dir"])
+            if "relay.ext.vitis_ai.options.build_dir" in pass_context.config
+            else tvm.contrib.utils.tempdir().relpath("")
+        )
+        vai_work_dir = (
+            str(pass_context.config["relay.ext.vitis_ai.options.work_dir"])
+            if "relay.ext.vitis_ai.options.work_dir" in pass_context.config
+            else tvm.contrib.utils.tempdir().relpath("")
+        )
+
+        # (Optional configs) Export and load PyXIR runtime module to file if provided. This is used to
+        #   compile and quantize a model on the host and deploy it at the edge
+        export_runtime_module = (
+            str(pass_context.config["relay.ext.vitis_ai.options.export_runtime_module"])
+            if "relay.ext.vitis_ai.options.export_runtime_module" in pass_context.config
+            else ""
+        )
+        load_runtime_module = (
+            str(pass_context.config["relay.ext.vitis_ai.options.load_runtime_module"])
+            if "relay.ext.vitis_ai.options.load_runtime_module" in pass_context.config
+            else ""
+        )
+    else:
+        target = cfg.target if cfg.target else None
+        # (Optional configs) The build and work directories to be used by Vitis AI
+        vai_build_dir = (
+            cfg.build_dir if cfg.build_dir != "" else tvm.contrib.utils.tempdir().relpath("")
+        )
+
+        # (Optional configs) Export and load PyXIR runtime module to file if provided. This is used to
+        #   compile and quantize a model on the host and deploy it at the edge
+        vai_work_dir = (
+            cfg.work_dir if cfg.work_dir != "" else tvm.contrib.utils.tempdir().relpath("")

Review comment:
       ```suggestion
               cfg.work_dir if cfg.work_dir else tvm.contrib.utils.tempdir().relpath("")
   ```







[GitHub] [tvm] comaniac commented on a change in pull request #7577: [TVMC][VitisAI] Enable Vitis AI target through TVMC

Posted by GitBox <gi...@apache.org>.
comaniac commented on a change in pull request #7577:
URL: https://github.com/apache/tvm/pull/7577#discussion_r589619283



##########
File path: docs/deploy/vitis_ai.rst
##########
@@ -196,7 +196,7 @@ Hardware setup and docker build
       pip3 install -e . --user
 
 Edge (DPUCZDX8G)
-^^^^^^^^^^^^^^^^
+~~~~~~~~~~~~~~~~~

Review comment:
       ```suggestion
   ~~~~~~~~~~~~~~~~
   ```

##########
File path: python/tvm/driver/tvmc/composite_target.py
##########
@@ -22,7 +22,7 @@
 from tvm.relay.op.contrib.arm_compute_lib import partition_for_arm_compute_lib
 from tvm.relay.op.contrib.ethosn import partition_for_ethosn
 from tvm.relay.op.contrib.vitis_ai import partition_for_vitis_ai
-from tvm.contrib.target import vitis_ai
+from tvm.contrib.target import vitis_ai  # pylint: disable=unused-import

Review comment:
    Actually, I was thinking about how we could avoid making frontend/codegen-specific packages mandatory TVMC dependencies. For example, it's odd if a user doesn't enable the Xilinx flow in `config.cmake` but is still asked to install `pyxir` to run TVMC.
    
    Maybe we should declare the partition functions in one place and import their dependencies lazily.
   







[GitHub] [tvm] jtuyls commented on pull request #7577: [TVMC][VitisAI] Enable Vitis AI target through TVMC

Posted by GitBox <gi...@apache.org>.
jtuyls commented on pull request #7577:
URL: https://github.com/apache/tvm/pull/7577#issuecomment-792710701


   > For the inner options, such as `-mcpu` for `llvm`, and `-dpu` for `vitis-ai`, I think it would require a bit of refactoring, to make that an exposed API in TVM level, just consuming that on TVMC.
   > 
   > Concretely, I don't think this should block your PR here. But certainly is a very important topic to start a broader discussion.
   
   I think it would be very useful to solve this target documentation issue in general, and proceeding in that manner would be better than relying on hard-to-maintain documentation. So I definitely agree this shouldn't block this PR, but I think it would be good to start (or continue) looking at this so we can make these options transparent to the user.
   
   





[GitHub] [tvm] comaniac commented on a change in pull request #7577: [TVMC][VitisAI] Enable Vitis AI target through TVMC

Posted by GitBox <gi...@apache.org>.
comaniac commented on a change in pull request #7577:
URL: https://github.com/apache/tvm/pull/7577#discussion_r594659277



##########
File path: python/tvm/contrib/target/vitis_ai_utils.py
##########
@@ -0,0 +1,36 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Vitis AI target utilities"""

Review comment:
       Yes







[GitHub] [tvm] comaniac commented on a change in pull request #7577: [TVMC][VitisAI] Enable Vitis AI target through TVMC

Posted by GitBox <gi...@apache.org>.
comaniac commented on a change in pull request #7577:
URL: https://github.com/apache/tvm/pull/7577#discussion_r594499747



##########
File path: python/tvm/contrib/target/vitis_ai_utils.py
##########
@@ -0,0 +1,36 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Vitis AI target utilities"""

Review comment:
    I would suggest we lazily import `pyxir` in target/vitis_ai.py so that it isn't needed just because vitis_ai.py is imported. As a result, we don't need this file. In fact, we should do this for all partition functions that need third-party packages, so that users don't have to set up every package when they only need one of them.
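A minimal sketch of the lazy-import pattern being suggested for `python/tvm/contrib/target/vitis_ai.py`, so that importing the module no longer requires `pyxir` (the helper name and error message are assumptions):

```python
import importlib


def _load_pyxir():
    """Import pyxir only when the Vitis AI codegen is actually invoked."""
    try:
        return importlib.import_module("pyxir")
    except ImportError as err:
        raise ImportError(
            "The pyxir package is required for the Vitis AI codegen. "
            "Please follow the Vitis AI setup documentation to install it."
        ) from err


def vitis_ai_compiler(ref):
    # Deferred import: merely importing this module no longer pulls in pyxir.
    pyxir = _load_pyxir()
    # ... the rest of the codegen uses `pyxir` exactly as before ...
```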







[GitHub] [tvm] jtuyls commented on a change in pull request #7577: [TVMC][VitisAI] Enable Vitis AI target through TVMC

Posted by GitBox <gi...@apache.org>.
jtuyls commented on a change in pull request #7577:
URL: https://github.com/apache/tvm/pull/7577#discussion_r589327224



##########
File path: python/tvm/driver/tvmc/composite_target.py
##########
@@ -22,7 +22,7 @@
 from tvm.relay.op.contrib.arm_compute_lib import partition_for_arm_compute_lib
 from tvm.relay.op.contrib.ethosn import partition_for_ethosn
 from tvm.relay.op.contrib.vitis_ai import partition_for_vitis_ai
-from tvm.contrib.target import vitis_ai
+from tvm.contrib.target import vitis_ai  # pylint: disable=unused-import

Review comment:
       The `relay.ext.vitis_ai` codegen is registered there and won't be found if we don't import the module: https://github.com/apache/tvm/blob/5d5bbfb1ed2281c3e9b2719be6643281d60022fb/python/tvm/contrib/target/vitis_ai.py#L64
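For context, the registration is an import-time side effect of a decorator in `contrib/target/vitis_ai.py`, roughly along these lines (sketched from memory, not copied from the linked file):

```python
import tvm


# Decorating the codegen registers it in TVM's global function registry under the
# name "relay.ext.vitis_ai" the moment this module is imported.
@tvm.register_func("relay.ext.vitis_ai")
def vitis_ai_compiler(ref):
    ...


# Without importing the module first, relay.build() cannot look the codegen up:
assert tvm.get_global_func("relay.ext.vitis_ai", allow_missing=True) is not None
```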







[GitHub] [tvm] leandron commented on a change in pull request #7577: [TVMC][VitisAI] Enable Vitis AI target through TVMC

Posted by GitBox <gi...@apache.org>.
leandron commented on a change in pull request #7577:
URL: https://github.com/apache/tvm/pull/7577#discussion_r589318762



##########
File path: docs/deploy/vitis_ai.rst
##########
@@ -459,21 +459,21 @@ target and partition the graph.
 
 .. code:: python
    
-   target='DPUCADX8G'
-   mod = partition_for_vitis_ai(mod, params, target)
+   dpu = 'DPUCADX8G'
+   mod = partition_for_vitis_ai(mod, params, dpu)
 
 Now, we can build the TVM runtime library for executing the model. The
 TVM target is 'llvm' as the operations that can't be handled by the DPU
-are executed on the CPU. The Vitis-AI target is DPUCADX8G as we are
-targeting the cloud DPU and this target is passed as a config to the TVM
+are executed on the CPU. The Vitis-AI DPU is DPUCADX8G as we are
+targeting the cloud DPU and this DPU indetifier is passed as a config to the TVM

Review comment:
       ```suggestion
   targeting the cloud DPU and this DPU identifier is passed as a config to the TVM
   ```

##########
File path: python/tvm/driver/tvmc/composite_target.py
##########
@@ -22,7 +22,7 @@
 from tvm.relay.op.contrib.arm_compute_lib import partition_for_arm_compute_lib
 from tvm.relay.op.contrib.ethosn import partition_for_ethosn
 from tvm.relay.op.contrib.vitis_ai import partition_for_vitis_ai
-from tvm.contrib.target import vitis_ai
+from tvm.contrib.target import vitis_ai  # pylint: disable=unused-import

Review comment:
    Out of curiosity: why do we need this import here? Is it for some module initialisation purpose?







[GitHub] [tvm] leandron commented on a change in pull request #7577: [TVMC][VitisAI] Enable Vitis AI target through TVMC

Posted by GitBox <gi...@apache.org>.
leandron commented on a change in pull request #7577:
URL: https://github.com/apache/tvm/pull/7577#discussion_r587386268



##########
File path: python/tvm/relay/op/contrib/vitis_ai.py
##########
@@ -80,25 +82,61 @@ def visit_call(self, call):
                 else:
                     return super().visit_call(call)
 
+        xgraph = pyxir.frontend.tvm.from_relay(mod, self.params, postprocessing=None)
+        xgraph = pyxir.partition(xgraph, targets=[self.target])
+
+        layers = xgraph.get_layers()
+        relay_ids = [
+            list(np.array(layer.attrs["relay_id"]).flatten())
+            for layer in layers
+            if layer.target == self.target
+        ]
+        self.relay_ids = [item for sublist in relay_ids for item in sublist]
+
         return Annotator().visit(func)
 
 
 def annotation(mod, params, target):
-    """Annotate Relay expression for Vitis-AI DPU accelerators"""
+    """Annotate Relay expression for offloading operators to Vitis AI DPU accelerators
+    NOTE: This function does the same as the next one (`partition_for_vitis_ai`) but is
+    still here for backward compatibility"""
     # We need type information for supporting models that contain operations that don't
     #   have a Relay to XLayer translation
     mod = relay.transform.InferType()(mod)
+    mod = VitisAIAnnotationPass("vitis_ai", target, params)(mod)
+    return mod
 
-    xgraph = pyxir.frontend.tvm.from_relay(mod, params, postprocessing=None)
-    xgraph = pyxir.partition(xgraph, targets=[target])
 
-    layers = xgraph.get_layers()
-    relay_ids = [
-        list(np.array(layer.attrs["relay_id"]).flatten())
-        for layer in layers
-        if layer.target == target
-    ]
-    relay_ids_flatten = [item for sublist in relay_ids for item in sublist]
-    mod = VitisAIAnnotationPass("vitis_ai", relay_ids_flatten)(mod)
+def partition_for_vitis_ai(mod, params=None, target=None, **opts):

Review comment:
    I think this `target` here needs to come from `**opts`, right?
    
    `target` already has a specific meaning in TVM, so I suggest renaming this parameter to something else, though I don't have a strong suggestion: `board` or `dpu`, maybe?
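A rough sketch of the rename being discussed, with `dpu` as the explicit keyword that TVMC can also forward through `**opts` (the exact signature and error handling here are assumptions, not the final API):

```python
def partition_for_vitis_ai(mod, params=None, dpu=None, **opts):
    """Partition a Relay module for offloading supported operators to a Vitis AI DPU.

    `dpu` is the DPU identifier (e.g. "DPUCZDX8G-zcu104" or "DPUCADX8G");
    any extra TVMC options arrive through **opts.
    """
    if dpu is None:
        raise ValueError("Please pass a Vitis AI DPU identifier via the `dpu` argument.")
    # ... annotate and partition `mod` for the given `dpu` ...
    return mod
```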

##########
File path: tests/python/driver/tvmc/test_compiler.py
##########
@@ -29,6 +30,15 @@
 from tvm.driver import tvmc
 
 
+def vitis_ai_available():

Review comment:
    Minor comment: this function could be useful in your main module, near `partition_for_vitis_ai`.

##########
File path: python/tvm/contrib/target/vitis_ai.py
##########
@@ -71,37 +71,66 @@ def vitis_ai_compiler(ref):
 
     pass_context = tvm.get_global_func("transform.GetCurrentPassContext")()
 
-    # The target Vitis-AI accelerator device
-    target = (
-        str(pass_context.config["relay.ext.vitis_ai.options.target"])
-        if "relay.ext.vitis_ai.options.target" in pass_context.config
+    cfg = (
+        pass_context.config["relay.ext.vitis_ai.options"]
+        if "relay.ext.vitis_ai.options" in pass_context.config
         else None
     )
 
-    # (Optional configs) The build and work directories to be used by Vitis-AI
-    vai_build_dir = (
-        str(pass_context.config["relay.ext.vitis_ai.options.build_dir"])
-        if "relay.ext.vitis_ai.options.build_dir" in pass_context.config
-        else tvm.contrib.utils.tempdir().relpath("")
-    )
-    vai_work_dir = (
-        str(pass_context.config["relay.ext.vitis_ai.options.work_dir"])
-        if "relay.ext.vitis_ai.options.work_dir" in pass_context.config
-        else tvm.contrib.utils.tempdir().relpath("")
-    )
+    # Backward compatibility with old pass context configs
+    if cfg is None:
+        warnings.warn(
+            "You are using a deprecated way of passing build configs (e.g."
+            " `relay.ext.vitis_ai.options.target`). Check out the Vitis AI "
+            " documentation here: https://tvm.apache.org/docs/deploy/vitis_ai.html"
+            " to switch to recommended way for passing build configs."
+        )
 
-    # (Optional configs) Export and load PyXIR runtime module to file if provided. This is used to
-    #   compile and quantize a model on the host and deploy it at the edge
-    export_runtime_module = (
-        str(pass_context.config["relay.ext.vitis_ai.options.export_runtime_module"])
-        if "relay.ext.vitis_ai.options.export_runtime_module" in pass_context.config
-        else ""
-    )
-    load_runtime_module = (
-        str(pass_context.config["relay.ext.vitis_ai.options.load_runtime_module"])
-        if "relay.ext.vitis_ai.options.load_runtime_module" in pass_context.config
-        else ""
-    )
+        # The target Vitis-AI accelerator device
+        target = (
+            str(pass_context.config["relay.ext.vitis_ai.options.target"])
+            if "relay.ext.vitis_ai.options.target" in pass_context.config
+            else None
+        )

Review comment:
       See my comment about `target`.

##########
File path: tests/python/driver/tvmc/test_compiler.py
##########
@@ -29,6 +30,15 @@
 from tvm.driver import tvmc
 
 
+def vitis_ai_available():
+    """Return whether Vitis AI tools are available"""
+    pyxir_spec = importlib.util.find_spec("pyxir")
+    if not tvm.get_global_func("tvm.vitis_ai_runtime.from_xgraph", True) or pyxir_spec is None:
+        print("Skip because Vitis AI tools are not available")

Review comment:
    This `print` is probably not needed, as the function is used within a `@pytest.mark.skipif`.
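For illustration, the helper is typically consumed like this, where the skip reason already explains why the test did not run (the test name and reason string are hypothetical):

```python
import pytest


@pytest.mark.skipif(
    not vitis_ai_available(),  # helper defined above in test_compiler.py
    reason="Vitis AI codegen and runtime (pyxir) are not available",
)
def test_compile_external_codegen_vitis_ai():
    ...
```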










[GitHub] [tvm] comaniac commented on pull request #7577: [TVMC][VitisAI] Enable Vitis AI target through TVMC

Posted by GitBox <gi...@apache.org>.
comaniac commented on pull request #7577:
URL: https://github.com/apache/tvm/pull/7577#issuecomment-815160336


   Thanks @jtuyls @leandron 





[GitHub] [tvm] leandron commented on pull request #7577: [TVMC][VitisAI] Enable Vitis AI target through TVMC

Posted by GitBox <gi...@apache.org>.
leandron commented on pull request #7577:
URL: https://github.com/apache/tvm/pull/7577#issuecomment-815569725


   Thanks @jtuyls 





[GitHub] [tvm] jtuyls commented on a change in pull request #7577: [TVMC][VitisAI] Enable Vitis AI target through TVMC

Posted by GitBox <gi...@apache.org>.
jtuyls commented on a change in pull request #7577:
URL: https://github.com/apache/tvm/pull/7577#discussion_r589637205



##########
File path: python/tvm/driver/tvmc/composite_target.py
##########
@@ -22,7 +22,7 @@
 from tvm.relay.op.contrib.arm_compute_lib import partition_for_arm_compute_lib
 from tvm.relay.op.contrib.ethosn import partition_for_ethosn
 from tvm.relay.op.contrib.vitis_ai import partition_for_vitis_ai
-from tvm.contrib.target import vitis_ai
+from tvm.contrib.target import vitis_ai  # pylint: disable=unused-import

Review comment:
    Yes, indeed, I agree that we should try to avoid unnecessary dependencies. I was thinking about allowing setup functions, in which we can lazily load dependencies (or do other setup), to be attached to the registered codegens: https://github.com/apache/tvm/blob/5cf67bc00c31069d998ec85d4786a8de830ea597/python/tvm/driver/tvmc/composite_target.py#L36







[GitHub] [tvm] jtuyls commented on a change in pull request #7577: [TVMC][VitisAI] Enable Vitis AI target through TVMC

Posted by GitBox <gi...@apache.org>.
jtuyls commented on a change in pull request #7577:
URL: https://github.com/apache/tvm/pull/7577#discussion_r591478453



##########
File path: python/tvm/driver/tvmc/composite_target.py
##########
@@ -22,7 +22,7 @@
 from tvm.relay.op.contrib.arm_compute_lib import partition_for_arm_compute_lib
 from tvm.relay.op.contrib.ethosn import partition_for_ethosn
 from tvm.relay.op.contrib.vitis_ai import partition_for_vitis_ai
-from tvm.contrib.target import vitis_ai
+from tvm.contrib.target import vitis_ai  # pylint: disable=unused-import

Review comment:
    Yes, I am thinking of adding an `init` key with a corresponding initialization function, for example:
   ```
   "vitis-ai": {
           "config_key": "relay.ext.vitis_ai.options",
           "init": init_for_vitis_ai,
           "pass_pipeline": partition_for_vitis_ai,
       },
   ```
    I will work on something and add it to the PR. Are you OK with `init`, or do you prefer some other name?
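A hedged sketch of how such an `init` hook could work, following the registry entry above (the lookup helper body and error handling are assumptions):

```python
import importlib


def init_for_vitis_ai():
    """Lazily pull in the Vitis AI specific modules before partitioning."""
    # Importing registers the "relay.ext.vitis_ai" codegen as a side effect and
    # only requires pyxir when the vitis-ai composite target is actually requested.
    importlib.import_module("tvm.contrib.target.vitis_ai")


def get_codegen_by_target(name):
    """Sketch of the lookup: run the target's init hook, then return its entry."""
    codegen = REGISTERED_CODEGEN[name]  # the registry dict shown in the snippet above
    if "init" in codegen:
        codegen["init"]()
    return codegen
```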







[GitHub] [tvm] leandron commented on a change in pull request #7577: [TVMC][VitisAI] Enable Vitis AI target through TVMC

Posted by GitBox <gi...@apache.org>.
leandron commented on a change in pull request #7577:
URL: https://github.com/apache/tvm/pull/7577#discussion_r591501987



##########
File path: python/tvm/driver/tvmc/composite_target.py
##########
@@ -22,7 +22,7 @@
 from tvm.relay.op.contrib.arm_compute_lib import partition_for_arm_compute_lib
 from tvm.relay.op.contrib.ethosn import partition_for_ethosn
 from tvm.relay.op.contrib.vitis_ai import partition_for_vitis_ai
-from tvm.contrib.target import vitis_ai
+from tvm.contrib.target import vitis_ai  # pylint: disable=unused-import

Review comment:
    I agree with the `init` naming.







[GitHub] [tvm] comaniac commented on a change in pull request #7577: [TVMC][VitisAI] Enable Vitis AI target through TVMC

Posted by GitBox <gi...@apache.org>.
comaniac commented on a change in pull request #7577:
URL: https://github.com/apache/tvm/pull/7577#discussion_r588541010



##########
File path: python/tvm/driver/tvmc/composite_target.py
##########
@@ -21,6 +21,8 @@
 
 from tvm.relay.op.contrib.arm_compute_lib import partition_for_arm_compute_lib
 from tvm.relay.op.contrib.ethosn import partition_for_ethosn
+from tvm.relay.op.contrib.vitis_ai import partition_for_vitis_ai
+from tvm.contrib.target import vitis_ai

Review comment:
       Ah I see.







[GitHub] [tvm] jtuyls commented on a change in pull request #7577: [TVMC][VitisAI] Enable Vitis AI target through TVMC

Posted by GitBox <gi...@apache.org>.
jtuyls commented on a change in pull request #7577:
URL: https://github.com/apache/tvm/pull/7577#discussion_r588536056



##########
File path: python/tvm/driver/tvmc/composite_target.py
##########
@@ -21,6 +21,8 @@
 
 from tvm.relay.op.contrib.arm_compute_lib import partition_for_arm_compute_lib
 from tvm.relay.op.contrib.ethosn import partition_for_ethosn
+from tvm.relay.op.contrib.vitis_ai import partition_for_vitis_ai
+from tvm.contrib.target import vitis_ai

Review comment:
       We need to make sure it's imported because it registers the `relay.ext.vitis_ai` codegen: https://github.com/apache/tvm/blob/5d5bbfb1ed2281c3e9b2719be6643281d60022fb/python/tvm/contrib/target/vitis_ai.py#L64







[GitHub] [tvm] leandron commented on pull request #7577: [TVMC][VitisAI] Enable Vitis AI target through TVMC

Posted by GitBox <gi...@apache.org>.
leandron commented on pull request #7577:
URL: https://github.com/apache/tvm/pull/7577#issuecomment-797373889


   @jtuyls with the merge of #7299 there might be some minor conflicts with the repo now.





[GitHub] [tvm] jtuyls commented on a change in pull request #7577: [TVMC][VitisAI] Enable Vitis AI target through TVMC

Posted by GitBox <gi...@apache.org>.
jtuyls commented on a change in pull request #7577:
URL: https://github.com/apache/tvm/pull/7577#discussion_r589331724



##########
File path: python/tvm/contrib/target/vitis_ai.py
##########
@@ -71,37 +71,66 @@ def vitis_ai_compiler(ref):
 
     pass_context = tvm.get_global_func("transform.GetCurrentPassContext")()
 
-    # The target Vitis-AI accelerator device
-    target = (
-        str(pass_context.config["relay.ext.vitis_ai.options.target"])
-        if "relay.ext.vitis_ai.options.target" in pass_context.config
+    cfg = (
+        pass_context.config["relay.ext.vitis_ai.options"]
+        if "relay.ext.vitis_ai.options" in pass_context.config
         else None
     )
 
-    # (Optional configs) The build and work directories to be used by Vitis-AI
-    vai_build_dir = (
-        str(pass_context.config["relay.ext.vitis_ai.options.build_dir"])
-        if "relay.ext.vitis_ai.options.build_dir" in pass_context.config
-        else tvm.contrib.utils.tempdir().relpath("")
-    )
-    vai_work_dir = (
-        str(pass_context.config["relay.ext.vitis_ai.options.work_dir"])
-        if "relay.ext.vitis_ai.options.work_dir" in pass_context.config
-        else tvm.contrib.utils.tempdir().relpath("")
-    )
+    # Backward compatibility with old pass context configs
+    if cfg is None:
+        warnings.warn(
+            "You are using a deprecated way of passing build configs (e.g."
+            " `relay.ext.vitis_ai.options.target`). Check out the Vitis AI "
+            " documentation here: https://tvm.apache.org/docs/deploy/vitis_ai.html"
+            " to switch to recommended way for passing build configs."
+        )
 
-    # (Optional configs) Export and load PyXIR runtime module to file if provided. This is used to
-    #   compile and quantize a model on the host and deploy it at the edge
-    export_runtime_module = (
-        str(pass_context.config["relay.ext.vitis_ai.options.export_runtime_module"])
-        if "relay.ext.vitis_ai.options.export_runtime_module" in pass_context.config
-        else ""
-    )
-    load_runtime_module = (
-        str(pass_context.config["relay.ext.vitis_ai.options.load_runtime_module"])
-        if "relay.ext.vitis_ai.options.load_runtime_module" in pass_context.config
-        else ""
-    )
+        # The target Vitis-AI accelerator device
+        target = (
+            str(pass_context.config["relay.ext.vitis_ai.options.target"])
+            if "relay.ext.vitis_ai.options.target" in pass_context.config
+            else None
+        )

Review comment:
       I didn't change `target` to `dpu` for this pass context config option as it will be deprecated and the new API uses `dpu` instead.
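To make the compatibility split concrete, a sketch of how a user ends up in each branch: the flat legacy keys appear in the diff above; the structured form with `dpu`, and whether a plain dict is accepted for it, are assumptions, and both snippets require a TVM build with Vitis AI support so the config keys are registered.

```python
import tvm

# Legacy style: flat per-option keys, handled by the `cfg is None` branch above.
legacy_ctx = tvm.transform.PassContext(
    opt_level=3,
    config={"relay.ext.vitis_ai.options.target": "DPUCADX8G"},
)

# New style: a single structured "relay.ext.vitis_ai.options" entry whose field is
# called `dpu` rather than `target`, to avoid clashing with TVM's own notion of target.
new_ctx = tvm.transform.PassContext(
    opt_level=3,
    config={"relay.ext.vitis_ai.options": {"dpu": "DPUCADX8G"}},
)
```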







[GitHub] [tvm] jtuyls commented on a change in pull request #7577: [TVMC][VitisAI] Enable Vitis AI target through TVMC

Posted by GitBox <gi...@apache.org>.
jtuyls commented on a change in pull request #7577:
URL: https://github.com/apache/tvm/pull/7577#discussion_r588548912



##########
File path: python/tvm/relay/op/contrib/vitis_ai.py
##########
@@ -80,25 +82,61 @@ def visit_call(self, call):
                 else:
                     return super().visit_call(call)
 
+        xgraph = pyxir.frontend.tvm.from_relay(mod, self.params, postprocessing=None)
+        xgraph = pyxir.partition(xgraph, targets=[self.target])
+
+        layers = xgraph.get_layers()
+        relay_ids = [
+            list(np.array(layer.attrs["relay_id"]).flatten())
+            for layer in layers
+            if layer.target == self.target
+        ]
+        self.relay_ids = [item for sublist in relay_ids for item in sublist]
+
         return Annotator().visit(func)
 
 
 def annotation(mod, params, target):
-    """Annotate Relay expression for Vitis-AI DPU accelerators"""
+    """Annotate Relay expression for offloading operators to Vitis AI DPU accelerators
+    NOTE: This function does the same as the next one (`partition_for_vitis_ai`) but is
+    still here for backward compatibility"""
     # We need type information for supporting models that contain operations that don't
     #   have a Relay to XLayer translation
     mod = relay.transform.InferType()(mod)
+    mod = VitisAIAnnotationPass("vitis_ai", target, params)(mod)
+    return mod
 
-    xgraph = pyxir.frontend.tvm.from_relay(mod, params, postprocessing=None)
-    xgraph = pyxir.partition(xgraph, targets=[target])
 
-    layers = xgraph.get_layers()
-    relay_ids = [
-        list(np.array(layer.attrs["relay_id"]).flatten())
-        for layer in layers
-        if layer.target == target
-    ]
-    relay_ids_flatten = [item for sublist in relay_ids for item in sublist]
-    mod = VitisAIAnnotationPass("vitis_ai", relay_ids_flatten)(mod)
+def partition_for_vitis_ai(mod, params=None, target=None, **opts):

Review comment:
    Yes, we have been using `target` for some time, but I agree it's an overloaded term and it would be good to rename it. As we will be deprecating some APIs anyway, I will change it in this PR as well. I think `dpu` would be the clearest option, and it also avoids other conflicts.







[GitHub] [tvm] leandron commented on a change in pull request #7577: [TVMC][VitisAI] Enable Vitis AI target through TVMC

Posted by GitBox <gi...@apache.org>.
leandron commented on a change in pull request #7577:
URL: https://github.com/apache/tvm/pull/7577#discussion_r589323316



##########
File path: python/tvm/driver/tvmc/composite_target.py
##########
@@ -22,7 +22,7 @@
 from tvm.relay.op.contrib.arm_compute_lib import partition_for_arm_compute_lib
 from tvm.relay.op.contrib.ethosn import partition_for_ethosn
 from tvm.relay.op.contrib.vitis_ai import partition_for_vitis_ai
-from tvm.contrib.target import vitis_ai
+from tvm.contrib.target import vitis_ai  # pylint: disable=unused-import

Review comment:
    Out of curiosity: why do we need this `from tvm.contrib.target import vitis_ai` here? Is it for some module initialisation purpose?







[GitHub] [tvm] jtuyls commented on pull request #7577: [TVMC][VitisAI] Enable Vitis AI target through TVMC

Posted by GitBox <gi...@apache.org>.
jtuyls commented on pull request #7577:
URL: https://github.com/apache/tvm/pull/7577#issuecomment-792666936


   @leandron Do you have any thoughts on documenting TVMC target usage?
   
   > **Documentation**
   > I think we should document the use of these targets but not sure what the best place is. For example, we can add documentation to https://tvm.apache.org/docs/tutorials/get_started/tvmc_command_line_driver.html or in the target deploy and integration documentation (for example here: https://tvm.apache.org/docs/deploy/vitis_ai.html). Or we can do both?
   
   





[GitHub] [tvm] jtuyls commented on a change in pull request #7577: [TVMC][VitisAI] Enable Vitis AI target through TVMC

Posted by GitBox <gi...@apache.org>.
jtuyls commented on a change in pull request #7577:
URL: https://github.com/apache/tvm/pull/7577#discussion_r594654897



##########
File path: python/tvm/contrib/target/vitis_ai_utils.py
##########
@@ -0,0 +1,36 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Vitis AI target utilities"""

Review comment:
    I see, you prefer doing it like here: https://github.com/apache/tvm/blob/068fed94cf3468e3df510ac8a9aed635ed746804/python/tvm/auto_scheduler/cost_model/xgb_model.py#L114? But then I think we should remove the initialization functions to avoid keeping unused functionality around. This approach is also fine with me. @leandron, any thoughts?







[GitHub] [tvm] jtuyls commented on pull request #7577: [TVMC][VitisAI] Enable Vitis AI target through TVMC

Posted by GitBox <gi...@apache.org>.
jtuyls commented on pull request #7577:
URL: https://github.com/apache/tvm/pull/7577#issuecomment-815157073


   @comaniac With the CI Docker update and with lazy loading of the pyxir package inside the Vitis AI partitioning pass as well, the tests are now passing. Could you have another look?





[GitHub] [tvm] jtuyls commented on a change in pull request #7577: [TVMC][VitisAI] Enable Vitis AI target through TVMC

Posted by GitBox <gi...@apache.org>.
jtuyls commented on a change in pull request #7577:
URL: https://github.com/apache/tvm/pull/7577#discussion_r588537925



##########
File path: python/tvm/relay/op/contrib/vitis_ai.py
##########
@@ -80,25 +82,61 @@ def visit_call(self, call):
                 else:
                     return super().visit_call(call)
 
+        xgraph = pyxir.frontend.tvm.from_relay(mod, self.params, postprocessing=None)
+        xgraph = pyxir.partition(xgraph, targets=[self.target])
+
+        layers = xgraph.get_layers()
+        relay_ids = [
+            list(np.array(layer.attrs["relay_id"]).flatten())
+            for layer in layers
+            if layer.target == self.target
+        ]
+        self.relay_ids = [item for sublist in relay_ids for item in sublist]
+
         return Annotator().visit(func)
 
 
 def annotation(mod, params, target):
-    """Annotate Relay expression for Vitis-AI DPU accelerators"""
+    """Annotate Relay expression for offloading operators to Vitis AI DPU accelerators
+    NOTE: This function does the same as the next one (`partition_for_vitis_ai`) but is
+    still here for backward compatibility"""

Review comment:
    Yes, the idea was to keep it for some time and then remove it. I will add a deprecation warning.







[GitHub] [tvm] jtuyls commented on a change in pull request #7577: [TVMC][VitisAI] Enable Vitis AI target through TVMC

Posted by GitBox <gi...@apache.org>.
jtuyls commented on a change in pull request #7577:
URL: https://github.com/apache/tvm/pull/7577#discussion_r588558288



##########
File path: tests/python/driver/tvmc/test_compiler.py
##########
@@ -29,6 +30,15 @@
 from tvm.driver import tvmc
 
 
+def vitis_ai_available():

Review comment:
    Yes, I wanted to do that, but those files import the `pyxir` package, which would already fail if the package is not installed. So, unless we put the function before the imports, importing it would cause a failure. We could put the function in a separate file (e.g. `vitis_ai_tools`), but that felt like overkill for now since we only use it in one place. If we start needing it in multiple places, then I think we should definitely move it.







[GitHub] [tvm] leandron commented on a change in pull request #7577: [TVMC][VitisAI] Enable Vitis AI target through TVMC

Posted by GitBox <gi...@apache.org>.
leandron commented on a change in pull request #7577:
URL: https://github.com/apache/tvm/pull/7577#discussion_r590264317



##########
File path: python/tvm/driver/tvmc/composite_target.py
##########
@@ -22,7 +22,7 @@
 from tvm.relay.op.contrib.arm_compute_lib import partition_for_arm_compute_lib
 from tvm.relay.op.contrib.ethosn import partition_for_ethosn
 from tvm.relay.op.contrib.vitis_ai import partition_for_vitis_ai
-from tvm.contrib.target import vitis_ai
+from tvm.contrib.target import vitis_ai  # pylint: disable=unused-import

Review comment:
    Agreed. Maybe this is another callback we can register and call within `get_codegen_by_target`, so that it stays transparent to the end user?







[GitHub] [tvm] leandron commented on pull request #7577: [TVMC][VitisAI] Enable Vitis AI target through TVMC

Posted by GitBox <gi...@apache.org>.
leandron commented on pull request #7577:
URL: https://github.com/apache/tvm/pull/7577#issuecomment-792679681


   > @leandron Do you have any thoughts on documenting TVMC target usage?
   > 
   > > **Documentation**
   > > I think we should document the use of these targets but not sure what the best place is. For example, we can add documentation to https://tvm.apache.org/docs/tutorials/get_started/tvmc_command_line_driver.html or in the target deploy and integration documentation (for example here: https://tvm.apache.org/docs/deploy/vitis_ai.html). Or we can do both?
   
   Yes, I agree this is something we need to improve. Actually in general for targets in TVM it is not easy to obtain meta-information without inspecting the code.
   
   Ideally, I'd like to make the targets self-documenting, so that we don't need to keep a separate page in sync with changes in the code, and users can find useful help via the `--help` option. It is fine if you have your own documentation page to describe the options, but I'm not sure that should be our _official_ documentation. So that would be my recommendation for now.
   
   I was planning to submit a separate PR making `--help` list the available TVM targets plus the composite targets we have here. That part is very easy (though not 100% complete as a feature), but I think it should be done in a separate PR.
   
   For the inner options, such as `-mcpu` for `llvm` and `-dpu` for `vitis-ai`, I think it would require a bit of refactoring to expose that as an API at the TVM level and just consume it in TVMC.
   
   Concretely, I don't think this should block your PR here, but it is certainly a very important topic for starting a broader discussion. (cc @masahi @comaniac @junrushao1994)

