Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2022/11/29 19:56:05 UTC

[GitHub] [tvm] mkatanbaf opened a new pull request, #13514: [microTVM] micro tuning with meta-schedule

mkatanbaf opened a new pull request, #13514:
URL: https://github.com/apache/tvm/pull/13514

   Adds support for tuning microTVM models using meta-schedule.
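
   For context, a rough end-to-end usage sketch of the flow this PR enables, assuming the helpers it adds (`get_micro_local_builder`, `get_rpc_runner_micro`) together with the existing `tvm.meta_schedule.relay_integration.tune_relay` API; the toy conv2d mirrors the PR's test model, and the trial count and work directory are illustrative only.

   ```python
   import numpy as np
   import tvm
   from tvm import relay
   from tvm import meta_schedule as ms
   from tvm.contrib.micro.meta_schedule.local_builder_micro import get_micro_local_builder
   from tvm.contrib.micro.meta_schedule.rpc_runner_micro import get_rpc_runner_micro

   # A small conv2d module, mirroring the model used in the PR's test.
   data = relay.var("data", relay.TensorType((1, 3, 16, 16), "float32"))
   weight = relay.var("weight", relay.TensorType((8, 3, 5, 5), "float32"))
   y = relay.nn.conv2d(data, weight, padding=(2, 2), kernel_size=(5, 5), out_dtype="float32")
   mod = relay.transform.InferType()(tvm.IRModule.from_expr(relay.Function([data, weight], y)))
   params = {"weight": np.random.rand(8, 3, 5, 5).astype("float32")}

   target = tvm.target.target.micro(model="host")  # CRT host-emulated micro target

   # Tune with the micro-aware builder (compiles candidates locally) and runner
   # (runs candidates inside a generated micro project over RPC).
   builder = get_micro_local_builder()
   with get_rpc_runner_micro(platform="crt", options=None) as runner:
       database = ms.relay_integration.tune_relay(
           mod=mod,
           params=params,
           target=target,
           work_dir="./ms_micro_work_dir",
           max_trials_global=8,
           builder=builder,
           runner=runner,
       )
   ```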


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [tvm] mkatanbaf commented on a diff in pull request #13514: [microTVM] tuning on micro targets with meta-schedule

Posted by GitBox <gi...@apache.org>.
mkatanbaf commented on code in PR #13514:
URL: https://github.com/apache/tvm/pull/13514#discussion_r1044937464


##########
python/tvm/meta_schedule/relay_integration.py:
##########
@@ -385,7 +401,9 @@ def compile_relay(
                 config=pass_config,
             ):
                 if backend == "graph":
-                    return relay.build(mod, target=target, params=params, executor=executor)
+                    return relay.build(
+                        mod, target=target, params=params, executor=executor, runtime=runtime

Review Comment:
   Again, I don't think it would affect other targets, since `Runtime("cpp")` is the default runtime in `relay.build`. I'll wait for others to comment on this.





[GitHub] [tvm] mehrdadh commented on pull request #13514: [microTVM] tuning on micro targets with meta-schedule

Posted by GitBox <gi...@apache.org>.
mehrdadh commented on PR #13514:
URL: https://github.com/apache/tvm/pull/13514#issuecomment-1376581533

   @tvm-bot rerun




[GitHub] [tvm] masahi commented on a diff in pull request #13514: [microTVM] tuning on micro targets with meta-schedule

Posted by GitBox <gi...@apache.org>.
masahi commented on code in PR #13514:
URL: https://github.com/apache/tvm/pull/13514#discussion_r1042911837


##########
src/meta_schedule/schedule_rule/schedule_rule.cc:
##########
@@ -251,6 +251,33 @@ Array<ScheduleRule> ScheduleRule::DefaultHexagon() {
   };
 }
 
+Array<ScheduleRule> ScheduleRule::DefaultMicro() {
+  return {
+      ScheduleRule::AutoInline(
+          /*into_producer=*/false,
+          /*into_consumer=*/true,
+          /*inline_const_tensor=*/true,
+          /*disallow_if_then_else=*/true,
+          /*require_injective=*/true,
+          /*require_ordered=*/true,
+          /*disallow_op=*/Array<String>{"tir.exp"}),
+      ScheduleRule::MultiLevelTilingWideVector(
+          /*structure=*/"SRSRS",
+          /*vector_length_in_bits=*/1024,

Review Comment:
   You probably copied this from Hexagon, but the normal `ScheduleRule::MultiLevelTiling` is probably what you want.





[GitHub] [tvm] mehrdadh commented on a diff in pull request #13514: [microTVM] tuning on micro targets with meta-schedule

Posted by GitBox <gi...@apache.org>.
mehrdadh commented on code in PR #13514:
URL: https://github.com/apache/tvm/pull/13514#discussion_r1066503788


##########
src/meta_schedule/schedule_rule/schedule_rule.cc:
##########
@@ -251,6 +251,29 @@ Array<ScheduleRule> ScheduleRule::DefaultHexagon() {
   };
 }
 
+Array<ScheduleRule> ScheduleRule::DefaultMicro() {
+  return {
+      ScheduleRule::AutoInline(

Review Comment:
   @zxybazh I think this could be done in a follow-up PR, since this PR has been open for a while. Thanks!





[GitHub] [tvm] mehrdadh commented on pull request #13514: [microTVM] tuning on micro targets with meta-schedule

Posted by GitBox <gi...@apache.org>.
mehrdadh commented on PR #13514:
URL: https://github.com/apache/tvm/pull/13514#issuecomment-1346904932

   @mkatanbaf you might need to rebase with main.




[GitHub] [tvm] mkatanbaf commented on a diff in pull request #13514: [microTVM] tuning on micro targets with meta-schedule

Posted by GitBox <gi...@apache.org>.
mkatanbaf commented on code in PR #13514:
URL: https://github.com/apache/tvm/pull/13514#discussion_r1063873670


##########
python/tvm/contrib/micro/meta_schedule/test_autotune_ms.py:
##########
@@ -0,0 +1,181 @@
+# Licensed to the Apache Software Foundation (ASF) under one

Review Comment:
   done.





[GitHub] [tvm] mkatanbaf commented on a diff in pull request #13514: [microTVM] tuning on micro targets with meta-schedule

Posted by GitBox <gi...@apache.org>.
mkatanbaf commented on code in PR #13514:
URL: https://github.com/apache/tvm/pull/13514#discussion_r1066505585


##########
src/meta_schedule/schedule_rule/schedule_rule.cc:
##########
@@ -251,6 +251,29 @@ Array<ScheduleRule> ScheduleRule::DefaultHexagon() {
   };
 }
 
+Array<ScheduleRule> ScheduleRule::DefaultMicro() {
+  return {
+      ScheduleRule::AutoInline(

Review Comment:
   Thanks @zxybazh, I already added the rest of the Hexagon rules except `ParallelizeVectorizeUnroll`, which is not needed on micro targets.





[GitHub] [tvm] guberti commented on pull request #13514: [microTVM] tuning on micro targets with meta-schedule

Posted by GitBox <gi...@apache.org>.
guberti commented on PR #13514:
URL: https://github.com/apache/tvm/pull/13514#issuecomment-1376292074

   LGTM, I think we can merge.




[GitHub] [tvm] zxybazh commented on a diff in pull request #13514: [microTVM] tuning on micro targets with meta-schedule

Posted by GitBox <gi...@apache.org>.
zxybazh commented on code in PR #13514:
URL: https://github.com/apache/tvm/pull/13514#discussion_r1065199367


##########
src/meta_schedule/schedule_rule/schedule_rule.cc:
##########
@@ -251,6 +251,29 @@ Array<ScheduleRule> ScheduleRule::DefaultHexagon() {
   };
 }
 
+Array<ScheduleRule> ScheduleRule::DefaultMicro() {
+  return {
+      ScheduleRule::AutoInline(

Review Comment:
   Is there any specific reason that `ApplyCustomRule` is not included as a rule here?





[GitHub] [tvm] mkatanbaf commented on a diff in pull request #13514: [microTVM] tuning on micro targets with meta-schedule

Posted by GitBox <gi...@apache.org>.
mkatanbaf commented on code in PR #13514:
URL: https://github.com/apache/tvm/pull/13514#discussion_r1044936260


##########
python/tvm/contrib/micro/meta_schedule/local_builder_micro.py:
##########
@@ -0,0 +1,83 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Local builder for microTVM projects.that compile on the local host"""
+
+import os
+import tempfile
+from typing import Optional, Dict
+from tvm.ir import IRModule
+from tvm.runtime import NDArray
+from tvm.target import Target
+from tvm.meta_schedule.builder import LocalBuilder
+from tvm.driver.build_module import OperatorModule
+from tvm import micro
+from tvm.contrib.tar import tar
+from tvm.relay.backend import Runtime
+from tvm.driver import build as tvm_build
+from tvm.tir.transform import RemoveWeightLayoutRewriteBlock
+
+
+def get_micro_local_builder():
+    """Return micro-compatible Builder for meta schedule."""
+
+    def micro_build(
+        mod: IRModule, target: Target, _params: Optional[Dict[str, NDArray]]
+    ) -> OperatorModule:
+        """build function for micro targets.
+
+        Parameters
+        ----------
+        mod : IRModule
+            The IRModule to be built.
+        target : Target
+            The target to be built.
+        _params : Optional[Dict[str, NDArray]]
+            The parameters to be used for the build. Must be None.
+
+        Returns
+        -------
+        rt_mod : OperatorModule
+            The built Module.
+        """
+
+        # Note: changing the global symbol is necessary for micro targets,
+        # since the generated projects already include a main function.
+        prim_func = mod["main"].with_attr("global_symbol", "default_function")

Review Comment:
   tvm_build uses the "global_symbol" attribute as the name of the generated C function. I updated the note.
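
   As a small illustration of the rename described here, in isolation (the actual code is in the diff above; `rename_entry` is only a hypothetical helper name for this sketch):

   ```python
   from tvm.ir import IRModule


   def rename_entry(mod: IRModule, new_symbol: str = "default_function") -> IRModule:
       """Rename the TIR entry point so the C function emitted by tvm_build is not
       named after "main", which would clash with the main() already present in the
       generated micro project."""
       prim_func = mod["main"].with_attr("global_symbol", new_symbol)
       return IRModule({"main": prim_func})
   ```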





[GitHub] [tvm] mkatanbaf commented on a diff in pull request #13514: [microTVM] tuning on micro targets with meta-schedule

Posted by GitBox <gi...@apache.org>.
mkatanbaf commented on code in PR #13514:
URL: https://github.com/apache/tvm/pull/13514#discussion_r1044938105


##########
python/tvm/contrib/micro/meta_schedule/rpc_runner_micro.py:
##########
@@ -0,0 +1,243 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""RPC Runner Micro"""
+
+from contextlib import contextmanager
+from typing import Callable, List, Optional
+from collections import namedtuple
+import signal
+
+from tvm import micro
+from tvm import nd
+from tvm.contrib.popen_pool import PopenPoolExecutor
+from tvm.rpc.server import Server
+from tvm.rpc.tracker import Tracker
+from tvm.meta_schedule.logging import get_logger
+from tvm.meta_schedule.utils import (
+    cpu_count,
+    derived_object,
+)
+from tvm.meta_schedule.runner.config import EvaluatorConfig, RPCConfig
+from tvm.meta_schedule.runner import PyRunner, RunnerFuture, RunnerInput
+from tvm.meta_schedule.runner.rpc_runner import RPCRunnerFuture
+from tvm.meta_schedule.runner.utils import T_ARG_INFO_JSON_OBJ_LIST
+
+logger = get_logger(__name__)  # pylint: disable=invalid-name
+
+
+@derived_object
+class RPCRunnerMicro(PyRunner):
+    """RPC based runner for tuning micro models."""
+
+    def __init__(
+        self,
+        platform: str = "crt",
+        project_options: Optional[dict] = None,
+        rpc_config: Optional[RPCConfig] = None,
+        evaluator_config: Optional[EvaluatorConfig] = None,
+        max_workers: Optional[int] = None,
+        initializer: Optional[Callable[[], None]] = None,
+    ) -> None:
+        """Constructor
+
+        Parameters
+        ----------
+        platform: str
+            The platform used for project generation.
+        project_options: dict
+            The options for the generated micro project.
+        rpc_config: RPCConfig
+            The rpc configuration.
+        evaluator_config: EvaluatorConfig
+            The evaluator configuration.
+        max_workers: Optional[int] = None
+            The maximum number of connections. Defaults to number of logical CPU cores.
+        initializer: Optional[Callable[[], None]]
+            The initializer function.
+        """
+        super().__init__()
+        self.platform = platform
+        if project_options is None:
+            project_options = {}
+        self.project_options = project_options
+        self.rpc_config = RPCConfig._normalized(rpc_config)
+        self.evaluator_config = EvaluatorConfig._normalized(evaluator_config)
+
+        if max_workers is None:
+            max_workers = cpu_count(logical=True)
+        logger.info("RPCRunner: max_workers = %d", max_workers)
+        self.pool = PopenPoolExecutor(
+            max_workers=max_workers,
+            timeout=rpc_config.session_timeout_sec,
+            initializer=initializer,
+        )
+
+    def run(self, runner_inputs: List[RunnerInput]) -> List[RunnerFuture]:
+        results: List[RunnerFuture] = []
+
+        for runner_input in runner_inputs:
+            future = RPCRunnerFuture(
+                future=self.pool.submit(
+                    _worker_func,
+                    self.platform,
+                    self.project_options or {},
+                    self.rpc_config,
+                    self.evaluator_config,
+                    str(runner_input.artifact_path),
+                    str(runner_input.device_type),
+                    tuple(arg_info.as_json() for arg_info in runner_input.args_info),
+                ),
+                timeout_sec=self.rpc_config.session_timeout_sec,
+            )
+            results.append(future)  # type: ignore
+        return results
+
+
+def _worker_func(
+    platform: str,
+    project_options: dict,
+    rpc_config: RPCConfig,
+    evaluator_config: EvaluatorConfig,
+    artifact_path: str,
+    device_type: str,
+    args_info: T_ARG_INFO_JSON_OBJ_LIST,
+) -> List[float]:
+
+    module_loader = micro.AutoTvmModuleLoader(
+        template_project_dir=micro.get_microtvm_template_projects(platform),
+        project_options=project_options,
+    )
+
+    remote_kw = {
+        "device_key": rpc_config.tracker_key,
+        "host": rpc_config.tracker_host,
+        "port": rpc_config.tracker_port,
+        "priority": 0,
+        "timeout": 100,
+    }
+    build_result = namedtuple("BuildResult", ["filename"])(artifact_path)
+
+    with module_loader(remote_kw, build_result) as (remote, mod):
+        dev = remote.device(device_type, 0)
+        f_prepare = ""
+        if evaluator_config.enable_cpu_cache_flush:
+            f_prepare = "cache_flush_cpu_non_first_arg"
+        time_f = mod.time_evaluator(
+            mod.entry_name,
+            dev,
+            number=evaluator_config.number,
+            repeat=evaluator_config.repeat,
+            min_repeat_ms=evaluator_config.min_repeat_ms,
+            f_preproc=f_prepare,
+        )
+
+        random_fill = remote.get_function("tvm.contrib.random.random_fill")
+        args = [nd.empty(x[2], x[1], dev) for x in args_info]
+        for arg in args:
+            random_fill(arg)
+        dev.sync()
+
+        costs = time_f(*args).results
+    return costs
+
+
+@contextmanager
+def get_rpc_runner_micro(
+    platform,
+    options,
+    session_timeout_sec: int = 300,
+    number: int = 3,
+    repeat: int = 1,
+    min_repeat_ms: int = 100,
+):
+    """Parameters
+    ----------
+    platform: str
+        The platform used for project generation.
+    project_options: dict
+        The options for the generated micro project.
+    session_timeout_sec: int
+        The session timeout. if the number of candidates sent to runner is larger
+        than the runner workers, increase the timeout.
+    number: int
+        The number of times to run the evaluator function for taking average.
+        We call these runs as one `repeat` of measurement.
+    repeat: int
+        The number of times to repeat the measurement.
+        In total, the function will be invoked (1 + number x repeat) times,
+        where the first one is warm up and will be discarded.
+        The returned result contains `repeat` costs,
+        each of which is an average of `number` costs.
+    min_repeat_ms: int
+        Minimum repeat time in ms. if the execution latency is too short,
+        increase the number of runs to the given time (in ms) to reduce the measurement error.
+    """
+    tracker_host = "127.0.0.1"

Review Comment:
   I modified it as you suggested.





[GitHub] [tvm] mkatanbaf commented on a diff in pull request #13514: [microTVM] tuning on micro targets with meta-schedule

Posted by GitBox <gi...@apache.org>.
mkatanbaf commented on code in PR #13514:
URL: https://github.com/apache/tvm/pull/13514#discussion_r1044937219


##########
python/tvm/meta_schedule/relay_integration.py:
##########
@@ -97,13 +99,16 @@ def _normalize_params(
     if executor is None:
         executor = relay.backend.Executor("graph")
 
+    if runtime is None:
+        runtime = relay.backend.Runtime("cpp")

Review Comment:
   I need to pass the runtime to `relay.build` for micro targets. I don't think it would affect other targets, since `Runtime("cpp")` is the default runtime in `relay.build` anyway. For setting the `runtime`, I followed the example of how `executor` is set (see https://github.com/apache/tvm/blob/main/python/tvm/relay/build_module.py#L80).
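
   To make the default-runtime point concrete, a tiny sketch (the toy module and the `llvm` target below are only for illustration): passing `Runtime("cpp")` explicitly matches `relay.build`'s default, so non-micro callers are unchanged, while micro flows pass `Runtime("crt", {"system-lib": True})` instead.

   ```python
   import tvm
   from tvm import relay
   from tvm.relay.backend import Runtime

   x = relay.var("x", shape=(1, 8), dtype="float32")
   mod = tvm.IRModule.from_expr(relay.Function([x], relay.nn.relu(x)))

   # Runtime("cpp") is relay.build's default, so making it explicit changes nothing
   # for regular targets; micro targets pass Runtime("crt", {"system-lib": True}).
   lib_default = relay.build(mod, target="llvm")
   lib_explicit = relay.build(mod, target="llvm", runtime=Runtime("cpp"))
   ```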





[GitHub] [tvm] mkatanbaf commented on a diff in pull request #13514: [microTVM] tuning on micro targets with meta-schedule

Posted by GitBox <gi...@apache.org>.
mkatanbaf commented on code in PR #13514:
URL: https://github.com/apache/tvm/pull/13514#discussion_r1061037015


##########
src/meta_schedule/schedule_rule/schedule_rule.cc:
##########
@@ -251,6 +251,34 @@ Array<ScheduleRule> ScheduleRule::DefaultHexagon() {
   };
 }
 
+Array<ScheduleRule> ScheduleRule::DefaultMicro() {
+  return {
+      ScheduleRule::AutoInline(
+          /*into_producer=*/false,
+          /*into_consumer=*/true,
+          /*inline_const_tensor=*/true,
+          /*disallow_if_then_else=*/true,
+          /*require_injective=*/true,
+          /*require_ordered=*/true,
+          /*disallow_op=*/Array<String>{"tir.exp"}),
+      ScheduleRule::MultiLevelTiling(
+          /*structure=*/"SSRSRS",
+          /*tile_binds=*/NullOpt,
+          /*max_innermost_factor=*/Integer(64),
+          /*vector_load_lens=*/NullOpt,
+          /*reuse_read=*/NullOpt,
+          /*reuse_write=*/
+          Map<String, ObjectRef>{{"req", String("may")},
+                                 {"levels", Array<Integer>{1, 2}},
+                                 {"scope", String("global")}}),
+      ScheduleRule::ParallelizeVectorizeUnroll(

Review Comment:
   done





[GitHub] [tvm] mehrdadh commented on a diff in pull request #13514: [microTVM] tuning on micro targets with meta-schedule

Posted by GitBox <gi...@apache.org>.
mehrdadh commented on code in PR #13514:
URL: https://github.com/apache/tvm/pull/13514#discussion_r1061753588


##########
python/tvm/contrib/micro/meta_schedule/test_autotune_ms.py:
##########
@@ -0,0 +1,181 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+import numpy as np
+import pytest
+from types import MappingProxyType
+import pathlib
+import json
+
+import tvm
+from tvm import relay
+from tvm.relay.backend import Executor
+from tvm.contrib import graph_executor, utils
+from tvm import meta_schedule as ms
+from tvm.contrib.micro.meta_schedule.local_builder_micro import get_micro_local_builder
+from tvm.contrib.micro.meta_schedule.rpc_runner_micro import get_rpc_runner_micro
+
+
+def get_module():
+    data_shape = (1, 3, 16, 16)
+    weight_shape = (8, 3, 5, 5)
+    data = relay.var("data", relay.TensorType(data_shape, "float32"))
+    weight = relay.var("weight", relay.TensorType(weight_shape, "float32"))
+    y = relay.nn.conv2d(
+        data,
+        weight,
+        padding=(2, 2),
+        kernel_size=(5, 5),
+        kernel_layout="OIHW",
+        out_dtype="float32",
+    )
+    f = relay.Function([data, weight], y)
+    mod = tvm.IRModule.from_expr(f)
+    mod = relay.transform.InferType()(mod)
+
+    weight_sample = np.random.rand(
+        weight_shape[0], weight_shape[1], weight_shape[2], weight_shape[3]
+    ).astype("float32")
+    params = {mod["main"].params[1].name_hint: weight_sample}
+
+    model_info = {
+        "in_tensor": "data",
+        "in_shape": data_shape,
+        "in_dtype": "float32",
+    }
+
+    return mod, params, model_info
+
+
+@tvm.testing.requires_micro
+@pytest.mark.parametrize(
+    "platform, options",
+    [
+        pytest.param("crt", None),
+        pytest.param(
+            "zephyr",
+            {
+                "board": "qemu_x86",
+                "project_type": "host_driven",
+            },
+        ),
+    ],
+)
+def test_micro_tuning_with_meta_schedule(platform, options):
+    if platform == "crt":
+        target = tvm.target.target.micro(model="host")
+    else:
+        boards_file = (
+            pathlib.Path(tvm.micro.get_microtvm_template_projects("zephyr")) / "boards.json"
+        )
+        with open(boards_file) as f:
+            boards = json.load(f)
+        target = tvm.target.target.micro(
+            model=boards[options["board"]]["model"], options="-mcpu=cortex-m4"

Review Comment:
   Why is `-mcpu=cortex-m4` fixed here?



##########
python/tvm/contrib/micro/meta_schedule/test_autotune_ms.py:
##########
@@ -0,0 +1,181 @@
+# Licensed to the Apache Software Foundation (ASF) under one

Review Comment:
   You need to move this file outside of the Python package. Usually we keep the CRT test and the Zephyr/Arduino tests in separate paths: the CRT test goes in the Python unit tests, and platform-specific tests go in `tests/micro`.





[GitHub] [tvm] mkatanbaf commented on a diff in pull request #13514: [microTVM] tuning on micro targets with meta-schedule

Posted by GitBox <gi...@apache.org>.
mkatanbaf commented on code in PR #13514:
URL: https://github.com/apache/tvm/pull/13514#discussion_r1060903452


##########
python/tvm/contrib/micro/meta_schedule/local_builder_micro.py:
##########
@@ -0,0 +1,84 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Local builder for microTVM projects that compile on the local host"""
+
+import os
+import tempfile
+from typing import Optional, Dict
+from tvm.ir import IRModule
+from tvm.runtime import NDArray
+from tvm.target import Target
+from tvm.meta_schedule.builder import LocalBuilder
+from tvm.driver.build_module import OperatorModule
+from tvm import micro
+from tvm.contrib.tar import tar
+from tvm.relay.backend import Runtime
+from tvm.driver import build as tvm_build
+from tvm.tir.transform import RemoveWeightLayoutRewriteBlock
+
+
+def get_micro_local_builder():
+    """Return micro-compatible Builder for meta schedule."""
+
+    def _micro_build(
+        mod: IRModule, target: Target, _params: Optional[Dict[str, NDArray]]
+    ) -> OperatorModule:
+        """Build function for micro targets.
+
+        Parameters
+        ----------
+        mod : IRModule
+            The IRModule to be built.
+        target : Target
+            The target to be built.
+        _params : Optional[Dict[str, NDArray]]
+            The parameters to be used for the build. Must be None.
+
+        Returns
+        -------
+        rt_mod : OperatorModule
+            The built Module.
+        """
+
+        # Note: tvm_build assigns "global_symbol" to the name of generated C function
+        # changing it is necessary for micro targets,
+        # since the generated projects already include a main function.
+        prim_func = mod["main"].with_attr("global_symbol", "default_function")
+        mod = IRModule({"main": prim_func})
+        runtime = Runtime("crt", {"system-lib": True})
+        mod = RemoveWeightLayoutRewriteBlock(skip_ndarray_rewrite=True)(mod)
+        rt_mod = tvm_build(mod, target=target, runtime=runtime)

Review Comment:
   No, they are not. We continue to use `tvm_build` in tuning. I'm not sure what you mean by "relay.build changes". The only changes are renaming the TIR global symbol, which is needed so the generated function is named something other than `main`, and passing the appropriate runtime for `c` targets.





[GitHub] [tvm] mkatanbaf commented on a diff in pull request #13514: [microTVM] tuning on micro targets with meta-schedule

Posted by GitBox <gi...@apache.org>.
mkatanbaf commented on code in PR #13514:
URL: https://github.com/apache/tvm/pull/13514#discussion_r1060903452


##########
python/tvm/contrib/micro/meta_schedule/local_builder_micro.py:
##########
@@ -0,0 +1,84 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Local builder for microTVM projects that compile on the local host"""
+
+import os
+import tempfile
+from typing import Optional, Dict
+from tvm.ir import IRModule
+from tvm.runtime import NDArray
+from tvm.target import Target
+from tvm.meta_schedule.builder import LocalBuilder
+from tvm.driver.build_module import OperatorModule
+from tvm import micro
+from tvm.contrib.tar import tar
+from tvm.relay.backend import Runtime
+from tvm.driver import build as tvm_build
+from tvm.tir.transform import RemoveWeightLayoutRewriteBlock
+
+
+def get_micro_local_builder():
+    """Return micro-compatible Builder for meta schedule."""
+
+    def _micro_build(
+        mod: IRModule, target: Target, _params: Optional[Dict[str, NDArray]]
+    ) -> OperatorModule:
+        """Build function for micro targets.
+
+        Parameters
+        ----------
+        mod : IRModule
+            The IRModule to be built.
+        target : Target
+            The target to be built.
+        _params : Optional[Dict[str, NDArray]]
+            The parameters to be used for the build. Must be None.
+
+        Returns
+        -------
+        rt_mod : OperatorModule
+            The built Module.
+        """
+
+        # Note: tvm_build assigns "global_symbol" to the name of generated C function
+        # changing it is necessary for micro targets,
+        # since the generated projects already include a main function.
+        prim_func = mod["main"].with_attr("global_symbol", "default_function")
+        mod = IRModule({"main": prim_func})
+        runtime = Runtime("crt", {"system-lib": True})
+        mod = RemoveWeightLayoutRewriteBlock(skip_ndarray_rewrite=True)(mod)
+        rt_mod = tvm_build(mod, target=target, runtime=runtime)

Review Comment:
   No, they are not. We continue to use `tvm_build` in tuning. The changes in `relay_integration.py` are needed for compiling the relay program using the MetaSchedule tuning database.
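
   As a concrete sketch of that compile step, assuming `compile_relay` accepts the `executor`/`runtime` keywords this PR threads through to `relay.build` (the helper name `compile_tuned_micro` and the argument choices are illustrative, not part of the PR):

   ```python
   from tvm import meta_schedule as ms
   from tvm.relay.backend import Executor, Runtime


   def compile_tuned_micro(database, mod, params, target):
       """Compile a Relay module with schedules taken from a MetaSchedule tuning
       database, passing the CRT runtime that micro targets require."""
       return ms.relay_integration.compile_relay(
           database=database,
           mod=mod,
           target=target,
           params=params,
           executor=Executor("graph"),
           runtime=Runtime("crt", {"system-lib": True}),
       )
   ```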





[GitHub] [tvm] mkatanbaf commented on a diff in pull request #13514: [microTVM] tuning on micro targets with meta-schedule

Posted by GitBox <gi...@apache.org>.
mkatanbaf commented on code in PR #13514:
URL: https://github.com/apache/tvm/pull/13514#discussion_r1061042331


##########
src/target/target_kind.cc:
##########
@@ -309,6 +309,7 @@ TVM_REGISTER_TARGET_KIND("c", kDLCPU)
     .add_attr_option<String>("march")
     .add_attr_option<Integer>("workspace-byte-alignment")
     .add_attr_option<Integer>("constants-byte-alignment")
+    .add_attr_option<Integer>("num-cores")

Review Comment:
   With the `ParallelizeVectorizeUnroll` schedule rule and the `MutateParallel` mutator eliminated, the `num-cores` attribute is no longer needed.





[GitHub] [tvm] mkatanbaf commented on pull request #13514: [microTVM] micro tuning with meta-schedule

Posted by GitBox <gi...@apache.org>.
mkatanbaf commented on PR #13514:
URL: https://github.com/apache/tvm/pull/13514#issuecomment-1341412293

   cc @areusch @guberti @zxybazh 




[GitHub] [tvm] masahi commented on a diff in pull request #13514: [microTVM] tuning on micro targets with meta-schedule

Posted by GitBox <gi...@apache.org>.
masahi commented on code in PR #13514:
URL: https://github.com/apache/tvm/pull/13514#discussion_r1042911837


##########
src/meta_schedule/schedule_rule/schedule_rule.cc:
##########
@@ -251,6 +251,33 @@ Array<ScheduleRule> ScheduleRule::DefaultHexagon() {
   };
 }
 
+Array<ScheduleRule> ScheduleRule::DefaultMicro() {
+  return {
+      ScheduleRule::AutoInline(
+          /*into_producer=*/false,
+          /*into_consumer=*/true,
+          /*inline_const_tensor=*/true,
+          /*disallow_if_then_else=*/true,
+          /*require_injective=*/true,
+          /*require_ordered=*/true,
+          /*disallow_op=*/Array<String>{"tir.exp"}),
+      ScheduleRule::MultiLevelTilingWideVector(
+          /*structure=*/"SRSRS",
+          /*vector_length_in_bits=*/1024,

Review Comment:
   You probably copied this from Hexagon, but the normal `ScheduleRule::MultiLevelTiling` is what you want.





[GitHub] [tvm] areusch commented on a diff in pull request #13514: [microTVM] tuning on micro targets with meta-schedule

Posted by GitBox <gi...@apache.org>.
areusch commented on code in PR #13514:
URL: https://github.com/apache/tvm/pull/13514#discussion_r1060982279


##########
src/meta_schedule/schedule_rule/schedule_rule.cc:
##########
@@ -251,6 +251,34 @@ Array<ScheduleRule> ScheduleRule::DefaultHexagon() {
   };
 }
 
+Array<ScheduleRule> ScheduleRule::DefaultMicro() {
+  return {
+      ScheduleRule::AutoInline(
+          /*into_producer=*/false,
+          /*into_consumer=*/true,
+          /*inline_const_tensor=*/true,
+          /*disallow_if_then_else=*/true,
+          /*require_injective=*/true,
+          /*require_ordered=*/true,
+          /*disallow_op=*/Array<String>{"tir.exp"}),
+      ScheduleRule::MultiLevelTiling(
+          /*structure=*/"SSRSRS",
+          /*tile_binds=*/NullOpt,
+          /*max_innermost_factor=*/Integer(64),
+          /*vector_load_lens=*/NullOpt,
+          /*reuse_read=*/NullOpt,
+          /*reuse_write=*/
+          Map<String, ObjectRef>{{"req", String("may")},
+                                 {"levels", Array<Integer>{1, 2}},
+                                 {"scope", String("global")}}),
+      ScheduleRule::ParallelizeVectorizeUnroll(

Review Comment:
   Can you try excluding this schedule rule and see if that allows you to remove the `num-cores` Target attr?





[GitHub] [tvm] zxybazh commented on a diff in pull request #13514: [microTVM] tuning on micro targets with meta-schedule

Posted by GitBox <gi...@apache.org>.
zxybazh commented on code in PR #13514:
URL: https://github.com/apache/tvm/pull/13514#discussion_r1066423758


##########
src/meta_schedule/schedule_rule/schedule_rule.cc:
##########
@@ -251,6 +251,29 @@ Array<ScheduleRule> ScheduleRule::DefaultHexagon() {
   };
 }
 
+Array<ScheduleRule> ScheduleRule::DefaultMicro() {
+  return {
+      ScheduleRule::AutoInline(

Review Comment:
   Hi, after you add `ApplyCustomRule` you may also consider adding other rules from the current Hexagon set, such as `InlineConstantScalars`.





[GitHub] [tvm] mkatanbaf commented on a diff in pull request #13514: [microTVM] tuning on micro targets with meta-schedule

Posted by GitBox <gi...@apache.org>.
mkatanbaf commented on code in PR #13514:
URL: https://github.com/apache/tvm/pull/13514#discussion_r1065209255


##########
src/meta_schedule/schedule_rule/schedule_rule.cc:
##########
@@ -251,6 +251,29 @@ Array<ScheduleRule> ScheduleRule::DefaultHexagon() {
   };
 }
 
+Array<ScheduleRule> ScheduleRule::DefaultMicro() {
+  return {
+      ScheduleRule::AutoInline(

Review Comment:
   No, there was not. I copied these from the Hexagon rules a while ago, before `ApplyCustomRule` was added. I added the rule.





[GitHub] [tvm] mehrdadh commented on pull request #13514: [microTVM] tuning on micro targets with meta-schedule

Posted by GitBox <gi...@apache.org>.
mehrdadh commented on PR #13514:
URL: https://github.com/apache/tvm/pull/13514#issuecomment-1376209786

   LGTM, I'll wait for the others and then merge it.
   cc @guberti @zxybazh 




[GitHub] [tvm] mkatanbaf commented on a diff in pull request #13514: [microTVM] tuning on micro targets with meta-schedule

Posted by GitBox <gi...@apache.org>.
mkatanbaf commented on code in PR #13514:
URL: https://github.com/apache/tvm/pull/13514#discussion_r1060903615


##########
src/target/target_kind.cc:
##########
@@ -309,6 +309,7 @@ TVM_REGISTER_TARGET_KIND("c", kDLCPU)
     .add_attr_option<String>("march")
     .add_attr_option<Integer>("workspace-byte-alignment")
     .add_attr_option<Integer>("constants-byte-alignment")
+    .add_attr_option<Integer>("num-cores")

Review Comment:
   meta_schedule expects this attribute, see https://github.com/apache/tvm/blob/main/src/meta_schedule/utils.h#L384



##########
src/target/source/codegen_c_host.cc:
##########
@@ -54,6 +54,7 @@ void CodeGenCHost::Init(bool output_ssa, bool emit_asserts, bool emit_fwd_func_d
   decl_stream << "#include \"tvm/runtime/c_runtime_api.h\"\n";
   decl_stream << "#include \"tvm/runtime/c_backend_api.h\"\n";
   decl_stream << "#include <math.h>\n";
+  decl_stream << "#include <stdbool.h>\n";

Review Comment:
   Some of the generated projects declare a variable of type `bool` and fail to build if the `stdbool.h` header is not included.





[GitHub] [tvm] guberti commented on a diff in pull request #13514: [microTVM] tuning on micro targets with meta-schedule

Posted by GitBox <gi...@apache.org>.
guberti commented on code in PR #13514:
URL: https://github.com/apache/tvm/pull/13514#discussion_r1043299575


##########
python/tvm/contrib/micro/meta_schedule/local_builder_micro.py:
##########
@@ -0,0 +1,83 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Local builder for microTVM projects.that compile on the local host"""

Review Comment:
   nit:
   ```suggestion
   """Local builder for microTVM projects that compile on the local host."""
   ```



##########
python/tvm/contrib/micro/meta_schedule/local_builder_micro.py:
##########
@@ -0,0 +1,83 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Local builder for microTVM projects.that compile on the local host"""
+
+import os
+import tempfile
+from typing import Optional, Dict
+from tvm.ir import IRModule
+from tvm.runtime import NDArray
+from tvm.target import Target
+from tvm.meta_schedule.builder import LocalBuilder
+from tvm.driver.build_module import OperatorModule
+from tvm import micro
+from tvm.contrib.tar import tar
+from tvm.relay.backend import Runtime
+from tvm.driver import build as tvm_build
+from tvm.tir.transform import RemoveWeightLayoutRewriteBlock
+
+
+def get_micro_local_builder():
+    """Return micro-compatible Builder for meta schedule."""
+
+    def micro_build(
+        mod: IRModule, target: Target, _params: Optional[Dict[str, NDArray]]
+    ) -> OperatorModule:
+        """build function for micro targets.
+
+        Parameters
+        ----------
+        mod : IRModule
+            The IRModule to be built.
+        target : Target
+            The target to be built.
+        _params : Optional[Dict[str, NDArray]]
+            The parameters to be used for the build. Must be None.
+
+        Returns
+        -------
+        rt_mod : OperatorModule
+            The built Module.
+        """
+
+        # Note: changing the global symbol is necessary for micro targets,
+        # since the generated projects already include a main function.
+        prim_func = mod["main"].with_attr("global_symbol", "default_function")

Review Comment:
   What is the global symbol? I assume it's a function name, but for what?



##########
python/tvm/contrib/micro/meta_schedule/rpc_runner_micro.py:
##########
@@ -0,0 +1,243 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""RPC Runner Micro"""
+
+from contextlib import contextmanager
+from typing import Callable, List, Optional
+from collections import namedtuple
+import signal
+
+from tvm import micro
+from tvm import nd
+from tvm.contrib.popen_pool import PopenPoolExecutor
+from tvm.rpc.server import Server
+from tvm.rpc.tracker import Tracker
+from tvm.meta_schedule.logging import get_logger
+from tvm.meta_schedule.utils import (
+    cpu_count,
+    derived_object,
+)

Review Comment:
   ```suggestion
   from tvm.meta_schedule.utils import cpu_count, derived_object
   ```



##########
python/tvm/contrib/micro/meta_schedule/local_builder_micro.py:
##########
@@ -0,0 +1,83 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Local builder for microTVM projects.that compile on the local host"""
+
+import os
+import tempfile
+from typing import Optional, Dict
+from tvm.ir import IRModule
+from tvm.runtime import NDArray
+from tvm.target import Target
+from tvm.meta_schedule.builder import LocalBuilder
+from tvm.driver.build_module import OperatorModule
+from tvm import micro
+from tvm.contrib.tar import tar
+from tvm.relay.backend import Runtime
+from tvm.driver import build as tvm_build
+from tvm.tir.transform import RemoveWeightLayoutRewriteBlock
+
+
+def get_micro_local_builder():
+    """Return micro-compatible Builder for meta schedule."""
+
+    def micro_build(
+        mod: IRModule, target: Target, _params: Optional[Dict[str, NDArray]]
+    ) -> OperatorModule:
+        """build function for micro targets.

Review Comment:
   nit:
   ```suggestion
           """Build function for micro targets.
   ```



##########
python/tvm/contrib/micro/meta_schedule/local_builder_micro.py:
##########
@@ -0,0 +1,83 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Local builder for microTVM projects.that compile on the local host"""
+
+import os
+import tempfile
+from typing import Optional, Dict
+from tvm.ir import IRModule
+from tvm.runtime import NDArray
+from tvm.target import Target
+from tvm.meta_schedule.builder import LocalBuilder
+from tvm.driver.build_module import OperatorModule
+from tvm import micro
+from tvm.contrib.tar import tar
+from tvm.relay.backend import Runtime
+from tvm.driver import build as tvm_build
+from tvm.tir.transform import RemoveWeightLayoutRewriteBlock
+
+
+def get_micro_local_builder():
+    """Return micro-compatible Builder for meta schedule."""
+
+    def micro_build(
+        mod: IRModule, target: Target, _params: Optional[Dict[str, NDArray]]
+    ) -> OperatorModule:
+        """build function for micro targets.
+
+        Parameters
+        ----------
+        mod : IRModule
+            The IRModule to be built.
+        target : Target
+            The target to be built.
+        _params : Optional[Dict[str, NDArray]]
+            The parameters to be used for the build. Must be None.
+
+        Returns
+        -------
+        rt_mod : OperatorModule
+            The built Module.
+        """
+
+        # Note: changing the global symbol is necessary for micro targets,
+        # since the generated projects already include a main function.
+        prim_func = mod["main"].with_attr("global_symbol", "default_function")
+        mod = IRModule({"main": prim_func})
+        runtime = Runtime("crt", {"system-lib": True})
+        mod = RemoveWeightLayoutRewriteBlock(skip_ndarray_rewrite=True)(mod)
+        rt_mod = tvm_build(mod, target=target, runtime=runtime)
+        return rt_mod
+
+    def micro_export(mod: OperatorModule) -> str:
+        """export function for micro targets.

Review Comment:
   nit:
   ```suggestion
           """Export function for micro targets.
   ```



##########
python/tvm/contrib/micro/meta_schedule/local_builder_micro.py:
##########
@@ -0,0 +1,83 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Local builder for microTVM projects.that compile on the local host"""
+
+import os
+import tempfile
+from typing import Optional, Dict
+from tvm.ir import IRModule
+from tvm.runtime import NDArray
+from tvm.target import Target
+from tvm.meta_schedule.builder import LocalBuilder
+from tvm.driver.build_module import OperatorModule
+from tvm import micro
+from tvm.contrib.tar import tar
+from tvm.relay.backend import Runtime
+from tvm.driver import build as tvm_build
+from tvm.tir.transform import RemoveWeightLayoutRewriteBlock
+
+
+def get_micro_local_builder():

Review Comment:
   Since `micro_build` and `micro_export` don't use local variables from `get_micro_local_builder`, I'd prefer to have them as module-level functions (e.g. `_micro_build` and `_micro_export`).





[GitHub] [tvm] mkatanbaf commented on a diff in pull request #13514: [microTVM] tuning on micro targets with meta-schedule

Posted by GitBox <gi...@apache.org>.
mkatanbaf commented on code in PR #13514:
URL: https://github.com/apache/tvm/pull/13514#discussion_r1044938105


##########
python/tvm/contrib/micro/meta_schedule/rpc_runner_micro.py:
##########
@@ -0,0 +1,243 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""RPC Runner Micro"""
+
+from contextlib import contextmanager
+from typing import Callable, List, Optional
+from collections import namedtuple
+import signal
+
+from tvm import micro
+from tvm import nd
+from tvm.contrib.popen_pool import PopenPoolExecutor
+from tvm.rpc.server import Server
+from tvm.rpc.tracker import Tracker
+from tvm.meta_schedule.logging import get_logger
+from tvm.meta_schedule.utils import (
+    cpu_count,
+    derived_object,
+)
+from tvm.meta_schedule.runner.config import EvaluatorConfig, RPCConfig
+from tvm.meta_schedule.runner import PyRunner, RunnerFuture, RunnerInput
+from tvm.meta_schedule.runner.rpc_runner import RPCRunnerFuture
+from tvm.meta_schedule.runner.utils import T_ARG_INFO_JSON_OBJ_LIST
+
+logger = get_logger(__name__)  # pylint: disable=invalid-name
+
+
+@derived_object
+class RPCRunnerMicro(PyRunner):
+    """RPC based runner for tuning micro models."""
+
+    def __init__(
+        self,
+        platform: str = "crt",
+        project_options: Optional[dict] = None,
+        rpc_config: Optional[RPCConfig] = None,
+        evaluator_config: Optional[EvaluatorConfig] = None,
+        max_workers: Optional[int] = None,
+        initializer: Optional[Callable[[], None]] = None,
+    ) -> None:
+        """Constructor
+
+        Parameters
+        ----------
+        platform: str
+            The platform used for project generation.
+        project_options: dict
+            The options for the generated micro project.
+        rpc_config: RPCConfig
+            The rpc configuration.
+        evaluator_config: EvaluatorConfig
+            The evaluator configuration.
+        max_workers: Optional[int] = None
+            The maximum number of connections. Defaults to number of logical CPU cores.
+        initializer: Optional[Callable[[], None]]
+            The initializer function.
+        """
+        super().__init__()
+        self.platform = platform
+        if project_options is None:
+            project_options = {}
+        self.project_options = project_options
+        self.rpc_config = RPCConfig._normalized(rpc_config)
+        self.evaluator_config = EvaluatorConfig._normalized(evaluator_config)
+
+        if max_workers is None:
+            max_workers = cpu_count(logical=True)
+        logger.info("RPCRunner: max_workers = %d", max_workers)
+        self.pool = PopenPoolExecutor(
+            max_workers=max_workers,
+            timeout=rpc_config.session_timeout_sec,
+            initializer=initializer,
+        )
+
+    def run(self, runner_inputs: List[RunnerInput]) -> List[RunnerFuture]:
+        results: List[RunnerFuture] = []
+
+        for runner_input in runner_inputs:
+            future = RPCRunnerFuture(
+                future=self.pool.submit(
+                    _worker_func,
+                    self.platform,
+                    self.project_options or {},
+                    self.rpc_config,
+                    self.evaluator_config,
+                    str(runner_input.artifact_path),
+                    str(runner_input.device_type),
+                    tuple(arg_info.as_json() for arg_info in runner_input.args_info),
+                ),
+                timeout_sec=self.rpc_config.session_timeout_sec,
+            )
+            results.append(future)  # type: ignore
+        return results
+
+
+def _worker_func(
+    platform: str,
+    project_options: dict,
+    rpc_config: RPCConfig,
+    evaluator_config: EvaluatorConfig,
+    artifact_path: str,
+    device_type: str,
+    args_info: T_ARG_INFO_JSON_OBJ_LIST,
+) -> List[float]:
+
+    module_loader = micro.AutoTvmModuleLoader(
+        template_project_dir=micro.get_microtvm_template_projects(platform),
+        project_options=project_options,
+    )
+
+    remote_kw = {
+        "device_key": rpc_config.tracker_key,
+        "host": rpc_config.tracker_host,
+        "port": rpc_config.tracker_port,
+        "priority": 0,
+        "timeout": 100,
+    }
+    build_result = namedtuple("BuildResult", ["filename"])(artifact_path)
+
+    with module_loader(remote_kw, build_result) as (remote, mod):
+        dev = remote.device(device_type, 0)
+        f_prepare = ""
+        if evaluator_config.enable_cpu_cache_flush:
+            f_prepare = "cache_flush_cpu_non_first_arg"
+        time_f = mod.time_evaluator(
+            mod.entry_name,
+            dev,
+            number=evaluator_config.number,
+            repeat=evaluator_config.repeat,
+            min_repeat_ms=evaluator_config.min_repeat_ms,
+            f_preproc=f_prepare,
+        )
+
+        random_fill = remote.get_function("tvm.contrib.random.random_fill")
+        args = [nd.empty(x[2], x[1], dev) for x in args_info]
+        for arg in args:
+            random_fill(arg)
+        dev.sync()
+
+        costs = time_f(*args).results
+    return costs
+
+
+@contextmanager
+def get_rpc_runner_micro(
+    platform,
+    options,
+    session_timeout_sec: int = 300,
+    number: int = 3,
+    repeat: int = 1,
+    min_repeat_ms: int = 100,
+):
+    """Parameters
+    ----------
+    platform: str
+        The platform used for project generation.
+    options: dict
+        The options for the generated micro project.
+    session_timeout_sec: int
+        The session timeout. If the number of candidates sent to the runner is larger
+        than the number of runner workers, increase the timeout.
+    number: int
+        The number of times to run the evaluator function when taking the average.
+        These runs together count as one `repeat` of the measurement.
+    repeat: int
+        The number of times to repeat the measurement.
+        In total, the function will be invoked (1 + number x repeat) times,
+        where the first one is warm up and will be discarded.
+        The returned result contains `repeat` costs,
+        each of which is an average of `number` costs.
+    min_repeat_ms: int
+        Minimum repeat time in ms. If the execution latency is too short,
+        increase the number of runs to the given time (in ms) to reduce the measurement error.
+    """
+    tracker_host = "127.0.0.1"

Review Comment:
   I followed the example of AutoTVM, where the server and tracker info are hardcoded (see https://github.com/apache/tvm/blob/main/python/tvm/autotvm/measure/measure_methods.py#L475). I modified it as you suggested, though I'm not sure it has any practical use.





[GitHub] [tvm] zxybazh commented on a diff in pull request #13514: [microTVM] tuning on micro targets with meta-schedule

Posted by GitBox <gi...@apache.org>.
zxybazh commented on code in PR #13514:
URL: https://github.com/apache/tvm/pull/13514#discussion_r1042803967


##########
python/tvm/meta_schedule/relay_integration.py:
##########
@@ -97,13 +99,16 @@ def _normalize_params(
     if executor is None:
         executor = relay.backend.Executor("graph")
 
+    if runtime is None:
+        runtime = relay.backend.Runtime("cpp")

Review Comment:
   May I ask whether this change is necessary here, and whether it impacts other target usage? If so, can we add a comment explaining that this default applies to the cpp runtime only?



##########
python/tvm/meta_schedule/relay_integration.py:
##########
@@ -385,7 +401,9 @@ def compile_relay(
                 config=pass_config,
             ):
                 if backend == "graph":
-                    return relay.build(mod, target=target, params=params, executor=executor)
+                    return relay.build(
+                        mod, target=target, params=params, executor=executor, runtime=runtime

Review Comment:
   Would this have any impact on non-cpp builds? Asking because it seems that after parameter normalization the runtime would be set to `relay.backend.Runtime("cpp")`.
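   To spell out the two paths I'm thinking of (sketch only):

   ```python
   from tvm.relay.backend import Runtime

   # Non-micro callers that pass no runtime: normalization falls back to the same
   # default that relay.build itself uses, so behavior should be unchanged.
   default_runtime = Runtime("cpp")

   # Micro callers opt in explicitly, e.g. the test in this PR builds with:
   crt_runtime = Runtime("crt", {"system-lib": True})
   ```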



##########
python/tvm/contrib/micro/meta_schedule/test_autotune_ms.py:
##########
@@ -0,0 +1,181 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+import numpy as np
+import pytest
+from types import MappingProxyType
+import pathlib
+import json
+
+import tvm
+from tvm import relay
+from tvm.relay.backend import Executor
+from tvm.contrib import graph_executor, utils
+from tvm import meta_schedule as ms
+from tvm.contrib.micro.meta_schedule.local_builder_micro import get_micro_local_builder
+from tvm.contrib.micro.meta_schedule.rpc_runner_micro import get_rpc_runner_micro
+
+
+def get_module():
+    data_shape = (1, 3, 16, 16)
+    weight_shape = (8, 3, 5, 5)
+    data = relay.var("data", relay.TensorType(data_shape, "float32"))
+    weight = relay.var("weight", relay.TensorType(weight_shape, "float32"))
+    y = relay.nn.conv2d(
+        data,
+        weight,
+        padding=(2, 2),
+        kernel_size=(5, 5),
+        kernel_layout="OIHW",
+        out_dtype="float32",
+    )
+    f = relay.Function([data, weight], y)
+    mod = tvm.IRModule.from_expr(f)
+    mod = relay.transform.InferType()(mod)
+
+    weight_sample = np.random.rand(
+        weight_shape[0], weight_shape[1], weight_shape[2], weight_shape[3]
+    ).astype("float32")
+    params = {mod["main"].params[1].name_hint: weight_sample}
+
+    model_info = {
+        "in_tensor": "data",
+        "in_shape": data_shape,
+        "in_dtype": "float32",
+    }
+
+    return mod, params, model_info
+
+
+@tvm.testing.requires_micro
+@pytest.mark.parametrize(
+    "platform, options",
+    [
+        pytest.param("crt", None),
+        pytest.param(
+            "zephyr",
+            {
+                "board": "qemu_x86",
+                "project_type": "host_driven",
+            },
+        ),
+    ],
+)
+def test_micro_tuning_with_meta_schedule(platform, options):
+    if platform == "crt":
+        target = tvm.target.target.micro(model="host", options="-num-cores=1")
+    else:
+        boards_file = (
+            pathlib.Path(tvm.micro.get_microtvm_template_projects("zephyr")) / "boards.json"
+        )
+        with open(boards_file) as f:
+            boards = json.load(f)
+        target = tvm.target.target.micro(
+            model=boards[options["board"]]["model"], options="-mcpu=cortex-m4 -num-cores=1"
+        )
+
+    work_dir = utils.tempdir()
+    mod, params, model_info = get_module()
+    input_name = model_info["in_tensor"]
+    input_shape = model_info["in_shape"]
+    input_dtype = model_info["in_dtype"]
+    data_sample = np.random.rand(*input_shape).astype(input_dtype)
+
+    runtime = relay.backend.Runtime("crt", {"system-lib": True})
+    executor = Executor("aot", {"link-params": True})
+    # This line is necessary for link-params to take effect during
+    # task extraction and relay.build(...).
+    mod = mod.with_attr("executor", executor)
+
+    builder = get_micro_local_builder()
+    with get_rpc_runner_micro(
+        platform=platform, options=options, session_timeout_sec=120
+    ) as runner:
+        with ms.Profiler() as profiler:
+            db: tvm.runtime.Module = ms.relay_integration.tune_relay(

Review Comment:
   db should be a MetaSchedule Database, right?



##########
python/tvm/contrib/micro/meta_schedule/rpc_runner_micro.py:
##########
@@ -0,0 +1,243 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""RPC Runner Micro"""
+
+from contextlib import contextmanager
+from typing import Callable, List, Optional
+from collections import namedtuple
+import signal
+
+from tvm import micro
+from tvm import nd
+from tvm.contrib.popen_pool import PopenPoolExecutor
+from tvm.rpc.server import Server
+from tvm.rpc.tracker import Tracker
+from tvm.meta_schedule.logging import get_logger
+from tvm.meta_schedule.utils import (
+    cpu_count,
+    derived_object,
+)
+from tvm.meta_schedule.runner.config import EvaluatorConfig, RPCConfig
+from tvm.meta_schedule.runner import PyRunner, RunnerFuture, RunnerInput
+from tvm.meta_schedule.runner.rpc_runner import RPCRunnerFuture
+from tvm.meta_schedule.runner.utils import T_ARG_INFO_JSON_OBJ_LIST
+
+logger = get_logger(__name__)  # pylint: disable=invalid-name
+
+
+@derived_object
+class RPCRunnerMicro(PyRunner):
+    """RPC based runner for tuning micro models."""
+
+    def __init__(
+        self,
+        platform: str = "crt",
+        project_options: Optional[dict] = None,
+        rpc_config: Optional[RPCConfig] = None,
+        evaluator_config: Optional[EvaluatorConfig] = None,
+        max_workers: Optional[int] = None,
+        initializer: Optional[Callable[[], None]] = None,
+    ) -> None:
+        """Constructor
+
+        Parameters
+        ----------
+        platform: str
+            The platform used for project generation.
+        project_options: dict
+            The options for the generated micro project.
+        rpc_config: RPCConfig
+            The rpc configuration.
+        evaluator_config: EvaluatorConfig
+            The evaluator configuration.
+        max_workers: Optional[int] = None
+            The maximum number of connections. Defaults to number of logical CPU cores.
+        initializer: Optional[Callable[[], None]]
+            The initializer function.
+        """
+        super().__init__()
+        self.platform = platform
+        if project_options is None:
+            project_options = {}
+        self.project_options = project_options
+        self.rpc_config = RPCConfig._normalized(rpc_config)
+        self.evaluator_config = EvaluatorConfig._normalized(evaluator_config)
+
+        if max_workers is None:
+            max_workers = cpu_count(logical=True)
+        logger.info("RPCRunner: max_workers = %d", max_workers)
+        self.pool = PopenPoolExecutor(
+            max_workers=max_workers,
+            timeout=rpc_config.session_timeout_sec,
+            initializer=initializer,
+        )
+
+    def run(self, runner_inputs: List[RunnerInput]) -> List[RunnerFuture]:
+        results: List[RunnerFuture] = []
+
+        for runner_input in runner_inputs:
+            future = RPCRunnerFuture(
+                future=self.pool.submit(
+                    _worker_func,
+                    self.platform,
+                    self.project_options or {},
+                    self.rpc_config,
+                    self.evaluator_config,
+                    str(runner_input.artifact_path),
+                    str(runner_input.device_type),
+                    tuple(arg_info.as_json() for arg_info in runner_input.args_info),
+                ),
+                timeout_sec=self.rpc_config.session_timeout_sec,
+            )
+            results.append(future)  # type: ignore
+        return results
+
+
+def _worker_func(
+    platform: str,
+    project_options: dict,
+    rpc_config: RPCConfig,
+    evaluator_config: EvaluatorConfig,
+    artifact_path: str,
+    device_type: str,
+    args_info: T_ARG_INFO_JSON_OBJ_LIST,
+) -> List[float]:
+
+    module_loader = micro.AutoTvmModuleLoader(
+        template_project_dir=micro.get_microtvm_template_projects(platform),
+        project_options=project_options,
+    )
+
+    remote_kw = {
+        "device_key": rpc_config.tracker_key,
+        "host": rpc_config.tracker_host,
+        "port": rpc_config.tracker_port,
+        "priority": 0,
+        "timeout": 100,
+    }
+    build_result = namedtuple("BuildResult", ["filename"])(artifact_path)
+
+    with module_loader(remote_kw, build_result) as (remote, mod):
+        dev = remote.device(device_type, 0)
+        f_prepare = ""
+        if evaluator_config.enable_cpu_cache_flush:
+            f_prepare = "cache_flush_cpu_non_first_arg"
+        time_f = mod.time_evaluator(
+            mod.entry_name,
+            dev,
+            number=evaluator_config.number,
+            repeat=evaluator_config.repeat,
+            min_repeat_ms=evaluator_config.min_repeat_ms,
+            f_preproc=f_prepare,
+        )
+
+        random_fill = remote.get_function("tvm.contrib.random.random_fill")
+        args = [nd.empty(x[2], x[1], dev) for x in args_info]
+        for arg in args:
+            random_fill(arg)
+        dev.sync()
+
+        costs = time_f(*args).results
+    return costs
+
+
+@contextmanager
+def get_rpc_runner_micro(
+    platform,
+    options,
+    session_timeout_sec: int = 300,
+    number: int = 3,
+    repeat: int = 1,
+    min_repeat_ms: int = 100,
+):
+    """Parameters
+    ----------
+    platform: str
+        The platform used for project generation.
+    options: dict
+        The options for the generated micro project.
+    session_timeout_sec: int
+        The session timeout. If the number of candidates sent to the runner is larger
+        than the number of runner workers, increase the timeout.
+    number: int
+        The number of times to run the evaluator function when taking the average.
+        These runs together count as one `repeat` of the measurement.
+    repeat: int
+        The number of times to repeat the measurement.
+        In total, the function will be invoked (1 + number x repeat) times,
+        where the first one is warm up and will be discarded.
+        The returned result contains `repeat` costs,
+        each of which is an average of `number` costs.
+    min_repeat_ms: int
+        Minimum repeat time in ms. If the execution latency is too short,
+        increase the number of runs to the given time (in ms) to reduce the measurement error.
+    """
+    tracker_host = "127.0.0.1"

Review Comment:
   Not sure if hardcoding the tracker information here is a good idea. What about using it as the default of another argument, `rpc_config`, for the function?
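   Something along these lines, perhaps (a rough sketch only; the `rpc_config` argument and the fallback values are illustrative, and the `RPCConfig` field names are the ones already used in the diff):

   ```python
   from contextlib import contextmanager
   from typing import Optional

   from tvm.meta_schedule.runner.config import RPCConfig


   @contextmanager
   def get_rpc_runner_micro(platform, options, rpc_config: Optional[RPCConfig] = None):
       # Fall back to a local tracker only when the caller did not supply a config.
       if rpc_config is None:
           rpc_config = RPCConfig(
               tracker_host="127.0.0.1",  # placeholder defaults, not the merged values
               tracker_port=9000,
               tracker_key="p1",
               session_timeout_sec=300,
           )
       # ... start the Tracker/Server and yield an RPCRunnerMicro as in the diff.
       yield rpc_config
   ```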





[GitHub] [tvm] areusch commented on a diff in pull request #13514: [microTVM] tuning on micro targets with meta-schedule

Posted by GitBox <gi...@apache.org>.
areusch commented on code in PR #13514:
URL: https://github.com/apache/tvm/pull/13514#discussion_r1060800717


##########
python/tvm/rpc/server.py:
##########
@@ -319,6 +319,8 @@ def __init__(
         load_library=None,
         custom_addr=None,
         silent=False,
+        reuse_addr=False,

Review Comment:
   I think we pretty much always want `reuse_addr`; you could just make it the default.
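   For context, this is what the flag amounts to at the socket level (plain-socket sketch only, assuming `reuse_addr` maps to `SO_REUSEADDR`), which is why a restarted server can rebind a port that is still in TIME_WAIT:

   ```python
   import socket

   # With SO_REUSEADDR set, binding the same host/port again right after a
   # previous server exited succeeds instead of raising "Address already in use".
   sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
   sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
   sock.bind(("127.0.0.1", 9091))
   sock.listen(1)
   sock.close()
   ```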



##########
src/target/target_kind.cc:
##########
@@ -309,6 +309,7 @@ TVM_REGISTER_TARGET_KIND("c", kDLCPU)
     .add_attr_option<String>("march")
     .add_attr_option<Integer>("workspace-byte-alignment")
     .add_attr_option<Integer>("constants-byte-alignment")
+    .add_attr_option<Integer>("num-cores")

Review Comment:
   what's this for?



##########
python/tvm/contrib/micro/meta_schedule/local_builder_micro.py:
##########
@@ -0,0 +1,84 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Local builder for microTVM projects that compile on the local host"""
+
+import os
+import tempfile
+from typing import Optional, Dict
+from tvm.ir import IRModule
+from tvm.runtime import NDArray
+from tvm.target import Target
+from tvm.meta_schedule.builder import LocalBuilder
+from tvm.driver.build_module import OperatorModule
+from tvm import micro
+from tvm.contrib.tar import tar
+from tvm.relay.backend import Runtime
+from tvm.driver import build as tvm_build
+from tvm.tir.transform import RemoveWeightLayoutRewriteBlock
+
+
+def get_micro_local_builder():
+    """Return micro-compatible Builder for meta schedule."""
+
+    def _micro_build(
+        mod: IRModule, target: Target, _params: Optional[Dict[str, NDArray]]
+    ) -> OperatorModule:
+        """Build function for micro targets.
+
+        Parameters
+        ----------
+        mod : IRModule
+            The IRModule to be built.
+        target : Target
+            The target to be built.
+        _params : Optional[Dict[str, NDArray]]
+            The parameters to be used for the build. Must be None.
+
+        Returns
+        -------
+        rt_mod : OperatorModule
+            The built Module.
+        """
+
+        # Note: tvm_build assigns "global_symbol" to the name of generated C function
+        # changing it is necessary for micro targets,
+        # since the generated projects already include a main function.
+        prim_func = mod["main"].with_attr("global_symbol", "default_function")
+        mod = IRModule({"main": prim_func})
+        runtime = Runtime("crt", {"system-lib": True})
+        mod = RemoveWeightLayoutRewriteBlock(skip_ndarray_rewrite=True)(mod)
+        rt_mod = tvm_build(mod, target=target, runtime=runtime)

Review Comment:
   Just to confirm: are the relay.build changes needed in this PR? If not, could we remove them until we figure out how to wrap a TIR function in a Relay function?



##########
src/target/source/codegen_c_host.cc:
##########
@@ -54,6 +54,7 @@ void CodeGenCHost::Init(bool output_ssa, bool emit_asserts, bool emit_fwd_func_d
   decl_stream << "#include \"tvm/runtime/c_runtime_api.h\"\n";
   decl_stream << "#include \"tvm/runtime/c_backend_api.h\"\n";
   decl_stream << "#include <math.h>\n";
+  decl_stream << "#include <stdbool.h>\n";

Review Comment:
   why this change?





[GitHub] [tvm] tvm-bot commented on pull request #13514: [microTVM] micro tuning with meta-schedule

Posted by GitBox <gi...@apache.org>.
tvm-bot commented on PR #13514:
URL: https://github.com/apache/tvm/pull/13514#issuecomment-1331222051

   
   Thanks for contributing to TVM! Please refer to the contributing guidelines https://tvm.apache.org/docs/contribute/ for useful information and tips. Please request code reviews from [Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers) by @-ing them in a comment.
   
   
   
   Generated by [tvm-bot](https://github.com/apache/tvm/blob/main/ci/README.md#github-actions)




[GitHub] [tvm] mkatanbaf commented on a diff in pull request #13514: [microTVM] tuning on micro targets with meta-schedule

Posted by GitBox <gi...@apache.org>.
mkatanbaf commented on code in PR #13514:
URL: https://github.com/apache/tvm/pull/13514#discussion_r1063873557


##########
python/tvm/contrib/micro/meta_schedule/test_autotune_ms.py:
##########
@@ -0,0 +1,181 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+import numpy as np
+import pytest
+from types import MappingProxyType
+import pathlib
+import json
+
+import tvm
+from tvm import relay
+from tvm.relay.backend import Executor
+from tvm.contrib import graph_executor, utils
+from tvm import meta_schedule as ms
+from tvm.contrib.micro.meta_schedule.local_builder_micro import get_micro_local_builder
+from tvm.contrib.micro.meta_schedule.rpc_runner_micro import get_rpc_runner_micro
+
+
+def get_module():
+    data_shape = (1, 3, 16, 16)
+    weight_shape = (8, 3, 5, 5)
+    data = relay.var("data", relay.TensorType(data_shape, "float32"))
+    weight = relay.var("weight", relay.TensorType(weight_shape, "float32"))
+    y = relay.nn.conv2d(
+        data,
+        weight,
+        padding=(2, 2),
+        kernel_size=(5, 5),
+        kernel_layout="OIHW",
+        out_dtype="float32",
+    )
+    f = relay.Function([data, weight], y)
+    mod = tvm.IRModule.from_expr(f)
+    mod = relay.transform.InferType()(mod)
+
+    weight_sample = np.random.rand(
+        weight_shape[0], weight_shape[1], weight_shape[2], weight_shape[3]
+    ).astype("float32")
+    params = {mod["main"].params[1].name_hint: weight_sample}
+
+    model_info = {
+        "in_tensor": "data",
+        "in_shape": data_shape,
+        "in_dtype": "float32",
+    }
+
+    return mod, params, model_info
+
+
+@tvm.testing.requires_micro
+@pytest.mark.parametrize(
+    "platform, options",
+    [
+        pytest.param("crt", None),
+        pytest.param(
+            "zephyr",
+            {
+                "board": "qemu_x86",
+                "project_type": "host_driven",
+            },
+        ),
+    ],
+)
+def test_micro_tuning_with_meta_schedule(platform, options):
+    if platform == "crt":
+        target = tvm.target.target.micro(model="host")
+    else:
+        boards_file = (
+            pathlib.Path(tvm.micro.get_microtvm_template_projects("zephyr")) / "boards.json"
+        )
+        with open(boards_file) as f:
+            boards = json.load(f)
+        target = tvm.target.target.micro(
+            model=boards[options["board"]]["model"], options="-mcpu=cortex-m4"

Review Comment:
   I removed `RewriteLayout()` from the postprocessing steps to fix this.





[GitHub] [tvm] mkatanbaf commented on pull request #13514: [microTVM] tuning on micro targets with meta-schedule

Posted by GitBox <gi...@apache.org>.
mkatanbaf commented on PR #13514:
URL: https://github.com/apache/tvm/pull/13514#issuecomment-1376138088

   @areusch @mehrdadh @zxybazh Could you please take another look?




[GitHub] [tvm] mehrdadh commented on a diff in pull request #13514: [microTVM] tuning on micro targets with meta-schedule

Posted by GitBox <gi...@apache.org>.
mehrdadh commented on code in PR #13514:
URL: https://github.com/apache/tvm/pull/13514#discussion_r1065044105


##########
tests/python/unittest/test_micro_ms_tuning.py:
##########
@@ -0,0 +1,126 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+import numpy as np
+import pytest
+from types import MappingProxyType
+import pathlib
+import json
+from tests.micro.zephyr.test_ms_tuning import create_relay_module
+import tvm
+from tvm import relay
+from tvm.relay.backend import Executor
+from tvm.contrib import graph_executor, utils
+from tvm import meta_schedule as ms
+from tvm.contrib.micro.meta_schedule.local_builder_micro import get_local_builder_micro
+from tvm.contrib.micro.meta_schedule.rpc_runner_micro import get_rpc_runner_micro
+
+
+def test_micro_tuning_with_meta_schedule():
+    platform = "crt"
+    target = tvm.target.target.micro(model="host")
+    options = {}
+
+    work_dir = utils.tempdir()
+    mod, params, model_info = create_relay_module()
+    input_name = model_info["in_tensor"]
+    input_shape = model_info["in_shape"]
+    input_dtype = model_info["in_dtype"]
+    data_sample = np.random.rand(*input_shape).astype(input_dtype)
+
+    runtime = relay.backend.Runtime("crt", {"system-lib": True})
+    executor = Executor("aot", {"link-params": True})
+    # This line is necessary for link-params to take effect during
+    # task extraction and relay.build(...).
+    mod = mod.with_attr("executor", executor)
+
+    builder = get_local_builder_micro()
+
+    with ms.Profiler() as profiler:
+        with get_rpc_runner_micro(
+            platform=platform, options=options, session_timeout_sec=120
+        ) as runner:
+            db: ms.Database = ms.relay_integration.tune_relay(
+                mod=mod,
+                params=params,
+                target=target,
+                builder=builder,
+                runner=runner,
+                strategy="evolutionary",
+                num_trials_per_iter=2,
+                max_trials_per_task=10,
+                max_trials_global=100,
+                work_dir=str(work_dir),
+                module_equality="ignore-ndarray",
+            )
+
+        #  Build model using meta_schedule logs
+        ms_mod: tvm.runtime.Module = ms.relay_integration.compile_relay(
+            database=db,
+            mod=mod,
+            target=target,
+            params=params,
+            pass_config=MappingProxyType(
+                {
+                    "relay.backend.use_meta_schedule": True,
+                    "relay.backend.tir_converter": "default",
+                    "tir.disable_vectorize": True,
+                }
+            ),
+            executor=executor,
+            runtime=runtime,
+        )
+    print(profiler.table())
+
+    project = tvm.micro.generate_project(
+        str(tvm.micro.get_microtvm_template_projects(platform)),
+        ms_mod,
+        str(work_dir / "project"),
+        options=options,
+    )
+    project.build()
+    project.flash()
+    with tvm.micro.Session(project.transport()) as session:
+        aot_executor = tvm.runtime.executor.aot_executor.AotModule(session.create_aot_executor())
+        aot_executor.get_input(0).copyfrom(data_sample)
+        result = aot_executor.module.time_evaluator("run", session.device, number=3)()
+        output = aot_executor.get_output(0).numpy()
+
+    # Build reference model (without tuning)
+    dev = tvm.cpu()
+    target = tvm.target.target.micro(model="host")
+    with tvm.transform.PassContext(
+        opt_level=3, config={"tir.disable_vectorize": True}, disabled_pass=["AlterOpLayout"]
+    ):
+        ref_mod = relay.build(
+            mod,
+            target=target,
+            params=params,
+            runtime=runtime,
+        )
+    ref_mod.export_library(work_dir / "compiled_lib2.so")
+    mod2: tvm.runtime.Module = tvm.runtime.load_module(work_dir / "compiled_lib2.so")
+    graph_mod = graph_executor.GraphModule(mod2["default"](dev))
+    graph_mod.set_input(input_name, data_sample)
+    graph_mod.run()
+    ref_output = graph_mod.get_output(0).numpy()
+
+    assert np.allclose(output, ref_output, rtol=1e-4, atol=2e-4), "FAILED"
+    work_dir.remove()
+
+
+if __name__ == "__main__":
+    test_micro_tuning_with_meta_schedule()

Review Comment:
   nit: `tvm.testing.main()`
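   i.e. the usual entry point for TVM test files:

   ```python
   import tvm.testing

   if __name__ == "__main__":
       tvm.testing.main()
   ```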





[GitHub] [tvm] mehrdadh merged pull request #13514: [microTVM] tuning on micro targets with meta-schedule

Posted by GitBox <gi...@apache.org>.
mehrdadh merged PR #13514:
URL: https://github.com/apache/tvm/pull/13514




[GitHub] [tvm] mkatanbaf commented on a diff in pull request #13514: [microTVM] tuning on micro targets with meta-schedule

Posted by GitBox <gi...@apache.org>.
mkatanbaf commented on code in PR #13514:
URL: https://github.com/apache/tvm/pull/13514#discussion_r1044936385


##########
src/meta_schedule/schedule_rule/schedule_rule.cc:
##########
@@ -251,6 +251,33 @@ Array<ScheduleRule> ScheduleRule::DefaultHexagon() {
   };
 }
 
+Array<ScheduleRule> ScheduleRule::DefaultMicro() {
+  return {
+      ScheduleRule::AutoInline(
+          /*into_producer=*/false,
+          /*into_consumer=*/true,
+          /*inline_const_tensor=*/true,
+          /*disallow_if_then_else=*/true,
+          /*require_injective=*/true,
+          /*require_ordered=*/true,
+          /*disallow_op=*/Array<String>{"tir.exp"}),
+      ScheduleRule::MultiLevelTilingWideVector(
+          /*structure=*/"SRSRS",
+          /*vector_length_in_bits=*/1024,

Review Comment:
   Thanks for the comment. I updated the rule.
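   For reference, the plain multi-level tiling rule as exposed through the Python bindings; the structure string and innermost factor here are illustrative CPU-style values, not necessarily the exact ones used in `DefaultMicro`:

   ```python
   from tvm.meta_schedule.schedule_rule import MultiLevelTiling

   # Standard multi-level tiling, without the wide-vector variant.
   rule = MultiLevelTiling(
       structure="SSRSRS",
       max_innermost_factor=64,
   )
   ```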





[GitHub] [tvm] mkatanbaf commented on pull request #13514: [microTVM] tuning on micro targets with meta-schedule

Posted by GitBox <gi...@apache.org>.
mkatanbaf commented on PR #13514:
URL: https://github.com/apache/tvm/pull/13514#issuecomment-1344957393

   @tvm-bot rerun

