Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2022/08/05 08:40:11 UTC

[GitHub] [tvm] manupa-arm commented on a diff in pull request #12087: [UMA] UMA v1.0

manupa-arm commented on code in PR #12087:
URL: https://github.com/apache/tvm/pull/12087#discussion_r938577024


##########
apps/uma/_template/strategies.py:
##########
@@ -0,0 +1,17 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Strategies for the my_ai_hw accelerator"""

Review Comment:
   Maybe it would be better to write some explanation (in the absence of an example) about what should go here.



##########
python/tvm/relay/backend/contrib/uma/api/codegen.py:
##########
@@ -0,0 +1,53 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Codegen base class of the Universal Modular Accelerator Interface (UMA)"""
+
+from typing import Callable, Optional
+import tvm
+
+
+class UMACodegen(object):
+    """
+    Codegen base class of the Universal Modular Accelerator Interface (UMA)
+    """
+
+    def __init__(self, target_name: str) -> None:
+        self.target_name = target_name
+
+    def _register_codegen(self, fmt: str = "c", **kwargs) -> None:
+        if fmt == "c":
+            self._register_c_codegen(**kwargs)
+        else:
+            raise RuntimeError(f'Unsupported codegen format "{fmt}"')
+
+    def _register_c_codegen(self, includes: Optional[Callable[[], str]] = None) -> None:
+        """Registration of UMA helper functions, e.g. includes and replace_call_extern.
+
+        Parameters
+        ----------
+        includes : Optional[Callable[[], str]]
+            User-defined function that adds C ``#include`` statements to the UMA C code.
+        """
+        if includes is not None:

Review Comment:
   It seems like "includes" is an important option.
   Would it be better to declare it separately from kwargs?
   
   Also, following the code, it does not look like `includes` is actually optional. See the comment in the other place.



##########
include/tvm/relay/transform.h:
##########
@@ -509,6 +509,7 @@ TVM_DLL Pass SimplifyExpr();
  *
  * \param config All available targets.
  *
+ *

Review Comment:
   nit: was this intentional?



##########
python/tvm/relay/backend/contrib/uma/api/lower.py:
##########
@@ -0,0 +1,159 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Lowering base class of the Universal Modular Accelerator Interface (UMA)"""
+
+from typing import List, Tuple, Callable, Optional
+
+import tvm
+from tvm import relay, te
+from tvm.relay.op.op import register_strategy
+from . import _ffi_api
+from .utils import PassPhase
+
+
+class UMALower:
+    """Lowering base class of the Universal Modular Accelerator Interface (UMA)."""
+
+    def __init__(self, target_name: str) -> None:
+        self.target_name = target_name
+
+        self._operator_strategies: List[
+            Tuple[
+                str,
+                Callable[
+                    [tvm.ir.Attrs, tvm.ir.Array, tvm.ir.TensorType, tvm.target.Target],
+                    tvm.relay.op.op.OpStrategy,
+                ],
+                Optional[int],
+            ]
+        ] = []
+        self._tir_passes: List[Tuple[PassPhase, tvm.tir.transform.PrimFuncPass]] = []
+
+    def _lower_relay_to_tir(self, relay_prim_func: relay.Function) -> tvm.tir.PrimFunc:
+        """Lower a Relay primitive function to a S-TIR primitive function.
+
+        Parameters
+        ----------
+        relay_prim_func : tvm.relay.Function
+            The Relay function to lower.
+
+        Returns
+        -------
+        out : tvm.tir.PrimFunc
+            The lowered schedulable TensorIR primitive function.
+
+        """
+
+        def _get_tensors(te_cached_func):
+            outputs = list(te_cached_func.outputs)
+            stack = []
+            visited = set()
+            for output_ in outputs:
+                if output_ not in visited:
+                    visited.add(output_)
+                    stack.append(output_)
+
+            args = []
+            while len(stack) != 0:
+                tensor = stack.pop()
+                if isinstance(tensor.op, tvm.te.tensor.PlaceholderOp):
+                    args.append(tensor)
+                elif isinstance(tensor.op, tvm.te.tensor.ComputeOp):
+                    inputs = tensor.op.input_tensors
+                    for input_ in inputs:
+                        if input_ not in visited:
+                            visited.add(input_)
+                            stack.append(input_)
+
+            return args + outputs
+
+        f = tvm._ffi.get_global_func("relay.backend.LowerToTE")
+        te_cached_func = f(relay_prim_func)
+        x = _get_tensors(te_cached_func)
+        tir_prim_func = te.create_prim_func(x)
+        tir_prim_func = tir_prim_func.with_attr(
+            "global_symbol", relay_prim_func.attrs["global_symbol"]
+        )
+        # TODO: The target should probably come from somewhere else instead of being created here.
+        tir_prim_func = tir_prim_func.with_attr("target", tvm.target.Target(self.target_name))

Review Comment:
   Doesn't the relay prim func already have the kCompiler attr?
   
   Given this duplication between the kCompiler string and the target name, I suppose we should align on using the same "value" in both places. To guarantee that, maybe we can get rid of the target name in UMALower?



##########
src/relay/backend/contrib/uma/tir_to_runtime.cc:
##########
@@ -0,0 +1,81 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+#include <cmath>
+#include <fstream>
+#include <map>
+#include <sstream>
+#include <string>
+#include <vector>
+
+#include "../../../../runtime/file_utils.h"
+#include "../../../../target/source/codegen_c.h"
+#include "../../../../target/source/codegen_c_host.h"
+
+namespace tvm {
+using namespace tir;
+namespace relay {
+namespace contrib {
+namespace uma {
+
+class UMACodegen : public codegen::CodeGenCHost {
+ public:
+  explicit UMACodegen(String target_str) : target_str_(target_str) {}
+
+  void Init(bool output_ssa, bool emit_asserts) {
+    auto includes_pf =
+        tvm::runtime::Registry::Get("relay.ext.uma.codegen_c_includes_" + target_str_);
+    ICHECK(includes_pf);

Review Comment:
   Wouldn't this break if `includes` was not provided?
   
   Also, I think this is a user-facing error, and ICHECK is not suitable for those.
   Please change this to LOG(FATAL) or CHECK(), and add a test to ensure the error is emitted.



##########
python/tvm/relay/backend/contrib/uma/api/lower.py:
##########
@@ -0,0 +1,159 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Lowering base class of the Universal Modular Accelerator Interface (UMA)"""
+
+from typing import List, Tuple, Callable, Optional
+
+import tvm
+from tvm import relay, te
+from tvm.relay.op.op import register_strategy
+from . import _ffi_api
+from .utils import PassPhase
+
+
+class UMALower:
+    """Lowering base class of the Universal Modular Accelerator Interface (UMA)."""
+
+    def __init__(self, target_name: str) -> None:
+        self.target_name = target_name
+
+        self._operator_strategies: List[
+            Tuple[
+                str,
+                Callable[
+                    [tvm.ir.Attrs, tvm.ir.Array, tvm.ir.TensorType, tvm.target.Target],
+                    tvm.relay.op.op.OpStrategy,
+                ],
+                Optional[int],
+            ]
+        ] = []
+        self._tir_passes: List[Tuple[PassPhase, tvm.tir.transform.PrimFuncPass]] = []
+
+    def _lower_relay_to_tir(self, relay_prim_func: relay.Function) -> tvm.tir.PrimFunc:
+        """Lower a Relay primitive function to a S-TIR primitive function.
+
+        Parameters
+        ----------
+        relay_prim_func : tvm.relay.Function
+            The Relay function to lower.
+
+        Returns
+        -------
+        out : tvm.tir.PrimFunc
+            The lowered schedulable TensorIR primitive function.
+
+        """
+
+        def _get_tensors(te_cached_func):
+            outputs = list(te_cached_func.outputs)
+            stack = []
+            visited = set()
+            for output_ in outputs:
+                if output_ not in visited:
+                    visited.add(output_)
+                    stack.append(output_)
+
+            args = []
+            while len(stack) != 0:
+                tensor = stack.pop()
+                if isinstance(tensor.op, tvm.te.tensor.PlaceholderOp):
+                    args.append(tensor)
+                elif isinstance(tensor.op, tvm.te.tensor.ComputeOp):
+                    inputs = tensor.op.input_tensors
+                    for input_ in inputs:
+                        if input_ not in visited:
+                            visited.add(input_)
+                            stack.append(input_)
+
+            return args + outputs
+
+        f = tvm._ffi.get_global_func("relay.backend.LowerToTE")

Review Comment:
   nit: it would be better to use a descriptive name here, e.g. f_lower_to_te.



##########
apps/uma/_template/conv2dnchw.cc:
##########
@@ -0,0 +1,95 @@
+/*
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+*/
+#include <stdlib.h>
+
+#ifdef __cplusplus
+extern "C"
+#endif
+
+    /*!
+     * \brief Conv2D function for mock-accelerator examples. Limited to same-padded Conv2D
+     * with stride (1,1) and datatype float.
+     * \param ifmap Pointer to input feature map data of size iw*ih*ic*sizeof(float).
+     * \param weights Pointer to weight data of size kh*kw*ic*oc*sizeof(float).
+     * \param result Pointer to output feature map data of size iw*ih*oc*sizeof(float).
+     * \param oc Number of channels of output feature map.
+     * \param iw Width of input feature map, ifmap.
+     * \param ih Height of input feature map, ifmap.
+     * \param ic Number of channels of input feature map.
+     * \param kh Height of convolution kernels.
+     * \param kw Width of convolution kernels.
+     *
+     * \return error code
+     *
+     */
+    int
+    my_ai_hw_conv2dnchw(float* ifmap, float* weights, float* result, int oc, int iw, int ih, int ic,

Review Comment:
   This source looks as if it was not linted. Maybe we don't lint apps.
   Anyhow, would you be able to run clang-format on this file?



##########
src/relay/backend/contrib/uma/targets.cc:
##########
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file relay/backend/contrib/uma/targets.cc
+ *
+ * \brief this file contains the targets for the Universal Modular Accelerator Interface (UMA).
+ */
+
+#include <tvm/relay/transform.h>
+#include <tvm/target/target.h>
+
+namespace tvm {
+
+namespace relay {
+namespace contrib {
+namespace uma {
+tvm::transform::Pass RelayToTIR(String target_name);
+runtime::Module TIRToRuntime(IRModule mod, Target target);
+}  // namespace uma
+}  // namespace contrib
+}  // namespace relay
+
+TVM_REGISTER_GLOBAL("relay.backend.contrib.uma.RegisterTarget")
+    .set_body_typed([](String target_name, Map<String, ObjectRef> attr_options) -> bool {
+      // TODO(cgerum): We should probably get rid of target.register sooner rather than later
+      //               and use a proper registry for UMA backends.
+      for (const String registered_target_name : ::tvm::TargetKindRegEntry::ListTargetKinds()) {
+        if (registered_target_name == target_name) {
+          return false;
+        }
+      }
+
+      auto target_kind =
+          ::tvm::TargetKindRegEntry::RegisterOrGet(target_name)
+              .set_name()
+              .set_device_type(kDLCPU)
+              .add_attr_option<Array<String>>("keys")
+              .add_attr_option<String>("tag")
+              .add_attr_option<String>("device")
+              .add_attr_option<String>("model")
+              .add_attr_option<Array<String>>("libs")
+              .add_attr_option<Target>("host")
+              .add_attr_option<Integer>("from_device")
+              .set_attr<FTVMRelayToTIR>(tvm::attr::kRelayToTIR,
+                                        relay::contrib::uma::RelayToTIR(target_name))
+              .set_attr<FTVMTIRToRuntime>("TIRToRuntime", relay::contrib::uma::TIRToRuntime);
+
+      for (auto& attr_option : attr_options) {
+        auto option_name = attr_option.first;
+        auto default_value = attr_option.second;
+        if (default_value->IsInstance<StringObj>()) {
+          target_kind.add_attr_option<String>(option_name, Downcast<String>(default_value));
+        } else if (default_value->IsInstance<IntImmNode>()) {
+          target_kind.add_attr_option<Integer>(option_name, Downcast<Integer>(default_value));
+        } else {
+          LOG(FATAL) << "Attribute option of type " << attr_option.second->GetTypeKey()

Review Comment:
   We also need to test that this user-facing error is emitted.



##########
src/relay/backend/contrib/uma/relay_to_tir.cc:
##########
@@ -0,0 +1,174 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file relay/backend/contrib/uma/codegen.cc
+ *
+ * \brief this file contains the target hooks for the Universal Modular Accelerator Interface (UMA).
+ */
+
+#include <tvm/ir/error.h>
+#include <tvm/relay/analysis.h>
+#include <tvm/relay/attrs/annotation.h>
+#include <tvm/relay/expr.h>
+#include <tvm/relay/expr_functor.h>
+#include <tvm/relay/transform.h>
+#include <tvm/target/target.h>
+#include <tvm/tir/function.h>
+
+#include <unordered_map>
+#include <unordered_set>
+#include <utility>
+#include <vector>
+
+namespace tvm {
+namespace relay {
+namespace contrib {
+namespace uma {
+
+/*!
+ * \brief This mutator outlines functions that are marked with a named
+ * "Compiler" attribute. Functions that do not match this condition remain
+ * unaltered.
+ */
+class OutlineCompilerFunctionsMutator : public MixedModeMutator {

Review Comment:
   Maybe add a TODO/issue for it then?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org