Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2022/06/01 10:49:40 UTC

[GitHub] [tvm] abhikran-quic opened a new pull request, #11522: [TOPI] [Hexagon] Batch flatten slice op initial version

abhikran-quic opened a new pull request, #11522:
URL: https://github.com/apache/tvm/pull/11522

   This patch adds the initial Python implementation of the batch flatten slice op for Hexagon.
   
   Slice ops make certain assumptions about the input and output dimensions, and they are expected to be called after the original op has been sliced to those dimensions at the graph level.
   
   cc @Lunderberg @cconvey 
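
   For context, a rough sketch of how a sliced op is expected to be driven (names follow this patch; the shape and layout strings are illustrative, taken from the tests below):

   ```python
   import tvm
   from tvm import te
   import tvm.topi.hexagon.slice_ops as sl

   # The graph-level slicing has already fixed the dimensions, so the op can
   # assume an NHWC input whose channel count is a multiple of 1024.
   A = te.placeholder((1, 8, 8, 1024), name="A", dtype="float16")
   D = sl.batch_flatten_compute(A)  # compute written for the sliced shape
   tir_s = sl.batch_flatten_stir_schedule(D, A, "nc-1d", "nhwc-1024c-1d")
   ```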


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [tvm] abhikran-quic commented on pull request #11522: [TOPI] [Hexagon] Batch flatten slice op initial version

Posted by GitBox <gi...@apache.org>.
abhikran-quic commented on PR #11522:
URL: https://github.com/apache/tvm/pull/11522#issuecomment-1157415139

   CI is passing now.




[GitHub] [tvm] jverma-quic commented on a diff in pull request #11522: [TOPI] [Hexagon] Batch flatten slice op initial version

Posted by GitBox <gi...@apache.org>.
jverma-quic commented on code in PR #11522:
URL: https://github.com/apache/tvm/pull/11522#discussion_r905301485


##########
python/tvm/topi/hexagon/utils.py:
##########
@@ -49,4 +59,8 @@ def get_layout_transform_fn(layout):
         return n11c_1024c_2d
     if layout == "n11c-1024c-1d":
         return n11c_1024c_1d
+    if layout == "nhwc-1024c-1d":
+        return nhwc_1024c_1d
+    if layout == "nc-1d":
+        return nc_1024_1d

Review Comment:
   You should change the layout string "nc-1d" -> "nc_1024c_1d", since that appears to be the actual physical layout of the buffer.
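
   A sketch of the suggested mapping, assuming the hyphenated convention used by the other layout strings in `utils.py`:

   ```python
   if layout == "nc-1024c-1d":
       return nc_1024_1d
   ```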





[GitHub] [tvm] Lunderberg commented on a diff in pull request #11522: [TOPI] [Hexagon] Batch flatten slice op initial version

Posted by GitBox <gi...@apache.org>.
Lunderberg commented on code in PR #11522:
URL: https://github.com/apache/tvm/pull/11522#discussion_r905080308


##########
python/tvm/topi/hexagon/slice_ops/batch_flatten.py:
##########
@@ -0,0 +1,79 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""Hexagon slice batch flatten compute and schedule"""
+import typing
+
+from tvm import te, tir, topi
+from ..utils import get_layout_transform_fn
+
+
+def batch_flatten_compute(inp: te.Tensor) -> te.Tensor:
+    """Compute for slice batch flatten op for hexagon.
+    This op makes the following assumptions:
+    1. This op is written for a sliced batch flatten operation.
+    2. The input is assumed to be in NHWC layout.
+
+    Parameters
+    ----------
+    Input : te.Tensor
+        Input activations padded for inner dimension size
+    Returns
+    -------
+    Output : te.Tensor
+        Output of applying batch flatten operation on input
+    """
+    return topi.nn.flatten(inp)
+
+
+def batch_flatten_stir_schedule(
+    out: te.Tensor,
+    inp: te.Tensor,
+    out_layout: typing.Callable,
+    in_layout: typing.Callable,
+) -> tir.Schedule:
+    """STIR schedule definition for the compute of batch flatten compute.
+    Parameters
+    ----------
+    outputs : te.Tensor
+        The output tensor as returned by a call to batch_flatten_compute
+    input : te.Tensor
+        Input tensor to batch_flatten
+    out_layout: typing.Callable
+        The transformation function definition for the expected output layout
+    in_layout: typing.Callable
+        The transformation function definition for the input layout
+    Returns
+    -------
+    sch : tvm.tir.Schedule
+        The STIR schedule for slice batch flatten compute
+    """
+
+    batch_flatten_func = te.create_prim_func([inp, out])
+    sch = tir.Schedule(batch_flatten_func, debug_mask="all")
+    compute = sch.get_block("compute")
+
+    sch.transform_layout(compute, inp.name, get_layout_transform_fn(in_layout))

Review Comment:
   It looks like the layouts are being passed as strings, not callables, so there's a discrepancy with the type annotation.  Can we either update the type annotation, move the `get_layout_transform_fn` call to the calling scope, or allow both `str` and `typing.Callable` inputs?
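
   For example, a small helper could accept either form (an untested sketch; `_as_layout_fn` is a hypothetical name):

   ```python
   import typing

   from ..utils import get_layout_transform_fn


   def _as_layout_fn(layout: typing.Union[str, typing.Callable]) -> typing.Callable:
       """Resolve a layout given either as a string key or as a transform function."""
       return get_layout_transform_fn(layout) if isinstance(layout, str) else layout
   ```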



##########
tests/python/contrib/test_hexagon/topi/test_batch_flatten.py:
##########
@@ -0,0 +1,101 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+import numpy as np
+import pytest
+
+import tvm
+import tvm.testing
+import tvm.topi.hexagon.slice_ops as sl
+from tvm import te, topi
+from tvm.contrib.hexagon.build import HexagonLauncher
+from tvm.topi import testing
+
+from ..infrastructure import allocate_hexagon_array, transform_numpy
+
+
+class BaseTestBatchFlatten:
+    input_shape = tvm.testing.parameter(
+        (1, 1, 1, 2048),
+        (1, 2, 4, 2048),
+        (1, 8, 8, 1024),
+        (2, 4, 8, 1024),
+        (2, 3, 5, 2048),
+    )
+    input_layout, input_axis_sep = tvm.testing.parameters(("nhwc-1024c-1d", [4]))
+    output_layout, output_axis_sep = tvm.testing.parameters(("nc-1d", [2]))
+    data_type = tvm.testing.parameter("float16")
+
+
+class TestBatchFlatten(BaseTestBatchFlatten):
+    @tvm.testing.fixture
+    def output_shape(self, input_shape):
+        return input_shape[0], input_shape[1] * input_shape[2] * input_shape[3]
+
+    @tvm.testing.requires_hexagon
+    def test_batch_flatten(
+        self,
+        data_type,
+        input_shape,
+        input_layout,
+        input_axis_sep,
+        output_shape,
+        output_layout,
+        output_axis_sep,
+        hexagon_session,
+    ):
+        target_hexagon = tvm.target.hexagon("v69")
+        target = tvm.target.Target(target_hexagon, host=target_hexagon)
+        A = te.placeholder(input_shape, name="A", dtype=data_type)
+        D = sl.batch_flatten_compute(A)
+        tir_s = sl.batch_flatten_stir_schedule(
+            D,
+            A,
+            output_layout,
+            input_layout,
+        )
+        func_name = "batch_flatten"
+        with tvm.transform.PassContext(opt_level=3, config={"tir.disable_assert": True}):
+            runtime_module = tvm.build(tir_s.mod, target=target, name=func_name)
+
+        mod = hexagon_session.load_module(runtime_module)
+
+        a_numpy = (np.random.uniform(-1, 1, input_shape)).astype(data_type)
+        ref = np.reshape(a_numpy, output_shape)
+
+        input_np_transformed = transform_numpy(a_numpy, "nhwc", input_layout)
+        ref_np_transformed = transform_numpy(ref, "nhwc", output_layout)
+
+        a_tvm = allocate_hexagon_array(
+            hexagon_session.device,
+            data=input_np_transformed,
+            axis_separators=input_axis_sep,
+            mem_scope="global.vtcm",
+        )
+        output = allocate_hexagon_array(
+            hexagon_session.device,
+            ref_np_transformed.shape,
+            data_type,
+            axis_separators=output_axis_sep,
+            mem_scope="global.vtcm",
+        )
+        mod(a_tvm, output)
+        np.testing.assert_allclose(output.numpy(), ref_np_transformed, atol=1e-07, rtol=0)
+
+
+if __name__ == "__main__":
+    sys.exit(pytest.main(sys.argv))

Review Comment:
   Can use `tvm.testing.main()` instead.
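
   For reference, the suggested replacement looks like this (it also sidesteps the missing `import sys` in the current file):

   ```python
   if __name__ == "__main__":
       tvm.testing.main()
   ```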





[GitHub] [tvm] cconvey commented on pull request #11522: [TOPI] [Hexagon] Batch flatten slice op initial version

Posted by GitBox <gi...@apache.org>.
cconvey commented on PR #11522:
URL: https://github.com/apache/tvm/pull/11522#issuecomment-1159106779

   @abhikran-quic : Apologies for being slow to reply.  I should be able to review this next Tuesday (2022-6-21) if you don't mind waiting that long.  




[GitHub] [tvm] abhikran-quic commented on pull request #11522: [TOPI] [Hexagon] Batch flatten slice op initial version

Posted by GitBox <gi...@apache.org>.
abhikran-quic commented on PR #11522:
URL: https://github.com/apache/tvm/pull/11522#issuecomment-1144717650

   Hi @Lunderberg, @cconvey: Could you please review this PR?




[GitHub] [tvm] Lunderberg commented on a diff in pull request #11522: [TOPI] [Hexagon] Batch flatten slice op initial version

Posted by GitBox <gi...@apache.org>.
Lunderberg commented on code in PR #11522:
URL: https://github.com/apache/tvm/pull/11522#discussion_r887990956


##########
tests/python/contrib/test_hexagon/test_batch_flatten.py:
##########
@@ -0,0 +1,130 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+import numpy as np
+import pytest
+
+import tvm
+import tvm.testing
+import tvm.topi.hexagon.slice_ops as sl
+from tvm import te, topi
+from tvm.contrib.hexagon.build import HexagonLauncher
+from tvm.topi import testing
+
+from .infrastructure import allocate_hexagon_array
+
+
+def n11c_1024c_1d(n, h, w, c):
+    return [n, h, w, c // 1024, tvm.te.AXIS_SEPARATOR, c % 1024]
+
+
+def nc_1024_1d(n, c):
+    return [n, c // 1024, tvm.te.AXIS_SEPARATOR, c % 1024]
+
+
+def transform_numpy(arr_np, layout):
+    if layout == "nhwc":
+        return arr_np
+    elif layout == "n11c-1024c-1d":
+        N, H, W, C = arr_np.shape
+        return arr_np.reshape([N, H, W, C // 1024, 1024])
+    elif layout == "nc-1d":
+        N, C = arr_np.shape
+        return arr_np.reshape([N, C // 1024, 1024])
+
+
+@tvm.testing.fixture
+def transformed_expected_output_np(expected_output_np, output_layout):
+    return transform_numpy(expected_output_np, output_layout)
+
+
+class BaseTestBatchFlatten:
+    (
+        input_shape,
+        input_layout,
+        output_layout,
+        input_axis_sep,
+        output_axis_sep,
+    ) = tvm.testing.parameters(
+        ((1, 1, 1, 2048), "n11c-1024c-1d", "nc-1d", [4], [2]),
+        ((1, 2, 4, 2048), "n11c-1024c-1d", "nc-1d", [4], [2]),
+        ((1, 8, 8, 1024), "n11c-1024c-1d", "nc-1d", [4], [2]),
+        ((2, 4, 8, 1024), "n11c-1024c-1d", "nc-1d", [4], [2]),
+        ((2, 3, 5, 2048), "n11c-1024c-1d", "nc-1d", [4], [2]),
+    )
+    data_type = tvm.testing.parameter("float16")
+
+
+class TestBatchFlatten(BaseTestBatchFlatten):
+    @tvm.testing.fixture
+    def output_shape(self, input_shape):
+        return input_shape[0], input_shape[1] * input_shape[2] * input_shape[3]
+
+    @tvm.testing.requires_hexagon
+    def test_batch_flatten(
+        self,
+        data_type,
+        input_shape,
+        input_layout,
+        input_axis_sep,
+        output_shape,
+        output_layout,
+        output_axis_sep,
+        hexagon_session,
+    ):
+        target_hexagon = tvm.target.hexagon("v69")
+        target = tvm.target.Target(target_hexagon, host=target_hexagon)
+        A = te.placeholder(input_shape, name="A", dtype=data_type)
+        D = sl.batch_flatten_compute(A)
+        tir_s = sl.batch_flatten_stir_schedule(
+            D,
+            A,
+            nc_1024_1d,
+            n11c_1024c_1d,
+        )
+        func_name = "batch_flatten"
+        with tvm.transform.PassContext(opt_level=3, config={"tir.disable_assert": True}):
+            tir_irm = tvm.lower(tir_s.mod, [A, D], name=func_name)
+            runtime_module = tvm.build(tir_irm, [A, D], target=target, name=func_name)

Review Comment:
   `tvm.build` doesn't require the input to be lowered, as the first step it performs is to call `tvm.lower`.  Lowering a second time shouldn't have any effect, but it isn't required.
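
   For example, here the lowering step can be dropped and the scheduled module handed straight to `tvm.build`:

   ```python
   with tvm.transform.PassContext(opt_level=3, config={"tir.disable_assert": True}):
       runtime_module = tvm.build(tir_s.mod, target=target, name=func_name)
   ```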



##########
tests/python/contrib/test_hexagon/test_batch_flatten.py:
##########
@@ -0,0 +1,130 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+import numpy as np
+import pytest
+
+import tvm
+import tvm.testing
+import tvm.topi.hexagon.slice_ops as sl
+from tvm import te, topi
+from tvm.contrib.hexagon.build import HexagonLauncher
+from tvm.topi import testing
+
+from .infrastructure import allocate_hexagon_array
+
+
+def n11c_1024c_1d(n, h, w, c):
+    return [n, h, w, c // 1024, tvm.te.AXIS_SEPARATOR, c % 1024]
+
+
+def nc_1024_1d(n, c):
+    return [n, c // 1024, tvm.te.AXIS_SEPARATOR, c % 1024]
+
+
+def transform_numpy(arr_np, layout):
+    if layout == "nhwc":
+        return arr_np
+    elif layout == "n11c-1024c-1d":
+        N, H, W, C = arr_np.shape
+        return arr_np.reshape([N, H, W, C // 1024, 1024])
+    elif layout == "nc-1d":
+        N, C = arr_np.shape
+        return arr_np.reshape([N, C // 1024, 1024])
+
+
+@tvm.testing.fixture
+def transformed_expected_output_np(expected_output_np, output_layout):
+    return transform_numpy(expected_output_np, output_layout)
+
+
+class BaseTestBatchFlatten:
+    (
+        input_shape,
+        input_layout,
+        output_layout,
+        input_axis_sep,
+        output_axis_sep,
+    ) = tvm.testing.parameters(
+        ((1, 1, 1, 2048), "n11c-1024c-1d", "nc-1d", [4], [2]),
+        ((1, 2, 4, 2048), "n11c-1024c-1d", "nc-1d", [4], [2]),
+        ((1, 8, 8, 1024), "n11c-1024c-1d", "nc-1d", [4], [2]),
+        ((2, 4, 8, 1024), "n11c-1024c-1d", "nc-1d", [4], [2]),
+        ((2, 3, 5, 2048), "n11c-1024c-1d", "nc-1d", [4], [2]),
+    )
+    data_type = tvm.testing.parameter("float16")
+
+
+class TestBatchFlatten(BaseTestBatchFlatten):
+    @tvm.testing.fixture
+    def output_shape(self, input_shape):
+        return input_shape[0], input_shape[1] * input_shape[2] * input_shape[3]
+
+    @tvm.testing.requires_hexagon
+    def test_batch_flatten(
+        self,
+        data_type,
+        input_shape,
+        input_layout,
+        input_axis_sep,
+        output_shape,
+        output_layout,
+        output_axis_sep,
+        hexagon_session,
+    ):
+        target_hexagon = tvm.target.hexagon("v69")
+        target = tvm.target.Target(target_hexagon, host=target_hexagon)
+        A = te.placeholder(input_shape, name="A", dtype=data_type)
+        D = sl.batch_flatten_compute(A)
+        tir_s = sl.batch_flatten_stir_schedule(
+            D,
+            A,
+            nc_1024_1d,
+            n11c_1024c_1d,
+        )
+        func_name = "batch_flatten"
+        with tvm.transform.PassContext(opt_level=3, config={"tir.disable_assert": True}):
+            tir_irm = tvm.lower(tir_s.mod, [A, D], name=func_name)

Review Comment:
   The argument list passed to `tvm.lower` and `tvm.build` (here, `[A, D]`) is used to specify which TE tensors should be exposed as TIR arguments, and should only be provided for TE-based schedules.  For STIR, the arguments are already defined in the call to `te.create_prim_func`.
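
   A minimal sketch of the distinction:

   ```python
   # STIR: the TIR arguments are fixed when the PrimFunc is created from the TE tensors ...
   batch_flatten_func = te.create_prim_func([inp, out])
   sch = tir.Schedule(batch_flatten_func)
   # ... so no tensor list is passed when building the schedule's module.
   runtime_module = tvm.build(sch.mod, target=target, name="batch_flatten")
   ```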



##########
tests/python/contrib/test_hexagon/infrastructure.py:
##########
@@ -48,7 +48,7 @@ def allocate_hexagon_array(
         for dim_i, dim_f in zip(boundaries[:-1], boundaries[1:])
     ]
 
-    arr = tvm.nd.empty(physical_shape, dtype=dtype, device=dev)
+    arr = tvm.nd.empty(physical_shape, dtype=dtype, device=dev, mem_scope=mem_scope)

Review Comment:
   Good catch, and thank you for finding this.  I missed passing this through in #10904.



##########
tests/python/contrib/test_hexagon/test_batch_flatten.py:
##########
@@ -0,0 +1,130 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+import numpy as np
+import pytest
+
+import tvm
+import tvm.testing
+import tvm.topi.hexagon.slice_ops as sl
+from tvm import te, topi
+from tvm.contrib.hexagon.build import HexagonLauncher
+from tvm.topi import testing
+
+from .infrastructure import allocate_hexagon_array
+
+
+def n11c_1024c_1d(n, h, w, c):
+    return [n, h, w, c // 1024, tvm.te.AXIS_SEPARATOR, c % 1024]
+
+
+def nc_1024_1d(n, c):
+    return [n, c // 1024, tvm.te.AXIS_SEPARATOR, c % 1024]
+
+
+def transform_numpy(arr_np, layout):
+    if layout == "nhwc":
+        return arr_np
+    elif layout == "n11c-1024c-1d":
+        N, H, W, C = arr_np.shape
+        return arr_np.reshape([N, H, W, C // 1024, 1024])
+    elif layout == "nc-1d":
+        N, C = arr_np.shape
+        return arr_np.reshape([N, C // 1024, 1024])
+
+
+@tvm.testing.fixture
+def transformed_expected_output_np(expected_output_np, output_layout):
+    return transform_numpy(expected_output_np, output_layout)
+
+
+class BaseTestBatchFlatten:

Review Comment:
   Separating parameters out into a base class for testing isn't required.  I use this if I want to define several groups of parameters with associated names (e.g. input sizes used by a specific model), but since there's only one subclass of `BaseTestBatchFlatten`, I'd move the contents into `TestBatchFlatten` and remove `BaseTestBatchFlatten` altogether.
   
   There's also a lot of repetition between the test cases, so it isn't immediately clear that these are only varying the input shape and not the layouts.  I'd rearrange them so that the shape is emphasized as the parameter being varied.
   
   ```python
   input_shape = tvm.testing.parameter(
       (1, 1, 1, 2048),
       (1, 2, 4, 2048),
       (1, 8, 8, 1024),
       (2, 4, 8, 1024),
       (2, 3, 5, 2048),
   )
   input_layout, input_axis_sep = tvm.testing.parameters(("n11c-1024c-1d", [4]))
   output_layout, output_axis_sep = tvm.testing.parameters(("nc-1d", [2]))
   data_type = tvm.testing.parameter("float16")
   ```





[GitHub] [tvm] kparzysz-quic merged pull request #11522: [TOPI] [Hexagon] Batch flatten slice op initial version

Posted by GitBox <gi...@apache.org>.
kparzysz-quic merged PR #11522:
URL: https://github.com/apache/tvm/pull/11522




[GitHub] [tvm] abhikran-quic commented on a diff in pull request #11522: [TOPI] [Hexagon] Batch flatten slice op initial version

Posted by GitBox <gi...@apache.org>.
abhikran-quic commented on code in PR #11522:
URL: https://github.com/apache/tvm/pull/11522#discussion_r905730847


##########
python/tvm/topi/hexagon/utils.py:
##########
@@ -49,4 +59,8 @@ def get_layout_transform_fn(layout):
         return n11c_1024c_2d
     if layout == "n11c-1024c-1d":
         return n11c_1024c_1d
+    if layout == "nhwc-1024c-1d":
+        return nhwc_1024c_1d
+    if layout == "nc-1d":
+        return nc_1024_1d

Review Comment:
   Sure. I've fixed this.



##########
tests/python/contrib/test_hexagon/infrastructure.py:
##########
@@ -245,6 +245,12 @@ def transform_numpy(arr_np, current_layout: str, new_layout: str):
             n, h, w, c = arr_np.shape
             assert h == 1 and w == 1, "The size of h and w must be 1"
             return arr_np.reshape([n, 1, 1, c // 1024, 1024])
+        if new_layout == "nc-1d":
+            N, C = arr_np.shape

Review Comment:
   Done.



##########
python/tvm/topi/hexagon/slice_ops/batch_flatten.py:
##########
@@ -0,0 +1,79 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""Hexagon slice batch flatten compute and schedule"""
+import typing
+
+from tvm import te, tir, topi
+from ..utils import get_layout_transform_fn
+
+
+def batch_flatten_compute(inp: te.Tensor) -> te.Tensor:
+    """Compute for slice batch flatten op for hexagon.
+    This op makes the following assumptions:
+    1. This op is written for a sliced batch flatten operation.
+    2. The input is assumed to be in NHWC layout.
+
+    Parameters
+    ----------
+    Input : te.Tensor
+        Input activations padded for inner dimension size
+    Returns
+    -------
+    Output : te.Tensor
+        Output of applying batch flatten operation on input
+    """
+    return topi.nn.flatten(inp)
+
+
+def batch_flatten_stir_schedule(
+    out: te.Tensor,
+    inp: te.Tensor,
+    out_layout: typing.Callable,
+    in_layout: typing.Callable,
+) -> tir.Schedule:
+    """STIR schedule definition for the compute of batch flatten compute.
+    Parameters
+    ----------
+    outputs : te.Tensor
+        The output tensor as returned by a call to batch_flatten_compute
+    input : te.Tensor
+        Input tensor to batch_flatten
+    out_layout: typing.Callable
+        The transformation function definition for the expected output layout
+    in_layout: typing.Callable
+        The transformation function definition for the input layout
+    Returns
+    -------
+    sch : tvm.tir.Schedule
+        The STIR schedule for slice batch flatten compute
+    """
+
+    batch_flatten_func = te.create_prim_func([inp, out])
+    sch = tir.Schedule(batch_flatten_func, debug_mask="all")
+    compute = sch.get_block("compute")
+
+    sch.transform_layout(compute, inp.name, get_layout_transform_fn(in_layout))

Review Comment:
   Sure. I've changed the layout to be `str`



##########
tests/python/contrib/test_hexagon/topi/test_batch_flatten.py:
##########
@@ -0,0 +1,101 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+import numpy as np
+import pytest
+
+import tvm
+import tvm.testing
+import tvm.topi.hexagon.slice_ops as sl
+from tvm import te, topi
+from tvm.contrib.hexagon.build import HexagonLauncher
+from tvm.topi import testing
+
+from ..infrastructure import allocate_hexagon_array, transform_numpy
+
+
+class BaseTestBatchFlatten:
+    input_shape = tvm.testing.parameter(
+        (1, 1, 1, 2048),
+        (1, 2, 4, 2048),
+        (1, 8, 8, 1024),
+        (2, 4, 8, 1024),
+        (2, 3, 5, 2048),
+    )
+    input_layout, input_axis_sep = tvm.testing.parameters(("nhwc-1024c-1d", [4]))
+    output_layout, output_axis_sep = tvm.testing.parameters(("nc-1d", [2]))
+    data_type = tvm.testing.parameter("float16")
+
+
+class TestBatchFlatten(BaseTestBatchFlatten):
+    @tvm.testing.fixture
+    def output_shape(self, input_shape):
+        return input_shape[0], input_shape[1] * input_shape[2] * input_shape[3]
+
+    @tvm.testing.requires_hexagon
+    def test_batch_flatten(
+        self,
+        data_type,
+        input_shape,
+        input_layout,
+        input_axis_sep,
+        output_shape,
+        output_layout,
+        output_axis_sep,
+        hexagon_session,
+    ):
+        target_hexagon = tvm.target.hexagon("v69")
+        target = tvm.target.Target(target_hexagon, host=target_hexagon)
+        A = te.placeholder(input_shape, name="A", dtype=data_type)
+        D = sl.batch_flatten_compute(A)
+        tir_s = sl.batch_flatten_stir_schedule(
+            D,
+            A,
+            output_layout,
+            input_layout,
+        )
+        func_name = "batch_flatten"
+        with tvm.transform.PassContext(opt_level=3, config={"tir.disable_assert": True}):
+            runtime_module = tvm.build(tir_s.mod, target=target, name=func_name)
+
+        mod = hexagon_session.load_module(runtime_module)
+
+        a_numpy = (np.random.uniform(-1, 1, input_shape)).astype(data_type)
+        ref = np.reshape(a_numpy, output_shape)
+
+        input_np_transformed = transform_numpy(a_numpy, "nhwc", input_layout)
+        ref_np_transformed = transform_numpy(ref, "nhwc", output_layout)
+
+        a_tvm = allocate_hexagon_array(
+            hexagon_session.device,
+            data=input_np_transformed,
+            axis_separators=input_axis_sep,
+            mem_scope="global.vtcm",
+        )
+        output = allocate_hexagon_array(
+            hexagon_session.device,
+            ref_np_transformed.shape,
+            data_type,
+            axis_separators=output_axis_sep,
+            mem_scope="global.vtcm",
+        )
+        mod(a_tvm, output)
+        np.testing.assert_allclose(output.numpy(), ref_np_transformed, atol=1e-07, rtol=0)
+
+
+if __name__ == "__main__":
+    sys.exit(pytest.main(sys.argv))

Review Comment:
   Done.



##########
tests/python/contrib/test_hexagon/topi/test_batch_flatten.py:
##########
@@ -0,0 +1,101 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+import numpy as np
+import pytest
+
+import tvm
+import tvm.testing
+import tvm.topi.hexagon.slice_ops as sl
+from tvm import te, topi
+from tvm.contrib.hexagon.build import HexagonLauncher
+from tvm.topi import testing
+
+from ..infrastructure import allocate_hexagon_array, transform_numpy
+
+
+class BaseTestBatchFlatten:
+    input_shape = tvm.testing.parameter(
+        (1, 1, 1, 2048),
+        (1, 2, 4, 2048),
+        (1, 8, 8, 1024),
+        (2, 4, 8, 1024),
+        (2, 3, 5, 2048),
+    )
+    input_layout, input_axis_sep = tvm.testing.parameters(("nhwc-1024c-1d", [4]))
+    output_layout, output_axis_sep = tvm.testing.parameters(("nc-1d", [2]))

Review Comment:
   Done.





[GitHub] [tvm] abhikran-quic commented on pull request #11522: [TOPI] [Hexagon] Batch flatten slice op initial version

Posted by GitBox <gi...@apache.org>.
abhikran-quic commented on PR #11522:
URL: https://github.com/apache/tvm/pull/11522#issuecomment-1162649984

   Hi @Lunderberg, @cconvey, @mehrdadh,
   Can you please review this PR? It's ready to be merged from my side.




[GitHub] [tvm] mehrdadh commented on pull request #11522: [TOPI] [Hexagon] Batch flatten slice op initial version

Posted by GitBox <gi...@apache.org>.
mehrdadh commented on PR #11522:
URL: https://github.com/apache/tvm/pull/11522#issuecomment-1170545588

   @abhikran-quic please resolve the conflict by rebasing onto `main` and pushing to this branch.




[GitHub] [tvm] abhikran-quic commented on pull request #11522: [TOPI] [Hexagon] Batch flatten slice op initial version

Posted by GitBox <gi...@apache.org>.
abhikran-quic commented on PR #11522:
URL: https://github.com/apache/tvm/pull/11522#issuecomment-1158600050

   Hi @Lunderberg, @cconvey: Could you please review this PR for any further comments?




[GitHub] [tvm] abhikran-quic commented on pull request #11522: [TOPI] [Hexagon] Batch flatten slice op initial version

Posted by GitBox <gi...@apache.org>.
abhikran-quic commented on PR #11522:
URL: https://github.com/apache/tvm/pull/11522#issuecomment-1168607144

   Gentle reminder! Please help merge this PR; I want to raise another PR that depends on this one.




[GitHub] [tvm] jverma-quic commented on a diff in pull request #11522: [TOPI] [Hexagon] Batch flatten slice op initial version

Posted by GitBox <gi...@apache.org>.
jverma-quic commented on code in PR #11522:
URL: https://github.com/apache/tvm/pull/11522#discussion_r906208286


##########
python/tvm/topi/hexagon/slice_ops/batch_flatten.py:
##########
@@ -0,0 +1,77 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""Hexagon slice batch flatten compute and schedule"""
+from tvm import te, tir, topi
+from ..utils import get_layout_transform_fn
+
+
+def batch_flatten_compute(inp: te.Tensor) -> te.Tensor:
+    """Compute for slice batch flatten op for hexagon.
+    This op makes the following assumptions:
+    1. This op is written for a sliced batch flatten operation.
+    2. The input is assumed to be in NHWC layout.
+
+    Parameters
+    ----------
+    Input : te.Tensor
+        Input activations padded for inner dimension size
+    Returns
+    -------
+    Output : te.Tensor
+        Output of applying batch flatten operation on input
+    """
+    return topi.nn.flatten(inp)
+
+
+def batch_flatten_stir_schedule(
+    out: te.Tensor,
+    inp: te.Tensor,
+    out_layout: str,
+    in_layout: str,
+) -> tir.Schedule:
+    """STIR schedule definition for the compute of batch flatten compute.
+    Parameters
+    ----------
+    outputs : te.Tensor
+        The output tensor as returned by a call to batch_flatten_compute
+    input : te.Tensor
+        Input tensor to batch_flatten
+    out_layout: typing.Callable
+        The transformation function definition for the expected output layout
+    in_layout: typing.Callable
+        The transformation function definition for the input layout
+    Returns
+    -------
+    sch : tvm.tir.Schedule
+        The STIR schedule for slice batch flatten compute
+    """
+
+    batch_flatten_func = te.create_prim_func([inp, out])
+    sch = tir.Schedule(batch_flatten_func, debug_mask="all")
+    compute = sch.get_block("compute")

Review Comment:
   I completely agree with you and don't like it either, but there doesn't seem to be a way to specify a different block name for topi-defined ops. Please correct me if I'm wrong.
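
   One possible workaround (an untested sketch, not a verified TOPI API): define the compute directly with `te.compute`, whose `name` argument becomes the block name in the PrimFunc created by `te.create_prim_func`:

   ```python
   def batch_flatten_compute(inp: te.Tensor) -> te.Tensor:
       """Same computation as topi.nn.flatten, but with a controllable block name."""
       batch, height, width, channel = inp.shape
       return te.compute(
           (batch, height * width * channel),
           lambda n, c: inp[n, c // (width * channel), (c // channel) % width, c % channel],
           name="batch_flatten",
       )
   ```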





[GitHub] [tvm] abhikran-quic commented on a diff in pull request #11522: [TOPI] [Hexagon] Batch flatten slice op initial version

Posted by GitBox <gi...@apache.org>.
abhikran-quic commented on code in PR #11522:
URL: https://github.com/apache/tvm/pull/11522#discussion_r888692222


##########
tests/python/contrib/test_hexagon/test_batch_flatten.py:
##########
@@ -0,0 +1,130 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+import numpy as np
+import pytest
+
+import tvm
+import tvm.testing
+import tvm.topi.hexagon.slice_ops as sl
+from tvm import te, topi
+from tvm.contrib.hexagon.build import HexagonLauncher
+from tvm.topi import testing
+
+from .infrastructure import allocate_hexagon_array
+
+
+def n11c_1024c_1d(n, h, w, c):
+    return [n, h, w, c // 1024, tvm.te.AXIS_SEPARATOR, c % 1024]
+
+
+def nc_1024_1d(n, c):
+    return [n, c // 1024, tvm.te.AXIS_SEPARATOR, c % 1024]
+
+
+def transform_numpy(arr_np, layout):
+    if layout == "nhwc":
+        return arr_np
+    elif layout == "n11c-1024c-1d":
+        N, H, W, C = arr_np.shape
+        return arr_np.reshape([N, H, W, C // 1024, 1024])
+    elif layout == "nc-1d":
+        N, C = arr_np.shape
+        return arr_np.reshape([N, C // 1024, 1024])
+
+
+@tvm.testing.fixture
+def transformed_expected_output_np(expected_output_np, output_layout):
+    return transform_numpy(expected_output_np, output_layout)
+
+
+class BaseTestBatchFlatten:
+    (
+        input_shape,
+        input_layout,
+        output_layout,
+        input_axis_sep,
+        output_axis_sep,
+    ) = tvm.testing.parameters(
+        ((1, 1, 1, 2048), "n11c-1024c-1d", "nc-1d", [4], [2]),
+        ((1, 2, 4, 2048), "n11c-1024c-1d", "nc-1d", [4], [2]),
+        ((1, 8, 8, 1024), "n11c-1024c-1d", "nc-1d", [4], [2]),
+        ((2, 4, 8, 1024), "n11c-1024c-1d", "nc-1d", [4], [2]),
+        ((2, 3, 5, 2048), "n11c-1024c-1d", "nc-1d", [4], [2]),
+    )
+    data_type = tvm.testing.parameter("float16")
+
+
+class TestBatchFlatten(BaseTestBatchFlatten):
+    @tvm.testing.fixture
+    def output_shape(self, input_shape):
+        return input_shape[0], input_shape[1] * input_shape[2] * input_shape[3]
+
+    @tvm.testing.requires_hexagon
+    def test_batch_flatten(
+        self,
+        data_type,
+        input_shape,
+        input_layout,
+        input_axis_sep,
+        output_shape,
+        output_layout,
+        output_axis_sep,
+        hexagon_session,
+    ):
+        target_hexagon = tvm.target.hexagon("v69")
+        target = tvm.target.Target(target_hexagon, host=target_hexagon)
+        A = te.placeholder(input_shape, name="A", dtype=data_type)
+        D = sl.batch_flatten_compute(A)
+        tir_s = sl.batch_flatten_stir_schedule(
+            D,
+            A,
+            nc_1024_1d,
+            n11c_1024c_1d,
+        )
+        func_name = "batch_flatten"
+        with tvm.transform.PassContext(opt_level=3, config={"tir.disable_assert": True}):
+            tir_irm = tvm.lower(tir_s.mod, [A, D], name=func_name)

Review Comment:
   Thank you, this is really nice! 
   I have removed `[A, D]` from the arguments.



##########
tests/python/contrib/test_hexagon/test_batch_flatten.py:
##########
@@ -0,0 +1,130 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+import numpy as np
+import pytest
+
+import tvm
+import tvm.testing
+import tvm.topi.hexagon.slice_ops as sl
+from tvm import te, topi
+from tvm.contrib.hexagon.build import HexagonLauncher
+from tvm.topi import testing
+
+from .infrastructure import allocate_hexagon_array
+
+
+def n11c_1024c_1d(n, h, w, c):
+    return [n, h, w, c // 1024, tvm.te.AXIS_SEPARATOR, c % 1024]
+
+
+def nc_1024_1d(n, c):
+    return [n, c // 1024, tvm.te.AXIS_SEPARATOR, c % 1024]
+
+
+def transform_numpy(arr_np, layout):
+    if layout == "nhwc":
+        return arr_np
+    elif layout == "n11c-1024c-1d":
+        N, H, W, C = arr_np.shape
+        return arr_np.reshape([N, H, W, C // 1024, 1024])
+    elif layout == "nc-1d":
+        N, C = arr_np.shape
+        return arr_np.reshape([N, C // 1024, 1024])
+
+
+@tvm.testing.fixture
+def transformed_expected_output_np(expected_output_np, output_layout):
+    return transform_numpy(expected_output_np, output_layout)
+
+
+class BaseTestBatchFlatten:

Review Comment:
   Thank you. I have updated the test cases in a new commit.



##########
tests/python/contrib/test_hexagon/test_batch_flatten.py:
##########
@@ -0,0 +1,130 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+import numpy as np
+import pytest
+
+import tvm
+import tvm.testing
+import tvm.topi.hexagon.slice_ops as sl
+from tvm import te, topi
+from tvm.contrib.hexagon.build import HexagonLauncher
+from tvm.topi import testing
+
+from .infrastructure import allocate_hexagon_array
+
+
+def n11c_1024c_1d(n, h, w, c):
+    return [n, h, w, c // 1024, tvm.te.AXIS_SEPARATOR, c % 1024]
+
+
+def nc_1024_1d(n, c):
+    return [n, c // 1024, tvm.te.AXIS_SEPARATOR, c % 1024]
+
+
+def transform_numpy(arr_np, layout):
+    if layout == "nhwc":
+        return arr_np
+    elif layout == "n11c-1024c-1d":
+        N, H, W, C = arr_np.shape
+        return arr_np.reshape([N, H, W, C // 1024, 1024])
+    elif layout == "nc-1d":
+        N, C = arr_np.shape
+        return arr_np.reshape([N, C // 1024, 1024])
+
+
+@tvm.testing.fixture
+def transformed_expected_output_np(expected_output_np, output_layout):
+    return transform_numpy(expected_output_np, output_layout)
+
+
+class BaseTestBatchFlatten:
+    (
+        input_shape,
+        input_layout,
+        output_layout,
+        input_axis_sep,
+        output_axis_sep,
+    ) = tvm.testing.parameters(
+        ((1, 1, 1, 2048), "n11c-1024c-1d", "nc-1d", [4], [2]),
+        ((1, 2, 4, 2048), "n11c-1024c-1d", "nc-1d", [4], [2]),
+        ((1, 8, 8, 1024), "n11c-1024c-1d", "nc-1d", [4], [2]),
+        ((2, 4, 8, 1024), "n11c-1024c-1d", "nc-1d", [4], [2]),
+        ((2, 3, 5, 2048), "n11c-1024c-1d", "nc-1d", [4], [2]),
+    )
+    data_type = tvm.testing.parameter("float16")
+
+
+class TestBatchFlatten(BaseTestBatchFlatten):
+    @tvm.testing.fixture
+    def output_shape(self, input_shape):
+        return input_shape[0], input_shape[1] * input_shape[2] * input_shape[3]
+
+    @tvm.testing.requires_hexagon
+    def test_batch_flatten(
+        self,
+        data_type,
+        input_shape,
+        input_layout,
+        input_axis_sep,
+        output_shape,
+        output_layout,
+        output_axis_sep,
+        hexagon_session,
+    ):
+        target_hexagon = tvm.target.hexagon("v69")
+        target = tvm.target.Target(target_hexagon, host=target_hexagon)
+        A = te.placeholder(input_shape, name="A", dtype=data_type)
+        D = sl.batch_flatten_compute(A)
+        tir_s = sl.batch_flatten_stir_schedule(
+            D,
+            A,
+            nc_1024_1d,
+            n11c_1024c_1d,
+        )
+        func_name = "batch_flatten"
+        with tvm.transform.PassContext(opt_level=3, config={"tir.disable_assert": True}):
+            tir_irm = tvm.lower(tir_s.mod, [A, D], name=func_name)
+            runtime_module = tvm.build(tir_irm, [A, D], target=target, name=func_name)

Review Comment:
   Sure. I've modified `tvm.build` to use `tir_s.mod`.






[GitHub] [tvm] cconvey commented on a diff in pull request #11522: [TOPI] [Hexagon] Batch flatten slice op initial version

Posted by GitBox <gi...@apache.org>.
cconvey commented on code in PR #11522:
URL: https://github.com/apache/tvm/pull/11522#discussion_r906199414


##########
python/tvm/topi/hexagon/slice_ops/batch_flatten.py:
##########
@@ -0,0 +1,77 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""Hexagon slice batch flatten compute and schedule"""
+from tvm import te, tir, topi
+from ..utils import get_layout_transform_fn
+
+
+def batch_flatten_compute(inp: te.Tensor) -> te.Tensor:
+    """Compute for slice batch flatten op for hexagon.
+    This op makes the following assumptions:
+    1. This op is written for a sliced batch flatten operation.
+    2. The input is assumed to be in NHWC layout.
+
+    Parameters
+    ----------
+    Input : te.Tensor
+        Input activations padded for inner dimension size
+    Returns
+    -------
+    Output : te.Tensor
+        Output of applying batch flatten operation on input
+    """
+    return topi.nn.flatten(inp)
+
+
+def batch_flatten_stir_schedule(
+    out: te.Tensor,
+    inp: te.Tensor,
+    out_layout: str,
+    in_layout: str,
+) -> tir.Schedule:
+    """STIR schedule definition for the compute of batch flatten compute.
+    Parameters
+    ----------
+    outputs : te.Tensor
+        The output tensor as returned by a call to batch_flatten_compute
+    input : te.Tensor
+        Input tensor to batch_flatten
+    out_layout: typing.Callable
+        The transformation function definition for the expected output layout
+    in_layout: typing.Callable
+        The transformation function definition for the input layout
+    Returns
+    -------
+    sch : tvm.tir.Schedule
+        The STIR schedule for slice batch flatten compute
+    """
+
+    batch_flatten_func = te.create_prim_func([inp, out])
+    sch = tir.Schedule(batch_flatten_func, debug_mask="all")
+    compute = sch.get_block("compute")

Review Comment:
   I'm a bit suspicious about assuming that there's a block named "compute", as I don't see any promises in the documentation about the name and what it represents.  But making assumptions like this seems somewhat idiomatic within TVM, so IMHO it's okay enough.





[GitHub] [tvm] abhikran-quic commented on a diff in pull request #11522: [TOPI] [Hexagon] Batch flatten slice op initial version

Posted by GitBox <gi...@apache.org>.
abhikran-quic commented on code in PR #11522:
URL: https://github.com/apache/tvm/pull/11522#discussion_r906249824


##########
python/tvm/topi/hexagon/slice_ops/batch_flatten.py:
##########
@@ -0,0 +1,77 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""Hexagon slice batch flatten compute and schedule"""
+from tvm import te, tir, topi
+from ..utils import get_layout_transform_fn
+
+
+def batch_flatten_compute(inp: te.Tensor) -> te.Tensor:
+    """Compute for slice batch flatten op for hexagon.
+    This op makes the following assumptions:
+    1. This op is written for a sliced batch flatten operation.
+    2. The input is assumed to be in NHWC layout.
+
+    Parameters
+    ----------
+    Input : te.Tensor
+        Input activations padded for inner dimension size
+    Returns
+    -------
+    Output : te.Tensor
+        Output of applying batch flatten operation on input
+    """
+    return topi.nn.flatten(inp)
+
+
+def batch_flatten_stir_schedule(
+    out: te.Tensor,
+    inp: te.Tensor,
+    out_layout: str,
+    in_layout: str,
+) -> tir.Schedule:
+    """STIR schedule definition for the compute of batch flatten compute.
+    Parameters
+    ----------
+    outputs : te.Tensor
+        The output tensor as returned by a call to batch_flatten_compute
+    input : te.Tensor
+        Input tensor to batch_flatten
+    out_layout: typing.Callable
+        The transformation function definition for the expected output layout
+    in_layout: typing.Callable
+        The transformation function definition for the input layout
+    Returns
+    -------
+    sch : tvm.tir.Schedule
+        The STIR schedule for slice batch flatten compute
+    """
+
+    batch_flatten_func = te.create_prim_func([inp, out])
+    sch = tir.Schedule(batch_flatten_func, debug_mask="all")
+    compute = sch.get_block("compute")

Review Comment:
   Since I'm reusing the `batch_flatten` compute from `topi`, the block name `compute` comes up. Sharing the schedule below:
   
   ```
   @main = primfn(var_A: handle, var_compute: handle) -> ()
     attr = {"global_symbol": "main", "tir.noalias": True}
     buffers = {A: Buffer(A_1: Pointer(global float16), float16, [1, 1, 1, 2, 1024], [], axis_separators=[4]),
                compute: Buffer(compute_1: Pointer(global float16), float16, [1, 2, 1024], [], axis_separators=[2])}
     buffer_map = {var_A: A, var_compute: compute} {
     block([], "root") {
       tir.reads([])
       tir.writes([])
       for (i0: int32, 0, 1) {
         for (i1_0_0: int32, 0, 1) {
           for (i1_0_1: int32, 0, 1) {
             for (i1_1_0: int32, 0, 2) {
               for (i1_1_1_0: int32, 0, 16) {
                 for (i1_1_1_1: int32, 0, 64) "vectorized" {
                   block([1, 2048], "compute") as [i, j] {
                     bind(i, 0)
                     bind(j, (((i1_1_0*1024) + (i1_1_1_0*64)) + i1_1_1_1))
                     tir.reads([A[0, 0, 0, (j / 1024), (j % 1024)]])
                     tir.writes([compute[0, (j / 1024), (j % 1024)]])
                     compute[0, (j / 1024), (j % 1024)] = A[0, 0, 0, (j / 1024), (j % 1024)]
                   }
                 }
               }
             }
           }
         }
       }
     }
     }
   ```
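
   For reference, a minimal sketch of why the block ends up named `compute`, plus scheduling steps that could reproduce the loop nest above. The split factors are assumptions read off the dump, not necessarily the PR's actual schedule code:

   ```python
   import tvm
   from tvm import te, tir, topi

   a = te.placeholder((1, 1, 1, 2048), dtype="float16", name="A")
   out = topi.nn.flatten(a)  # topi's te.compute uses the default op name "compute"
   func = te.create_prim_func([a, out])  # the TIR block inherits that name
   sch = tir.Schedule(func)
   block = sch.get_block("compute")  # hence get_block("compute") in the PR

   # Hypothetical steps matching the dump: split the flattened 2048-wide
   # axis into 1024-element chunks, then vectorize the innermost 64 lanes.
   i, j = sch.get_loops(block)
   jo, ji = sch.split(j, factors=[None, 1024])
   jio, jii = sch.split(ji, factors=[None, 64])
   sch.vectorize(jii)
   print(sch.mod.script())  # inspect the resulting TIR
   ```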





[GitHub] [tvm] abhikran-quic commented on pull request #11522: [TOPI] [Hexagon] Batch flatten slice op initial version

Posted by GitBox <gi...@apache.org>.
abhikran-quic commented on PR #11522:
URL: https://github.com/apache/tvm/pull/11522#issuecomment-1167228810

   @Lunderberg, @mehrdadh, @cconvey: Could you please merge this PR? I've fixed the merge conflicts.




[GitHub] [tvm] cconvey commented on a diff in pull request #11522: [TOPI] [Hexagon] Batch flatten slice op initial version

Posted by GitBox <gi...@apache.org>.
cconvey commented on code in PR #11522:
URL: https://github.com/apache/tvm/pull/11522#discussion_r907521093


##########
python/tvm/topi/hexagon/slice_ops/batch_flatten.py:
##########
@@ -0,0 +1,77 @@
+    batch_flatten_func = te.create_prim_func([inp, out])
+    sch = tir.Schedule(batch_flatten_func, debug_mask="all")
+    compute = sch.get_block("compute")

Review Comment:
   :+1: 





[GitHub] [tvm] abhikran-quic commented on pull request #11522: [TOPI] [Hexagon] Batch flatten slice op initial version

Posted by GitBox <gi...@apache.org>.
abhikran-quic commented on PR #11522:
URL: https://github.com/apache/tvm/pull/11522#issuecomment-1170857126

   > @abhikran-quic please resolve the conflict by rebasing with `main` and pushing into this branch.
   
   @mehrdadh: I have resolved the merge conflicts. Could you please review this?




[GitHub] [tvm] cconvey commented on pull request #11522: [TOPI] [Hexagon] Batch flatten slice op initial version

Posted by GitBox <gi...@apache.org>.
cconvey commented on PR #11522:
URL: https://github.com/apache/tvm/pull/11522#issuecomment-1164450867

   > Can you please review this PR? It's ready to be merged from my side.
   
   Reviewing now.




[GitHub] [tvm] jverma-quic commented on a diff in pull request #11522: [TOPI] [Hexagon] Batch flatten slice op initial version

Posted by GitBox <gi...@apache.org>.
jverma-quic commented on code in PR #11522:
URL: https://github.com/apache/tvm/pull/11522#discussion_r905303666


##########
tests/python/contrib/test_hexagon/topi/test_batch_flatten.py:
##########
@@ -0,0 +1,101 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+import numpy as np
+import pytest
+
+import tvm
+import tvm.testing
+import tvm.topi.hexagon.slice_ops as sl
+from tvm import te, topi
+from tvm.contrib.hexagon.build import HexagonLauncher
+from tvm.topi import testing
+
+from ..infrastructure import allocate_hexagon_array, transform_numpy
+
+
+class BaseTestBatchFlatten:
+    input_shape = tvm.testing.parameter(
+        (1, 1, 1, 2048),
+        (1, 2, 4, 2048),
+        (1, 8, 8, 1024),
+        (2, 4, 8, 1024),
+        (2, 3, 5, 2048),
+    )
+    input_layout, input_axis_sep = tvm.testing.parameters(("nhwc-1024c-1d", [4]))
+    output_layout, output_axis_sep = tvm.testing.parameters(("nc-1d", [2]))

Review Comment:
   Please change "1d" to "2d" in the layout string.



##########
tests/python/contrib/test_hexagon/infrastructure.py:
##########
@@ -245,6 +245,12 @@ def transform_numpy(arr_np, current_layout: str, new_layout: str):
             n, h, w, c = arr_np.shape
             assert h == 1 and w == 1, "The size of h and w must be 1"
             return arr_np.reshape([n, 1, 1, c // 1024, 1024])
+        if new_layout == "nc-1d":
+            N, C = arr_np.shape

Review Comment:
   Same here. Should be nc_1024c_2d. 
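
   To illustrate, a small NumPy sketch of the reshape such a transform implies (hypothetical helper name; assumes C is a multiple of 1024):

   ```python
   import numpy as np

   def transform_nc_1024c_2d(arr_np):
       # [N, C] -> [N, C // 1024, 1024], the blocked layout the review asks for
       n, c = arr_np.shape
       return arr_np.reshape([n, c // 1024, 1024])

   x = np.zeros((1, 2048), dtype="float16")
   assert transform_nc_1024c_2d(x).shape == (1, 2, 1024)
   ```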


