Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2022/06/01 14:51:21 UTC

[GitHub] [tvm] lhutton1 commented on a diff in pull request #11453: [microNPU] Add transform matrices and part matcher to identity op

lhutton1 commented on code in PR #11453:
URL: https://github.com/apache/tvm/pull/11453#discussion_r886898788


##########
python/tvm/relay/backend/contrib/ethosu/te/identity.py:
##########
@@ -76,7 +78,85 @@ def identity_compute(
         name="ethosu_identity",
         attrs=id_attrs,
     )
+    length = len(ifm.shape)
+    ifm_matrix = np.identity(length + 1)
+    offset = np.zeros(length, dtype="int64")
+    ifm_propagator = Propagator(
+        ifm_matrix,
+        offset.tolist(),
+    )
+    propagator_attrs = {
+        "ifm_propagator": ifm_propagator,
+    }
+    return write_compute(identity, ofm_zero_point, ofm_scale, attrs=propagator_attrs)
+
+
+@register_matcher
+def match_ethosu_identity(output_tensor, device_config):
+    """Match a Tensor Expression corresponding to an NPU identity.
 
-    dmaed_ofm = write_compute(identity, ofm_zero_point, ofm_scale)
+    If the Tensor Expression matches, an EthosuPart will be created that models the
+    matched Tensor Expression. Otherwise, None will be returned.
 
-    return dmaed_ofm
+    Parameters
+    ----------
+    output_tensor : tvm.te.Tensor
+        The tensor to attempt to match with.
+    device_config : EthosuDeviceConfig
+        Target device configuration
+
+    Returns
+    -------
+    Union[None, EthosuPart]
+        The created EthosuPart if there was a match, otherwise None.
+    """
+    write = output_tensor
+    if write.op.name != "ethosu_write":
+        return None
+    identity = write.op.input_tensors[0]
+    if identity.op.name != "ethosu_identity":
+        return None
+    read = identity.op.input_tensors[0]
+    if read.op.name != "ethosu_read":
+        return None
+
+    input_tensors = [
+        read.op.input_tensors[0],
+    ]
+    subgraph = TESubgraph(input_tensors, output_tensor)
+    propagators = [
+        write.op.attrs["ifm_propagator"],
+    ]
+    ifm_dtype = input_tensors[0].dtype
+    ofm_dtype = output_tensor.dtype
+
+    input_tensors_shape = input_tensors[0].shape
+    length = len(input_tensors_shape)
+    channels = int(input_tensors_shape[length - 1]) if length >= 3 else 1
+
+    subkernels = len(device_config.get_kernel_steps(identity.op.name, 1, 1, ifm_dtype))
+
+    input_layout = output_layout = "NHWC"
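
For context, the propagator added in the hunk above is a pure identity transform:
for an N-dimensional IFM it builds an (N + 1) x (N + 1) identity matrix with a zero
offset, so the read region required from the input is exactly the region written to
the output. A minimal sketch of what this produces for a rank-4 (e.g. NHWC) input,
for illustration only and not part of the patch:

    # Illustration only (not from the patch): the transform matrix and offset
    # that identity_compute builds for a rank-4 (e.g. NHWC) input.
    import numpy as np

    length = 4                                # len(ifm.shape) for an NHWC tensor
    ifm_matrix = np.identity(length + 1)      # 5x5 identity transform
    offset = np.zeros(length, dtype="int64")  # zero offset in every dimension

    print(ifm_matrix.astype(int))
    # [[1 0 0 0 0]
    #  [0 1 0 0 0]
    #  [0 0 1 0 0]
    #  [0 0 0 1 0]
    #  [0 0 0 0 1]]
    print(offset.tolist())
    # [0, 0, 0, 0]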

Review Comment:
   Ah, apologies, I should have said that a message would have been helpful alongside the assert as well. Let's take it in a follow-up :)
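
The assert being referred to is not shown in the quoted hunk; as a purely
hypothetical sketch of the suggestion (an assert with a message attached), using
names that appear in the matcher above but a condition that is illustrative only:

    # Hypothetical sketch of "a message alongside the assert"; the condition is
    # illustrative and not taken from the patch.
    assert ifm_dtype == ofm_dtype, (
        f"Expected matching IFM/OFM dtypes, got {ifm_dtype} and {ofm_dtype}"
    )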


