Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2022/04/29 14:21:25 UTC

[GitHub] [tvm] SebastianBoblestETAS opened a new pull request, #11183: Add unidirectional sequence lstm

SebastianBoblestETAS opened a new pull request, #11183:
URL: https://github.com/apache/tvm/pull/11183

   This work has mostly been done by @vdkhoi (Khoi Duy Vo) from ETAS GmbH.
   We add parser support for UnidirectionalSequenceLSTM layers in TFLite.
   
   A question regarding the test: 
   At the moment it uses a toy model that I store in a repo in my GitHub account.
   Should we copy it into the TVM repo, or what is the best way to handle this?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [tvm] SebastianBoblestETAS commented on pull request #11183: Add unidirectional sequence lstm

Posted by GitBox <gi...@apache.org>.
SebastianBoblestETAS commented on PR #11183:
URL: https://github.com/apache/tvm/pull/11183#issuecomment-1137323185

   @AndrewZhaoLuo I updated the comment in the function and made it more precise.
   The most important difference between the two "unbind" versions is that the ONNX version takes an exp.Call object, while the TFLite one takes a tvm.relay.frontend.tflite.TensorWrapper, so inferring the shape also works differently.
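   To make the semantics of the helper concrete: independent of which wrapper type it
   receives, "unbind" splits a tensor along one axis and drops that axis from each
   slice. A minimal NumPy sketch of that behavior (the real frontend versions operate
   on Relay expressions, not arrays; this is only an illustration):

   ```python
   import numpy as np

   def unbind(data, axis=1):
       """Split `data` into a list of slices along `axis`, squeezing that
       axis away. Mirrors the semantics of the frontend's unbind helper."""
       num = data.shape[axis]
       return [np.squeeze(s, axis=axis) for s in np.split(data, num, axis=axis)]

   # A (batch=2, time=3, features=4) input yields 3 slices of shape (2, 4),
   # i.e. one tensor per time step, which is how the LSTM loop consumes them.
   x = np.arange(24).reshape(2, 3, 4)
   steps = unbind(x, axis=1)
   ```

   This is why the converter calls `self.unbind(input_tensor, axis=1)` to obtain
   `X_steps`: axis 1 is the time axis of the (batch, time, features) input.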




[GitHub] [tvm] huajsj commented on a diff in pull request #11183: Add unidirectional sequence lstm

Posted by GitBox <gi...@apache.org>.
huajsj commented on code in PR #11183:
URL: https://github.com/apache/tvm/pull/11183#discussion_r867579364


##########
python/tvm/relay/frontend/tflite.py:
##########
@@ -2710,6 +2743,145 @@ def convert_unpack(self, op):
 
         return squeezed
 
+    def convert_unidirectional_sequence_lstm(self, op):
+        """Long Short Term Memory for TFLite implementation."""
+        if self.is_quantized(op):
+            raise tvm.error.OpNotImplemented(
+                "TFlite quantized UNIDIRECTIONALSEQUENCELSTM operator is not supported yet."
+            )
+
+        input_tensors = self.get_input_tensors(op)
+        assert len(input_tensors) >= 20, "input tensors length should be >= 20"
+
+        # Extract input tensor from saved model
+        input_tensor = input_tensors[0]
+
+        # Extract tensors from input tensors from saved model
+        # Input weights
+        input_input_weights = input_tensors[1]
+        input_forget_weights = input_tensors[2]
+        input_cell_weights = input_tensors[3]
+        input_output_weights = input_tensors[4]
+        # Recurrent weights
+        recurrent_input_weights = input_tensors[5]
+        recurrent_forget_weights = input_tensors[6]
+        recurrent_cell_weights = input_tensors[7]
+        recurrent_output_weights = input_tensors[8]
+        # Bias weights
+        input_gate_bias = input_tensors[12]
+        forget_gate_bias = input_tensors[13]
+        cell_gate_bias = input_tensors[14]
+        output_gate_bias = input_tensors[15]
+        # State input
+        output_state_in = input_tensors[18]
+        cell_state_in = input_tensors[19]

Review Comment:
   @vdkhoi, thanks for the follow-up. When I run your unit test, len(input_tensors) is 24 instead of 20, which seems to conflict with what your comment mentions. Could you double-check?





[GitHub] [tvm] SebastianBoblestETAS commented on a diff in pull request #11183: Add unidirectional sequence lstm

Posted by GitBox <gi...@apache.org>.
SebastianBoblestETAS commented on code in PR #11183:
URL: https://github.com/apache/tvm/pull/11183#discussion_r865616692


##########
tests/python/frontend/tflite/test_forward.py:
##########
@@ -4572,6 +4572,36 @@ def test_forward_tflite_float16():
     tvm.testing.assert_allclose(tvm_sorted_labels, tflite_sorted_labels)
 
 
+#######################################################################
+# Unidirectional Sequence LSTM
+# ---------------------
+def test_unidirectional_sequence_lstm():

Review Comment:
   Thanks for pointing this out.
   I added the test to the main function and renamed it to test_forward_unidirectional_sequence_lstm.





[GitHub] [tvm] vdkhoi commented on a diff in pull request #11183: Add unidirectional sequence lstm

Posted by GitBox <gi...@apache.org>.
vdkhoi commented on code in PR #11183:
URL: https://github.com/apache/tvm/pull/11183#discussion_r867488650


##########
python/tvm/relay/frontend/tflite.py:
##########
@@ -2710,6 +2743,145 @@ def convert_unpack(self, op):
 
         return squeezed
 
+    def convert_unidirectional_sequence_lstm(self, op):
+        """Long Short Term Memory for TFLite implementation."""
+        if self.is_quantized(op):
+            raise tvm.error.OpNotImplemented(
+                "TFlite quantized UNIDIRECTIONALSEQUENCELSTM operator is not supported yet."
+            )
+
+        input_tensors = self.get_input_tensors(op)
+        assert len(input_tensors) >= 20, "input tensors length should be >= 20"
+
+        # Extract input tensor from saved model
+        input_tensor = input_tensors[0]
+
+        # Extract tensors from input tensors from saved model
+        # Input weights
+        input_input_weights = input_tensors[1]
+        input_forget_weights = input_tensors[2]
+        input_cell_weights = input_tensors[3]
+        input_output_weights = input_tensors[4]
+        # Recurrent weights
+        recurrent_input_weights = input_tensors[5]
+        recurrent_forget_weights = input_tensors[6]
+        recurrent_cell_weights = input_tensors[7]
+        recurrent_output_weights = input_tensors[8]
+        # Bias weights
+        input_gate_bias = input_tensors[12]
+        forget_gate_bias = input_tensors[13]
+        cell_gate_bias = input_tensors[14]
+        output_gate_bias = input_tensors[15]
+        # State input
+        output_state_in = input_tensors[18]
+        cell_state_in = input_tensors[19]
+
+        # Extract output tensor from saved model
+        output_tensors = self.get_output_tensors(op)
+        assert len(output_tensors) == 1, "output tensors length should be 1"
+        X_steps = self.unbind(input_tensor, axis=1)
+        weights_dict = {}
+
+        # hidden_state_weights is equivalent to output_state_in in tflite model
+        out_state_in_shape = tuple(self.get_tensor_shape(output_state_in))
+        out_state_in_dtype = self.get_tensor_type_str(output_state_in.tensor.Type())
+        out_state_in_expr = _op.zeros(out_state_in_shape, dtype=out_state_in_dtype)
+        weights_dict["hidden_state"] = _op.split(out_state_in_expr, 1)[0]

Review Comment:
   Currently, TFLite provides 116 operators, while Relay IR supports just over 100 of them for TFLite, so we think it is better to support this one directly in Relay IR.





[GitHub] [tvm] huajsj commented on a diff in pull request #11183: Add unidirectional sequence lstm

Posted by GitBox <gi...@apache.org>.
huajsj commented on code in PR #11183:
URL: https://github.com/apache/tvm/pull/11183#discussion_r867314136


##########
python/tvm/relay/frontend/tflite.py:
##########
@@ -2710,6 +2743,145 @@ def convert_unpack(self, op):
 
         return squeezed
 
+    def convert_unidirectional_sequence_lstm(self, op):
+        """Long Short Term Memory for TFLite implementation."""
+        if self.is_quantized(op):
+            raise tvm.error.OpNotImplemented(
+                "TFlite quantized UNIDIRECTIONALSEQUENCELSTM operator is not supported yet."
+            )
+
+        input_tensors = self.get_input_tensors(op)
+        assert len(input_tensors) >= 20, "input tensors length should be >= 20"
+
+        # Extract input tensor from saved model
+        input_tensor = input_tensors[0]
+
+        # Extract tensors from input tensors from saved model
+        # Input weights
+        input_input_weights = input_tensors[1]
+        input_forget_weights = input_tensors[2]
+        input_cell_weights = input_tensors[3]
+        input_output_weights = input_tensors[4]
+        # Recurrent weights
+        recurrent_input_weights = input_tensors[5]
+        recurrent_forget_weights = input_tensors[6]
+        recurrent_cell_weights = input_tensors[7]
+        recurrent_output_weights = input_tensors[8]
+        # Bias weights
+        input_gate_bias = input_tensors[12]
+        forget_gate_bias = input_tensors[13]
+        cell_gate_bias = input_tensors[14]
+        output_gate_bias = input_tensors[15]
+        # State input
+        output_state_in = input_tensors[18]
+        cell_state_in = input_tensors[19]
+
+        # Extract output tensor from saved model
+        output_tensors = self.get_output_tensors(op)
+        assert len(output_tensors) == 1, "output tensors length should be 1"
+        X_steps = self.unbind(input_tensor, axis=1)
+        weights_dict = {}
+
+        # hidden_state_weights is equivalent to output_state_in in tflite model
+        out_state_in_shape = tuple(self.get_tensor_shape(output_state_in))
+        out_state_in_dtype = self.get_tensor_type_str(output_state_in.tensor.Type())
+        out_state_in_expr = _op.zeros(out_state_in_shape, dtype=out_state_in_dtype)
+        weights_dict["hidden_state"] = _op.split(out_state_in_expr, 1)[0]

Review Comment:
   Would it be better to implement "unidirectional_sequence_lstm" in 'topi' instead of unrolling the LSTM at the Relay IR layer, like "convert_strided_slice" does?
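   For readers following this thread, "unrolling" here means emitting one cell
   computation per time step in the graph rather than a single fused operator. A
   minimal NumPy sketch of an unrolled single-layer LSTM, using the same
   [input, forget, cell, output] gate order that the converter concatenates into
   w_inp (all names and shapes are illustrative, not the frontend's actual API):

   ```python
   import numpy as np

   def sigmoid(x):
       return 1.0 / (1.0 + np.exp(-x))

   def lstm_unrolled(x, w_inp, w_hid, bias, hidden_size):
       """Unroll an LSTM over the time axis of x (batch, time, features).
       w_inp: (4*hidden, features), w_hid: (4*hidden, hidden), bias: (4*hidden,).
       Each loop iteration corresponds to one lstm_cell call in the graph."""
       batch, time, _ = x.shape
       h = np.zeros((batch, hidden_size))  # output_state_in starts as zeros
       c = np.zeros((batch, hidden_size))  # cell_state_in starts as zeros
       outputs = []
       for t in range(time):
           # All four gates in one matmul, then split: [i, f, g, o]
           gates = x[:, t, :] @ w_inp.T + h @ w_hid.T + bias
           i, f, g, o = np.split(gates, 4, axis=1)
           c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
           h = sigmoid(o) * np.tanh(c)
           outputs.append(h)
       return np.stack(outputs, axis=1)  # (batch, time, hidden)
   ```

   A topi implementation would fuse this loop into one kernel; the Relay-level
   unrolling trades graph size for reuse of the existing lstm_cell helper.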





[GitHub] [tvm] huajsj commented on a diff in pull request #11183: Add unidirectional sequence lstm

Posted by GitBox <gi...@apache.org>.
huajsj commented on code in PR #11183:
URL: https://github.com/apache/tvm/pull/11183#discussion_r868344601


##########
python/tvm/relay/frontend/tflite.py:
##########
@@ -2710,6 +2743,145 @@ def convert_unpack(self, op):
 
         return squeezed
 
+    def convert_unidirectional_sequence_lstm(self, op):
+        """Long Short Term Memory for TFLite implementation."""
+        if self.is_quantized(op):
+            raise tvm.error.OpNotImplemented(
+                "TFlite quantized UNIDIRECTIONALSEQUENCELSTM operator is not supported yet."
+            )
+
+        input_tensors = self.get_input_tensors(op)
+        assert len(input_tensors) >= 20, "input tensors length should be >= 20"
+
+        # Extract input tensor from saved model
+        input_tensor = input_tensors[0]
+
+        # Extract tensors from input tensors from saved model
+        # Input weights
+        input_input_weights = input_tensors[1]
+        input_forget_weights = input_tensors[2]
+        input_cell_weights = input_tensors[3]
+        input_output_weights = input_tensors[4]
+        # Recurrent weights
+        recurrent_input_weights = input_tensors[5]
+        recurrent_forget_weights = input_tensors[6]
+        recurrent_cell_weights = input_tensors[7]
+        recurrent_output_weights = input_tensors[8]
+        # Bias weights
+        input_gate_bias = input_tensors[12]
+        forget_gate_bias = input_tensors[13]
+        cell_gate_bias = input_tensors[14]
+        output_gate_bias = input_tensors[15]
+        # State input
+        output_state_in = input_tensors[18]
+        cell_state_in = input_tensors[19]
+
+        # Extract output tensor from saved model
+        output_tensors = self.get_output_tensors(op)
+        assert len(output_tensors) == 1, "output tensors length should be 1"
+        X_steps = self.unbind(input_tensor, axis=1)
+        weights_dict = {}
+
+        # hidden_state_weights is equivalent to output_state_in in tflite model
+        out_state_in_shape = tuple(self.get_tensor_shape(output_state_in))
+        out_state_in_dtype = self.get_tensor_type_str(output_state_in.tensor.Type())
+        out_state_in_expr = _op.zeros(out_state_in_shape, dtype=out_state_in_dtype)
+        weights_dict["hidden_state"] = _op.split(out_state_in_expr, 1)[0]

Review Comment:
   this make sense, thanks for the follow up.





[GitHub] [tvm] UlrikHjort-Bosch commented on a diff in pull request #11183: Add unidirectional sequence lstm

Posted by GitBox <gi...@apache.org>.
UlrikHjort-Bosch commented on code in PR #11183:
URL: https://github.com/apache/tvm/pull/11183#discussion_r862685991


##########
python/tvm/relay/frontend/tflite.py:
##########
@@ -2710,6 +2743,142 @@ def convert_unpack(self, op):
 
         return squeezed
 
+    def convert_unidirectional_sequence_lstm(self, op):
+        """Long Short Term Memory for TFLite implementation."""
+        if self.is_quantized(op):
+            raise tvm.error.OpNotImplemented(
+                "TFlite quantized UNIDIRECTIONALSEQUENCELSTM operator is not supported yet."
+            )
+
+        input_tensors = self.get_input_tensors(op)
+        assert len(input_tensors) >= 2, "input tensors length should be >= 2"
+
+        # Extract input tensor from saved model
+        input_tensor = input_tensors[0]
+
+        # Extract tensors from input tensors from saved model
+        # Input weights
+        input_input_weights = input_tensors[1]
+        input_forget_weights = input_tensors[2]
+        input_cell_weights = input_tensors[3]
+        input_output_weights = input_tensors[4]
+        # Recurrent weights
+        recurrent_input_weights = input_tensors[5]
+        recurrent_forget_weights = input_tensors[6]
+        recurrent_cell_weights = input_tensors[7]
+        recurrent_output_weights = input_tensors[8]
+        # Bias weights
+        input_gate_bias = input_tensors[12]
+        forget_gate_bias = input_tensors[13]
+        cell_gate_bias = input_tensors[14]
+        output_gate_bias = input_tensors[15]
+        # State input
+        output_state_in = input_tensors[18]
+        cell_state_in = input_tensors[19]
+
+        # Extract output tensor from saved model
+        output_tensors = self.get_output_tensors(op)
+        assert len(output_tensors) == 1, "output tensors length should be 1"
+        X_steps = self.unbind(input_tensor, axis=1)
+        weights_dict = {}
+
+        # hidden_state_weights is equivalent to output_state_in in tflite model
+        out_state_in_shape = tuple(self.get_tensor_shape(output_state_in))
+        out_state_in_dtype = self.get_tensor_type_str(output_state_in.tensor.Type())
+        out_state_in_expr = _op.zeros(out_state_in_shape, dtype=out_state_in_dtype)
+        weights_dict["hidden_state"] = _op.split(out_state_in_expr, 1)[0]
+
+        # cell_state_weights is equivalent to output_state_in tflite model
+        cell_state_in_shape = tuple(self.get_tensor_shape(cell_state_in))
+        cell_state_in_dtype = self.get_tensor_type_str(cell_state_in.tensor.Type())
+        cell_state_in_expr = _op.zeros(cell_state_in_shape, dtype=cell_state_in_dtype)
+        weights_dict["cell_state"] = _op.split(cell_state_in_expr, 1)[0]
+
+        # Process weight matrix of input: w_inp
+        # Concatenate of [input_input_weight, input_forget_weights, input_cell_weights, input_output_weights]
+        input_input_weights_default_values = self.get_tensor_value(input_input_weights)
+        input_input_weights_op = _op.split(
+            _op.const(input_input_weights_default_values.tolist()), 1
+        )
+        input_output_weights_default_values = self.get_tensor_value(input_output_weights)
+        input_output_weights_op = _op.split(
+            _op.const(input_output_weights_default_values.tolist()), 1
+        )
+        input_forget_weights_default_values = self.get_tensor_value(input_forget_weights)
+        input_forget_weights_op = _op.split(
+            _op.const(input_forget_weights_default_values.tolist()), 1
+        )
+        input_cell_weights_default_values = self.get_tensor_value(input_cell_weights)
+        input_cell_weights_op = _op.split(_op.const(input_cell_weights_default_values.tolist()), 1)
+        weights_dict["w_inp"] = _op.concatenate(
+            [
+                _op.squeeze(input_input_weights_op[0]),
+                _op.squeeze(input_forget_weights_op[0]),
+                _op.squeeze(input_cell_weights_op[0]),
+                _op.squeeze(input_output_weights_op[0]),
+            ],
+            axis=0,
+        )
+
+        # Process weight matrix of hidden state: w_hid to support lstm_cell function. Not used in tflite

Review Comment:
   Line too long (104 chars; the limit is 100).





[GitHub] [tvm] SebastianBoblestETAS commented on pull request #11183: Add unidirectional sequence lstm

Posted by GitBox <gi...@apache.org>.
SebastianBoblestETAS commented on PR #11183:
URL: https://github.com/apache/tvm/pull/11183#issuecomment-1130198586

   @mbrookhart @jwfromm @Huyuwei @hlu1 @AndrewZhaoLuo @kazum @siju-samuel @srkreddy1238 @FrozenGene 
   Hi all,
   could one of you help us with this PR?
   Sorry to bother all of you; I just looked in CONTRIBUTORS.md for all committers who are familiar with frontends.
   Thanks in advance!
   




[GitHub] [tvm] AndrewZhaoLuo merged pull request #11183: Add unidirectional sequence lstm

Posted by GitBox <gi...@apache.org>.
AndrewZhaoLuo merged PR #11183:
URL: https://github.com/apache/tvm/pull/11183




[GitHub] [tvm] Mousius commented on pull request #11183: Add unidirectional sequence lstm

Posted by GitBox <gi...@apache.org>.
Mousius commented on PR #11183:
URL: https://github.com/apache/tvm/pull/11183#issuecomment-1118763954

   @SebastianBoblestETAS I think this is affecting more than just this PR. I've raised #11220 to track it, please stand by 😸 




[GitHub] [tvm] huajsj commented on a diff in pull request #11183: Add unidirectional sequence lstm

Posted by GitBox <gi...@apache.org>.
huajsj commented on code in PR #11183:
URL: https://github.com/apache/tvm/pull/11183#discussion_r867592533


##########
python/tvm/relay/frontend/tflite.py:
##########
@@ -2710,6 +2743,145 @@ def convert_unpack(self, op):
 
         return squeezed
 
+    def convert_unidirectional_sequence_lstm(self, op):
+        """Long Short Term Memory for TFLite implementation."""
+        if self.is_quantized(op):
+            raise tvm.error.OpNotImplemented(
+                "TFlite quantized UNIDIRECTIONALSEQUENCELSTM operator is not supported yet."
+            )
+
+        input_tensors = self.get_input_tensors(op)
+        assert len(input_tensors) >= 20, "input tensors length should be >= 20"
+
+        # Extract input tensor from saved model
+        input_tensor = input_tensors[0]
+
+        # Extract tensors from input tensors from saved model
+        # Input weights
+        input_input_weights = input_tensors[1]
+        input_forget_weights = input_tensors[2]
+        input_cell_weights = input_tensors[3]
+        input_output_weights = input_tensors[4]
+        # Recurrent weights
+        recurrent_input_weights = input_tensors[5]
+        recurrent_forget_weights = input_tensors[6]
+        recurrent_cell_weights = input_tensors[7]
+        recurrent_output_weights = input_tensors[8]
+        # Bias weights
+        input_gate_bias = input_tensors[12]
+        forget_gate_bias = input_tensors[13]
+        cell_gate_bias = input_tensors[14]
+        output_gate_bias = input_tensors[15]
+        # State input

Review Comment:
   input_tensors [9, 10, 16, 17] are not involved in the computation. Why?
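   For context on why some indices are skipped: to the best of my reading of the
   TFLite operator schema (worth verifying against the schema and kernel sources),
   the unconsumed indices correspond to optional tensors that plain LSTMs do not
   use. A best-effort map of the layout, with the optional entries marked:

   ```python
   # Best-effort map of the TFLite UNIDIRECTIONAL_SEQUENCE_LSTM input layout.
   # Entries marked "optional" may be absent in a given model; verify against
   # the TFLite schema before relying on this for anything but orientation.
   TFLITE_LSTM_INPUTS = {
       0: "input",
       1: "input_to_input_weights", 2: "input_to_forget_weights",
       3: "input_to_cell_weights", 4: "input_to_output_weights",
       5: "recurrent_to_input_weights", 6: "recurrent_to_forget_weights",
       7: "recurrent_to_cell_weights", 8: "recurrent_to_output_weights",
       9: "cell_to_input_weights (peephole, optional)",
       10: "cell_to_forget_weights (peephole, optional)",
       11: "cell_to_output_weights (peephole, optional)",
       12: "input_gate_bias", 13: "forget_gate_bias",
       14: "cell_gate_bias", 15: "output_gate_bias",
       16: "projection_weights (optional)", 17: "projection_bias (optional)",
       18: "output_state_in", 19: "cell_state_in",
       # Indices 20-23: layer normalization coefficients in newer schema
       # versions (optional), which would explain a length of 24.
   }
   ```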





[GitHub] [tvm] huajsj commented on a diff in pull request #11183: Add unidirectional sequence lstm

Posted by GitBox <gi...@apache.org>.
huajsj commented on code in PR #11183:
URL: https://github.com/apache/tvm/pull/11183#discussion_r865210575


##########
python/tvm/relay/frontend/tflite.py:
##########
@@ -2710,6 +2743,145 @@ def convert_unpack(self, op):
 
         return squeezed
 
+    def convert_unidirectional_sequence_lstm(self, op):
+        """Long Short Term Memory for TFLite implementation."""
+        if self.is_quantized(op):
+            raise tvm.error.OpNotImplemented(
+                "TFlite quantized UNIDIRECTIONALSEQUENCELSTM operator is not supported yet."
+            )
+
+        input_tensors = self.get_input_tensors(op)
+        assert len(input_tensors) >= 2, "input tensors length should be >= 2"

Review Comment:
   There are 20 tensors used; should this be
   `assert len(input_tensors) >= 20, "input tensors length should be >= 20"` ?



##########
tests/python/frontend/tflite/test_forward.py:
##########
@@ -4572,6 +4572,36 @@ def test_forward_tflite_float16():
     tvm.testing.assert_allclose(tvm_sorted_labels, tflite_sorted_labels)
 
 
+#######################################################################
+# Unidirectional Sequence LSTM
+# ---------------------
+def test_unidirectional_sequence_lstm():

Review Comment:
   Add this function to the "__main__" section?





[GitHub] [tvm] SebastianBoblestETAS commented on a diff in pull request #11183: Add unidirectional sequence lstm

Posted by GitBox <gi...@apache.org>.
SebastianBoblestETAS commented on code in PR #11183:
URL: https://github.com/apache/tvm/pull/11183#discussion_r865616895


##########
python/tvm/relay/frontend/tflite.py:
##########
@@ -2710,6 +2743,145 @@ def convert_unpack(self, op):
 
         return squeezed
 
+    def convert_unidirectional_sequence_lstm(self, op):
+        """Long Short Term Memory for TFLite implementation."""
+        if self.is_quantized(op):
+            raise tvm.error.OpNotImplemented(
+                "TFlite quantized UNIDIRECTIONALSEQUENCELSTM operator is not supported yet."
+            )
+
+        input_tensors = self.get_input_tensors(op)
+        assert len(input_tensors) >= 2, "input tensors length should be >= 2"

Review Comment:
   Thanks! I corrected this.





[GitHub] [tvm] AndrewZhaoLuo commented on a diff in pull request #11183: Add unidirectional sequence lstm

Posted by GitBox <gi...@apache.org>.
AndrewZhaoLuo commented on code in PR #11183:
URL: https://github.com/apache/tvm/pull/11183#discussion_r880755003


##########
python/tvm/relay/frontend/tflite.py:
##########
@@ -220,6 +221,38 @@ def check_unsupported_ops(self):
         if len(raise_msg) > 0:
             raise tvm.error.OpNotImplemented(raise_msg)
 
+    def unbind(self, data, axis=1):
+        """
+        This is a slightly modified version compared to the one in common.py

Review Comment:
   Can you just call the `common.py` unbind with its axis parameter as 1?






[GitHub] [tvm] huajsj commented on a diff in pull request #11183: Add unidirectional sequence lstm

Posted by GitBox <gi...@apache.org>.
huajsj commented on code in PR #11183:
URL: https://github.com/apache/tvm/pull/11183#discussion_r867319861


##########
python/tvm/relay/frontend/tflite.py:
##########
@@ -2710,6 +2743,145 @@ def convert_unpack(self, op):
 
         return squeezed
 
+    def convert_unidirectional_sequence_lstm(self, op):
+        """Long Short Term Memory for TFLite implementation."""
+        if self.is_quantized(op):
+            raise tvm.error.OpNotImplemented(
+                "TFlite quantized UNIDIRECTIONALSEQUENCELSTM operator is not supported yet."
+            )
+
+        input_tensors = self.get_input_tensors(op)
+        assert len(input_tensors) >= 20, "input tensors length should be >= 20"

Review Comment:
   Can this LSTM function also support smaller input tensor counts, for example an input tensor size of 1?



##########
python/tvm/relay/frontend/tflite.py:
##########
@@ -2710,6 +2743,145 @@ def convert_unpack(self, op):
 
         return squeezed
 
+    def convert_unidirectional_sequence_lstm(self, op):
+        """Long Short Term Memory for TFLite implementation."""
+        if self.is_quantized(op):
+            raise tvm.error.OpNotImplemented(
+                "TFlite quantized UNIDIRECTIONALSEQUENCELSTM operator is not supported yet."
+            )
+
+        input_tensors = self.get_input_tensors(op)
+        assert len(input_tensors) >= 20, "input tensors length should be >= 20"
+
+        # Extract input tensor from saved model
+        input_tensor = input_tensors[0]
+
+        # Extract tensors from input tensors from saved model
+        # Input weights
+        input_input_weights = input_tensors[1]
+        input_forget_weights = input_tensors[2]
+        input_cell_weights = input_tensors[3]
+        input_output_weights = input_tensors[4]
+        # Recurrent weights
+        recurrent_input_weights = input_tensors[5]
+        recurrent_forget_weights = input_tensors[6]
+        recurrent_cell_weights = input_tensors[7]
+        recurrent_output_weights = input_tensors[8]
+        # Bias weights
+        input_gate_bias = input_tensors[12]
+        forget_gate_bias = input_tensors[13]
+        cell_gate_bias = input_tensors[14]
+        output_gate_bias = input_tensors[15]
+        # State input
+        output_state_in = input_tensors[18]
+        cell_state_in = input_tensors[19]

Review Comment:
   If input_tensors size > 20, what will happen?





[GitHub] [tvm] SebastianBoblestETAS commented on a diff in pull request #11183: Add unidirectional sequence lstm

Posted by GitBox <gi...@apache.org>.
SebastianBoblestETAS commented on code in PR #11183:
URL: https://github.com/apache/tvm/pull/11183#discussion_r867767234


##########
python/tvm/relay/frontend/tflite.py:
##########
@@ -2710,6 +2743,145 @@ def convert_unpack(self, op):
 
         return squeezed
 
+    def convert_unidirectional_sequence_lstm(self, op):
+        """Long Short Term Memory for TFLite implementation."""
+        if self.is_quantized(op):
+            raise tvm.error.OpNotImplemented(
+                "TFlite quantized UNIDIRECTIONALSEQUENCELSTM operator is not supported yet."
+            )
+
+        input_tensors = self.get_input_tensors(op)
+        assert len(input_tensors) >= 20, "input tensors length should be >= 20"
+
+        # Extract input tensor from saved model
+        input_tensor = input_tensors[0]
+
+        # Extract tensors from input tensors from saved model
+        # Input weights
+        input_input_weights = input_tensors[1]
+        input_forget_weights = input_tensors[2]
+        input_cell_weights = input_tensors[3]
+        input_output_weights = input_tensors[4]
+        # Recurrent weights
+        recurrent_input_weights = input_tensors[5]
+        recurrent_forget_weights = input_tensors[6]
+        recurrent_cell_weights = input_tensors[7]
+        recurrent_output_weights = input_tensors[8]
+        # Bias weights
+        input_gate_bias = input_tensors[12]
+        forget_gate_bias = input_tensors[13]
+        cell_gate_bias = input_tensors[14]
+        output_gate_bias = input_tensors[15]
+        # State input
+        output_state_in = input_tensors[18]
+        cell_state_in = input_tensors[19]
+
+        # Extract output tensor from saved model
+        output_tensors = self.get_output_tensors(op)
+        assert len(output_tensors) == 1, "output tensors length should be 1"
+        X_steps = self.unbind(input_tensor, axis=1)
+        weights_dict = {}
+
+        # hidden_state_weights is equivalent to output_state_in in tflite model
+        out_state_in_shape = tuple(self.get_tensor_shape(output_state_in))
+        out_state_in_dtype = self.get_tensor_type_str(output_state_in.tensor.Type())
+        out_state_in_expr = _op.zeros(out_state_in_shape, dtype=out_state_in_dtype)
+        weights_dict["hidden_state"] = _op.split(out_state_in_expr, 1)[0]

Review Comment:
   A relay LSTM operator might be valuable. But it should then support all frontends, right? For the moment we decided to use the existing lstm_cell that is also used by the onnx frontend.
   If we have a native LSTM operator in relay later, we can adapt the parsers to use that instead.





[GitHub] [tvm] SebastianBoblestETAS commented on pull request #11183: Add unidirectional sequence lstm

Posted by GitBox <gi...@apache.org>.
SebastianBoblestETAS commented on PR #11183:
URL: https://github.com/apache/tvm/pull/11183#issuecomment-1122023891

   @Mousius Hi, sorry, this is failing again, but in a totally different place now.
   Should we simply wait, or trigger the CI/CD again?




[GitHub] [tvm] vdkhoi commented on pull request #11183: Add unidirectional sequence lstm

Posted by GitBox <gi...@apache.org>.
vdkhoi commented on PR #11183:
URL: https://github.com/apache/tvm/pull/11183#issuecomment-1134594647

   > Ill take a look tomorrow
   
   @AndrewZhaoLuo have you reviewed our PR? We are ready to answer your questions.




[GitHub] [tvm] SebastianBoblestETAS commented on a diff in pull request #11183: Add unidirectional sequence lstm

Posted by GitBox <gi...@apache.org>.
SebastianBoblestETAS commented on code in PR #11183:
URL: https://github.com/apache/tvm/pull/11183#discussion_r880559992


##########
python/tvm/relay/frontend/tflite.py:
##########
@@ -220,6 +221,38 @@ def check_unsupported_ops(self):
         if len(raise_msg) > 0:
             raise tvm.error.OpNotImplemented(raise_msg)
 
+    def unbind(self, data, axis=1):
+        """
+        This is a slightly modified version compared to the one in common.py

Review Comment:
   I added an additional sentence to the docstring. The difference is the location of the timestep index in the shape of data.
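   As a behavioral illustration, the effect of such an `unbind` along the timestep axis can be sketched with NumPy (this is only a semantics sketch; the actual frontend version operates on `TensorWrapper` objects and relay expressions, not NumPy arrays):

```python
import numpy as np

def unbind(data, axis=1):
    # Split `data` into data.shape[axis] slices along `axis`
    # and drop that axis from each slice, yielding one array per timestep.
    return [np.squeeze(s, axis=axis)
            for s in np.split(data, data.shape[axis], axis=axis)]

x = np.zeros((2, 5, 3))    # (batch, timesteps, features)
steps = unbind(x, axis=1)  # 5 arrays of shape (2, 3), one per timestep
```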





[GitHub] [tvm] vdkhoi commented on pull request #11183: Add unidirectional sequence lstm

Posted by GitBox <gi...@apache.org>.
vdkhoi commented on PR #11183:
URL: https://github.com/apache/tvm/pull/11183#issuecomment-1136114364

   @Mousius We just added some comments as requested by @AndrewZhaoLuo, but the tvm-ci failed again. Could you please help us restart it?




[GitHub] [tvm] AndrewZhaoLuo commented on pull request #11183: Add unidirectional sequence lstm

Posted by GitBox <gi...@apache.org>.
AndrewZhaoLuo commented on PR #11183:
URL: https://github.com/apache/tvm/pull/11183#issuecomment-1137576421

   Sometimes you also need to rebase on main to solve flaky CI issues :/ 
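   A toy demonstration of the rebase workflow in a throwaway repo (on the real PR you would fetch apache/tvm as `upstream`, rebase on `upstream/main`, and force-push to your fork; the remote and branch names below are illustrative only):

```shell
# Build a tiny repo where main has moved on since the feature branch forked.
tmp=$(mktemp -d) && cd "$tmp" && git init -q -b main
git config user.email you@example.com && git config user.name you
echo base > file && git add file && git commit -qm "base"
git checkout -q -b feature
echo feat > feat.txt && git add feat.txt && git commit -qm "feature work"
git checkout -q main
echo update >> file && git commit -qam "main moves on"
# Replay the feature commits on top of the new main tip.
git checkout -q feature
git rebase main
# On a real PR you would then run something like:
#   git push --force-with-lease origin my-feature-branch
```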




[GitHub] [tvm] AndrewZhaoLuo commented on pull request #11183: Add unidirectional sequence lstm

Posted by GitBox <gi...@apache.org>.
AndrewZhaoLuo commented on PR #11183:
URL: https://github.com/apache/tvm/pull/11183#issuecomment-1136170765

   @vdkhoi  you can restart CI by pushing an empty commit.
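   For reference, a toy demonstration in a throwaway repo (on the real PR branch only the `git commit --allow-empty` and a push to your fork are needed; the branch name below is illustrative):

```shell
# An empty commit changes nothing in the tree but still triggers CI on push.
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.email you@example.com && git config user.name you
git commit --allow-empty -m "Retrigger CI"
# then, on the real PR: git push origin my-feature-branch
```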




[GitHub] [tvm] huajsj commented on a diff in pull request #11183: Add unidirectional sequence lstm

Posted by GitBox <gi...@apache.org>.
huajsj commented on code in PR #11183:
URL: https://github.com/apache/tvm/pull/11183#discussion_r867314136


##########
python/tvm/relay/frontend/tflite.py:
##########
@@ -2710,6 +2743,145 @@ def convert_unpack(self, op):
 
         return squeezed
 
+    def convert_unidirectional_sequence_lstm(self, op):
+        """Long Short Term Memory for TFLite implementation."""
+        if self.is_quantized(op):
+            raise tvm.error.OpNotImplemented(
+                "TFlite quantized UNIDIRECTIONALSEQUENCELSTM operator is not supported yet."
+            )
+
+        input_tensors = self.get_input_tensors(op)
+        assert len(input_tensors) >= 20, "input tensors length should be >= 20"
+
+        # Extract input tensor from saved model
+        input_tensor = input_tensors[0]
+
+        # Extract tensors from input tensors from saved model
+        # Input weights
+        input_input_weights = input_tensors[1]
+        input_forget_weights = input_tensors[2]
+        input_cell_weights = input_tensors[3]
+        input_output_weights = input_tensors[4]
+        # Recurrent weights
+        recurrent_input_weights = input_tensors[5]
+        recurrent_forget_weights = input_tensors[6]
+        recurrent_cell_weights = input_tensors[7]
+        recurrent_output_weights = input_tensors[8]
+        # Bias weights
+        input_gate_bias = input_tensors[12]
+        forget_gate_bias = input_tensors[13]
+        cell_gate_bias = input_tensors[14]
+        output_gate_bias = input_tensors[15]
+        # State input
+        output_state_in = input_tensors[18]
+        cell_state_in = input_tensors[19]
+
+        # Extract output tensor from saved model
+        output_tensors = self.get_output_tensors(op)
+        assert len(output_tensors) == 1, "output tensors length should be 1"
+        X_steps = self.unbind(input_tensor, axis=1)
+        weights_dict = {}
+
+        # hidden_state_weights is equivalent to output_state_in in tflite model
+        out_state_in_shape = tuple(self.get_tensor_shape(output_state_in))
+        out_state_in_dtype = self.get_tensor_type_str(output_state_in.tensor.Type())
+        out_state_in_expr = _op.zeros(out_state_in_shape, dtype=out_state_in_dtype)
+        weights_dict["hidden_state"] = _op.split(out_state_in_expr, 1)[0]

Review Comment:
   Is there a better way to implement "unidirectional_sequence_lstm" in 'topi' instead of unrolling the LSTM in Relay IR, like what "convert_strided_slice" did?





[GitHub] [tvm] vdkhoi commented on a diff in pull request #11183: Add unidirectional sequence lstm

Posted by GitBox <gi...@apache.org>.
vdkhoi commented on code in PR #11183:
URL: https://github.com/apache/tvm/pull/11183#discussion_r867489280


##########
python/tvm/relay/frontend/tflite.py:
##########
@@ -2710,6 +2743,145 @@ def convert_unpack(self, op):
 
         return squeezed
 
+    def convert_unidirectional_sequence_lstm(self, op):
+        """Long Short Term Memory for TFLite implementation."""
+        if self.is_quantized(op):
+            raise tvm.error.OpNotImplemented(
+                "TFlite quantized UNIDIRECTIONALSEQUENCELSTM operator is not supported yet."
+            )
+
+        input_tensors = self.get_input_tensors(op)
+        assert len(input_tensors) >= 20, "input tensors length should be >= 20"
+
+        # Extract input tensor from saved model
+        input_tensor = input_tensors[0]
+
+        # Extract tensors from input tensors from saved model
+        # Input weights
+        input_input_weights = input_tensors[1]
+        input_forget_weights = input_tensors[2]
+        input_cell_weights = input_tensors[3]
+        input_output_weights = input_tensors[4]
+        # Recurrent weights
+        recurrent_input_weights = input_tensors[5]
+        recurrent_forget_weights = input_tensors[6]
+        recurrent_cell_weights = input_tensors[7]
+        recurrent_output_weights = input_tensors[8]
+        # Bias weights
+        input_gate_bias = input_tensors[12]
+        forget_gate_bias = input_tensors[13]
+        cell_gate_bias = input_tensors[14]
+        output_gate_bias = input_tensors[15]
+        # State input
+        output_state_in = input_tensors[18]
+        cell_state_in = input_tensors[19]

Review Comment:
   From our understanding, the number of input tensors will never be more than 20. The data comes from the tflite model, and in the tflite format the maximum number of tensors (inputs + weights) for a unidirectional sequence LSTM is 20. Of course, that depends on the tflite version. If tflite changes its format in the future, we will have to adjust this assert statement.
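   The fixed slot layout used in the diff above can be summarized like this (a purely illustrative sketch, not part of the PR; the slot numbers and names mirror the parser code quoted in this thread, and `extract_lstm_tensors` is a hypothetical helper):

```python
# Slot positions of the UnidirectionalSequenceLSTM inputs per the tflite
# layout discussed in this thread; slots 9-11, 16-17, and 20-23 were
# unoccupied in the models examined.
LSTM_SLOTS = {
    1: "input_input_weights",
    2: "input_forget_weights",
    3: "input_cell_weights",
    4: "input_output_weights",
    5: "recurrent_input_weights",
    6: "recurrent_forget_weights",
    7: "recurrent_cell_weights",
    8: "recurrent_output_weights",
    12: "input_gate_bias",
    13: "forget_gate_bias",
    14: "cell_gate_bias",
    15: "output_gate_bias",
    18: "output_state_in",
    19: "cell_state_in",
}

def extract_lstm_tensors(input_tensors):
    """Map slot names to tensors, mirroring the assert in the parser."""
    assert len(input_tensors) >= 20, "input tensors length should be >= 20"
    return {name: input_tensors[idx] for idx, name in LSTM_SLOTS.items()}
```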





[GitHub] [tvm] Mousius commented on pull request #11183: Add unidirectional sequence lstm

Posted by GitBox <gi...@apache.org>.
Mousius commented on PR #11183:
URL: https://github.com/apache/tvm/pull/11183#issuecomment-1123388261

   @SebastianBoblestETAS I re-ran the CI yesterday and it looks green, though I don't really know that much about this PR other than the CI issues - maybe @leandron can help get this merged? 😸 




[GitHub] [tvm] SebastianBoblestETAS commented on pull request #11183: Add unidirectional sequence lstm

Posted by GitBox <gi...@apache.org>.
SebastianBoblestETAS commented on PR #11183:
URL: https://github.com/apache/tvm/pull/11183#issuecomment-1139547325

   @AndrewZhaoLuo I just rebased. Let's hope for the best :smile: 




[GitHub] [tvm] SebastianBoblestETAS commented on pull request #11183: Add unidirectional sequence lstm

Posted by GitBox <gi...@apache.org>.
SebastianBoblestETAS commented on PR #11183:
URL: https://github.com/apache/tvm/pull/11183#issuecomment-1124649859

   @leandron Hi, could you maybe help with getting this merged?
   The issue that still should get resolved however is that our test uses a model that I have stored in a repo in my own github profile.
   Other tests seem to use models from public model zoos. But for our case we would prefer a sample model that only has a UnidirectionalSequenceLSTM layer. Should we put it in a data folder in TVM or what would you suggest?




[GitHub] [tvm] vdkhoi commented on pull request #11183: Add unidirectional sequence lstm

Posted by GitBox <gi...@apache.org>.
vdkhoi commented on PR #11183:
URL: https://github.com/apache/tvm/pull/11183#issuecomment-1123260933

   > code LGTM, thanks @SebastianBoblestETAS, @vdkhoi.
   @huajsj: Thank you. Could you suggest how it can be merged, or are we missing something?




[GitHub] [tvm] vdkhoi commented on a diff in pull request #11183: Add unidirectional sequence lstm

Posted by GitBox <gi...@apache.org>.
vdkhoi commented on code in PR #11183:
URL: https://github.com/apache/tvm/pull/11183#discussion_r867490976


##########
python/tvm/relay/frontend/tflite.py:
##########
@@ -2710,6 +2743,145 @@ def convert_unpack(self, op):
 
         return squeezed
 
+    def convert_unidirectional_sequence_lstm(self, op):
+        """Long Short Term Memory for TFLite implementation."""
+        if self.is_quantized(op):
+            raise tvm.error.OpNotImplemented(
+                "TFlite quantized UNIDIRECTIONALSEQUENCELSTM operator is not supported yet."
+            )
+
+        input_tensors = self.get_input_tensors(op)
+        assert len(input_tensors) >= 20, "input tensors length should be >= 20"

Review Comment:
   As I mentioned above, from our understanding of the tflite format, the number of tensors that can be stored in the flatbuffer designed by tflite is 20. If you generate a tflite model without explicitly specifying weights other than the input tensor, the omitted weights will be assigned default values (0) and stored in the model file as inputs. Of course, in this code we just support some recent tflite versions. If you can find models that support different numbers of inputs, along with references from the tflite documentation instructing how to process them, we would deeply thank you and extend our support to that kind of model.






[GitHub] [tvm] UlrikHjort-Bosch commented on a diff in pull request #11183: Add unidirectional sequence lstm

Posted by GitBox <gi...@apache.org>.
UlrikHjort-Bosch commented on code in PR #11183:
URL: https://github.com/apache/tvm/pull/11183#discussion_r862685816


##########
python/tvm/relay/frontend/tflite.py:
##########
@@ -2710,6 +2743,142 @@ def convert_unpack(self, op):
 
         return squeezed
 
+    def convert_unidirectional_sequence_lstm(self, op):
+        """Long Short Term Memory for TFLite implementation."""
+        if self.is_quantized(op):
+            raise tvm.error.OpNotImplemented(
+                "TFlite quantized UNIDIRECTIONALSEQUENCELSTM operator is not supported yet."
+            )
+
+        input_tensors = self.get_input_tensors(op)
+        assert len(input_tensors) >= 2, "input tensors length should be >= 2"
+
+        # Extract input tensor from saved model
+        input_tensor = input_tensors[0]
+
+        # Extract tensors from input tensors from saved model
+        # Input weights
+        input_input_weights = input_tensors[1]
+        input_forget_weights = input_tensors[2]
+        input_cell_weights = input_tensors[3]
+        input_output_weights = input_tensors[4]
+        # Recurrent weights
+        recurrent_input_weights = input_tensors[5]
+        recurrent_forget_weights = input_tensors[6]
+        recurrent_cell_weights = input_tensors[7]
+        recurrent_output_weights = input_tensors[8]
+        # Bias weights
+        input_gate_bias = input_tensors[12]
+        forget_gate_bias = input_tensors[13]
+        cell_gate_bias = input_tensors[14]
+        output_gate_bias = input_tensors[15]
+        # State input
+        output_state_in = input_tensors[18]
+        cell_state_in = input_tensors[19]
+
+        # Extract output tensor from saved model
+        output_tensors = self.get_output_tensors(op)
+        assert len(output_tensors) == 1, "output tensors length should be 1"
+        X_steps = self.unbind(input_tensor, axis=1)
+        weights_dict = {}
+
+        # hidden_state_weights is equivalent to output_state_in in tflite model
+        out_state_in_shape = tuple(self.get_tensor_shape(output_state_in))
+        out_state_in_dtype = self.get_tensor_type_str(output_state_in.tensor.Type())
+        out_state_in_expr = _op.zeros(out_state_in_shape, dtype=out_state_in_dtype)
+        weights_dict["hidden_state"] = _op.split(out_state_in_expr, 1)[0]
+
+        # cell_state_weights is equivalent to output_state_in tflite model
+        cell_state_in_shape = tuple(self.get_tensor_shape(cell_state_in))
+        cell_state_in_dtype = self.get_tensor_type_str(cell_state_in.tensor.Type())
+        cell_state_in_expr = _op.zeros(cell_state_in_shape, dtype=cell_state_in_dtype)
+        weights_dict["cell_state"] = _op.split(cell_state_in_expr, 1)[0]
+
+        # Process weight matrix of input: w_inp
+        # Concatenate of [input_input_weight, input_forget_weights, input_cell_weights, input_output_weights]

Review Comment:
   Line too long (109 chars; the limit is 100)







[GitHub] [tvm] SebastianBoblestETAS commented on a diff in pull request #11183: Add unidirectional sequence lstm

Posted by GitBox <gi...@apache.org>.
SebastianBoblestETAS commented on code in PR #11183:
URL: https://github.com/apache/tvm/pull/11183#discussion_r867774761


##########
python/tvm/relay/frontend/tflite.py:
##########
@@ -2710,6 +2743,145 @@ def convert_unpack(self, op):
 
         return squeezed
 
+    def convert_unidirectional_sequence_lstm(self, op):
+        """Long Short Term Memory for TFLite implementation."""
+        if self.is_quantized(op):
+            raise tvm.error.OpNotImplemented(
+                "TFlite quantized UNIDIRECTIONALSEQUENCELSTM operator is not supported yet."
+            )
+
+        input_tensors = self.get_input_tensors(op)
+        assert len(input_tensors) >= 20, "input tensors length should be >= 20"
+
+        # Extract input tensor from saved model
+        input_tensor = input_tensors[0]
+
+        # Extract tensors from input tensors from saved model
+        # Input weights
+        input_input_weights = input_tensors[1]
+        input_forget_weights = input_tensors[2]
+        input_cell_weights = input_tensors[3]
+        input_output_weights = input_tensors[4]
+        # Recurrent weights
+        recurrent_input_weights = input_tensors[5]
+        recurrent_forget_weights = input_tensors[6]
+        recurrent_cell_weights = input_tensors[7]
+        recurrent_output_weights = input_tensors[8]
+        # Bias weights
+        input_gate_bias = input_tensors[12]
+        forget_gate_bias = input_tensors[13]
+        cell_gate_bias = input_tensors[14]
+        output_gate_bias = input_tensors[15]
+        # State input

Review Comment:
   In all the models we checked, the tensors at locations 9, 10, 11 as well as 16, 17 and 20, 21, 22, 23 are not occupied. (get_input_tensors returns -1 for their locations.)





[GitHub] [tvm] vdkhoi commented on a diff in pull request #11183: Add unidirectional sequence lstm

Posted by GitBox <gi...@apache.org>.
vdkhoi commented on code in PR #11183:
URL: https://github.com/apache/tvm/pull/11183#discussion_r867699105


##########
python/tvm/relay/frontend/tflite.py:
##########
@@ -2710,6 +2743,145 @@ def convert_unpack(self, op):
 
         return squeezed
 
+    def convert_unidirectional_sequence_lstm(self, op):
+        """Long Short Term Memory for TFLite implementation."""
+        if self.is_quantized(op):
+            raise tvm.error.OpNotImplemented(
+                "TFlite quantized UNIDIRECTIONALSEQUENCELSTM operator is not supported yet."
+            )
+
+        input_tensors = self.get_input_tensors(op)
+        assert len(input_tensors) >= 20, "input tensors length should be >= 20"
+
+        # Extract input tensor from saved model
+        input_tensor = input_tensors[0]
+
+        # Extract tensors from input tensors from saved model
+        # Input weights
+        input_input_weights = input_tensors[1]
+        input_forget_weights = input_tensors[2]
+        input_cell_weights = input_tensors[3]
+        input_output_weights = input_tensors[4]
+        # Recurrent weights
+        recurrent_input_weights = input_tensors[5]
+        recurrent_forget_weights = input_tensors[6]
+        recurrent_cell_weights = input_tensors[7]
+        recurrent_output_weights = input_tensors[8]
+        # Bias weights
+        input_gate_bias = input_tensors[12]
+        forget_gate_bias = input_tensors[13]
+        cell_gate_bias = input_tensors[14]
+        output_gate_bias = input_tensors[15]
+        # State input
+        output_state_in = input_tensors[18]
+        cell_state_in = input_tensors[19]

Review Comment:
   Thank you for your feedback. Yes, the maximum number of tensors that tflite can store in a model is 24. Let's distinguish two concepts of index:
   - **array index** is the location of a tensor in the array _input_tensors_
   - **tensor_idx** is a property of each tensor that indicates the tensor's location in the tflite model. You can find the location values by opening the model in Netron.
   
   Due to manipulation by the tflite tool, the values of array index and tensor_idx differ. What we are interested in is just tensor_idx. The last meaningful tensor_idx value in UnidirectionalSequenceLSTM is at array index 19 (input_tensors[19]), which means this operator only utilizes the first 20 locations to store its tensors.





[GitHub] [tvm] AndrewZhaoLuo commented on pull request #11183: Add unidirectional sequence lstm

Posted by GitBox <gi...@apache.org>.
AndrewZhaoLuo commented on PR #11183:
URL: https://github.com/apache/tvm/pull/11183#issuecomment-1130279706

   Ill take a look tomorrow




[GitHub] [tvm] SebastianBoblestETAS commented on pull request #11183: Add unidirectional sequence lstm

Posted by GitBox <gi...@apache.org>.
SebastianBoblestETAS commented on PR #11183:
URL: https://github.com/apache/tvm/pull/11183#issuecomment-1123248084

   There is still the issue of the model we use in the tests.
   Where should we put it? 
   @huajsj Is your approval not enough for this to get merged? 




[GitHub] [tvm] vdkhoi commented on a diff in pull request #11183: Add unidirectional sequence lstm

Posted by GitBox <gi...@apache.org>.
vdkhoi commented on code in PR #11183:
URL: https://github.com/apache/tvm/pull/11183#discussion_r867489280


##########
python/tvm/relay/frontend/tflite.py:
##########
@@ -2710,6 +2743,145 @@ def convert_unpack(self, op):
 
         return squeezed
 
+    def convert_unidirectional_sequence_lstm(self, op):
+        """Long Short Term Memory for TFLite implementation."""
+        if self.is_quantized(op):
+            raise tvm.error.OpNotImplemented(
+                "TFlite quantized UNIDIRECTIONALSEQUENCELSTM operator is not supported yet."
+            )
+
+        input_tensors = self.get_input_tensors(op)
+        assert len(input_tensors) >= 20, "input tensors length should be >= 20"
+
+        # Extract input tensor from saved model
+        input_tensor = input_tensors[0]
+
+        # Extract tensors from input tensors from saved model
+        # Input weights
+        input_input_weights = input_tensors[1]
+        input_forget_weights = input_tensors[2]
+        input_cell_weights = input_tensors[3]
+        input_output_weights = input_tensors[4]
+        # Recurrent weights
+        recurrent_input_weights = input_tensors[5]
+        recurrent_forget_weights = input_tensors[6]
+        recurrent_cell_weights = input_tensors[7]
+        recurrent_output_weights = input_tensors[8]
+        # Bias weights
+        input_gate_bias = input_tensors[12]
+        forget_gate_bias = input_tensors[13]
+        cell_gate_bias = input_tensors[14]
+        output_gate_bias = input_tensors[15]
+        # State input
+        output_state_in = input_tensors[18]
+        cell_state_in = input_tensors[19]

Review Comment:
   From our understanding, the number of input tensors will never be more than 20. The data comes from the tflite model, and in the tflite format the maximum number of tensors (inputs + weights) for UnidirectionalSequenceLSTM is 20. Of course this depends on the tflite version; if tflite changes its format in the future, we will have to adjust this assert statement.





[GitHub] [tvm] AndrewZhaoLuo commented on pull request #11183: Add unidirectional sequence lstm

Posted by GitBox <gi...@apache.org>.
AndrewZhaoLuo commented on PR #11183:
URL: https://github.com/apache/tvm/pull/11183#issuecomment-1139768191

   Thanks! Sorry this took a long time to get merged.




[GitHub] [tvm] vdkhoi commented on pull request #11183: Add unidirectional sequence lstm

Posted by GitBox <gi...@apache.org>.
vdkhoi commented on PR #11183:
URL: https://github.com/apache/tvm/pull/11183#issuecomment-1124910451

   @junrushao1994 Hi, could you also have a look on our PR and help to make it merged? Thanks.




[GitHub] [tvm] AndrewZhaoLuo commented on a diff in pull request #11183: Add unidirectional sequence lstm

Posted by GitBox <gi...@apache.org>.
AndrewZhaoLuo commented on code in PR #11183:
URL: https://github.com/apache/tvm/pull/11183#discussion_r878347707


##########
python/tvm/relay/frontend/tflite.py:
##########
@@ -220,6 +221,38 @@ def check_unsupported_ops(self):
         if len(raise_msg) > 0:
             raise tvm.error.OpNotImplemented(raise_msg)
 
+    def unbind(self, data, axis=1):
+        """
+        This is a slightly modified version compared to the one in common.py

Review Comment:
   Nit: explain the differences from the one in common.py.
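
   For context, the semantics of `unbind` can be sketched in plain NumPy. This is only an illustrative sketch of the split-then-squeeze behavior, not the actual relay implementation (which operates on relay expressions, and in the tflite frontend on TensorWrapper objects rather than plain arrays):

   ```python
   import numpy as np

   def unbind(data, axis=1):
       """Split `data` into shape[axis] slices along `axis`, then drop that axis.

       Mirrors the split + squeeze semantics of the frontend helper.
       """
       return [
           np.squeeze(s, axis=axis)
           for s in np.split(data, data.shape[axis], axis=axis)
       ]

   # A (batch=2, time=3, features=4) input yields 3 per-timestep arrays.
   x = np.arange(24).reshape(2, 3, 4)
   steps = unbind(x, axis=1)
   assert len(steps) == 3
   assert steps[0].shape == (2, 4)
   assert (steps[1] == x[:, 1, :]).all()
   ```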





[GitHub] [tvm] vdkhoi commented on pull request #11183: Add unidirectional sequence lstm

Posted by GitBox <gi...@apache.org>.
vdkhoi commented on PR #11183:
URL: https://github.com/apache/tvm/pull/11183#issuecomment-1139953548

   > Thanks! Sorry this took a long time to get merged.
   
   Thank you. Finally :) it's done.




[GitHub] [tvm] SebastianBoblestETAS commented on a diff in pull request #11183: Add unidirectional sequence lstm

Posted by GitBox <gi...@apache.org>.
SebastianBoblestETAS commented on code in PR #11183:
URL: https://github.com/apache/tvm/pull/11183#discussion_r867773762


##########
python/tvm/relay/frontend/tflite.py:
##########
@@ -2710,6 +2743,145 @@ def convert_unpack(self, op):
 
         return squeezed
 
+    def convert_unidirectional_sequence_lstm(self, op):
+        """Long Short Term Memory for TFLite implementation."""
+        if self.is_quantized(op):
+            raise tvm.error.OpNotImplemented(
+                "TFlite quantized UNIDIRECTIONALSEQUENCELSTM operator is not supported yet."
+            )
+
+        input_tensors = self.get_input_tensors(op)
+        assert len(input_tensors) >= 20, "input tensors length should be >= 20"
+
+        # Extract input tensor from saved model
+        input_tensor = input_tensors[0]
+
+        # Extract tensors from input tensors from saved model
+        # Input weights
+        input_input_weights = input_tensors[1]
+        input_forget_weights = input_tensors[2]
+        input_cell_weights = input_tensors[3]
+        input_output_weights = input_tensors[4]
+        # Recurrent weights
+        recurrent_input_weights = input_tensors[5]
+        recurrent_forget_weights = input_tensors[6]
+        recurrent_cell_weights = input_tensors[7]
+        recurrent_output_weights = input_tensors[8]
+        # Bias weights
+        input_gate_bias = input_tensors[12]
+        forget_gate_bias = input_tensors[13]
+        cell_gate_bias = input_tensors[14]
+        output_gate_bias = input_tensors[15]
+        # State input
+        output_state_in = input_tensors[18]
+        cell_state_in = input_tensors[19]

Review Comment:
   We changed the assertion to test for exactly 24 input tensors.
   However, in all the models we checked, the tensors at locations 9, 10, 11 as well as 16, 17 and 20, 21, 22, 23 are not occupied.





[GitHub] [tvm] huajsj commented on a diff in pull request #11183: Add unidirectional sequence lstm

Posted by GitBox <gi...@apache.org>.
huajsj commented on code in PR #11183:
URL: https://github.com/apache/tvm/pull/11183#discussion_r867314136


##########
python/tvm/relay/frontend/tflite.py:
##########
@@ -2710,6 +2743,145 @@ def convert_unpack(self, op):
 
         return squeezed
 
+    def convert_unidirectional_sequence_lstm(self, op):
+        """Long Short Term Memory for TFLite implementation."""
+        if self.is_quantized(op):
+            raise tvm.error.OpNotImplemented(
+                "TFlite quantized UNIDIRECTIONALSEQUENCELSTM operator is not supported yet."
+            )
+
+        input_tensors = self.get_input_tensors(op)
+        assert len(input_tensors) >= 20, "input tensors length should be >= 20"
+
+        # Extract input tensor from saved model
+        input_tensor = input_tensors[0]
+
+        # Extract tensors from input tensors from saved model
+        # Input weights
+        input_input_weights = input_tensors[1]
+        input_forget_weights = input_tensors[2]
+        input_cell_weights = input_tensors[3]
+        input_output_weights = input_tensors[4]
+        # Recurrent weights
+        recurrent_input_weights = input_tensors[5]
+        recurrent_forget_weights = input_tensors[6]
+        recurrent_cell_weights = input_tensors[7]
+        recurrent_output_weights = input_tensors[8]
+        # Bias weights
+        input_gate_bias = input_tensors[12]
+        forget_gate_bias = input_tensors[13]
+        cell_gate_bias = input_tensors[14]
+        output_gate_bias = input_tensors[15]
+        # State input
+        output_state_in = input_tensors[18]
+        cell_state_in = input_tensors[19]
+
+        # Extract output tensor from saved model
+        output_tensors = self.get_output_tensors(op)
+        assert len(output_tensors) == 1, "output tensors length should be 1"
+        X_steps = self.unbind(input_tensor, axis=1)
+        weights_dict = {}
+
+        # hidden_state_weights is equivalent to output_state_in in tflite model
+        out_state_in_shape = tuple(self.get_tensor_shape(output_state_in))
+        out_state_in_dtype = self.get_tensor_type_str(output_state_in.tensor.Type())
+        out_state_in_expr = _op.zeros(out_state_in_shape, dtype=out_state_in_dtype)
+        weights_dict["hidden_state"] = _op.split(out_state_in_expr, 1)[0]

Review Comment:
   Is there a better way to implement "unidirectional_sequence_lstm" in topi instead of unrolling the lstm in relay IR, like what "convert_strided_slice" did?
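
   For reference, each unrolled timestep in the relay graph computes the standard LSTM recurrence. A minimal NumPy sketch of one step, assuming the [input, forget, cell, output] gate stacking that the converter builds for `w_inp`/`w_hid`/`b_inp` (this is a simplified reference, not the actual relay/topi code):

   ```python
   import numpy as np

   def sigmoid(x):
       return 1.0 / (1.0 + np.exp(-x))

   def lstm_step(x_t, h, c, w_inp, w_hid, b_inp, b_hid):
       """One LSTM timestep with gates stacked in [i, f, g, o] order.

       Shapes: w_inp (4*hidden, input), w_hid (4*hidden, hidden),
       b_inp/b_hid (4*hidden,), x_t (batch, input), h/c (batch, hidden).
       """
       gates = x_t @ w_inp.T + h @ w_hid.T + b_inp + b_hid
       i, f, g, o = np.split(gates, 4, axis=-1)
       c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
       h_new = sigmoid(o) * np.tanh(c_new)
       return h_new, c_new

   batch, inp, hid = 2, 3, 4
   rng = np.random.default_rng(0)
   x_t = rng.standard_normal((batch, inp))
   h = np.zeros((batch, hid))
   c = np.zeros((batch, hid))
   w_inp = rng.standard_normal((4 * hid, inp))
   w_hid = rng.standard_normal((4 * hid, hid))
   b = np.zeros(4 * hid)
   h_new, c_new = lstm_step(x_t, h, c, w_inp, w_hid, b, b)
   assert h_new.shape == (batch, hid) and c_new.shape == (batch, hid)
   ```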





[GitHub] [tvm] vdkhoi commented on a diff in pull request #11183: Add unidirectional sequence lstm

Posted by GitBox <gi...@apache.org>.
vdkhoi commented on code in PR #11183:
URL: https://github.com/apache/tvm/pull/11183#discussion_r867490976


##########
python/tvm/relay/frontend/tflite.py:
##########
@@ -2710,6 +2743,145 @@ def convert_unpack(self, op):
 
         return squeezed
 
+    def convert_unidirectional_sequence_lstm(self, op):
+        """Long Short Term Memory for TFLite implementation."""
+        if self.is_quantized(op):
+            raise tvm.error.OpNotImplemented(
+                "TFlite quantized UNIDIRECTIONALSEQUENCELSTM operator is not supported yet."
+            )
+
+        input_tensors = self.get_input_tensors(op)
+        assert len(input_tensors) >= 20, "input tensors length should be >= 20"

Review Comment:
   As I mentioned above, from our understanding of the tflite format, the number of tensors that can be stored in the flatbuffer designed by tflite is 20. If you generate a tflite model without explicitly specifying any weights other than the input tensor, the omitted weights are assigned default values (0) and stored in the model file as inputs. Obviously we only support some recent tflite versions. If you can find models that support different numbers of inputs, along with references from the tflite documentation on how to process them, we would be very grateful and will extend our support to that kind of model.





[GitHub] [tvm] vdkhoi commented on a diff in pull request #11183: Add unidirectional sequence lstm

Posted by GitBox <gi...@apache.org>.
vdkhoi commented on code in PR #11183:
URL: https://github.com/apache/tvm/pull/11183#discussion_r867699105


##########
python/tvm/relay/frontend/tflite.py:
##########
@@ -2710,6 +2743,145 @@ def convert_unpack(self, op):
 
         return squeezed
 
+    def convert_unidirectional_sequence_lstm(self, op):
+        """Long Short Term Memory for TFLite implementation."""
+        if self.is_quantized(op):
+            raise tvm.error.OpNotImplemented(
+                "TFlite quantized UNIDIRECTIONALSEQUENCELSTM operator is not supported yet."
+            )
+
+        input_tensors = self.get_input_tensors(op)
+        assert len(input_tensors) >= 20, "input tensors length should be >= 20"
+
+        # Extract input tensor from saved model
+        input_tensor = input_tensors[0]
+
+        # Extract tensors from input tensors from saved model
+        # Input weights
+        input_input_weights = input_tensors[1]
+        input_forget_weights = input_tensors[2]
+        input_cell_weights = input_tensors[3]
+        input_output_weights = input_tensors[4]
+        # Recurrent weights
+        recurrent_input_weights = input_tensors[5]
+        recurrent_forget_weights = input_tensors[6]
+        recurrent_cell_weights = input_tensors[7]
+        recurrent_output_weights = input_tensors[8]
+        # Bias weights
+        input_gate_bias = input_tensors[12]
+        forget_gate_bias = input_tensors[13]
+        cell_gate_bias = input_tensors[14]
+        output_gate_bias = input_tensors[15]
+        # State input
+        output_state_in = input_tensors[18]
+        cell_state_in = input_tensors[19]

Review Comment:
   Thank you for your feedback. Yes, the maximum number of tensors that tflite can store in a model is 24. Let's distinguish two concepts of index: 
   - **array index** is the location of a tensor in the array _input_tensors_
   - **tensor_idx** is a property of each tensor that indicates the location of the tensor in the tflite model. You can find these values by opening the model in Netron.
   
   Because of manipulation by the tflite tool, the array index and the tensor_idx of a tensor can differ. We are only interested in tensor_idx. The last meaningful tensor_idx in UnidirectionalSequenceLSTM sits at array index 19 (input_tensors[19]), which means this operator only uses the first 20 locations to store its tensors.
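
   The array-index vs. tensor_idx distinction can be illustrated with a small mock. The slot labels follow my reading of the TFLite UnidirectionalSequenceLSTM operator schema, and the concrete tensor_idx values are hypothetical; -1 is how an unused optional input slot appears:

   ```python
   # Position in the list is the "array index"; the stored value is the
   # tensor_idx inside the tflite model; -1 marks an unused optional slot.
   op_input_tensor_idx = [
       5, 6, 7, 8, 9,      # array idx 0-4: input + input-to-gate weights
       10, 11, 12, 13,     # array idx 5-8: recurrent weights
       -1, -1, -1,         # array idx 9-11: optional peephole weights, unused
       14, 15, 16, 17,     # array idx 12-15: gate biases
       -1, -1,             # array idx 16-17: optional projection, unused
       18, 19,             # array idx 18-19: output_state_in, cell_state_in
       -1, -1, -1, -1,     # array idx 20-23: optional layer norm, unused
   ]
   assert len(op_input_tensor_idx) == 24

   # The last meaningful tensor_idx sits at array index 19.
   last_used = max(i for i, t in enumerate(op_input_tensor_idx) if t != -1)
   assert last_used == 19
   ```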





[GitHub] [tvm] SebastianBoblestETAS commented on pull request #11183: Add unidirectional sequence lstm

Posted by GitBox <gi...@apache.org>.
SebastianBoblestETAS commented on PR #11183:
URL: https://github.com/apache/tvm/pull/11183#issuecomment-1118731859

   This now fails on Windows. I restarted the CI pipeline three times; the failure does not seem to be related to our changes. Should or could we do anything to resolve this?




[GitHub] [tvm] UlrikHjort-Bosch commented on a diff in pull request #11183: Add unidirectional sequence lstm

Posted by GitBox <gi...@apache.org>.
UlrikHjort-Bosch commented on code in PR #11183:
URL: https://github.com/apache/tvm/pull/11183#discussion_r862686141


##########
python/tvm/relay/frontend/tflite.py:
##########
@@ -2710,6 +2743,142 @@ def convert_unpack(self, op):
 
         return squeezed
 
+    def convert_unidirectional_sequence_lstm(self, op):
+        """Long Short Term Memory for TFLite implementation."""
+        if self.is_quantized(op):
+            raise tvm.error.OpNotImplemented(
+                "TFlite quantized UNIDIRECTIONALSEQUENCELSTM operator is not supported yet."
+            )
+
+        input_tensors = self.get_input_tensors(op)
+        assert len(input_tensors) >= 2, "input tensors length should be >= 2"
+
+        # Extract input tensor from saved model
+        input_tensor = input_tensors[0]
+
+        # Extract tensors from input tensors from saved model
+        # Input weights
+        input_input_weights = input_tensors[1]
+        input_forget_weights = input_tensors[2]
+        input_cell_weights = input_tensors[3]
+        input_output_weights = input_tensors[4]
+        # Recurrent weights
+        recurrent_input_weights = input_tensors[5]
+        recurrent_forget_weights = input_tensors[6]
+        recurrent_cell_weights = input_tensors[7]
+        recurrent_output_weights = input_tensors[8]
+        # Bias weights
+        input_gate_bias = input_tensors[12]
+        forget_gate_bias = input_tensors[13]
+        cell_gate_bias = input_tensors[14]
+        output_gate_bias = input_tensors[15]
+        # State input
+        output_state_in = input_tensors[18]
+        cell_state_in = input_tensors[19]
+
+        # Extract output tensor from saved model
+        output_tensors = self.get_output_tensors(op)
+        assert len(output_tensors) == 1, "output tensors length should be 1"
+        X_steps = self.unbind(input_tensor, axis=1)
+        weights_dict = {}
+
+        # hidden_state_weights is equivalent to output_state_in in tflite model
+        out_state_in_shape = tuple(self.get_tensor_shape(output_state_in))
+        out_state_in_dtype = self.get_tensor_type_str(output_state_in.tensor.Type())
+        out_state_in_expr = _op.zeros(out_state_in_shape, dtype=out_state_in_dtype)
+        weights_dict["hidden_state"] = _op.split(out_state_in_expr, 1)[0]
+
+        # cell_state_weights is equivalent to cell_state_in in the tflite model
+        cell_state_in_shape = tuple(self.get_tensor_shape(cell_state_in))
+        cell_state_in_dtype = self.get_tensor_type_str(cell_state_in.tensor.Type())
+        cell_state_in_expr = _op.zeros(cell_state_in_shape, dtype=cell_state_in_dtype)
+        weights_dict["cell_state"] = _op.split(cell_state_in_expr, 1)[0]
+
+        # Process weight matrix of input: w_inp
+        # Concatenation of [input_input_weights, input_forget_weights, input_cell_weights, input_output_weights]
+        input_input_weights_default_values = self.get_tensor_value(input_input_weights)
+        input_input_weights_op = _op.split(
+            _op.const(input_input_weights_default_values.tolist()), 1
+        )
+        input_output_weights_default_values = self.get_tensor_value(input_output_weights)
+        input_output_weights_op = _op.split(
+            _op.const(input_output_weights_default_values.tolist()), 1
+        )
+        input_forget_weights_default_values = self.get_tensor_value(input_forget_weights)
+        input_forget_weights_op = _op.split(
+            _op.const(input_forget_weights_default_values.tolist()), 1
+        )
+        input_cell_weights_default_values = self.get_tensor_value(input_cell_weights)
+        input_cell_weights_op = _op.split(_op.const(input_cell_weights_default_values.tolist()), 1)
+        weights_dict["w_inp"] = _op.concatenate(
+            [
+                _op.squeeze(input_input_weights_op[0]),
+                _op.squeeze(input_forget_weights_op[0]),
+                _op.squeeze(input_cell_weights_op[0]),
+                _op.squeeze(input_output_weights_op[0]),
+            ],
+            axis=0,
+        )
+
+        # Process weight matrix of hidden state: w_hid to support lstm_cell function. Not used in tflite
+        recurrent_input_weights_values = self.get_tensor_value(recurrent_input_weights)
+        recurrent_input_weights_op = _op.split(
+            _op.const(recurrent_input_weights_values.tolist()), 1
+        )
+        recurrent_output_weights_values = self.get_tensor_value(recurrent_output_weights)
+        recurrent_output_weights_op = _op.split(
+            _op.const(recurrent_output_weights_values.tolist()), 1
+        )
+        recurrent_forget_weights_values = self.get_tensor_value(recurrent_forget_weights)
+        recurrent_forget_weights_op = _op.split(
+            _op.const(recurrent_forget_weights_values.tolist()), 1
+        )
+        recurrent_cell_weights_values = self.get_tensor_value(recurrent_cell_weights)
+        recurrent_cell_weights_op = _op.split(_op.const(recurrent_cell_weights_values.tolist()), 1)
+        weights_dict["w_hid"] = _op.concatenate(
+            [
+                recurrent_input_weights_op[0],
+                recurrent_forget_weights_op[0],
+                recurrent_cell_weights_op[0],
+                recurrent_output_weights_op[0],
+            ],
+            axis=0,
+        )
+
+        # Process weight matrix of bias: b_inp
+        input_gate_bias_values = self.get_tensor_value(input_gate_bias)
+        input_gate_bias_op = _op.split(_op.const(input_gate_bias_values.tolist()), 1)
+        output_gate_bias_values = self.get_tensor_value(output_gate_bias)
+        output_gate_bias_op = _op.split(_op.const(output_gate_bias_values.tolist()), 1)
+        forget_gate_bias_values = self.get_tensor_value(forget_gate_bias)
+        forget_gate_bias_op = _op.split(_op.const(forget_gate_bias_values.tolist()), 1)
+        cell_gate_bias_values = self.get_tensor_value(cell_gate_bias)
+        cell_gate_bias_op = _op.split(_op.const(cell_gate_bias_values.tolist()), 1)
+        weights_dict["b_inp"] = _op.concatenate(
+            [
+                input_gate_bias_op[0],
+                forget_gate_bias_op[0],
+                cell_gate_bias_op[0],
+                output_gate_bias_op[0],
+            ],
+            axis=0,
+        )
+
+        # Process weight matrix of hidden bias: b_hid (with the same shape as b_inp)
+        gate_bias_dtype = self.get_tensor_type_str(input_gate_bias.tensor.Type())
+        weights_dict["b_hid"] = _op.split(
+            _op.const(
+                np.zeros(_infer_shape(weights_dict["b_inp"]), dtype=gate_bias_dtype),
+                dtype=gate_bias_dtype,
+            ),
+            1,
+        )[0]
+
+        outputs, H, C = lstm_cell(input_seqs=X_steps, **weights_dict)

Review Comment:
   H and C are unused
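
   A common Python idiom for this is to discard the unused return values with underscores. A sketch with a hypothetical stand-in for `lstm_cell` (the real function lives in the relay frontend and returns relay expressions):

   ```python
   def lstm_cell_stub(input_seqs, **weights):
       """Stand-in for lstm_cell: returns (per-step outputs, H, C)."""
       outputs = [s * 2 for s in input_seqs]
       return outputs, "H", "C"

   # Only the per-timestep outputs are needed; H and C are discarded.
   outputs, _, _ = lstm_cell_stub([1, 2, 3])
   assert outputs == [2, 4, 6]
   ```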





[GitHub] [tvm] vdkhoi commented on pull request #11183: Add unidirectional sequence lstm

Posted by GitBox <gi...@apache.org>.
vdkhoi commented on PR #11183:
URL: https://github.com/apache/tvm/pull/11183#issuecomment-1117004202

   All tests passed

