Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2020/07/23 11:01:38 UTC

[GitHub] [incubator-tvm] d-smirnov opened a new pull request #6127: quanitze operation expanded to take const argument

d-smirnov opened a new pull request #6127:
URL: https://github.com/apache/incubator-tvm/pull/6127


   https://www.tensorflow.org/lite/performance/quantization_spec#references

   QUANTIZE (Requantization)
     Input 0:
       data_type  : int8
       range      : [-128, 127]
       granularity: per-tensor
     Output 0:
       data_type  : int8
       range      : [-128, 127]
       granularity: per-tensor
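
   The spec excerpt above describes requantization: mapping one int8 affine
   encoding (scale, zero-point) to another. As a hedged illustration of the
   arithmetic only (the scales and zero-points below are made-up example
   values, not taken from the PR or the spec):

   ```python
   import numpy as np

   def requantize_int8(q_in, s_in, zp_in, s_out, zp_out):
       """Map int8 values from one affine encoding (s_in, zp_in)
       to another (s_out, zp_out), clamping to the int8 range."""
       real = (q_in.astype(np.int32) - zp_in) * s_in   # decode to real values
       q_out = np.round(real / s_out) + zp_out         # re-encode
       return np.clip(q_out, -128, 127).astype(np.int8)

   # Illustrative values only
   q = np.array([-128, 0, 127], dtype=np.int8)
   print(requantize_int8(q, s_in=0.5, zp_in=0, s_out=1.0, zp_out=10))
   ```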


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] anijain2305 commented on a change in pull request #6127: quanitze operation expanded to take const argument

Posted by GitBox <gi...@apache.org>.
anijain2305 commented on a change in pull request #6127:
URL: https://github.com/apache/incubator-tvm/pull/6127#discussion_r460425812



##########
File path: tests/python/frontend/tflite/test_forward.py
##########
@@ -1850,7 +1850,7 @@ def _test_quantize_dequantize(data):
     # First TFLite quantize op converts float32 tensor to int8 tensor - Qnn quantize.
     # Second TFLite quantize op converts int8 tensor to int8 tensor - Qnn requantize.
     data_in = tf.keras.layers.Input(shape=data.shape[1:])
-    relu = tf.keras.layers.ReLU()(data_in)
+    relu = tf.keras.layers.ReLU()(data)

Review comment:
       Why is this changed? I would suggest adding a new test for quantize. This one takes the Keras Input layer from the previous line.







[GitHub] [incubator-tvm] tqchen commented on pull request #6127: quanitze operation expanded to take const argument

Posted by GitBox <gi...@apache.org>.
tqchen commented on pull request #6127:
URL: https://github.com/apache/incubator-tvm/pull/6127#issuecomment-671440132


   cc @anijain2305 please followup





[GitHub] [incubator-tvm] anijain2305 merged pull request #6127: quanitze operation expanded to take const argument

Posted by GitBox <gi...@apache.org>.
anijain2305 merged pull request #6127:
URL: https://github.com/apache/incubator-tvm/pull/6127


   





[GitHub] [incubator-tvm] anijain2305 commented on a change in pull request #6127: quanitze operation expanded to take const argument

Posted by GitBox <gi...@apache.org>.
anijain2305 commented on a change in pull request #6127:
URL: https://github.com/apache/incubator-tvm/pull/6127#discussion_r472332199



##########
File path: tests/python/frontend/tflite/test_forward.py
##########
@@ -1850,7 +1850,7 @@ def _test_quantize_dequantize(data):
     # First TFLite quantize op converts float32 tensor to int8 tensor - Qnn quantize.
     # Second TFLite quantize op converts int8 tensor to int8 tensor - Qnn requantize.
     data_in = tf.keras.layers.Input(shape=data.shape[1:])
-    relu = tf.keras.layers.ReLU()(data_in)
+    relu = tf.keras.layers.ReLU()(data)

Review comment:
       Please add a new test with const_input. Let's keep this test as it is.







[GitHub] [incubator-tvm] d-smirnov commented on a change in pull request #6127: quanitze operation expanded to take const argument

Posted by GitBox <gi...@apache.org>.
d-smirnov commented on a change in pull request #6127:
URL: https://github.com/apache/incubator-tvm/pull/6127#discussion_r460439414



##########
File path: tests/python/frontend/tflite/test_forward.py
##########
@@ -1850,7 +1850,7 @@ def _test_quantize_dequantize(data):
     # First TFLite quantize op converts float32 tensor to int8 tensor - Qnn quantize.
     # Second TFLite quantize op converts int8 tensor to int8 tensor - Qnn requantize.
     data_in = tf.keras.layers.Input(shape=data.shape[1:])
-    relu = tf.keras.layers.ReLU()(data_in)
+    relu = tf.keras.layers.ReLU()(data)

Review comment:
       The idea is to use `data` as a constant parameter for the quantize operation, which will be inserted at the quantization step.
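
   The float32-to-int8 quantize that such a constant would feed can be
   sketched as follows (the scale and zero-point values here are purely
   illustrative, not taken from the model discussed in the PR):

   ```python
   import numpy as np

   def quantize_int8(x, scale, zero_point):
       """Affine per-tensor quantization: real-valued -> int8."""
       q = np.round(x / scale) + zero_point
       return np.clip(q, -128, 127).astype(np.int8)

   # A constant float32 input, quantized with an example scale/zero-point
   data = np.array([-1.0, 0.0, 0.5, 1.0], dtype=np.float32)
   print(quantize_int8(data, scale=1.0 / 128, zero_point=0))
   ```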

##########
File path: python/tvm/relay/frontend/tflite.py
##########
@@ -2726,7 +2726,13 @@ def convert_quantize(self, op):
         assert len(input_tensors) == 1, "input tensors length should be 1"
         input_tensor = input_tensors[0]
         input_tensor_type_str = self.get_tensor_type_str(input_tensor.tensor.Type())
-        in_expr = self.get_expr(input_tensor.tensor_idx)
+
+        if self.has_expr(input_tensor.tensor_idx):
+            in_expr = self.get_expr(input_tensor.tensor_idx)
+        else:
+            in_value = self.get_tensor_value(input_tensor)

Review comment:
       DeepSpeech itself is here: https://github.com/mozilla/DeepSpeech/releases/tag/v0.7.4
   However, the model that was used was a quantised version from an internal model zoo. I am not sure that I can share it fully.







[GitHub] [incubator-tvm] anijain2305 commented on a change in pull request #6127: quanitze operation expanded to take const argument

Posted by GitBox <gi...@apache.org>.
anijain2305 commented on a change in pull request #6127:
URL: https://github.com/apache/incubator-tvm/pull/6127#discussion_r472331908



##########
File path: python/tvm/relay/frontend/tflite.py
##########
@@ -2726,7 +2726,13 @@ def convert_quantize(self, op):
         assert len(input_tensors) == 1, "input tensors length should be 1"
         input_tensor = input_tensors[0]
         input_tensor_type_str = self.get_tensor_type_str(input_tensor.tensor.Type())
-        in_expr = self.get_expr(input_tensor.tensor_idx)
+
+        if self.has_expr(input_tensor.tensor_idx):

Review comment:
       Does it make sense to use `self.get_tensor_expr` here?
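
       The suggestion is to resolve a tensor either to an already-built
       expression or to a new constant in one helper. A minimal,
       self-contained sketch of that dispatch pattern (stub classes standing
       in for the real frontend converter; this is not TVM's actual code):

   ```python
   class TensorStub:
       def __init__(self, idx, value=None):
           self.tensor_idx = idx
           self.value = value  # raw constant data, if any

   class ConverterStub:
       def __init__(self):
           self.exprs = {}  # tensor_idx -> expression built by a previous op

       def has_expr(self, idx):
           return idx in self.exprs

       def get_expr(self, idx):
           return self.exprs[idx]

       def get_tensor_value(self, tensor):
           return tensor.value

       def get_tensor_expr(self, tensor):
           # Collapses the if/else from the diff above into one helper:
           # reuse the expression if another op produced this tensor,
           # otherwise wrap its constant data.
           if self.has_expr(tensor.tensor_idx):
               return self.get_expr(tensor.tensor_idx)
           return ("const", self.get_tensor_value(tensor))

   conv = ConverterStub()
   conv.exprs[0] = ("var", "input_0")
   print(conv.get_tensor_expr(TensorStub(0)))        # reuses the expression
   print(conv.get_tensor_expr(TensorStub(1, [42])))  # falls back to a constant
   ```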







[GitHub] [incubator-tvm] anijain2305 commented on a change in pull request #6127: quanitze operation expanded to take const argument

Posted by GitBox <gi...@apache.org>.
anijain2305 commented on a change in pull request #6127:
URL: https://github.com/apache/incubator-tvm/pull/6127#discussion_r460425929



##########
File path: python/tvm/relay/frontend/tflite.py
##########
@@ -2726,7 +2726,13 @@ def convert_quantize(self, op):
         assert len(input_tensors) == 1, "input tensors length should be 1"
         input_tensor = input_tensors[0]
         input_tensor_type_str = self.get_tensor_type_str(input_tensor.tensor.Type())
-        in_expr = self.get_expr(input_tensor.tensor_idx)
+
+        if self.has_expr(input_tensor.tensor_idx):
+            in_expr = self.get_expr(input_tensor.tensor_idx)
+        else:
+            in_value = self.get_tensor_value(input_tensor)

Review comment:
       Is it possible to share the model? I am wondering if parsing of constants is the problem here. We can think of this case just like conv, where weights are converted to Relay consts and we add a var for them.







[GitHub] [incubator-tvm] d-smirnov commented on a change in pull request #6127: quanitze operation expanded to take const argument

Posted by GitBox <gi...@apache.org>.
d-smirnov commented on a change in pull request #6127:
URL: https://github.com/apache/incubator-tvm/pull/6127#discussion_r478944450



##########
File path: python/tvm/relay/frontend/tflite.py
##########
@@ -2726,7 +2726,13 @@ def convert_quantize(self, op):
         assert len(input_tensors) == 1, "input tensors length should be 1"
         input_tensor = input_tensors[0]
         input_tensor_type_str = self.get_tensor_type_str(input_tensor.tensor.Type())
-        in_expr = self.get_expr(input_tensor.tensor_idx)
+
+        if self.has_expr(input_tensor.tensor_idx):

Review comment:
       Replaced







[GitHub] [incubator-tvm] d-smirnov commented on a change in pull request #6127: quanitze operation expanded to take const argument

Posted by GitBox <gi...@apache.org>.
d-smirnov commented on a change in pull request #6127:
URL: https://github.com/apache/incubator-tvm/pull/6127#discussion_r478944325



##########
File path: tests/python/frontend/tflite/test_forward.py
##########
@@ -1850,7 +1850,7 @@ def _test_quantize_dequantize(data):
     # First TFLite quantize op converts float32 tensor to int8 tensor - Qnn quantize.
     # Second TFLite quantize op converts int8 tensor to int8 tensor - Qnn requantize.
     data_in = tf.keras.layers.Input(shape=data.shape[1:])
-    relu = tf.keras.layers.ReLU()(data_in)
+    relu = tf.keras.layers.ReLU()(data)

Review comment:
       Done







[GitHub] [incubator-tvm] anijain2305 commented on pull request #6127: quanitze operation expanded to take const argument

Posted by GitBox <gi...@apache.org>.
anijain2305 commented on pull request #6127:
URL: https://github.com/apache/incubator-tvm/pull/6127#issuecomment-683072852


   Thanks for the changes, @d-smirnov. This is merged!

