Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2020/04/21 12:52:31 UTC

[GitHub] [incubator-tvm] siju-samuel opened a new pull request #5394: [TFLITE]Quantize & Dequantize op

siju-samuel opened a new pull request #5394:
URL: https://github.com/apache/incubator-tvm/pull/5394


   @anijain2305 @FrozenGene Please help review.
   
   I haven't added any test case because I'm not able to simulate this from test cases, as there are no such ops in TF. These TFLite ops are added to the network when quantizing models whose inputs are float, and I couldn't find any publicly available model.
   
   eg:
   ![image](https://user-images.githubusercontent.com/15828974/79867851-9d2a1700-83fc-11ea-855e-8d6a72e6a342.png)
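   For context, TFLite's Quantize/Dequantize ops implement an affine mapping between float and uint8 values. A minimal NumPy sketch (the scale and zero-point values below are illustrative, not taken from the PR):

```python
import numpy as np

def quantize(x, scale, zero_point):
    """Affine quantization, float -> uint8: q = round(x / scale) + zero_point."""
    q = np.round(x / scale) + zero_point
    return np.clip(q, 0, 255).astype(np.uint8)

def dequantize(q, scale, zero_point):
    """Inverse mapping, uint8 -> float: x = (q - zero_point) * scale."""
    return (q.astype(np.float32) - zero_point) * scale

# Illustrative parameters for inputs in [0, 1)
scale, zero_point = 1.0 / 255.0, 0
x = np.random.uniform(0, 1, (1, 4, 4, 3)).astype("float32")
x_roundtrip = dequantize(quantize(x, scale, zero_point), scale, zero_point)
# Round-trip error is bounded by half a quantization step
assert np.max(np.abs(x - x_roundtrip)) <= scale / 2 + 1e-6
```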
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] siju-samuel commented on a change in pull request #5394: [TFLITE]Quantize & Dequantize op

Posted by GitBox <gi...@apache.org>.
siju-samuel commented on a change in pull request #5394:
URL: https://github.com/apache/incubator-tvm/pull/5394#discussion_r426561425



##########
File path: tests/python/frontend/tflite/test_forward.py
##########
@@ -1552,6 +1552,48 @@ def test_forward_squeeze():
     _test_squeeze(np.arange(6).reshape((2, 1, 3, 1)), [1, 3])
 
 
+#######################################################################
+# Quantize/DeQuantize
+# -------------------
+
+def _test_quantize_dequantize(data):
+    """ One iteration of quantize and dequantize """
+
+    import tensorflow as tf2

Review comment:
       It's importing tensorflow.compat.v1 as tf. It doesn't have tf.lite.TFLiteConverter.from_keras_model.







[GitHub] [incubator-tvm] anijain2305 commented on issue #5394: [TFLITE]Quantize & Dequantize op

Posted by GitBox <gi...@apache.org>.
anijain2305 commented on issue #5394:
URL: https://github.com/apache/incubator-tvm/pull/5394#issuecomment-617284954


   LGTM. Is there any way to add a test? 
   
   @inadob Do you have any suggestions here?





[GitHub] [incubator-tvm] siju-samuel commented on a change in pull request #5394: [TFLITE]Quantize & Dequantize op

Posted by GitBox <gi...@apache.org>.
siju-samuel commented on a change in pull request #5394:
URL: https://github.com/apache/incubator-tvm/pull/5394#discussion_r426560071



##########
File path: tests/python/frontend/tflite/test_forward.py
##########
@@ -1552,6 +1552,48 @@ def test_forward_squeeze():
     _test_squeeze(np.arange(6).reshape((2, 1, 3, 1)), [1, 3])
 
 
+#######################################################################
+# Quantize/DeQuantize
+# -------------------
+
+def _test_quantize_dequantize(data):
+    """ One iteration of quantize and dequantize """
+
+    import tensorflow as tf2
+    # Define a dummy model
+    data_in = tf2.keras.layers.Input(shape=data.shape[1:])
+    act_func =  tf2.keras.layers.Activation('linear')
+    keras_model = tf2.keras.models.Model(data_in, act_func(data_in))
+
+    # Load the model
+    converter = tf2.lite.TFLiteConverter.from_keras_model(keras_model)
+
+    # To create quantized values with dynamic range of activations, needs representative dataset
+    def representative_data_gen():
+        for i in range(100):
+            yield [data]
+
+    converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
+    converter.representative_dataset = representative_data_gen
+    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
+    converter.inference_input_type = tf.uint8
+    converter.inference_output_type = tf.uint8
+
+    # Convert the model to TensorFlow Lite format
+    tflite_model_quant = converter.convert()
+
+    tflite_output = run_tflite_graph(tflite_model_quant, data)
+    tvm_output = run_tvm_graph(tflite_model_quant, data, 'input_1')
+    tvm.testing.assert_allclose(np.squeeze(tvm_output[0]), np.squeeze(tflite_output[0]),
+                                rtol=1e-5, atol=1e-5)
+
+
+def test_forward_quantize_dequantize():
+    """ Quantize Dequantize """
+    data = np.random.uniform(0, 1, (1, 4, 4, 3)).astype("float32")
+    _test_quantize_dequantize(data)

Review comment:
       No need, because the CI is already upgraded to TF 2.0.







[GitHub] [incubator-tvm] siju-samuel commented on issue #5394: [TFLITE]Quantize & Dequantize op

Posted by GitBox <gi...@apache.org>.
siju-samuel commented on issue #5394:
URL: https://github.com/apache/incubator-tvm/pull/5394#issuecomment-618468746


   @inadob Thanks. I will give a try.





[GitHub] [incubator-tvm] FrozenGene merged pull request #5394: [TFLITE]Quantize & Dequantize op

Posted by GitBox <gi...@apache.org>.
FrozenGene merged pull request #5394:
URL: https://github.com/apache/incubator-tvm/pull/5394


   





[GitHub] [incubator-tvm] u99127 edited a comment on pull request #5394: [TFLITE]Quantize & Dequantize op

Posted by GitBox <gi...@apache.org>.
u99127 edited a comment on pull request #5394:
URL: https://github.com/apache/incubator-tvm/pull/5394#issuecomment-618499720


   > 
   > 
   > @inadob Thanks. I will give a try.
   





[GitHub] [incubator-tvm] u99127 edited a comment on pull request #5394: [TFLITE]Quantize & Dequantize op

Posted by GitBox <gi...@apache.org>.
u99127 edited a comment on pull request #5394:
URL: https://github.com/apache/incubator-tvm/pull/5394#issuecomment-618499720


   > 
   > 
   > @inadob Thanks. I will give a try.
   
   I have a more fundamental question: I don't expect Quantize and Dequantize to show up in TFLite models for inference, as IIUC these are operators that appear in the training loop. This is purely out of curiosity.
   
   Ramana





[GitHub] [incubator-tvm] FrozenGene commented on pull request #5394: [TFLITE]Quantize & Dequantize op

Posted by GitBox <gi...@apache.org>.
FrozenGene commented on pull request #5394:
URL: https://github.com/apache/incubator-tvm/pull/5394#issuecomment-635069404


   Thanks! Merged!





[GitHub] [incubator-tvm] siju-samuel commented on pull request #5394: [TFLITE]Quantize & Dequantize op

Posted by GitBox <gi...@apache.org>.
siju-samuel commented on pull request #5394:
URL: https://github.com/apache/incubator-tvm/pull/5394#issuecomment-630102640


   @masahi @anijain2305 @inadob Please help to review and merge this PR. TIA





[GitHub] [incubator-tvm] anijain2305 edited a comment on issue #5394: [TFLITE]Quantize & Dequantize op

Posted by GitBox <gi...@apache.org>.
anijain2305 edited a comment on issue #5394:
URL: https://github.com/apache/incubator-tvm/pull/5394#issuecomment-617284954


   LGTM.
   
   @inadob  Is there any way to add a test? Do you have any suggestions here?





[GitHub] [incubator-tvm] inadob commented on issue #5394: [TFLITE]Quantize & Dequantize op

Posted by GitBox <gi...@apache.org>.
inadob commented on issue #5394:
URL: https://github.com/apache/incubator-tvm/pull/5394#issuecomment-618459856


   @siju-samuel 
   
   - Have you tried to recreate the TFL operations using tf.quantization.quantize()? https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/quantization/quantize
   - Another way to produce quantize/dequantize in the graph is to use `converter.optimizations = [tf.lite.Optimize.DEFAULT]`. To avoid interfering with the method we already use to test quantized ops, I suggest creating a really simple fp32 TF graph, converting it to TFL as above, and then using only the chunk containing "quantize" for testing.
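   The second suggestion could be sketched as below. Only `make_representative_dataset` is pure NumPy; `convert_with_quant` is a hypothetical outline of the converter setup and assumes a TF 2.x install:

```python
import numpy as np

def make_representative_dataset(data, n=100):
    """Build a calibration-sample generator for TFLite post-training quantization."""
    def gen():
        for _ in range(n):
            yield [data]
    return gen

def convert_with_quant(keras_model, data):
    """Sketch: convert a Keras model to TFLite with quantize/dequantize inserted."""
    import tensorflow as tf  # assumes TF 2.x is available
    converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = make_representative_dataset(data)
    return converter.convert()

data = np.random.uniform(0, 1, (1, 4, 4, 3)).astype("float32")
samples = list(make_representative_dataset(data, n=100)())
assert len(samples) == 100
```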





[GitHub] [incubator-tvm] siju-samuel commented on a change in pull request #5394: [TFLITE]Quantize & Dequantize op

Posted by GitBox <gi...@apache.org>.
siju-samuel commented on a change in pull request #5394:
URL: https://github.com/apache/incubator-tvm/pull/5394#discussion_r426560071



##########
File path: tests/python/frontend/tflite/test_forward.py
##########
@@ -1552,6 +1552,48 @@ def test_forward_squeeze():
     _test_squeeze(np.arange(6).reshape((2, 1, 3, 1)), [1, 3])
 
 
+#######################################################################
+# Quantize/DeQuantize
+# -------------------
+
+def _test_quantize_dequantize(data):
+    """ One iteration of quantize and dequantize """
+
+    import tensorflow as tf2
+    # Define a dummy model
+    data_in = tf2.keras.layers.Input(shape=data.shape[1:])
+    act_func =  tf2.keras.layers.Activation('linear')
+    keras_model = tf2.keras.models.Model(data_in, act_func(data_in))
+
+    # Load the model
+    converter = tf2.lite.TFLiteConverter.from_keras_model(keras_model)
+
+    # To create quantized values with dynamic range of activations, needs representative dataset
+    def representative_data_gen():
+        for i in range(100):
+            yield [data]
+
+    converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
+    converter.representative_dataset = representative_data_gen
+    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
+    converter.inference_input_type = tf.uint8
+    converter.inference_output_type = tf.uint8
+
+    # Convert the model to TensorFlow Lite format
+    tflite_model_quant = converter.convert()
+
+    tflite_output = run_tflite_graph(tflite_model_quant, data)
+    tvm_output = run_tvm_graph(tflite_model_quant, data, 'input_1')
+    tvm.testing.assert_allclose(np.squeeze(tvm_output[0]), np.squeeze(tflite_output[0]),
+                                rtol=1e-5, atol=1e-5)
+
+
+def test_forward_quantize_dequantize():
+    """ Quantize Dequantize """
+    data = np.random.uniform(0, 1, (1, 4, 4, 3)).astype("float32")
+    _test_quantize_dequantize(data)

Review comment:
       Ok. I will add.







[GitHub] [incubator-tvm] inadob commented on a change in pull request #5394: [TFLITE]Quantize & Dequantize op

Posted by GitBox <gi...@apache.org>.
inadob commented on a change in pull request #5394:
URL: https://github.com/apache/incubator-tvm/pull/5394#discussion_r426555314



##########
File path: tests/python/frontend/tflite/test_forward.py
##########
@@ -1552,6 +1552,48 @@ def test_forward_squeeze():
     _test_squeeze(np.arange(6).reshape((2, 1, 3, 1)), [1, 3])
 
 
+#######################################################################
+# Quantize/DeQuantize
+# -------------------
+
+def _test_quantize_dequantize(data):
+    """ One iteration of quantize and dequantize """
+
+    import tensorflow as tf2
+    # Define a dummy model
+    data_in = tf2.keras.layers.Input(shape=data.shape[1:])
+    act_func =  tf2.keras.layers.Activation('linear')
+    keras_model = tf2.keras.models.Model(data_in, act_func(data_in))
+
+    # Load the model
+    converter = tf2.lite.TFLiteConverter.from_keras_model(keras_model)
+
+    # To create quantized values with dynamic range of activations, needs representative dataset
+    def representative_data_gen():
+        for i in range(100):
+            yield [data]
+
+    converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
+    converter.representative_dataset = representative_data_gen
+    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
+    converter.inference_input_type = tf.uint8
+    converter.inference_output_type = tf.uint8
+
+    # Convert the model to TensorFlow Lite format
+    tflite_model_quant = converter.convert()
+
+    tflite_output = run_tflite_graph(tflite_model_quant, data)
+    tvm_output = run_tvm_graph(tflite_model_quant, data, 'input_1')
+    tvm.testing.assert_allclose(np.squeeze(tvm_output[0]), np.squeeze(tflite_output[0]),
+                                rtol=1e-5, atol=1e-5)
+
+
+def test_forward_quantize_dequantize():
+    """ Quantize Dequantize """
+    data = np.random.uniform(0, 1, (1, 4, 4, 3)).astype("float32")
+    _test_quantize_dequantize(data)

Review comment:
       You can add a version check for TF2







[GitHub] [incubator-tvm] u99127 edited a comment on pull request #5394: [TFLITE]Quantize & Dequantize op

Posted by GitBox <gi...@apache.org>.
u99127 edited a comment on pull request #5394:
URL: https://github.com/apache/incubator-tvm/pull/5394#issuecomment-618499720


   > 
   > 
   > @inadob Thanks. I will give a try.
   
   . (withdraw my comment)
   





[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #5394: [TFLITE]Quantize & Dequantize op

Posted by GitBox <gi...@apache.org>.
FrozenGene commented on a change in pull request #5394:
URL: https://github.com/apache/incubator-tvm/pull/5394#discussion_r428406145



##########
File path: tests/python/frontend/tflite/test_forward.py
##########
@@ -1552,6 +1552,48 @@ def test_forward_squeeze():
     _test_squeeze(np.arange(6).reshape((2, 1, 3, 1)), [1, 3])
 
 
+#######################################################################
+# Quantize/DeQuantize
+# -------------------
+
+def _test_quantize_dequantize(data):
+    """ One iteration of quantize and dequantize """
+
+    import tensorflow as tf2

Review comment:
       How about we use `try` here, so that we can prompt users that TF 2 is required to run this test?
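       A sketch of such a guard; the helper names and the error message are illustrative, not from the PR:

```python
def parse_major(version):
    """Extract the major version number from a string like '2.1.0'."""
    return int(version.split(".")[0])

def require_tf2():
    """Raise a helpful error when TensorFlow 2.x is not importable."""
    try:
        import tensorflow as tf
    except ImportError:
        raise RuntimeError("TensorFlow 2.x is required to run this test")
    if parse_major(tf.__version__) < 2:
        raise RuntimeError(
            "Found TF %s; TensorFlow 2.x is required to run this test" % tf.__version__)
    return tf
```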







[GitHub] [incubator-tvm] anijain2305 commented on pull request #5394: [TFLITE]Quantize & Dequantize op

Posted by GitBox <gi...@apache.org>.
anijain2305 commented on pull request #5394:
URL: https://github.com/apache/incubator-tvm/pull/5394#issuecomment-631650814


   LGTM. I will leave the PR to FrozenGene to resolve his comment.





[GitHub] [incubator-tvm] siju-samuel commented on pull request #5394: [TFLITE]Quantize & Dequantize op

Posted by GitBox <gi...@apache.org>.
siju-samuel commented on pull request #5394:
URL: https://github.com/apache/incubator-tvm/pull/5394#issuecomment-631405877


   @FrozenGene @masahi @anijain2305 Please help to review and merge this PR. TIA.





[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #5394: [TFLITE]Quantize & Dequantize op

Posted by GitBox <gi...@apache.org>.
FrozenGene commented on a change in pull request #5394:
URL: https://github.com/apache/incubator-tvm/pull/5394#discussion_r426550701



##########
File path: tests/python/frontend/tflite/test_forward.py
##########
@@ -1552,6 +1552,48 @@ def test_forward_squeeze():
     _test_squeeze(np.arange(6).reshape((2, 1, 3, 1)), [1, 3])
 
 
+#######################################################################
+# Quantize/DeQuantize
+# -------------------
+
+def _test_quantize_dequantize(data):
+    """ One iteration of quantize and dequantize """
+
+    import tensorflow as tf2

Review comment:
       I think we don't need this, as it's already imported in the header.







[GitHub] [incubator-tvm] u99127 commented on issue #5394: [TFLITE]Quantize & Dequantize op

Posted by GitBox <gi...@apache.org>.
u99127 commented on issue #5394:
URL: https://github.com/apache/incubator-tvm/pull/5394#issuecomment-618499720


   > 
   > 
   > @inadob Thanks. I will give a try.
   
   I have a more fundamental question: I don't expect Quantize and Dequantize to show up in TFLite models for inference, as IIUC these are operators that appear in the training loop.
   
   Are we expecting users who run TVM as part of a training loop to use the TFLite frontend, or would they have to integrate their own Python scripts directly into Relay?
   
   Ramana

