Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2021/05/12 12:38:50 UTC

[GitHub] [tvm] NicolaLancellotti opened a new pull request #8024: Quantized TANH operator support in TF Lite Frontend

NicolaLancellotti opened a new pull request #8024:
URL: https://github.com/apache/tvm/pull/8024


   Currently, the `TANH` operator with quantized input and output tensors is lowered to
   `tanh` without any preceding dequantization or subsequent quantization.
   This PR adds the dequantize and quantize operators to the lowering of `TANH`.
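
   The lowering described above can be sketched with NumPy. This is a hypothetical
   illustration of the dequantize -> tanh -> quantize pattern, not TVM's actual
   implementation; the function name and the scale/zero-point parameters are
   assumptions for the example.

```python
import numpy as np

def quantized_tanh(q_in, in_scale, in_zero_point, out_scale, out_zero_point):
    """Illustrative dequantize -> tanh -> quantize lowering for int8 tensors."""
    # Dequantize the int8 input to float32 using the input quantization params.
    x = (q_in.astype(np.float32) - in_zero_point) * in_scale
    # Apply the floating-point tanh.
    y = np.tanh(x)
    # Quantize back to int8 with the output tensor's quantization params.
    q_out = np.round(y / out_scale) + out_zero_point
    return np.clip(q_out, -128, 127).astype(np.int8)
```

   Note that for TFLite, the output of quantized `TANH` conventionally uses a scale
   of 1/128 with a zero point of 0, so the full [-1, 1] range of tanh maps onto int8.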


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [tvm] mbaret commented on a change in pull request #8024: Quantized TANH operator support in TF Lite Frontend

Posted by GitBox <gi...@apache.org>.
mbaret commented on a change in pull request #8024:
URL: https://github.com/apache/tvm/pull/8024#discussion_r631831862



##########
File path: tests/python/frontend/tflite/test_forward.py
##########
@@ -3255,17 +3255,30 @@ def test_forward_log_softmax():
 # ----
 
 
-def _test_tanh(data):
+def _test_tanh(data, quantized=False):
     """ One iteration of TANH """
     with tf.Graph().as_default():
-        in_data = array_ops.placeholder(shape=data.shape, dtype=data.dtype)
-        out = math_ops.tanh(in_data)
-        compare_tflite_with_tvm(data, "Placeholder:0", [in_data], [out])
+        in_data = array_ops.placeholder(shape=data.shape, dtype="float32", name="in_0")
+
+        if quantized:
+            inq_data = tf.quantization.fake_quant_with_min_max_args(
+                in_data, min=-3, max=3, name="inq_0"
+            )
+            input_range = {"inq_0": (-3, 3)}
+            out = math_ops.tanh(inq_data)
+            out = tf.quantization.fake_quant_with_min_max_args(out, min=-1, max=1, name="out")
+            compare_tflite_with_tvm(
+                data, "inq_0:0", [inq_data], [out], quantized=True, input_range=input_range
+            )
+        else:
+            out = math_ops.tanh(in_data)
+            compare_tflite_with_tvm(data, "in_0:0", [in_data], [out])
 
 
 def test_forward_tanh():
     """ TANH """
-    _test_tanh(np.arange(6.0, dtype=np.float32).reshape((1, 6)))

Review comment:
       Here we've replaced deterministic test data (via np.arange) with random data (via np.random.uniform). Since the quantized TANH test only passes when allowing a tolerance of 1, I'd prefer we stick to deterministic test data, in case some strange combination of values leads to a difference of > 1 and makes the test flaky.
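
       For illustration, the two approaches the reviewer contrasts might look like
       the following sketch (variable names here are illustrative, not from the patch):

```python
import numpy as np

# Deterministic test data: the same values on every run, so a failure
# is always reproducible.
deterministic = np.arange(6.0, dtype=np.float32).reshape((1, 6))

# Random test data: drawn fresh each run from the fake-quant input range
# [-3, 3], so a rare unlucky draw could make the test flaky.
random_data = np.random.uniform(-3, 3, size=(1, 6)).astype(np.float32)
```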







[GitHub] [tvm] mbaret merged pull request #8024: Quantized TANH operator support in TF Lite Frontend

Posted by GitBox <gi...@apache.org>.
mbaret merged pull request #8024:
URL: https://github.com/apache/tvm/pull/8024


   





[GitHub] [tvm] NicolaLancellotti commented on pull request #8024: Quantized TANH operator support in TF Lite Frontend

Posted by GitBox <gi...@apache.org>.
NicolaLancellotti commented on pull request #8024:
URL: https://github.com/apache/tvm/pull/8024#issuecomment-840402847


   Could you please review? @mbaret @manupa-arm





[GitHub] [tvm] mbaret commented on pull request #8024: Quantized TANH operator support in TF Lite Frontend

Posted by GitBox <gi...@apache.org>.
mbaret commented on pull request #8024:
URL: https://github.com/apache/tvm/pull/8024#issuecomment-844060832


   Could you re-trigger the CI? It's been very flaky lately.





[GitHub] [tvm] mbaret commented on pull request #8024: Quantized TANH operator support in TF Lite Frontend

Posted by GitBox <gi...@apache.org>.
mbaret commented on pull request #8024:
URL: https://github.com/apache/tvm/pull/8024#issuecomment-845100285


   Thanks @NicolaLancellotti @manupa-arm @d-smirnov this is now merged.





[GitHub] [tvm] NicolaLancellotti commented on a change in pull request #8024: Quantized TANH operator support in TF Lite Frontend

Posted by GitBox <gi...@apache.org>.
NicolaLancellotti commented on a change in pull request #8024:
URL: https://github.com/apache/tvm/pull/8024#discussion_r634151515



##########
File path: tests/python/frontend/tflite/test_forward.py
##########
@@ -3255,17 +3255,30 @@ def test_forward_log_softmax():
 # ----
 
 
-def _test_tanh(data):
+def _test_tanh(data, quantized=False):
     """ One iteration of TANH """
     with tf.Graph().as_default():
-        in_data = array_ops.placeholder(shape=data.shape, dtype=data.dtype)
-        out = math_ops.tanh(in_data)
-        compare_tflite_with_tvm(data, "Placeholder:0", [in_data], [out])
+        in_data = array_ops.placeholder(shape=data.shape, dtype="float32", name="in_0")
+
+        if quantized:
+            inq_data = tf.quantization.fake_quant_with_min_max_args(
+                in_data, min=-3, max=3, name="inq_0"
+            )
+            input_range = {"inq_0": (-3, 3)}
+            out = math_ops.tanh(inq_data)
+            out = tf.quantization.fake_quant_with_min_max_args(out, min=-1, max=1, name="out")
+            compare_tflite_with_tvm(
+                data, "inq_0:0", [inq_data], [out], quantized=True, input_range=input_range
+            )
+        else:
+            out = math_ops.tanh(in_data)
+            compare_tflite_with_tvm(data, "in_0:0", [in_data], [out])
 
 
 def test_forward_tanh():
     """ TANH """
-    _test_tanh(np.arange(6.0, dtype=np.float32).reshape((1, 6)))

Review comment:
       Thanks for the review. I have updated the patch with deterministic test data. 



