Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2020/01/13 16:01:34 UTC

[GitHub] [incubator-tvm] inadob opened a new pull request #4696: [Relay][Frontend][TFlite] Add support for quantized LOGISTIC

inadob opened a new pull request #4696: [Relay][Frontend][TFlite] Add support for quantized LOGISTIC
URL: https://github.com/apache/incubator-tvm/pull/4696
 
 
    * add qnn implementation in the parser
    * add test case for qnn logistic
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

[GitHub] [incubator-tvm] anijain2305 commented on issue #4696: [Relay][Frontend][TFlite] Add support for quantized LOGISTIC

Posted by GitBox <gi...@apache.org>.
anijain2305 commented on issue #4696: [Relay][Frontend][TFlite] Add support for quantized LOGISTIC
URL: https://github.com/apache/incubator-tvm/pull/4696#issuecomment-582584426
 
 
   @inadob Please rebase


[GitHub] [incubator-tvm] tqchen commented on issue #4696: [Relay][Frontend][TFlite] Add support for quantized LOGISTIC

Posted by GitBox <gi...@apache.org>.
tqchen commented on issue #4696: [Relay][Frontend][TFlite] Add support for quantized LOGISTIC
URL: https://github.com/apache/incubator-tvm/pull/4696#issuecomment-582719957
 
 
   @inadob please rebase, @FrozenGene please https://docs.tvm.ai/contribute/code_review.html#approve-and-request-changes-explicitly


[GitHub] [incubator-tvm] anijain2305 commented on a change in pull request #4696: [Relay][Frontend][TFlite] Add support for quantized LOGISTIC

Posted by GitBox <gi...@apache.org>.
anijain2305 commented on a change in pull request #4696: [Relay][Frontend][TFlite] Add support for quantized LOGISTIC
URL: https://github.com/apache/incubator-tvm/pull/4696#discussion_r374335873
 
 

 ##########
 File path: tests/python/frontend/tflite/test_forward.py
 ##########
 @@ -143,11 +144,13 @@ def compare_tflite_with_tvm(in_data, in_name, input_tensors,
             converter.inference_type = tf.lite.constants.QUANTIZED_UINT8
             input_arrays = converter.get_input_arrays()
             input_stats = {}
-            # hardcode the mean_values and std_dev_values (m,s) to be the same
-            # if all inputs are in (float_min; float_max) == (-100, 100)
+            # calculate the mean and quantization scale for every input tensor,
+            # with respect to its fp32 input range, defined in fake_quant.
             # s = 255/(fmax-fmin);  m = -fmin*s (the zero point)
             for i in input_arrays:
-                input_stats[i] = (128., 1.275)
+                quant_scale = 255 / (input_range[i][1] - input_range[i][0])
 
 Review comment:
   Very Minor - a check to ensure denominator is not zero
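   The suggested guard could be sketched like this (a hypothetical helper, not the test's actual code; `fmin`/`fmax` stand for the per-input fp32 range that the diff reads from `input_range`):

   ```python
   # Hypothetical helper sketching the reviewer's suggestion: compute the
   # TFLite converter input stats (mean, scale) from an fp32 range, guarding
   # against a zero-width range before dividing.
   def quant_input_stats(fmin, fmax):
       if fmax == fmin:
           raise ValueError("degenerate fp32 range: fmax == fmin")
       quant_scale = 255.0 / (fmax - fmin)   # s = 255/(fmax-fmin)
       mean = -fmin * quant_scale            # m = -fmin*s (the zero point)
       return mean, quant_scale
   ```

   For the range (-100, 100) this gives (127.5, 1.275), matching the previously hardcoded stats up to rounding.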


[GitHub] [incubator-tvm] u99127 commented on a change in pull request #4696: [Relay][Frontend][TFlite] Add support for quantized LOGISTIC

Posted by GitBox <gi...@apache.org>.
u99127 commented on a change in pull request #4696: [Relay][Frontend][TFlite] Add support for quantized LOGISTIC
URL: https://github.com/apache/incubator-tvm/pull/4696#discussion_r374233836
 
 

 ##########
 File path: python/tvm/relay/frontend/tflite.py
 ##########
 @@ -384,7 +401,16 @@ def convert_logistic(self, op):
         input_tensor = input_tensors[0]
         in_expr = self.get_expr(input_tensor.tensor_idx)
 
+        output_tensors = self.get_output_tensors(op)
+        assert len(output_tensors) == 1, "output tensors length should be 1"
+        output_tensor = output_tensors[0]
+
+        if input_tensor.qnn_params:
+            in_expr = self.dequantize(in_expr, input_tensor)
         out = _op.sigmoid(in_expr)
 
 Review comment:
   If there is a need to improve quantized logistic performance, introducing a qnn.logistic and lowering it as part of the legalize_qnn pass sounds reasonable to me. However, in my mind that is the next step in improving logistic performance; the first step is to make this work at all, and the dequantize, operate, quantize sequence is a good way of making progress.
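   The dequantize, operate, quantize flow under discussion can be modeled in plain numpy (a sketch only; the scale and zero-point arguments are illustrative uint8 qnn params, not the parser's actual API):

   ```python
   import numpy as np

   def quantized_sigmoid(q_in, in_scale, in_zero_point, out_scale, out_zero_point):
       # dequantize: map uint8 values back to fp32
       x = in_scale * (q_in.astype(np.float32) - in_zero_point)
       # compute in float
       y = 1.0 / (1.0 + np.exp(-x))
       # requantize the [0, 1] result back to uint8
       q = np.round(y / out_scale) + out_zero_point
       return np.clip(q, 0, 255).astype(np.uint8)
   ```

   With TFLite's conventional logistic output params (scale 1/256, zero point 0), an input equal to its own zero point dequantizes to 0.0 and the 0.5 sigmoid output maps to 128.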


[GitHub] [incubator-tvm] inadob commented on issue #4696: [Relay][Frontend][TFlite] Add support for quantized LOGISTIC

Posted by GitBox <gi...@apache.org>.
inadob commented on issue #4696: [Relay][Frontend][TFlite] Add support for quantized LOGISTIC
URL: https://github.com/apache/incubator-tvm/pull/4696#issuecomment-573736643
 
 
   @anijain2305 @FrozenGene, can you please check this patch?


[GitHub] [incubator-tvm] u99127 commented on issue #4696: [Relay][Frontend][TFlite] Add support for quantized LOGISTIC

Posted by GitBox <gi...@apache.org>.
u99127 commented on issue #4696: [Relay][Frontend][TFlite] Add support for quantized LOGISTIC
URL: https://github.com/apache/incubator-tvm/pull/4696#issuecomment-579765000
 
 
   Ping for an approving review? @FrozenGene @anijain2305


[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4696: [Relay][Frontend][TFlite] Add support for quantized LOGISTIC

Posted by GitBox <gi...@apache.org>.
FrozenGene commented on a change in pull request #4696: [Relay][Frontend][TFlite] Add support for quantized LOGISTIC
URL: https://github.com/apache/incubator-tvm/pull/4696#discussion_r374193768
 
 

 ##########
 File path: python/tvm/relay/frontend/tflite.py
 ##########
 @@ -384,7 +401,16 @@ def convert_logistic(self, op):
         input_tensor = input_tensors[0]
         in_expr = self.get_expr(input_tensor.tensor_idx)
 
+        output_tensors = self.get_output_tensors(op)
+        assert len(output_tensors) == 1, "output tensors length should be 1"
+        output_tensor = output_tensors[0]
+
+        if input_tensor.qnn_params:
+            in_expr = self.dequantize(in_expr, input_tensor)
         out = _op.sigmoid(in_expr)
 
 Review comment:
   Is the INT computation of sigmoid so complex that we can only do `dequantize->float compute->requantize`?


[GitHub] [incubator-tvm] inadob closed pull request #4696: [Relay][Frontend][TFlite] Add support for quantized LOGISTIC

Posted by GitBox <gi...@apache.org>.
inadob closed pull request #4696: [Relay][Frontend][TFlite] Add support for quantized LOGISTIC
URL: https://github.com/apache/incubator-tvm/pull/4696
 
 
   


[GitHub] [incubator-tvm] FrozenGene commented on issue #4696: [Relay][Frontend][TFlite] Add support for quantized LOGISTIC

Posted by GitBox <gi...@apache.org>.
FrozenGene commented on issue #4696: [Relay][Frontend][TFlite] Add support for quantized LOGISTIC
URL: https://github.com/apache/incubator-tvm/pull/4696#issuecomment-583327668
 
 
   Thanks @inadob @anijain2305 @u99127. This is merged.


[GitHub] [incubator-tvm] u99127 commented on issue #4696: [Relay][Frontend][TFlite] Add support for quantized LOGISTIC

Posted by GitBox <gi...@apache.org>.
u99127 commented on issue #4696: [Relay][Frontend][TFlite] Add support for quantized LOGISTIC
URL: https://github.com/apache/incubator-tvm/pull/4696#issuecomment-576194962
 
 
   > 
   > 
   >     * add qnn implementation in the parser
   > 
   >     * add test case for qnn logistic
   
   LGTM
   
   Ramana


[GitHub] [incubator-tvm] inadob commented on a change in pull request #4696: [Relay][Frontend][TFlite] Add support for quantized LOGISTIC

Posted by GitBox <gi...@apache.org>.
inadob commented on a change in pull request #4696: [Relay][Frontend][TFlite] Add support for quantized LOGISTIC
URL: https://github.com/apache/incubator-tvm/pull/4696#discussion_r374201180
 
 

 ##########
 File path: python/tvm/relay/frontend/tflite.py
 ##########
 @@ -384,7 +401,16 @@ def convert_logistic(self, op):
         input_tensor = input_tensors[0]
         in_expr = self.get_expr(input_tensor.tensor_idx)
 
+        output_tensors = self.get_output_tensors(op)
+        assert len(output_tensors) == 1, "output tensors length should be 1"
+        output_tensor = output_tensors[0]
+
+        if input_tensor.qnn_params:
+            in_expr = self.dequantize(in_expr, input_tensor)
         out = _op.sigmoid(in_expr)
 
 Review comment:
   Well, my understanding for that case is: since by definition sigmoid outputs a probability in [0, 1] in real space, that [0, 1] range has a different meaning in uint8, so we should dequantize before doing the operation. Correct me if I'm wrong.
   Or are you talking about INT, not UINT?


[GitHub] [incubator-tvm] FrozenGene merged pull request #4696: [Relay][Frontend][TFlite] Add support for quantized LOGISTIC

Posted by GitBox <gi...@apache.org>.
FrozenGene merged pull request #4696: [Relay][Frontend][TFlite] Add support for quantized LOGISTIC
URL: https://github.com/apache/incubator-tvm/pull/4696
 
 
   


[GitHub] [incubator-tvm] inadob commented on a change in pull request #4696: [Relay][Frontend][TFlite] Add support for quantized LOGISTIC

Posted by GitBox <gi...@apache.org>.
inadob commented on a change in pull request #4696: [Relay][Frontend][TFlite] Add support for quantized LOGISTIC
URL: https://github.com/apache/incubator-tvm/pull/4696#discussion_r374712987
 
 

 ##########
 File path: tests/python/frontend/tflite/test_forward.py
 ##########
 @@ -143,11 +144,13 @@ def compare_tflite_with_tvm(in_data, in_name, input_tensors,
             converter.inference_type = tf.lite.constants.QUANTIZED_UINT8
             input_arrays = converter.get_input_arrays()
             input_stats = {}
-            # hardcode the mean_values and std_dev_values (m,s) to be the same
-            # if all inputs are in (float_min; float_max) == (-100, 100)
+            # calculate the mean and quantization scale for every input tensor,
+            # with respect to its fp32 input range, defined in fake_quant.
             # s = 255/(fmax-fmin);  m = -fmin*s (the zero point)
             for i in input_arrays:
-                input_stats[i] = (128., 1.275)
+                quant_scale = 255 / (input_range[i][1] - input_range[i][0])
 
 Review comment:
   I updated this in #4789. Once we agree on the changes in that PR, I will update all the others that depend on it.


[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4696: [Relay][Frontend][TFlite] Add support for quantized LOGISTIC

Posted by GitBox <gi...@apache.org>.
FrozenGene commented on a change in pull request #4696: [Relay][Frontend][TFlite] Add support for quantized LOGISTIC
URL: https://github.com/apache/incubator-tvm/pull/4696#discussion_r374205954
 
 

 ##########
 File path: python/tvm/relay/frontend/tflite.py
 ##########
 @@ -384,7 +401,16 @@ def convert_logistic(self, op):
         input_tensor = input_tensors[0]
         in_expr = self.get_expr(input_tensor.tensor_idx)
 
+        output_tensors = self.get_output_tensors(op)
+        assert len(output_tensors) == 1, "output tensors length should be 1"
+        output_tensor = output_tensors[0]
+
+        if input_tensor.qnn_params:
+            in_expr = self.dequantize(in_expr, input_tensor)
         out = _op.sigmoid(in_expr)
 
 Review comment:
   I was talking about the **pure integer computation of sigmoid.** That is to say, the current computation of quantized sigmoid is in fact float arithmetic: we just `dequantize` to float, do the `float arithmetic`, then `requantize` back to the quantized UINT8 type. However, TFLite has a **pure integer computation of sigmoid**, like this: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/kernels/internal/reference/integer_ops/logistic.h#L24, which I just checked. It seems a little complicated, so maybe the current way is an acceptable one.


[GitHub] [incubator-tvm] anijain2305 commented on a change in pull request #4696: [Relay][Frontend][TFlite] Add support for quantized LOGISTIC

Posted by GitBox <gi...@apache.org>.
anijain2305 commented on a change in pull request #4696: [Relay][Frontend][TFlite] Add support for quantized LOGISTIC
URL: https://github.com/apache/incubator-tvm/pull/4696#discussion_r374334979
 
 

 ##########
 File path: python/tvm/relay/frontend/tflite.py
 ##########
 @@ -384,7 +401,16 @@ def convert_logistic(self, op):
         input_tensor = input_tensors[0]
         in_expr = self.get_expr(input_tensor.tensor_idx)
 
+        output_tensors = self.get_output_tensors(op)
+        assert len(output_tensors) == 1, "output tensors length should be 1"
+        output_tensor = output_tensors[0]
+
+        if input_tensor.qnn_params:
+            in_expr = self.dequantize(in_expr, input_tensor)
         out = _op.sigmoid(in_expr)
 
 Review comment:
   I had very similar thoughts. As @FrozenGene pointed out, the implementation for integer-only softmax looked complicated, so I decided to just support it functionally for now.
