Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2019/11/15 19:31:05 UTC

[GitHub] [incubator-tvm] anijain2305 commented on a change in pull request #4339: Added tflite frontend support for quantized mean

URL: https://github.com/apache/incubator-tvm/pull/4339#discussion_r346978076
 
 

 ##########
 File path: python/tvm/relay/frontend/tflite.py
 ##########
 @@ -659,7 +659,23 @@ def _convert_reduce(self, relay_op, op):
         reduce_options.Init(op_options.Bytes, op_options.Pos)
         keep_dims = reduce_options.KeepDims()
 
+        if input_tensor.qnn_params:
+            in_expr = _qnn.op.dequantize(data=in_expr,
 
 Review comment:
   Similar concern to the one raised by @FrozenGene.
   From the math side, it looks safe to me, but let me know if it is wrong:
   
   ~~~~
   Initial quantized tensor - QA - shape (N,), dtype='int8'
   
   Mean would be
   
   scale_out * (Q_out - zp_out) = scale_in * [(QA[0] + QA[1] + ... + QA[N-1]) - N * zp_in] / N
   
   scale_out * (Q_out - zp_out) = scale_in * (Mean(QA) - zp_in)
   ~~~~
   
   So, basically: upcast the quantized tensor to int32, call the integer mean, and then requantize if the output scale/zero point differ from the input's.
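   
   For concreteness, here is a minimal sketch of that flow. The helper name `_quantized_mean` and its arguments are illustrative, not the PR's code; it assumes the qnn params are plain dicts with scalar 'scale' and 'zero_point' entries (newer TVM expects relay constants for the requantize scale/zero-point arguments).
   
   ~~~~
   from tvm import relay
   
   def _quantized_mean(in_expr, axis, keep_dims, input_qnn, output_qnn):
       """Illustrative helper: mean over a quantized int8 tensor.
   
       input_qnn/output_qnn are assumed to be dicts with scalar
       'scale' and 'zero_point' entries for the input/output tensors.
       """
       # Upcast int8 -> int32 so the accumulation inside mean cannot overflow.
       out = relay.cast(in_expr, "int32")
       # Integer mean. Per the algebra above, Mean(QA) still carries the
       # input scale and zero point, so no dequantize is needed here.
       out = relay.mean(out, axis=axis, keepdims=keep_dims)
       if (input_qnn['scale'] == output_qnn['scale'] and
               input_qnn['zero_point'] == output_qnn['zero_point']):
           # Same qnn params on both sides: Q_out = Mean(QA),
           # so just narrow back to int8.
           out = relay.cast(out, "int8")
       else:
           # Different qnn params: rescale from input scale/zp to output scale/zp.
           out = relay.qnn.op.requantize(
               out,
               input_scale=input_qnn['scale'],
               input_zero_point=input_qnn['zero_point'],
               output_scale=output_qnn['scale'],
               output_zero_point=output_qnn['zero_point'],
               out_dtype="int8")
       return out
   ~~~~
   
   Note that the integer mean divides an int32 sum with integer division, so it truncates relative to a float mean; whether that rounding error is acceptable is exactly the concern raised above.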
   
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services