Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2020/07/01 20:00:19 UTC

[GitHub] [incubator-tvm] anijain2305 commented on a change in pull request #5848: [TFLite] QNN support for TFLite 2.1.0 quantized models

anijain2305 commented on a change in pull request #5848:
URL: https://github.com/apache/incubator-tvm/pull/5848#discussion_r448586450



##########
File path: python/tvm/relay/frontend/tflite.py
##########
@@ -263,21 +305,29 @@ def get_tensor_value(self, tensor_wrapper):
         except ImportError:
             raise ImportError("The tflite package must be installed")
 
+        # Read the data from the buffer. Also extract the shape.
+        # The shape is used later to reshape the data.
+        data = tensor_wrapper.buffer.DataAsNumpy()
+        shape = tensor_wrapper.tensor.ShapeAsNumpy()
+
+        # When the TFLite buffer holds a scalar (size 1), the TFLite tensor
+        # shape is reported as 0. Set the shape to (1,) so that the numpy
+        # reshape below works.
+        if data.size == 1 and isinstance(shape, int) and shape == 0:
+            shape = (1,)
+
+        if tensor_wrapper.tensor.Type() == TensorType.INT8:
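The scalar-shape special case in the diff above can be sketched in isolation. This is a hypothetical standalone snippet using plain numpy (no TFLite dependency); the values are made up, but the shape-rewriting logic mirrors the diff:

```python
import numpy as np

# A buffer holding a single scalar value, as TFLite would expose it.
data = np.array([7], dtype=np.int8)
# TFLite reports the shape of a scalar tensor as the integer 0.
shape = 0

# Same guard as in the diff: rewrite the shape so numpy reshape works,
# since data.reshape(0) would fail for a size-1 array.
if data.size == 1 and isinstance(shape, int) and shape == 0:
    shape = (1,)

reshaped = data.reshape(shape)
```

After the rewrite, `reshaped` is a one-element array of shape `(1,)` rather than a reshape error.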

Review comment:
       For the first comment, thanks, let me take a look.
   
    For the second suggestion, about has_same_qnn_params, I do not think we need it. All of the ops for which we must check that the QNN params match have scalar scales and zero points: per-axis quantization is limited to weights, and is therefore limited to the conv2d and dense ops, where this check is not needed.
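The argument above (scalar scales and zero points suffice for the comparison) can be sketched as follows. This is a hypothetical illustration, not the actual TVM has_same_qnn_params implementation; the function name and parameters are assumptions:

```python
import numpy as np

def same_scalar_qnn_params(lhs_scale, lhs_zero_point,
                           rhs_scale, rhs_zero_point):
    # Because per-axis quantization only appears on conv2d/dense weights,
    # the params compared here are plain scalars: a direct value comparison
    # suffices, with no per-channel arrays to align.
    return bool(np.isclose(lhs_scale, rhs_scale)) and \
        lhs_zero_point == rhs_zero_point

# Example: two tensors quantized identically vs. with different scales.
same = same_scalar_qnn_params(0.0078125, 128, 0.0078125, 128)
diff = same_scalar_qnn_params(0.0078125, 128, 0.015625, 128)
```

Here `same` is True and `diff` is False; a per-axis variant would instead need elementwise comparison of scale arrays.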
   
   




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org