Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2020/12/11 10:03:55 UTC

[GitHub] [tvm] euntaik opened a new pull request #7093: [TFLite] add support for float16

euntaik opened a new pull request #7093:
URL: https://github.com/apache/tvm/pull/7093


   add support for float16


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [tvm] FrozenGene commented on pull request #7093: [TFLite] add support for float16

Posted by GitBox <gi...@apache.org>.
FrozenGene commented on pull request #7093:
URL: https://github.com/apache/tvm/pull/7093#issuecomment-743115836


   It would be nice to add tests.


[GitHub] [tvm] FrozenGene edited a comment on pull request #7093: [TFLite] add support for float16

Posted by GitBox <gi...@apache.org>.
FrozenGene edited a comment on pull request #7093:
URL: https://github.com/apache/tvm/pull/7093#issuecomment-747429353


   @euntaik could your PR solve this issue: https://github.com/apache/tvm/issues/5823? If so, we could associate this PR with that issue and close it.
   
   @giuseros could you take another look? Thanks!


[GitHub] [tvm] tqchen merged pull request #7093: [TFLite] add support for float16

Posted by GitBox <gi...@apache.org>.
tqchen merged pull request #7093:
URL: https://github.com/apache/tvm/pull/7093


   


[GitHub] [tvm] giuseros commented on a change in pull request #7093: [TFLite] add support for float16

Posted by GitBox <gi...@apache.org>.
giuseros commented on a change in pull request #7093:
URL: https://github.com/apache/tvm/pull/7093#discussion_r542390418



##########
File path: python/tvm/relay/frontend/tflite.py
##########
@@ -1991,20 +1994,29 @@ def convert_conv(self, op, conv_type):
         weight_tensor_type_str = self.get_tensor_type_str(weight_tensor_type)
 
         in_expr = self.get_expr(input_tensor_idx)
-        weight_value = self.get_tensor_value(weight_tensor)
-
-        # TFLite kernel layout:
-        # convolution:
-        # OC KH KW IC, we require KH KW IC OC (HWIO)
-        # depthwise convolution:
-        # 1 KH KW C(input_c * depth_multiplier), we require
-        # KH KW IC M (depth_multiplier) (HWOI)
-        if is_depthwise_conv:
-            weight_value = weight_value.reshape(kernel_h, kernel_w, input_c, depth_multiplier)
+
+        if self.has_expr(weight_tensor.tensor_idx):
+            weight_expr = self.get_expr(weight_tensor.tensor_idx)
+            if is_depthwise_conv:
+                weight_expr = _op.reshape(
+                    weight_expr, (kernel_h, kernel_w, input_c, depth_multiplier)
+                )
+            else:
+                weight_expr = _op.transpose(weight_expr, axes=(1, 2, 3, 0))

Review comment:
       Oh, OK, now I see, thanks for the explanation! When you refer to a Quantize op, you mean a float32->float16 quantization, right? Also, could you write this in a comment? That way, everyone who reads this code can understand the logic.




[GitHub] [tvm] euntaik commented on a change in pull request #7093: [TFLite] add support for float16

Posted by GitBox <gi...@apache.org>.
euntaik commented on a change in pull request #7093:
URL: https://github.com/apache/tvm/pull/7093#discussion_r542268005



##########
File path: python/tvm/relay/frontend/tflite.py
##########
@@ -1991,20 +1994,29 @@ def convert_conv(self, op, conv_type):
         weight_tensor_type_str = self.get_tensor_type_str(weight_tensor_type)
 
         in_expr = self.get_expr(input_tensor_idx)
-        weight_value = self.get_tensor_value(weight_tensor)
-
-        # TFLite kernel layout:
-        # convolution:
-        # OC KH KW IC, we require KH KW IC OC (HWIO)
-        # depthwise convolution:
-        # 1 KH KW C(input_c * depth_multiplier), we require
-        # KH KW IC M (depth_multiplier) (HWOI)
-        if is_depthwise_conv:
-            weight_value = weight_value.reshape(kernel_h, kernel_w, input_c, depth_multiplier)
+
+        if self.has_expr(weight_tensor.tensor_idx):
+            weight_expr = self.get_expr(weight_tensor.tensor_idx)
+            if is_depthwise_conv:
+                weight_expr = _op.reshape(
+                    weight_expr, (kernel_h, kernel_w, input_c, depth_multiplier)
+                )
+            else:
+                weight_expr = _op.transpose(weight_expr, axes=(1, 2, 3, 0))

Review comment:
       Yes, it is related to float16 support.
   TFLite converts 32-bit models to 16-bit models by introducing a Quantize op for every op that contains 32-bit values (weights, biases, constants, etc.).
   So a 16-bit converted TFLite model has the conv2d op receiving its weights as an input tensor instead of as a weight value attached to the op itself.
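   For context, this kind of float16 model is produced with the public TFLite converter API. A minimal sketch (the SavedModel path is a placeholder):
   
       import tensorflow as tf
   
       saved_model_dir = "path/to/saved_model"  # placeholder path
   
       # Convert a float32 SavedModel to a float16-quantized TFLite model.
       # The converter stores weights as float16 constants and inserts
       # Dequantize ops so that compute ops still receive float32 inputs.
       converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
       converter.optimizations = [tf.lite.Optimize.DEFAULT]
       converter.target_spec.supported_types = [tf.float16]
       tflite_fp16_model = converter.convert()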




[GitHub] [tvm] euntaik commented on a change in pull request #7093: [TFLite] add support for float16

Posted by GitBox <gi...@apache.org>.
euntaik commented on a change in pull request #7093:
URL: https://github.com/apache/tvm/pull/7093#discussion_r542391750



##########
File path: python/tvm/relay/frontend/tflite.py
##########
@@ -1991,20 +1994,29 @@ def convert_conv(self, op, conv_type):
         weight_tensor_type_str = self.get_tensor_type_str(weight_tensor_type)
 
         in_expr = self.get_expr(input_tensor_idx)
-        weight_value = self.get_tensor_value(weight_tensor)
-
-        # TFLite kernel layout:
-        # convolution:
-        # OC KH KW IC, we require KH KW IC OC (HWIO)
-        # depthwise convolution:
-        # 1 KH KW C(input_c * depth_multiplier), we require
-        # KH KW IC M (depth_multiplier) (HWOI)
-        if is_depthwise_conv:
-            weight_value = weight_value.reshape(kernel_h, kernel_w, input_c, depth_multiplier)
+
+        if self.has_expr(weight_tensor.tensor_idx):
+            weight_expr = self.get_expr(weight_tensor.tensor_idx)
+            if is_depthwise_conv:
+                weight_expr = _op.reshape(
+                    weight_expr, (kernel_h, kernel_w, input_c, depth_multiplier)
+                )
+            else:
+                weight_expr = _op.transpose(weight_expr, axes=(1, 2, 3, 0))

Review comment:
       Oh, my bad. It should be Dequantize instead of Quantize. And yes, I'll write that in the comment. Thanks.
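   Schematically, the converted graph then feeds the conv weights through a Dequantize op (an illustrative sketch, not an exact model dump):
   
       weights (float16 constant) --> DEQUANTIZE (float16 -> float32) --+
                                                                        v
       input (float32) ---------------------------------------------> CONV_2D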




[GitHub] [tvm] euntaik commented on pull request #7093: [TFLite] add support for float16

Posted by GitBox <gi...@apache.org>.
euntaik commented on pull request #7093:
URL: https://github.com/apache/tvm/pull/7093#issuecomment-747389683


   Is there something else I should fix or change?


[GitHub] [tvm] tqchen commented on pull request #7093: [TFLite] add support for float16

Posted by GitBox <gi...@apache.org>.
tqchen commented on pull request #7093:
URL: https://github.com/apache/tvm/pull/7093#issuecomment-748982915


   Thanks @euntaik @giuseros @FrozenGene!


[GitHub] [tvm] giuseros commented on a change in pull request #7093: [TFLite] add support for float16

Posted by GitBox <gi...@apache.org>.
giuseros commented on a change in pull request #7093:
URL: https://github.com/apache/tvm/pull/7093#discussion_r542233725



##########
File path: python/tvm/relay/frontend/tflite.py
##########
@@ -1991,20 +1994,29 @@ def convert_conv(self, op, conv_type):
         weight_tensor_type_str = self.get_tensor_type_str(weight_tensor_type)
 
         in_expr = self.get_expr(input_tensor_idx)
-        weight_value = self.get_tensor_value(weight_tensor)
-
-        # TFLite kernel layout:
-        # convolution:
-        # OC KH KW IC, we require KH KW IC OC (HWIO)
-        # depthwise convolution:
-        # 1 KH KW C(input_c * depth_multiplier), we require
-        # KH KW IC M (depth_multiplier) (HWOI)
-        if is_depthwise_conv:
-            weight_value = weight_value.reshape(kernel_h, kernel_w, input_c, depth_multiplier)
+
+        if self.has_expr(weight_tensor.tensor_idx):
+            weight_expr = self.get_expr(weight_tensor.tensor_idx)
+            if is_depthwise_conv:
+                weight_expr = _op.reshape(
+                    weight_expr, (kernel_h, kernel_w, input_c, depth_multiplier)
+                )
+            else:
+                weight_expr = _op.transpose(weight_expr, axes=(1, 2, 3, 0))

Review comment:
       A few questions about this:
   * is this related to `float16` support?
   * is your test hitting this code?
   * could you add a comment explaining when this branch is taken and why?
   
   The same questions hold for the bias handling later.




[GitHub] [tvm] giuseros commented on a change in pull request #7093: [TFLite] add support for float16

Posted by GitBox <gi...@apache.org>.
giuseros commented on a change in pull request #7093:
URL: https://github.com/apache/tvm/pull/7093#discussion_r545685677



##########
File path: python/tvm/relay/frontend/tflite.py
##########
@@ -1991,20 +1994,29 @@ def convert_conv(self, op, conv_type):
         weight_tensor_type_str = self.get_tensor_type_str(weight_tensor_type)
 
         in_expr = self.get_expr(input_tensor_idx)
-        weight_value = self.get_tensor_value(weight_tensor)
-
-        # TFLite kernel layout:
-        # convolution:
-        # OC KH KW IC, we require KH KW IC OC (HWIO)
-        # depthwise convolution:
-        # 1 KH KW C(input_c * depth_multiplier), we require
-        # KH KW IC M (depth_multiplier) (HWOI)
-        if is_depthwise_conv:
-            weight_value = weight_value.reshape(kernel_h, kernel_w, input_c, depth_multiplier)
+
+        if self.has_expr(weight_tensor.tensor_idx):
+            weight_expr = self.get_expr(weight_tensor.tensor_idx)
+            if is_depthwise_conv:
+                weight_expr = _op.reshape(
+                    weight_expr, (kernel_h, kernel_w, input_c, depth_multiplier)
+                )
+            else:
+                weight_expr = _op.transpose(weight_expr, axes=(1, 2, 3, 0))

Review comment:
       Hi @euntaik, what I meant was to write it as a comment in the code (before the `if` statement), so that when people go through the code they will know what is going on.
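   Such an in-code comment might look like the sketch below (hypothetical wording, not necessarily the text that was merged):
   
       # TFLite float16 conversion does not attach the weights to the conv
       # op as a constant; they arrive through a preceding Dequantize op
       # (float16 -> float32), so an expression for the weight tensor index
       # already exists. In that case, reshape/transpose the expression
       # instead of reading a raw tensor value.
       if self.has_expr(weight_tensor.tensor_idx):
           ...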




[GitHub] [tvm] FrozenGene commented on pull request #7093: [TFLite] add support for float16

Posted by GitBox <gi...@apache.org>.
FrozenGene commented on pull request #7093:
URL: https://github.com/apache/tvm/pull/7093#issuecomment-747973235


   @euntaik Could you help check whether your PR solves issue #5823? IMO, your PR should be enough.


[GitHub] [tvm] euntaik commented on pull request #7093: [TFLite] add support for float16

Posted by GitBox <gi...@apache.org>.
euntaik commented on pull request #7093:
URL: https://github.com/apache/tvm/pull/7093#issuecomment-744190213


   I've added a test.
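   A minimal sketch of such an end-to-end float16 check (illustrative, not the PR's actual test; it assumes TensorFlow, the tflite Python package, and TVM are installed):
   
       import tensorflow as tf
       import tflite
       from tvm import relay
   
       # Build a tiny conv model and convert it to a float16 TFLite model.
       inp = tf.keras.Input(shape=(8, 8, 3), batch_size=1)
       out = tf.keras.layers.Conv2D(4, 3)(inp)
       keras_model = tf.keras.Model(inp, out)
   
       converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
       converter.optimizations = [tf.lite.Optimize.DEFAULT]
       converter.target_spec.supported_types = [tf.float16]
       tflite_buf = converter.convert()
   
       # Import the float16 flatbuffer through the TFLite frontend; the conv
       # weights now arrive via a Dequantize op rather than as constants.
       tflite_model = tflite.Model.GetRootAsModel(tflite_buf, 0)
       mod, params = relay.frontend.from_tflite(tflite_model)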

