Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2020/09/30 22:43:05 UTC

[GitHub] [incubator-tvm] masahi opened a new pull request #6602: [Torch, Quantization] Necessary workaround to prepare for 1.6 update

masahi opened a new pull request #6602:
URL: https://github.com/apache/incubator-tvm/pull/6602


   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] siju-samuel commented on a change in pull request #6602: [Torch, Quantization] Necessary workaround to prepare for 1.6 update

Posted by GitBox <gi...@apache.org>.
siju-samuel commented on a change in pull request #6602:
URL: https://github.com/apache/incubator-tvm/pull/6602#discussion_r505998308



##########
File path: python/tvm/relay/frontend/qnn_torch.py
##########
@@ -26,6 +26,14 @@
 from tvm.relay import op as _op
 from tvm.relay.frontend.common import infer_shape
 
+from packaging import version

Review comment:
       move inside `_is_newer_than_1_5`

##########
File path: python/tvm/relay/frontend/qnn_torch.py
##########
@@ -46,59 +54,95 @@ def __init__(self, weight, bias, scale, zero_point, param_key):
         self.zero_point = _expr.const(zero_point, dtype="int32")
 
 
-def _unpack_quant_params(param_name, packed_params, unpack_func):
-    # Torch stores quantized params in a custom packed format,
-    # need to unpack and retrieve them as numpy arrays
-    qweight, bias = unpack_func(packed_params)
-    weight_np = qweight.dequantize().numpy()
+class ConvPackedParam(QNNParam):
+    """A placeholder for quantized conv2d op attributs

Review comment:
       attributs > attributes
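
   For context, the diff introduces a `ConvPackedParam` subclass because Torch 1.6 stores conv attributes (strides, padding, dilation, groups) inside the packed params alongside the quantized weights. A minimal sketch of that shape, with a simplified `QNNParam` base (field names here are illustrative, not the PR's exact code):

```python
class QNNParam:
    """Sketch: quantized weight/bias plus their quantization params."""

    def __init__(self, weight, bias, scale, zero_point):
        self.weight = weight
        self.bias = bias
        self.scale = scale
        self.zero_point = zero_point


class ConvPackedParam(QNNParam):
    """Sketch: placeholder for quantized conv2d op attributes that
    Torch 1.6+ packs together with the weights."""

    def __init__(self, weight, bias, scale, zero_point,
                 strides, padding, dilation, groups):
        super().__init__(weight, bias, scale, zero_point)
        self.strides = strides
        self.padding = padding
        self.dilation = dilation
        self.groups = groups
```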

##########
File path: python/tvm/relay/frontend/qnn_torch.py
##########
@@ -458,24 +513,40 @@ def _impl(inputs, _):
         # inputs[7]: output_zero_point
         # inputs[8]: input_scale (added manually by frontend)
         # inputs[9]: input_zero_point (added manually by frontend)
-        weight = inputs[1][0]
-        weight_scale = inputs[1][1]
-        weight_zero_point = inputs[1][2]
-
-        output_scale = _expr.const(inputs[6])
-        output_zero_point = _expr.const(inputs[7])
+        conv_params = inputs[1]
+        weight = conv_params[0]
+        weight_scale = conv_params[1]
+        weight_zero_point = conv_params[2]
+        bias = conv_params[3]
+
+        if len(conv_params) > 4:
+            # Torch 1.6 or newer case
+            strides = conv_params[4]
+            padding = conv_params[5]
+            dilation = conv_params[6]
+            groups = conv_params[7]
+
+            output_scale = _expr.const(inputs[2])
+            output_zero_point = _expr.const(inputs[3])
+
+            assert len(inputs) == 6, "Input quant params not found in op inputs"
+
+            # These are manually added by add_input_quant_params_to_op_inputs above
+            # In torch, they are retrieved from QTensor data structure at runt

Review comment:
       runtime?
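
   The branching above keys off the length of `conv_params`: Torch 1.5 and earlier pack only (weight, scale, zero point, bias), while 1.6+ appends strides, padding, dilation, and groups to the same tuple. A standalone sketch of that unpacking logic (hypothetical helper name, simplified to plain tuples rather than Relay expressions):

```python
def unpack_conv_params(conv_params):
    # Sketch: first four entries are common to all Torch versions.
    weight, weight_scale, weight_zero_point, bias = conv_params[:4]
    attrs = {}
    if len(conv_params) > 4:
        # Torch 1.6+ case: conv attributes travel with the packed params.
        attrs = dict(zip(("strides", "padding", "dilation", "groups"),
                         conv_params[4:8]))
    return weight, weight_scale, weight_zero_point, bias, attrs
```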







[GitHub] [incubator-tvm] masahi merged pull request #6602: [Torch, Quantization] Necessary workaround to prepare for 1.6 update

Posted by GitBox <gi...@apache.org>.
masahi merged pull request #6602:
URL: https://github.com/apache/incubator-tvm/pull/6602


   





[GitHub] [incubator-tvm] masahi commented on pull request #6602: [Torch, Quantization] Necessary workaround to prepare for 1.6 update

Posted by GitBox <gi...@apache.org>.
masahi commented on pull request #6602:
URL: https://github.com/apache/incubator-tvm/pull/6602#issuecomment-709702663


   Thanks @siju-samuel. I added `pytorch_utils.py` and moved the version check function there.





[GitHub] [incubator-tvm] masahi commented on pull request #6602: [Torch, Quantization] Necessary workaround to prepare for 1.6 update

Posted by GitBox <gi...@apache.org>.
masahi commented on pull request #6602:
URL: https://github.com/apache/incubator-tvm/pull/6602#issuecomment-709954612


   Thanks @siju-samuel 





[GitHub] [incubator-tvm] masahi commented on pull request #6602: [Torch, Quantization] Necessary workaround to prepare for 1.6 update

Posted by GitBox <gi...@apache.org>.
masahi commented on pull request #6602:
URL: https://github.com/apache/incubator-tvm/pull/6602#issuecomment-709602965


   @siju-samuel @anijain2305 can you merge this? It is a prerequisite for upgrading our CI to the latest PyTorch version.

