Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2021/09/27 13:32:43 UTC

[GitHub] [tvm] masahi opened a new pull request #9135: [Torch] Support returning quantized weights and bias for BYOC use cases

masahi opened a new pull request #9135:
URL: https://github.com/apache/tvm/pull/9135


   This addresses the issue discussed in https://discuss.tvm.apache.org/t/qnn-pytorch-byoc-full-integer-qnn-support/11127

   PyTorch stores quantized weights in a custom packed format, so we cannot directly access the 8-bit weights as NumPy arrays. We use a PyTorch function to unpack quantized weights into float32 arrays together with their quantization parameters.

   By default, we use `qnn.op.quantize(...)` to recover int8 weights: the frontend returns float32 weights to users and relies on QNN lowering and the Relay constant-folding pass to quantize the weights at compile time. In BYOC use cases, however, we cannot apply the constant-folding pass to a QNN graph.

   I added a new option to quantize weights in the frontend, using a function equivalent to `qnn.op.quantize(...)` but operating on NumPy arrays. In hindsight, we should have taken this approach from the beginning. The old behavior is kept as the default for backward compatibility.
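   As a rough illustration of what such a frontend-side quantization amounts to, here is a minimal NumPy sketch of per-tensor affine quantization, equivalent in spirit to `qnn.op.quantize(...)`. The function name `quantize_numpy` and its signature are hypothetical for this example, not TVM's actual API:

```python
import numpy as np

def quantize_numpy(weight_fp32, scale, zero_point, dtype="int8"):
    """Affine-quantize a float32 array: q = clip(round(w / scale) + zp, qmin, qmax)."""
    info = np.iinfo(dtype)
    q = np.round(weight_fp32 / scale) + zero_point
    return np.clip(q, info.min, info.max).astype(dtype)

# Round-trip sanity check: dequantizing recovers the input to within scale / 2.
w = np.array([-0.5, 0.0, 0.25, 0.49], dtype="float32")
scale, zero_point = 0.01, 0
qw = quantize_numpy(w, scale, zero_point)
w_rec = (qw.astype("float32") - zero_point) * scale
assert np.all(np.abs(w - w_rec) <= scale / 2)
```

   Quantizing eagerly like this gives BYOC backends the int8 weights as plain constants, with no dependence on QNN lowering or constant folding.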
   
   cc @comaniac 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [tvm] masahi merged pull request #9135: [Torch] Support returning quantized weights and bias for BYOC use cases

Posted by GitBox <gi...@apache.org>.
masahi merged pull request #9135:
URL: https://github.com/apache/tvm/pull/9135


   





[GitHub] [tvm] comaniac commented on a change in pull request #9135: [Torch] Support returning quantized weights and bias for BYOC use cases

comaniac commented on a change in pull request #9135:
URL: https://github.com/apache/tvm/pull/9135#discussion_r716871212



##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -3713,6 +3713,7 @@ def from_pytorch(
     custom_convert_map=None,
     default_dtype="float32",
     use_parser_friendly_name=False,
+    return_int8_weight=False,

Review comment:
       How about `keep_quantized_weight`? IIUC, this is only effective when the PyTorch model is already quantized. `return_int8_weight` might confuse users into thinking that this flag can do quantization for them.







[GitHub] [tvm] masahi commented on a change in pull request #9135: [Torch] Support returning quantized weights and bias for BYOC use cases

masahi commented on a change in pull request #9135:
URL: https://github.com/apache/tvm/pull/9135#discussion_r717283401



##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -3713,6 +3713,7 @@ def from_pytorch(
     custom_convert_map=None,
     default_dtype="float32",
     use_parser_friendly_name=False,
+    return_int8_weight=False,

Review comment:
       Thanks, I think this is a good suggestion.






