Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2022/09/23 06:58:00 UTC

[GitHub] [tvm] blackkker opened a new issue, #12882: [Bug] TOpPattern attr of qnn op discussion.

blackkker opened a new issue, #12882:
URL: https://github.com/apache/tvm/issues/12882

   When importing a quantized model, I found that the qnn ops have not registered the **TOpPattern** attribute.
   Can I set them all to kOpaque first? Further discussion is needed to decide the proper **TOpPattern** attr for each qnn op.
   ### Expected behavior
   
   Build normally.
   
   ### Actual behavior
   
   `Check failed: (idx < data_.size() && data_[idx].second != 0) is false: Attribute TOpPattern has not been registered for qnn.concatenate`
   `Check failed: (idx < data_.size() && data_[idx].second != 0) is false: Attribute TOpPattern has not been registered for qnn.dense`
   `Check failed: (idx < data_.size() && data_[idx].second != 0) is false: Attribute TOpPattern has not been registered for qnn.requantize`
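   The failing check is TVM's per-op attribute lookup. As a rough illustration (a pure-Python toy, not TVM's actual C++ attribute registry; `OpAttrMap` and the pattern value are invented for this sketch), the shape of the failure is:

```python
# Toy sketch (not TVM internals) of an op-attribute map: looking up an
# attribute that was never registered for an op fails, mirroring the
# "Attribute TOpPattern has not been registered" check above.
class OpAttrMap:
    def __init__(self, attr_name):
        self.attr_name = attr_name
        self._data = {}

    def register(self, op_name, value):
        self._data[op_name] = value

    def __getitem__(self, op_name):
        if op_name not in self._data:
            raise RuntimeError(
                f"Attribute {self.attr_name} has not been registered for {op_name}"
            )
        return self._data[op_name]

patterns = OpAttrMap("TOpPattern")
patterns.register("nn.dense", 4)  # placeholder pattern level, not TVM's real value

patterns["nn.dense"]       # fine: the op registered the attribute
try:
    patterns["qnn.dense"]  # fails: qnn ops never registered TOpPattern
except RuntimeError as e:
    print(e)  # prints: Attribute TOpPattern has not been registered for qnn.dense
```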
   ### Environment
   
   PyTorch: 1.12.0
   TVM: [9ce95a9](https://github.com/apache/tvm/tree/9ce95a9abe3db43b4a4187111c9e2ad0d6bf3dbd)
   
   ### Steps to reproduce
   Download
   [bug.zip](https://github.com/apache/tvm/files/9631099/bug.zip)
   
   
   Run `python check.py`:
   ```python
   import tvm
   from tvm import relay
   
   import torch
   
   model_name = "googlenet_quant_torchscript.pt"
   pytorch_model = torch.jit.load(model_name).float().eval()
   input_name = "x"
   shape_list = [(input_name, (1, 3, 224, 224))]
   mod, params = relay.frontend.from_pytorch(pytorch_model, shape_list)
   
   target = tvm.target.Target("llvm", host="llvm")
   dev = tvm.cpu(0)
   # opt_level=0 skips the Legalize pass that lowers qnn ops to standard
   # relay ops, triggering the "TOpPattern has not been registered" error
   with tvm.transform.PassContext(opt_level=0):
       lib = relay.build(mod, target=target, params=params)
   ```
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [tvm] mbrookhart commented on issue #12882: [Bug] TOpPattern attr of qnn op discussion.

Posted by GitBox <gi...@apache.org>.
mbrookhart commented on issue #12882:
URL: https://github.com/apache/tvm/issues/12882#issuecomment-1262695672

   You can't compile a qnn model at opt level 0: there are no default kernels for the qnn ops. Instead, they are lowered to standard Relay ops via [QNN canonicalize and legalize](https://github.com/apache/tvm/blob/main/python/tvm/relay/qnn/transform.py), which use the Relay Legalize pass, registered for [opt level 1](https://github.com/apache/tvm/blob/813136401a11a49d6c15e6013c34dd822a5c4ff6/src/relay/transforms/legalize.cc#L104).
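   The gating described above can be sketched in plain Python (a toy model, not TVM's real pass infrastructure; `Pass` and `run_passes` are illustrative names): a pass registered at opt level 1 never runs under a level-0 context, so the qnn ops reach code generation un-lowered.

```python
# Toy model of opt-level-gated passes: a PassContext at opt_level=0 skips
# any pass registered at a higher level, so "Legalize" (level 1) never
# rewrites the qnn ops.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Pass:
    name: str
    opt_level: int
    fn: Callable[[str], str]

def run_passes(module: str, passes: List[Pass], opt_level: int) -> str:
    for p in passes:
        if p.opt_level <= opt_level:  # gate: skipped when the context level is lower
            module = p.fn(module)
    return module

# Stand-in for QNN legalization: rewrite a qnn op into a standard relay op.
legalize = Pass("Legalize", 1, lambda m: m.replace("qnn.dense", "nn.dense"))

print(run_passes("qnn.dense", [legalize], opt_level=0))  # prints: qnn.dense (pass skipped)
print(run_passes("qnn.dense", [legalize], opt_level=1))  # prints: nn.dense (pass ran)
```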




[GitHub] [tvm] blackkker commented on issue #12882: [Bug] TOpPattern attr of qnn op discussion.

Posted by GitBox <gi...@apache.org>.
blackkker commented on issue #12882:
URL: https://github.com/apache/tvm/issues/12882#issuecomment-1255860877

   @mbrookhart @AndrewZhaoLuo @masahi 




[GitHub] [tvm] blackkker commented on issue #12882: [Bug] TOpPattern attr of qnn op discussion.

Posted by GitBox <gi...@apache.org>.
blackkker commented on issue #12882:
URL: https://github.com/apache/tvm/issues/12882#issuecomment-1264379588

   > You can't compile a qnn model at opt level 0: there are no default kernels for the qnn ops. Instead, they are lowered to standard Relay ops via [QNN canonicalize and legalize](https://github.com/apache/tvm/blob/main/python/tvm/relay/qnn/transform.py), which use the Relay Legalize pass, registered for [opt level 1](https://github.com/apache/tvm/blob/813136401a11a49d6c15e6013c34dd822a5c4ff6/src/relay/transforms/legalize.cc#L104).
   
   Thanks for your reply. I got it!

