Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2022/06/06 01:35:47 UTC

[GitHub] [tvm] mengceng15 opened a new pull request, #11586: [Relay][OP] Fix batch matmul quantization implementation

mengceng15 opened a new pull request, #11586:
URL: https://github.com/apache/tvm/pull/11586

   Fix a minor problem in the batch matmul quantization realize step.
   
   If the two inputs are cast to the same data type, no VNNI implementation will be added to the batch_matmul x86 strategy
   (in /python/tvm/relay/op/strategy/x86.py, function batch_matmul_strategy_cpu).
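   
   A minimal sketch of the dtype gate involved (the helper name picks_vnni_batch_matmul and its
   parameters are illustrative, not the upstream TVM API; it assumes the VNNI kernel expects
   uint8 activations paired with int8 weights):
   
   # Illustration only: a simplified stand-in for the dtype check that
   # batch_matmul_strategy_cpu performs before adding the VNNI kernel.
   # The helper name and parameters are hypothetical.
   def picks_vnni_batch_matmul(has_vnni: bool, data_dtype: str, weight_dtype: str) -> bool:
       """Return True if the VNNI batch_matmul implementation would be selected."""
       return has_vnni and data_dtype == "uint8" and weight_dtype == "int8"
   
   # Before this fix, quantization realize cast both inputs to the same dtype,
   # so the VNNI path was skipped even on VNNI-capable CPUs:
   assert picks_vnni_batch_matmul(True, "int8", "int8") is False
   # With the uint8/int8 pairing preserved, the VNNI implementation is added:
   assert picks_vnni_batch_matmul(True, "uint8", "int8") is True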
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [tvm] masahi merged pull request #11586: [Relay][OP] Fix batch matmul quantization implementation

Posted by GitBox <gi...@apache.org>.
masahi merged PR #11586:
URL: https://github.com/apache/tvm/pull/11586

