Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2021/04/14 03:02:36 UTC

[GitHub] [tvm] comaniac commented on pull request #7843: Fix PyTorch batch_matmul conversion when given (3-dim, 2-dim) input pair

comaniac commented on pull request #7843:
URL: https://github.com/apache/tvm/pull/7843#issuecomment-819194151


   Is this related to #7730? The current CuBLAS batch_matmul implementation doesn't support implicit broadcasting, but the TE compute does. It would be better to support it on the CuBLAS side without introducing a new op.
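
   For reference, the following is a minimal sketch (not taken from the PR) of the implicit broadcasting under discussion: a 3-dim batched operand multiplied by a plain 2-dim matrix. NumPy is used purely for illustration, and all shapes and variable names are hypothetical. It also shows the explicit broadcast a frontend could emit when a backend such as a CuBLAS batch_matmul kernel requires both operands to be 3-dim.

       import numpy as np

       batch, M, K, N = 4, 8, 16, 32
       a = np.random.randn(batch, M, K).astype("float32")  # 3-dim operand
       b = np.random.randn(K, N).astype("float32")         # 2-dim operand

       # Implicit broadcasting: the 2-dim operand is treated as if it were
       # tiled across the batch axis, yielding a (batch, M, N) result.
       out_broadcast = np.matmul(a, b)

       # Equivalent explicit form: materialize the broadcast so that both
       # operands are 3-dim before calling a batched matmul kernel.
       b_expanded = np.broadcast_to(b, (batch, K, N))
       out_explicit = np.matmul(a, b_expanded)

       assert out_broadcast.shape == (batch, M, N)
       assert np.allclose(out_broadcast, out_explicit)

   Note that the actual TE/Relay batch_matmul conventions (e.g. operand transposition) may differ from this plain NumPy illustration.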

