Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2021/06/04 16:43:38 UTC

[GitHub] [tvm] comaniac commented on issue #7730: [Bug] Missing broadcast_to before batch_matmul for CuBLAS

comaniac commented on issue #7730:
URL: https://github.com/apache/tvm/issues/7730#issuecomment-854866628


   While @csullivan proposed a long-term solution to resolve the implementation differences between targets, this issue on CUDA has been worked around in the PyTorch frontend by the PR mentioned above. Specifically, if either of the two inputs to matmul is 2D, the PyTorch frontend now reshapes the 3D tensor to 2D and uses `dense`, instead of expanding the 2D tensor to 3D and using `batch_matmul`. Meanwhile, other frontends may still hit this issue, so I'll see if I can find time next week to file a PR that fixes the CuBLAS side.
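
   For reference, here is a minimal Relay sketch of that workaround pattern (the shapes and variable names below are hypothetical, chosen only for illustration; the actual change lives in the PyTorch converter):

   ```python
   import tvm
   from tvm import relay

   # Hypothetical shapes: a 3D activation and a 2D weight, as in torch.matmul(a, b).
   a = relay.var("a", shape=(8, 16, 32), dtype="float32")  # (batch, M, K)
   b = relay.var("b", shape=(32, 64), dtype="float32")     # (K, N)

   # Workaround: flatten the 3D operand to 2D and use nn.dense
   # (which expects its weight as (N, K)), instead of broadcasting
   # the 2D operand to 3D and calling nn.batch_matmul.
   a2d = relay.reshape(a, (-1, 32))          # (batch * M, K)
   w = relay.transpose(b, axes=(1, 0))       # (N, K)
   out2d = relay.nn.dense(a2d, w)            # (batch * M, N)
   out = relay.reshape(out2d, (8, 16, 64))   # (batch, M, N)

   mod = tvm.IRModule.from_expr(relay.Function([a, b], out))
   print(mod)
   ```

   Because the computation goes through `dense`, the CuBLAS path that breaks on implicitly broadcast `batch_matmul` inputs is avoided entirely.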


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org