Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2021/06/14 21:53:54 UTC

[GitHub] [tvm] comaniac commented on a change in pull request #8251: [Frontend, Tensorflow] Support for broadcasting in batch_matmul when shapes differ

comaniac commented on a change in pull request #8251:
URL: https://github.com/apache/tvm/pull/8251#discussion_r651303466



##########
File path: python/tvm/relay/frontend/tensorflow_ops.py
##########
@@ -1157,11 +1154,18 @@ def _impl(inputs, attr, params, mod):
                 new_shape_y = _op.concatenate(_op.Tuple(new_shape_y), axis=0)
 
             input_x = _op.reshape(input_x, newshape=new_shape_x)
-            input_y = _op.reshape(input_y, newshape=new_shape_y)
+
+            if np.prod(orig_shape_y) < np.prod(new_shape_y):
+                input_y = _op.broadcast_to(input_y, new_shape_y)

Review comment:
       Agree. Please refer to the ONNX and PyTorch frontends to avoid explicit broadcasting. Both the x86 and CUDA implementations of batch_matmul now support implicit broadcasting, so simply using `expand_dims(input_y)` to make it `(1, k, n)` would be sufficient.
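
The implicit batch broadcasting the comment relies on can be sketched with NumPy, whose `matmul` has the same semantics for batched operands (this is an illustration of the idea, not TVM code; the shape values are made up for the example):

```python
import numpy as np

# Hypothetical shapes: x is batched (b, m, k), y is unbatched (k, n).
b, m, k, n = 4, 3, 5, 2
x = np.random.rand(b, m, k)
y = np.random.rand(k, n)

# expand_dims to (1, k, n) is enough: matmul implicitly broadcasts
# the size-1 batch dimension against b, with no explicit broadcast_to.
y_expanded = np.expand_dims(y, axis=0)   # shape (1, k, n)
out = np.matmul(x, y_expanded)           # shape (b, m, n)

# Equivalent to materializing the broadcast explicitly, which is the
# extra copy the review suggests avoiding.
y_broadcast = np.broadcast_to(y_expanded, (b, k, n))
assert np.allclose(out, np.matmul(x, y_broadcast))
```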




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org