Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2020/10/16 23:54:36 UTC

[GitHub] [incubator-tvm] sxjscience commented on a change in pull request #6699: [Frontend][Relay] Fix MXNet frontend to support NLP backbones in GluonNLP

sxjscience commented on a change in pull request #6699:
URL: https://github.com/apache/incubator-tvm/pull/6699#discussion_r506766247



##########
File path: python/tvm/topi/x86/batch_matmul.py
##########
@@ -157,6 +163,10 @@ def batch_matmul_cblas(cfg, x, y):
     YB, N, YK = get_const_tuple(y.shape)
     assert XB == YB, "batch dimension doesn't match"
     assert XK == YK, "shapes of x and y is inconsistant"
+    if out_shape is not None:
+        assert out_shape[0] == XB, "got invalid output shape"
+        assert out_shape[1] == M, "got invalid output shape"
+        assert out_shape[2] == N, "got invalid output shape"

Review comment:
    The reason is that, without this change, running the end-to-end script with `target = "llvm -mcpu=skylake-avx512 -libs=cblas"` triggers the following error:
   ```python
   TypeError: Traceback (most recent call last):
     [bt] (8) /home/ubuntu/tvm/build/libtvm.so(tvm::relay::backend::GraphRuntimeCodegen::VisitExpr_(tvm::relay::CallNode const*)+0xf12) [0x7f8f383774b2]
     [bt] (7) /home/ubuntu/tvm/build/libtvm.so(+0xf87235) [0x7f8f3834b235]
     [bt] (6) /home/ubuntu/tvm/build/libtvm.so(tvm::relay::CompileEngineImpl::LowerInternal(tvm::relay::CCacheKey const&)+0x8a1) [0x7f8f38355f81]
     [bt] (5) /home/ubuntu/tvm/build/libtvm.so(tvm::relay::ScheduleGetter::Create(tvm::relay::Function const&)+0x25b) [0x7f8f3835265b]
     [bt] (4) /home/ubuntu/tvm/build/libtvm.so(tvm::relay::backend::MemoizedExprTranslator<tvm::runtime::Array<tvm::te::Tensor, void> >::VisitExpr(tvm::RelayExpr const&)+0xa9) [0x7f8f38358b89]
     [bt] (3) /home/ubuntu/tvm/build/libtvm.so(tvm::relay::ExprFunctor<tvm::runtime::Array<tvm::te::Tensor, void> (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)+0x82) [0x7f8f38358952]
     [bt] (2) /home/ubuntu/tvm/build/libtvm.so(tvm::relay::ExprFunctor<tvm::runtime::Array<tvm::te::Tensor, void> (tvm::RelayExpr const&)>::InitVTable()::{lambda(tvm::runtime::ObjectRef const&, tvm::relay::ExprFunctor<tvm::runtime::Array<tvm::te::Tensor, void> (tvm::RelayExpr const&)>*)#6}::_FUN(tvm::runtime::ObjectRef const&, tvm::relay::ExprFunctor<tvm::runtime::Array<tvm::te::Tensor, void> (tvm::RelayExpr const&)>*)+0x27) [0x7f8f3834b717]
     [bt] (1) /home/ubuntu/tvm/build/libtvm.so(tvm::relay::ScheduleGetter::VisitExpr_(tvm::relay::CallNode const*)+0x68c) [0x7f8f3835175c]
     [bt] (0) /home/ubuntu/tvm/build/libtvm.so(+0x112beab) [0x7f8f384efeab]
     File "tvm/_ffi/_cython/./packed_func.pxi", line 55, in tvm._ffi._cy3.core.tvm_callback
     File "/home/ubuntu/tvm/python/tvm/relay/backend/compile_engine.py", line 284, in lower_call
       best_impl, outputs = select_implementation(op, call.attrs, inputs, ret_type, target)
     File "/home/ubuntu/tvm/python/tvm/relay/backend/compile_engine.py", line 206, in select_implementation
       outs = impl.compute(attrs, inputs, out_type)
     File "/home/ubuntu/tvm/python/tvm/relay/op/op.py", line 91, in compute
       return _OpImplementationCompute(self, attrs, inputs, out_type)
     File "tvm/_ffi/_cython/./packed_func.pxi", line 321, in tvm._ffi._cy3.core.PackedFuncBase.__call__
     File "tvm/_ffi/_cython/./packed_func.pxi", line 266, in tvm._ffi._cy3.core.FuncCall
     File "tvm/_ffi/_cython/./base.pxi", line 160, in tvm._ffi._cy3.core.CALL
     [bt] (3) /home/ubuntu/tvm/build/libtvm.so(TVMFuncCall+0x65) [0x7f8f384f3205]
     [bt] (2) /home/ubuntu/tvm/build/libtvm.so(+0x104b8c8) [0x7f8f3840f8c8]
     [bt] (1) /home/ubuntu/tvm/build/libtvm.so(tvm::relay::OpImplementation::Compute(tvm::Attrs const&, tvm::runtime::Array<tvm::te::Tensor, void> const&, tvm::Type const&)+0xb1) [0x7f8f3840f691]
     [bt] (0) /home/ubuntu/tvm/build/libtvm.so(+0x112beab) [0x7f8f384efeab]
     File "tvm/_ffi/_cython/./packed_func.pxi", line 55, in tvm._ffi._cy3.core.tvm_callback
     File "/home/ubuntu/tvm/python/tvm/relay/op/strategy/generic.py", line 686, in _compute_batch_matmul
       return [topi_compute(inputs[0], inputs[1], out_type.shape)]
     File "/home/ubuntu/tvm/python/tvm/autotvm/task/topi_integration.py", line 162, in wrapper
       node = topi_compute(cfg, *args)
   TypeError: batch_matmul_cblas() takes 3 positional arguments but 4 were given
   ```
   
    The root cause is that the generic strategy here passes the inferred output shape as an extra positional argument, so the registered batch_matmul compute has to accept it: 
   https://github.com/apache/incubator-tvm/blob/461e75bd5ffaf45a0f270998514d444463d11261/python/tvm/relay/op/strategy/generic.py#L685-L686
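    
    For reference, the extended compute would look roughly like the sketch below (the decorator, imports, and the final cblas call follow the existing code in `python/tvm/topi/x86/batch_matmul.py`; exact import paths may differ across TVM versions):
    ```python
    from tvm import autotvm
    from tvm.contrib import cblas
    from tvm.topi.utils import get_const_tuple  # may be tvm.topi.util on older branches


    @autotvm.register_topi_compute("batch_matmul_cblas.x86")
    def batch_matmul_cblas(cfg, x, y, out_shape=None):
        # The generic strategy calls topi_compute(inputs[0], inputs[1], out_type.shape),
        # so the compute must accept the inferred output shape as an extra argument.
        XB, M, XK = get_const_tuple(x.shape)
        YB, N, YK = get_const_tuple(y.shape)
        assert XB == YB, "batch dimension doesn't match"
        assert XK == YK, "shapes of x and y is inconsistant"
        if out_shape is not None:
            # Only sanity-check the shape Relay inferred; cblas still derives the
            # actual output shape from x and y.
            assert out_shape[0] == XB, "got invalid output shape"
            assert out_shape[1] == M, "got invalid output shape"
            assert out_shape[2] == N, "got invalid output shape"
        cfg.add_flop(XB * M * N * XK * 2)
        return cblas.batch_matmul(x, y, False, True)
    ```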



