Posted to discuss-archive@tvm.apache.org by ckh via TVM Discuss <no...@discuss.tvm.ai> on 2020/04/01 12:00:29 UTC

[TVM Discuss] [Questions] Relay Winograd conv2d compile error


Hello!

I am currently implementing Winograd conv2d through Relay.

    data_shape = (1, 64, 224, 224)
    w_shape = (64, 64, 3, 3)

    input_data = relay.var('input', relay.TensorType(data_shape, "float32"))
    p1 = relay.var('wgt', relay.TensorType(w_shape, "float32"))

    FIR = relay.nn.contrib_conv2d_winograd_weight_transform(p1, tile_size=4)
    conv = relay.nn.contrib_conv2d_winograd_without_weight_transform(
        input_data,
        FIR,
        tile_size=4,
        strides=(1, 1),
        padding=(1, 1),
        dilation=(1, 1),
        channels=64,
        kernel_size=(3, 3))

    f = relay.Function([input_data, p1], conv)
    mod = tvm.IRModule.from_expr(f)

    with relay.build_config(opt_level=3):
        with tvm.target.create('cuda'):
            gj, lib, params = tvm.relay.build_module.build(mod, 'cuda')


When I compile the above code, I get the error below.

    Traceback (most recent call last):

      File "RELAY_WINO.py", line 66, in <module>
        gj, lib, params = tvm.relay.build_module.build(mod,'cuda')

      File "/home/alpha930/Desktop/TVM/tvm/python/tvm/relay/build_module.py", line 251, in build
        graph_json, mod, params = bld_mod.build(mod, target, target_host, params)

      File "/home/alpha930/Desktop/TVM/tvm/python/tvm/relay/build_module.py", line 120, in build
        self._build(mod, target, target_host)

      File "/home/alpha930/Desktop/TVM/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 213, in __call__
        raise get_last_ffi_error()

    tvm._ffi.base.TVMError: Traceback (most recent call last):
      [bt] (7) /home/alpha930/Desktop/TVM/tvm/build/libtvm.so(TVMFuncCall+0x65) [0x7f81cbe4b4b5]
      [bt] (6) /home/alpha930/Desktop/TVM/tvm/build/libtvm.so(std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::relay::backend::RelayBuildModule::GetFunction(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, tvm::runtime::ObjectPtr<tvm::runtime::Object> const&)::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#3}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)+0x17) [0x7f81cbcf66d7]
      [bt] (5) /home/alpha930/Desktop/TVM/tvm/build/libtvm.so(tvm::relay::backend::RelayBuildModule::GetFunction(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, tvm::runtime::ObjectPtr<tvm::runtime::Object> const&)::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#3}::operator()(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const+0x1b2) [0x7f81cbcf65e2]
      [bt] (4) /home/alpha930/Desktop/TVM/tvm/build/libtvm.so(tvm::relay::backend::RelayBuildModule::BuildRelay(tvm::IRModule, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, tvm::runtime::NDArray, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, tvm::runtime::NDArray> > > const&)+0xd2c) [0x7f81cbcf5dec]
      [bt] (3) /home/alpha930/Desktop/TVM/tvm/build/libtvm.so(tvm::build(tvm::Map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, tvm::Array<tvm::tir::LoweredFunc, void>, void, void> const&, tvm::Target const&, tvm::BuildConfig const&)+0x481) [0x7f81cb948cf1]
      [bt] (2) /home/alpha930/Desktop/TVM/tvm/build/libtvm.so(tvm::build(tvm::Map<tvm::Target, tvm::Array<tvm::tir::LoweredFunc, void>, void, void> const&, tvm::Target const&, tvm::BuildConfig const&)+0x228) [0x7f81cb947b18]
      [bt] (1) /home/alpha930/Desktop/TVM/tvm/build/libtvm.so(tvm::split_dev_host_funcs(tvm::Array<tvm::tir::LoweredFunc, void> const&, tvm::Target const&, tvm::Target const&, tvm::BuildConfig const&)+0xbeb) [0x7f81cb945f0b]
      [bt] (0) /home/alpha930/Desktop/TVM/tvm/build/libtvm.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x43) [0x7f81cb5c69b3]
      File "/home/alpha930/Desktop/TVM/tvm/src/driver/driver_api.cc", line 192
    TVMError: Check failed: tir::VerifyMemory(x, target->device_type): Direct host side access to device memory is detected in fused_nn_contrib_conv2d_winograd_weight_transform. Did you forget to bind?

As I recall, when a target is set during the build, thread binding for CUDA is performed automatically, but the above code does not get past this check.
Is there something I am missing in the code?
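For context, my understanding of what the two ops compute: contrib_conv2d_winograd_weight_transform applies the kernel transform U = G g G^T once per kernel, and the without_weight_transform op does the input transform, elementwise multiply, and output transform. A minimal NumPy sketch for a single F(2x2, 3x3) tile (my own illustration, not TVM code; tile_size=4 in my Relay code means F(4x4, 3x3) with 6x6 transforms, but the structure is the same):

```python
import numpy as np

# Standard Winograd F(2x2, 3x3) transform matrices.
BT = np.array([[1, 0, -1, 0],
               [0, 1,  1, 0],
               [0, -1, 1, 0],
               [0, 1,  0, -1]], dtype=float)
G = np.array([[1.0, 0.0, 0.0],
              [0.5, 0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0, 0.0, 1.0]])
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=float)

rng = np.random.default_rng(0)
d = rng.standard_normal((4, 4))  # one 4x4 input tile
g = rng.standard_normal((3, 3))  # one 3x3 kernel

U = G @ g @ G.T       # kernel transform (what the weight-transform op precomputes)
V = BT @ d @ BT.T     # input transform
Y = AT @ (U * V) @ AT.T  # 2x2 output tile

# Reference: direct "valid" correlation over the same tile.
ref = np.array([[np.sum(d[i:i + 3, j:j + 3] * g) for j in range(2)]
                for i in range(2)])
assert np.allclose(Y, ref)
```

My intent in splitting the ops as in the Relay code above is to let the U = G g G^T step be lifted out of the convolution itself.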





---
[Visit Topic](https://discuss.tvm.ai/t/relay-winograd-conv2d-compile-error/6168/1) to respond.
