Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2021/11/02 08:45:16 UTC

[GitHub] [tvm] NataliaTabirca opened a new issue #9423: [Bug] microTVM run problem on mimxrt1060_evk (NXP) board

NataliaTabirca opened a new issue #9423:
URL: https://github.com/apache/tvm/issues/9423


   I am using TVM from the main branch to test simple models on my mimxrt1060_evk board. I used the script provided in the documentation for sine_model.tflite and it runs perfectly on my board, producing the expected output.
   
   When I try to run mobilenet_v1_0.5_128.tflite (or any other model) I get this error:
   
   ```
   Traceback (most recent call last):
     File "test_mobilenet.py", line 118, in <module>
       module.get_graph_json(), session.get_system_lib(), session.device
     File "/home/vagrant/uTVM/python/tvm/micro/session.py", line 214, in create_local_graph_executor
       fcreate(graph_json_str, mod, lookup_remote_linked_param, *device_type_id)
     File "/home/vagrant/uTVM/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
       raise get_last_ffi_error()
   tvm.error.RPCError: Traceback (most recent call last):
     13: TVMFuncCall
     12: _ZNSt17_Function_handlerI
     11: tvm::runtime::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#1}::operator()(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const [clone .isra.765]
     10: tvm::runtime::GraphExecutorCreate(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, tvm::runtime::Module const&, std::vector<DLDevice, std::allocator<DLDevice> > const&, tvm::runtime::PackedFunc)
     9: tvm::runtime::GraphExecutor::Init(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, tvm::runtime::Module, std::vector<DLDevice, std::allocator<DLDevice> > const&, tvm::runtime::PackedFunc)
     8: tvm::runtime::GraphExecutor::SetupStorage()
     7: tvm::runtime::NDArray::Empty(tvm::runtime::ShapeTuple, DLDataType, DLDevice, tvm::runtime::Optional<tvm::runtime::String>)
     6: tvm::runtime::RPCDeviceAPI::AllocDataSpace(DLDevice, int, long const*, DLDataType, tvm::runtime::Optional<tvm::runtime::String>)
     5: tvm::runtime::RPCClientSession::AllocDataSpace(DLDevice, int, long const*, DLDataType, tvm::runtime::Optional<tvm::runtime::String>)
     4: std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::runtime::RPCEndpoint::Init()::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#2}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)
     3: tvm::runtime::RPCEndpoint::HandleUntilReturnEvent(bool, std::function<void (tvm::runtime::TVMArgs)>)
     2: tvm::runtime::RPCEndpoint::EventHandler::HandleNextEvent(bool, bool, std::function<void (tvm::runtime::TVMArgs)>)
     1: tvm::runtime::RPCEndpoint::EventHandler::HandleProcessPacket(std::function<void (tvm::runtime::TVMArgs)>)
     0: tvm::runtime::RPCEndpoint::EventHandler::HandleReturn(tvm::runtime::RPCCode, std::function<void (tvm::runtime::TVMArgs)>)
     File "/home/vagrant/uTVM/src/runtime/rpc/rpc_endpoint.cc", line 376
   RPCError: Error caught from RPC call:
   ```
   
   This is the entire output (nothing is printed after "RPCError: Error caught from RPC call:"). The error is raised from "tvm.micro.create_local_graph_executor".
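
   The relevant part of my script looks roughly like this (simplified; the session setup follows the micro_tflite tutorial, and `generated_project` is the project object from that tutorial's generation step):

   ```
   # Simplified sketch of the failing call site; session setup is assumed
   # to follow the micro_tflite tutorial.
   with tvm.micro.Session(generated_project.transport()) as session:
       graph_mod = tvm.micro.create_local_graph_executor(
           module.get_graph_json(), session.get_system_lib(), session.device
       )
   ```
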
   [micro_tvm_issue.zip](https://github.com/apache/tvm/files/7459101/micro_tvm_issue.zip)


[GitHub] [tvm] gromero commented on issue #9423: [Bug] microTVM run problem on mimxrt1060_evk (NXP) board

gromero commented on issue #9423:
URL: https://github.com/apache/tvm/issues/9423#issuecomment-975998894


   @NataliaTabirca Hi. Thanks for reporting the issue and for creating a simple reproducer.
   
   The issue boils down to a lack of RAM at runtime due to the amount of memory required by your model (mobilenet), as we suspected.
   
   More specifically, the trace you've pasted above is due to `TVMPlatformMemoryAllocate()` (a wrapper around Zephyr's `k_heap_alloc()`) failing to allocate memory on Zephyr's heap upon receiving a `tvm::runtime::RPCCode::kDevAllocDataWithScope` packet from the host instructing it to allocate 196608 bytes on the heap.
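
   (As an aside, and this is my inference rather than something printed in the log: 196608 bytes matches the size of mobilenet_v1_0.5_128's input tensor.)

   ```
   # Inference, not from the log: a 128x128x3 float32 input tensor is
   # exactly the 196608 bytes requested in the failing allocation.
   input_bytes = 128 * 128 * 3 * 4  # H * W * C * sizeof(float32)
   assert input_bytes == 196608
   ```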
   
   After increasing the heap size in `K_HEAP_DEFINE` in `apps/microtvm/zephyr/template_project/src/host_driven/main.c` I was able to get a bit further, failing only on the fourth call to `k_heap_alloc()`. But since I'm using a mx1050 board and you're using a mx1060 (which has more memory), could you please increase the heap size as below and try again on your board:
   
   ```
   diff --git a/apps/microtvm/zephyr/template_project/src/host_driven/main.c b/apps/microtvm/zephyr/template_project/src/host_driven/main.c
   index 44d656028..64cd10fed 100644
   --- a/apps/microtvm/zephyr/template_project/src/host_driven/main.c
   +++ b/apps/microtvm/zephyr/template_project/src/host_driven/main.c
   @@ -130,7 +130,7 @@ tvm_crt_error_t TVMPlatformGenerateRandom(uint8_t* buffer, size_t num_bytes) {
    }
    
    // Heap for use by TVMPlatformMemoryAllocate.
   -K_HEAP_DEFINE(tvm_heap, 216 * 1024);
   +K_HEAP_DEFINE(tvm_heap, (216+768) * 1024);
    
    // Called by TVM to allocate memory.
    tvm_crt_error_t TVMPlatformMemoryAllocate(size_t num_bytes, DLDevice dev, void** out_ptr) {
   ``` 
   
   You need to run `make` inside TVM's build dir again after that change to update the template source dirs in `<build_dir>/microtvm_template_projects` accordingly.
   
   Also, now that `[TVMC][Relay] Introduce executor and runtime parameters (#9352)` is merged, you need to adapt your reproducer and pass `runtime=RUNTIME` to `relay.build()`, for example:
   
   ```
   ######################################################################
   # Now, compile the model for the target:
   from tvm.relay.backend import Runtime
   
   RUNTIME = Runtime("crt", {"system-lib": True})
   TARGET = tvm.target.target.micro("imxrt10xx")
   with tvm.transform.PassContext(
       opt_level=3, config={"tir.disable_vectorize": True}, disabled_pass=["AlterOpLayout"]
   ):
       module = relay.build(mod, target=TARGET, runtime=RUNTIME, params=params)
   ```
   
   That said, if you're looking for performance numbers, I think you should try the `aot` executor instead of the `graph` executor; see the sketch below. I also think that `mobilenet` on the `aot` executor will have a much smaller memory footprint, due to optimizations in how tensor data is handled between operations in the workspace memory.
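
   A minimal sketch of what that switch could look like (untested on my side; it assumes the post-#9352 API, where an `Executor` object is passed to `relay.build()`, and the option names may differ on your TVM revision):

   ```
   # Sketch only: build with the AOT executor instead of the graph executor.
   from tvm.relay.backend import Executor, Runtime

   RUNTIME = Runtime("crt", {"system-lib": True})
   EXECUTOR = Executor("aot")  # assumption: "aot" is accepted on your revision
   with tvm.transform.PassContext(
       opt_level=3, config={"tir.disable_vectorize": True}, disabled_pass=["AlterOpLayout"]
   ):
       module = relay.build(
           mod, target=TARGET, executor=EXECUTOR, runtime=RUNTIME, params=params
       )
   ```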
   
   Also, it's necessary to fix that unhelpful error message (`RPCError: Error caught from RPC call:` followed by nothing) by setting `g_last_error` correctly when allocation fails, so the user at least knows that memory ran out at runtime. I'll send a patch for it soon.


[GitHub] [tvm] mehrdadh commented on issue #9423: [Bug] microTVM run problem on mimxrt1060_evk (NXP) board

mehrdadh commented on issue #9423:
URL: https://github.com/apache/tvm/issues/9423#issuecomment-1048028283


   @masahi we can close this.


[GitHub] [tvm] mehrdadh commented on issue #9423: [Bug] microTVM run problem on mimxrt1060_evk (NXP) board

mehrdadh commented on issue #9423:
URL: https://github.com/apache/tvm/issues/9423#issuecomment-958024421


   @NataliaTabirca thanks for reporting this. I will take a look and get back to you.


[GitHub] [tvm] masahi closed issue #9423: [Bug] microTVM run problem on mimxrt1060_evk (NXP) board

masahi closed issue #9423:
URL: https://github.com/apache/tvm/issues/9423


   


[GitHub] [tvm] JKANG94 edited a comment on issue #9423: [Bug] microTVM run problem on mimxrt1060_evk (NXP) board

JKANG94 edited a comment on issue #9423:
URL: https://github.com/apache/tvm/issues/9423#issuecomment-972612749


   @mehrdadh I also hit this problem. I used microTVM to run a TFLite model on an x86 host, following http://tvm.apache.org/docs/how_to/work_with_microtvm/micro_tflite.html#sphx-glr-download-how-to-work-with-microtvm-micro-tflite-py, but I used `create_local_debug_executor` to get the model's running time.
   
   ```
   with tvm.micro.Session(generated_project.transport()) as session:
       debug_module = tvm.micro.create_local_debug_executor(
           module.get_graph_json(), session.get_system_lib(), session.device
       )
       debug_module.set_input(**module.get_params())
       debug_module.set_input(input_tensor, tvm.nd.array(np.array([0.5], dtype="float32")))
       print("########## Build untuning ##########")
       debug_module.run()
       del debug_module
   ```
   
   
   
   It fails with the following trace:

   ```
   Traceback (most recent call last):
     File "micro_tflite.py", line 324, in <module>
       debug_module.run()
     File "/workspace/tvm/python/tvm/contrib/debugger/debug_executor.py", line 260, in run
       self._run_debug()
     File "/workspace/tvm/python/tvm/contrib/debugger/debug_executor.py", line 217, in _run_debug
       self.debug_datum._time_list = [[float(t)] for t in self.run_individual(10, 1, 1)]
     File "/workspace/tvm/python/tvm/contrib/debugger/debug_executor.py", line 269, in run_individual
       ret = self._run_individual(number, repeat, min_repeat_ms)
     File "/workspace/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
       raise get_last_ffi_error()
   tvm.error.RPCError: Traceback (most recent call last):
     13: TVMFuncCall
     12: std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::runtime::GraphExecutorDebug::GetFunction(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, tvm::runtime::ObjectPtr<tvm::runtime::Object> const&)::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#4}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)
     11: tvm::runtime::GraphExecutorDebug::RunIndividual[abi:cxx11](int, int, int)
     10: tvm::runtime::GraphExecutorDebug::RunOpRPC(int, int, int, int)
     9: tvm::runtime::TypedPackedFunc<tvm::runtime::PackedFunc (tvm::runtime::Optional<tvm::runtime::Module>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, int, int, int, int, int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)>::AssignTypedLambda<tvm::runtime::{lambda(tvm::runtime::Optional<tvm::runtime::Module>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, int, int, int, int, int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)#1}>(tvm::runtime::{lambda(tvm::runtime::Optional<tvm::runtime::Module>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, int, int, int, int, int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)#1}, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}::operator()(
 tvm::runtime::TVMArgs const, tvm::runtime::TVMRetValue) const
     8: tvm::runtime::RPCModuleNode::GetTimeEvaluator(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, DLDevice, int, int, int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)
     7: _ZNSt17_Function_handlerIFvN3tvm7runtime7TVMArgsEPNS1_11
     6: tvm::runtime::RPCWrappedFunc::operator()(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
     5: tvm::runtime::RPCClientSession::CallFunc(void*, TVMValue const*, int const*, int, std::function<void (tvm::runtime::TVMArgs)> const&)
     4: tvm::runtime::RPCEndpoint::CallFunc(void*, TVMValue const*, int const*, int, std::function<void (tvm::runtime::TVMArgs)>)
     3: tvm::runtime::RPCEndpoint::HandleUntilReturnEvent(bool, std::function<void (tvm::runtime::TVMArgs)>)
     2: tvm::runtime::RPCEndpoint::EventHandler::HandleNextEvent(bool, bool, std::function<void (tvm::runtime::TVMArgs)>)
     1: tvm::runtime::RPCEndpoint::EventHandler::HandleProcessPacket(std::function<void (tvm::runtime::TVMArgs)>)
     0: tvm::runtime::RPCEndpoint::EventHandler::HandleReturn(tvm::runtime::RPCCode, std::function<void (tvm::runtime::TVMArgs)>)
     File "/workspace/tvm/src/runtime/rpc/rpc_endpoint.cc", line 376
   RPCError: Error caught from RPC call:
   ```
   
   
   
   Could you take a look? Thank you!
   
   
   I tried `create_local_graph_executor`; it can run the model and produce the correct output. But it's hard to measure the executor's running time that way when the model runs on a board such as the stm32f407.
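
   As a rough workaround (my own untested sketch, not an established API), host-side wall-clock timing around `run()` at least gives an upper bound; note it includes RPC transport overhead, not just on-device compute time:

   ```
   # Coarse timing sketch; `graph_module` is the object returned by
   # create_local_graph_executor (an assumed name).
   import time

   graph_module.set_input(input_tensor, tvm.nd.array(np.array([0.5], dtype="float32")))
   start = time.monotonic()
   graph_module.run()
   print(f"run() round-trip: {(time.monotonic() - start) * 1e3:.2f} ms")
   ```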

