Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2020/01/13 20:33:27 UTC

[GitHub] [incubator-tvm] tmoreau89 opened a new pull request #4698: [Runtime] EdgeTPU runtime for Coral Boards

tmoreau89 opened a new pull request #4698: [Runtime] EdgeTPU runtime for Coral Boards
URL: https://github.com/apache/incubator-tvm/pull/4698
 
 
   This PR extends the TFLite runtime to support edgeTPU-equipped Coral boards, so that the inference time of models running on the edgeTPU can be measured over TVM RPC.
   
   ## Instructions to run the EdgeTPU runtime experiments
   
   ### Coral Board setup
   You'll need to follow these instructions: https://coral.ai/docs/dev-board/get-started/
   ```
   # Clone TensorFlow and prepare the library dir
   # Note the older version of TF that we'll need to use
   git clone https://github.com/tensorflow/tensorflow --recursive --branch=1.8.0
   cd tensorflow
   mkdir -p tensorflow/lite/tools/make/gen/generic-aarch64_armv8-a/lib
   
   # TF dependency (flatbuffers)
   cd ~ && git clone https://github.com/google/flatbuffers.git
   cd flatbuffers && cmake -G "Unix Makefiles" && make && sudo make install
   
   # EdgeTPU lib
   cd ~ && git clone https://github.com/google-coral/edgetpu.git
   ```
   
   ### Cross-compile the tflite static library on an x86 machine
   ```
   # Prerequisites 
   sudo apt-get update
   sudo apt-get install crossbuild-essential-arm64
   
   # Cross-compile the tflite library (note: use the same older TF version as on the board)
   git clone https://github.com/tensorflow/tensorflow.git --recursive --branch=1.8.0
   cd tensorflow
   ./tensorflow/lite/tools/make/download_dependencies.sh
   ./tensorflow/lite/tools/make/build_aarch64_lib.sh
   # Copy the tensorflow lib over to your coral board
   scp tensorflow/lite/tools/make/gen/generic-aarch64_armv8-a/lib/libtensorflow-lite.a  mendel@coral:/home/mendel/tensorflow/tensorflow/lite/tools/make/gen/generic-aarch64_armv8-a/lib/
   ```
   
   ### Build TVM runtime on Coral Board
   ```
   cd ~ && git clone --recursive --branch=master https://github.com/apache/incubator-tvm.git tvm
   cd tvm && mkdir build && cp cmake/config.cmake build
   echo 'set(USE_GRAPH_RUNTIME_DEBUG ON)' >> build/config.cmake
   echo 'set(USE_TFLITE ON)' >> build/config.cmake
   echo 'set(USE_TENSORFLOW_PATH /home/mendel/tensorflow)' >> build/config.cmake
   echo 'set(USE_EDGETPU /home/mendel/edgetpu)' >> build/config.cmake
   cd build && cmake ..
   make runtime -j4
   ```
   
   ### Execute the RPC server on Coral
   First, follow this guide to set up a tracker for your remote devices: https://docs.tvm.ai/tutorials/autotvm/tune_relay_arm.html#start-rpc-tracker.
   On the Coral, once the TVM runtime has been built, execute:
   ```
   PYTHONPATH=/home/mendel/tvm/python:$PYTHONPATH python3 -m tvm.exec.rpc_server --tracker $TVM_TRACKER_HOST:$TVM_TRACKER_PORT --key coral
   ```
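
   To check that the board has registered with the tracker under the `coral` key, you can query the tracker from the host; a minimal sketch, assuming the tracker host is "tracker" and the port is 9191 as in the evaluation script below:
   ```python
   from tvm import rpc

   # Connect to the RPC tracker and print a summary of registered servers;
   # an entry with the "coral" key should appear once the board is connected.
   tracker = rpc.connect_tracker("tracker", 9191)
   print(tracker.text_summary())
   ```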
   
   ### Evaluate MobileNet on Coral board
   
   Execute the following python script:
   ```python
   import numpy as np
   
   import tvm
   from tvm import autotvm, relay
   from tvm.contrib import tflite_runtime
   
   # Set to False to run on the Coral's CPU first; flip to True to target the edgeTPU
   target_edgetpu = False
   
   # Note: replace "tracker" and 9191 with your tracker host and port
   remote = autotvm.measure.request_remote("coral", "tracker", 9191, timeout=60)
   ctx = remote.cpu(0)
   
   tflite_fp = "mobilenet_v2_1.0_224_quant_edgetpu.tflite" if target_edgetpu else "mobilenet_v2_1.0_224_quant.tflite"
   # Random uint8 input in NHWC layout (casting np.random.rand output to uint8 would give all zeros)
   input_data = np.random.randint(0, 256, size=(1, 224, 224, 3), dtype="uint8")
   with open(tflite_fp, 'rb') as f:
       runtime = tflite_runtime.create(f.read(), ctx, target_edgetpu=target_edgetpu)
       runtime.set_input(0, tvm.nd.array(input_data, ctx))
       ftimer = runtime.module.time_evaluator("invoke", ctx,
               number=10,
               repeat=3)
       times = np.array(ftimer().results) * 1000
       print("It took {0:.2f}ms to run mobilenet".format(np.mean(times)))
   ```
   
   Upon running it, you'll get:
   `It took 143.74ms to run mobilenet`
   
   Now, set `target_edgetpu = True` and you'll get:
   `It took 3.22ms to run mobilenet`
   
   ## Notable interface changes
   
   * The TFLite runtime API no longer exposes the `allocate_tensors()` method; tensor allocation is now done as part of the initialization process.
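
   For illustration, here is a minimal sketch of the resulting usage flow when running locally (assuming TVM was built with `USE_TFLITE`); the point is that no explicit allocation call is needed after `create()`:
   ```python
   import numpy as np
   import tvm
   from tvm.contrib import tflite_runtime

   ctx = tvm.cpu(0)
   with open("mobilenet_v2_1.0_224_quant.tflite", "rb") as f:
       model_bytes = f.read()

   # create() builds the TFLite interpreter and allocates tensors in one step,
   # so the returned module is immediately ready for set_input/invoke.
   runtime = tflite_runtime.create(model_bytes, ctx)
   data = np.random.randint(0, 256, size=(1, 224, 224, 3), dtype="uint8")
   runtime.set_input(0, tvm.nd.array(data, ctx))
   runtime.invoke()
   output = runtime.get_output(0)
   ```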

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

[GitHub] [incubator-tvm] tmoreau89 commented on a change in pull request #4698: [Runtime] EdgeTPU runtime for Coral Boards

Posted by GitBox <gi...@apache.org>.
tmoreau89 commented on a change in pull request #4698: [Runtime] EdgeTPU runtime for Coral Boards
URL: https://github.com/apache/incubator-tvm/pull/4698#discussion_r367197404
 
 

 ##########
 File path: python/tvm/contrib/tflite_runtime.py
 ##########
 @@ -18,7 +18,7 @@
 from .._ffi.function import get_global_func
 from ..rpc import base as rpc_base
 
-def create(tflite_model_bytes, ctx):
+def create(tflite_model_bytes, ctx, target_edgetpu=False):
 
 Review comment:
   thanks for the suggestion, I've made the changes

[GitHub] [incubator-tvm] ZihengJiang commented on a change in pull request #4698: [Runtime] EdgeTPU runtime for Coral Boards

Posted by GitBox <gi...@apache.org>.
ZihengJiang commented on a change in pull request #4698: [Runtime] EdgeTPU runtime for Coral Boards
URL: https://github.com/apache/incubator-tvm/pull/4698#discussion_r366016811
 
 

 ##########
 File path: python/tvm/contrib/tflite_runtime.py
 ##########
 @@ -27,16 +27,25 @@ def create(tflite_model_bytes, ctx):
     ctx : TVMContext
         The context to deploy the module. It can be local or remote when there
         is only one TVMContext.
+    target_edgetpu: bool
+        Targets execution on the edge TPU via tflite when running on the Coral board.
+        Set to False by default.
     Returns
     -------
     tflite_runtime : TFLiteModule
         Runtime tflite module that can be used to execute the tflite model.
     """
     device_type = ctx.device_type
     if device_type >= rpc_base.RPC_SESS_MASK:
-        fcreate = ctx._rpc_sess.get_function("tvm.tflite_runtime.create")
+        if target_edgetpu:
+            fcreate = ctx._rpc_sess.get_function("tvm.edgetpu_runtime.create")
+        else:
+            fcreate = ctx._rpc_sess.get_function("tvm.tflite_runtime.create")
         return TFLiteModule(fcreate(bytearray(tflite_model_bytes), ctx))
-    fcreate = get_global_func("tvm.tflite_runtime.create")
+    if target_edgetpu:
+        fcreate = get_global_func("tvm.edgetpu_runtime.create")
 
 Review comment:
   if these two create functions share the same arguments, we can unify them into one create function that returns the appropriate runtime

[GitHub] [incubator-tvm] ZihengJiang commented on issue #4698: [Runtime] EdgeTPU runtime for Coral Boards

Posted by GitBox <gi...@apache.org>.
ZihengJiang commented on issue #4698: [Runtime] EdgeTPU runtime for Coral Boards
URL: https://github.com/apache/incubator-tvm/pull/4698#issuecomment-573861790
 
 
   Regarding allocate: did the tflite runtime remove the `AllocateTensors` API, or is it just that the EdgeTPU does not need it?

[GitHub] [incubator-tvm] ZihengJiang commented on a change in pull request #4698: [Runtime] EdgeTPU runtime for Coral Boards

Posted by GitBox <gi...@apache.org>.
ZihengJiang commented on a change in pull request #4698: [Runtime] EdgeTPU runtime for Coral Boards
URL: https://github.com/apache/incubator-tvm/pull/4698#discussion_r366016186
 
 

 ##########
 File path: python/tvm/contrib/tflite_runtime.py
 ##########
 @@ -18,7 +18,7 @@
 from .._ffi.function import get_global_func
 from ..rpc import base as rpc_base
 
-def create(tflite_model_bytes, ctx):
+def create(tflite_model_bytes, ctx, target_edgetpu=False):
 
 Review comment:
   Instead of a boolean argument, try to use a target string for future expansion: `target='edge_tpu'/'cpu'`
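
   For illustration, a minimal sketch of what this proposal could look like on the Python side (a hypothetical rendering of the suggestion, not the PR's final API; `TFLiteModule` and the packed-function names come from the surrounding diff):
   ```python
   from tvm._ffi.function import get_global_func
   from tvm.rpc import base as rpc_base
   from tvm.contrib.tflite_runtime import TFLiteModule

   def create(tflite_model_bytes, ctx, target="cpu"):
       """Select the runtime via a target string instead of a boolean flag."""
       names = {
           "cpu": "tvm.tflite_runtime.create",
           "edge_tpu": "tvm.edgetpu_runtime.create",
       }
       if target not in names:
           raise ValueError("unknown target: " + target)
       if ctx.device_type >= rpc_base.RPC_SESS_MASK:
           fcreate = ctx._rpc_sess.get_function(names[target])
       else:
           fcreate = get_global_func(names[target])
       return TFLiteModule(fcreate(bytearray(tflite_model_bytes), ctx))
   ```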

[GitHub] [incubator-tvm] ZihengJiang commented on issue #4698: [Runtime] EdgeTPU runtime for Coral Boards

Posted by GitBox <gi...@apache.org>.
ZihengJiang commented on issue #4698: [Runtime] EdgeTPU runtime for Coral Boards
URL: https://github.com/apache/incubator-tvm/pull/4698#issuecomment-575313115
 
 
   Looks good! Thanks! @tmoreau89 

[GitHub] [incubator-tvm] ZihengJiang commented on a change in pull request #4698: [Runtime] EdgeTPU runtime for Coral Boards

Posted by GitBox <gi...@apache.org>.
ZihengJiang commented on a change in pull request #4698: [Runtime] EdgeTPU runtime for Coral Boards
URL: https://github.com/apache/incubator-tvm/pull/4698#discussion_r366016395
 
 

 ##########
 File path: python/tvm/contrib/tflite_runtime.py
 ##########
 @@ -50,20 +59,19 @@ class TFLiteModule(object):
     Parameters
     ----------
     module : Module
-        The interal tvm module that holds the actual tflite functions.
+        The internal tvm module that holds the actual tflite functions.
 
     Attributes
     ----------
     module : Module
-        The interal tvm module that holds the actual tflite functions.
+        The internal tvm module that holds the actual tflite functions.
     """
 
     def __init__(self, module):
         self.module = module
         self._set_input = module["set_input"]
         self._invoke = module["invoke"]
         self._get_output = module["get_output"]
-        self._allocate_tensors = module["allocate_tensors"]
 
 Review comment:
   why remove?

[GitHub] [incubator-tvm] ZihengJiang commented on a change in pull request #4698: [Runtime] EdgeTPU runtime for Coral Boards

Posted by GitBox <gi...@apache.org>.
ZihengJiang commented on a change in pull request #4698: [Runtime] EdgeTPU runtime for Coral Boards
URL: https://github.com/apache/incubator-tvm/pull/4698#discussion_r366017494
 
 

 ##########
 File path: src/runtime/contrib/tflite/tflite_runtime.cc
 ##########
 @@ -92,20 +91,23 @@ DataType TfLiteDType2TVMDType(TfLiteType dtype) {
   }
 }
 
-
 void TFLiteRuntime::Init(const std::string& tflite_model_bytes,
                          TVMContext ctx) {
   const char* buffer = tflite_model_bytes.c_str();
   size_t buffer_size = tflite_model_bytes.size();
   std::unique_ptr<tflite::FlatBufferModel> model =
     tflite::FlatBufferModel::BuildFromBuffer(buffer, buffer_size);
   tflite::ops::builtin::BuiltinOpResolver resolver;
-  tflite::InterpreterBuilder(*model, resolver)(&interpreter_);
-  ctx_ = ctx;
-}
+  // Build interpreter
+  if (tflite::InterpreterBuilder(*model, resolver)(&interpreter_) != kTfLiteOk) {
 
 Review comment:
   we can define a macro for TFLite error checking: `CHECK_STATUS(cond, msg)`

[GitHub] [incubator-tvm] tqchen merged pull request #4698: [Runtime] EdgeTPU runtime for Coral Boards

Posted by GitBox <gi...@apache.org>.
tqchen merged pull request #4698: [Runtime] EdgeTPU runtime for Coral Boards
URL: https://github.com/apache/incubator-tvm/pull/4698
 
 
   

[GitHub] [incubator-tvm] tmoreau89 commented on issue #4698: [Runtime] EdgeTPU runtime for Coral Boards

Posted by GitBox <gi...@apache.org>.
tmoreau89 commented on issue #4698: [Runtime] EdgeTPU runtime for Coral Boards
URL: https://github.com/apache/incubator-tvm/pull/4698#issuecomment-574923602
 
 
   @ZihengJiang thanks for the feedback! The TFLite Interpreter still has `AllocateTensors`; however, I wasn't sure if we'd ever need to call it separately from interpreter initialization. If you believe that we need to decouple them, I can revert the interface change.

[GitHub] [incubator-tvm] tmoreau89 commented on a change in pull request #4698: [Runtime] EdgeTPU runtime for Coral Boards

Posted by GitBox <gi...@apache.org>.
tmoreau89 commented on a change in pull request #4698: [Runtime] EdgeTPU runtime for Coral Boards
URL: https://github.com/apache/incubator-tvm/pull/4698#discussion_r367197362
 
 

 ##########
 File path: src/runtime/contrib/tflite/tflite_runtime.cc
 ##########
 @@ -92,20 +91,23 @@ DataType TfLiteDType2TVMDType(TfLiteType dtype) {
   }
 }
 
-
 void TFLiteRuntime::Init(const std::string& tflite_model_bytes,
                          TVMContext ctx) {
   const char* buffer = tflite_model_bytes.c_str();
   size_t buffer_size = tflite_model_bytes.size();
   std::unique_ptr<tflite::FlatBufferModel> model =
     tflite::FlatBufferModel::BuildFromBuffer(buffer, buffer_size);
   tflite::ops::builtin::BuiltinOpResolver resolver;
-  tflite::InterpreterBuilder(*model, resolver)(&interpreter_);
-  ctx_ = ctx;
-}
+  // Build interpreter
+  if (tflite::InterpreterBuilder(*model, resolver)(&interpreter_) != kTfLiteOk) {
 
 Review comment:
   thanks for the suggestion, this should be fixed by now

[GitHub] [incubator-tvm] tmoreau89 commented on a change in pull request #4698: [Runtime] EdgeTPU runtime for Coral Boards

Posted by GitBox <gi...@apache.org>.
tmoreau89 commented on a change in pull request #4698: [Runtime] EdgeTPU runtime for Coral Boards
URL: https://github.com/apache/incubator-tvm/pull/4698#discussion_r367186135
 
 

 ##########
 File path: python/tvm/contrib/tflite_runtime.py
 ##########
 @@ -27,16 +27,25 @@ def create(tflite_model_bytes, ctx):
     ctx : TVMContext
         The context to deploy the module. It can be local or remote when there
         is only one TVMContext.
+    target_edgetpu: bool
+        Targets execution on the edge TPU via tflite when running on the Coral board.
+        Set to False by default.
     Returns
     -------
     tflite_runtime : TFLiteModule
         Runtime tflite module that can be used to execute the tflite model.
     """
     device_type = ctx.device_type
     if device_type >= rpc_base.RPC_SESS_MASK:
-        fcreate = ctx._rpc_sess.get_function("tvm.tflite_runtime.create")
+        if target_edgetpu:
+            fcreate = ctx._rpc_sess.get_function("tvm.edgetpu_runtime.create")
+        else:
+            fcreate = ctx._rpc_sess.get_function("tvm.tflite_runtime.create")
         return TFLiteModule(fcreate(bytearray(tflite_model_bytes), ctx))
-    fcreate = get_global_func("tvm.tflite_runtime.create")
+    if target_edgetpu:
+        fcreate = get_global_func("tvm.edgetpu_runtime.create")
 
 Review comment:
   Unification here is less desirable because we won't always want to build the edgeTPU runtime when building the TFLite runtime. The limitation is that building the edgeTPU runtime requires building TVM against the edgeTPU library, which lives in a separate repo; that's an extra software dependency that users of vanilla TFLite won't always want.

[GitHub] [incubator-tvm] tmoreau89 commented on issue #4698: [Runtime] EdgeTPU runtime for Coral Boards

Posted by GitBox <gi...@apache.org>.
tmoreau89 commented on issue #4698: [Runtime] EdgeTPU runtime for Coral Boards
URL: https://github.com/apache/incubator-tvm/pull/4698#issuecomment-574944892
 
 
   @ZihengJiang I should have addressed all of your comments by now; let me know if you're happy with the changes
   

[GitHub] [incubator-tvm] Msabih edited a comment on pull request #4698: [Runtime] EdgeTPU runtime for Coral Boards

Posted by GitBox <gi...@apache.org>.
Msabih edited a comment on pull request #4698:
URL: https://github.com/apache/incubator-tvm/pull/4698#issuecomment-669184211


   @tmoreau89 
   I have tried the setup with the same versions of tvm/tensorflow on the host and the board, and the "cpu" part of the inference works fine. But when I set the target to edge_tpu, I get this error on the RPC server:
   
   ```
   ERROR: Internal: Unsupported data type: 0
   ERROR: Node number 0 (edgetpu-custom-op) failed to prepare
   ```
   
   And on the host machine, it says
   
   ```
    File "tvm_inference.py", line 21, in <module>
       runtime = tflite_runtime.create(f.read(), ctx, runtime_target=target)
   
     File "/home/sabih/Documents/phd_work/MAP_WORK/tvm_env/tvm/python/tvm/contrib/tflite_runtime.py", line 49, in create
       return TFLiteModule(fcreate(bytearray(tflite_model_bytes), ctx))
   
     File "/home/sabih/Documents/phd_work/MAP_WORK/tvm_env/tvm/python/tvm/_ffi/_ctypes/function.py", line 207, in __call__
       raise get_last_ffi_error()
   
   tvm._ffi.base.TVMError: Traceback (most recent call last):
     [bt] (3) /tvm_env/tvm/build/libtvm.so(TVMFuncCall+0x69) [0x7f2fb63f8489]
     [bt] (2) /tvm_env/tvm/build/libtvm.so(std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::runtime::RPCModuleNode::WrapRemote(void*)::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)+0x46) [0x7f2fb644ad36]
     [bt] (1) /tvm_env/tvm/build/libtvm.so(tvm::runtime::RPCSession::CallFunc(void*, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*, void* (*)(int, tvm::runtime::TVMArgValue const&), tvm::runtime::PackedFunc const*)+0x2c8) [0x7f2fb6454168]
     [bt] (0) /tvm_env/tvm/build/libtvm.so(+0xc21d6b) [0x7f2fb6450d6b]
     File "/tvm_env/tvm/src/runtime/rpc/rpc_session.cc", line 993
   TVMError: Check failed: code == RPCCode: :kReturn: code=4
   ```
   
   
   The inference directly on the edge TPU works fine.

