Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2020/12/18 07:43:43 UTC

[GitHub] [tvm] dlexplorer edited a comment on issue #7104: Segmentation fault with ACL integration on onnx inception v1

dlexplorer edited a comment on issue #7104:
URL: https://github.com/apache/tvm/issues/7104#issuecomment-747878618


   I double-checked it and can still confirm that loading Inception v1 compiled with ACL crashes on the device, while the version compiled with the pure LLVM approach works fine.
   
   This is how I compiled the network:
   ```
   import argparse
   import sys
   import onnx
   import tvm
   from tvm import relay
   from tvm.contrib import ndk
   
   parser = argparse.ArgumentParser(description=
       "Converts and compiles ONNX model")
   required = parser.add_argument_group('required arguments')
   required.add_argument('-m', '--input_model', required=True, type=str, help="path to ONNX model")
   args = parser.parse_args()
   
   onnx_model = onnx.load(args.input_model)
   mod, params = relay.frontend.from_onnx(onnx_model)
   
   target = "llvm -model=snapdragon835 -mtriple=arm-linux-android -mattr=+neon"
   target_host = "llvm -mtriple=aarch64-linux-android-g++"
   
   # The next two lines switch between the ACL-partitioned build and the pure LLVM build
   from tvm.relay.op.contrib.arm_compute_lib import partition_for_arm_compute_lib
   mod = partition_for_arm_compute_lib(mod)
   
   with tvm.transform.PassContext(opt_level=3):
       lib = relay.build(mod, target=target, target_host=target_host, params=params)
   
   print(args.input_model + ".so")
   lib.export_library(args.input_model + ".so", ndk.create_shared)
   
   ```
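   For context, the segfault is reported at the device-side step where the exported library is loaded and run. A minimal sketch of that step is below; the module path, the input name "data_0", and the (1, 3, 224, 224) shape are assumptions for illustration (typical for ONNX Inception v1), and older TVM builds expose the executor as tvm.contrib.graph_runtime rather than graph_executor.
   ```
   import numpy as np
   import tvm
   # Newer TVM versions provide this as tvm.contrib.graph_executor instead.
   from tvm.contrib import graph_runtime
   
   # Load the shared library produced by the compile script above
   # ("inception_v1.onnx.so" is a hypothetical path).
   lib = tvm.runtime.load_module("inception_v1.onnx.so")
   dev = tvm.cpu(0)
   module = graph_runtime.GraphModule(lib["default"](dev))
   
   # "data_0" / (1, 3, 224, 224) are the usual ONNX Inception v1 input name
   # and shape; adjust them to the actual model.
   data = np.random.uniform(size=(1, 3, 224, 224)).astype("float32")
   module.set_input("data_0", data)
   module.run()
   print(module.get_output(0).asnumpy().shape)
   ```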

