Posted to discuss-archive@tvm.apache.org by ckh via TVM Discuss <no...@discuss.tvm.ai> on 2020/04/21 02:46:55 UTC
[TVM Discuss] [Questions] Memory error when using graph debugger on rk3399
Hello!
I am trying to use the graph debugger to measure the performance of VGG16 on the rk3399 board.
I ran the debugger with the code below.
```python
import numpy as np
from tvm import relay
from tvm.relay import testing
import tvm
from tvm import te
from tvm.contrib.debugger import debug_runtime as graph_runtime

batch_size = 1
num_class = 1000
image_shape = (3, 224, 224)
data_shape = (batch_size,) + image_shape
out_shape = (batch_size, num_class)

mod, params = relay.testing.vgg.get_workload(
    num_layers=16, batch_size=batch_size, image_shape=image_shape)

opt_level = 3
target = tvm.target.create('llvm -device=arm_cpu -target=aarch64-linux-gnu')
with relay.build_config(opt_level=opt_level):
    graph, lib, params = relay.build(mod, target, params=params)

ctx = tvm.cpu()
data = np.random.uniform(-1, 1, size=data_shape).astype("float32")

# create module
module = graph_runtime.create(graph, lib, ctx)
# set input and parameters
module.set_input("data", data)
module.set_input(**params)
# run
module.run()
```
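For context on the memory pressure involved: VGG16's weights alone occupy roughly half a gigabyte in float32, and (as I understand it) the debug runtime additionally keeps a buffer for every node's intermediate output, so peak usage on the board is well above that of the plain graph runtime. A quick back-of-the-envelope check of the weight footprint (a standalone sanity-check sketch, not part of the repro script; the layer configuration below is the standard VGG16 "configuration D") is:

```python
# Rough float32 memory estimate for VGG16 weights (standalone sanity check).
# Each tuple is (in_channels, out_channels) of a 3x3 conv layer in VGG16.
conv_cfg = [(3, 64), (64, 64),
            (64, 128), (128, 128),
            (128, 256), (256, 256), (256, 256),
            (256, 512), (512, 512), (512, 512),
            (512, 512), (512, 512), (512, 512)]

# 3x3 kernels plus a bias per output channel.
params = sum(cin * cout * 3 * 3 + cout for cin, cout in conv_cfg)
params += 512 * 7 * 7 * 4096 + 4096   # fc6: flattened 7x7x512 -> 4096
params += 4096 * 4096 + 4096          # fc7
params += 4096 * 1000 + 1000          # fc8: 1000-class output

print(f"{params / 1e6:.1f}M params, {params * 4 / 2**20:.0f} MiB as float32")
# → 138.4M params, 528 MiB as float32
```

So even before the debugger's per-node output dumps, the workload is heavy for a board with a few gigabytes of RAM.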
However, when I run this code on the rk3399 (on either the ARM CPU or the Mali GPU), the following memory error occurs:

```
terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc
```

The issue does not occur on an Nvidia GPU or an x86 CPU.
Is this simply a hardware limitation of the rk3399, or is it a problem inside TVM?
---
[Visit Topic](https://discuss.tvm.ai/t/memory-error-when-using-graph-debugger-on-rk3399/6440/1) to respond.