Posted to discuss-archive@tvm.apache.org by 张晨晨 via TVM Discuss <no...@discuss.tvm.ai> on 2020/04/09 03:39:01 UTC
[TVM Discuss] [Questions] Testonnx: very simple code, can't figure out the error
Environment: TVM 0.6, ONNX 1.6.0, Python 3.5, LLVM 4.0.
First, a version that works, which should show that my environment is fine.
It uses the model from from_onnx.py, super_resolution_0.2.onnx:
import onnx
import numpy as np
import tvm
import tvm.relay as relay
onnx_model = onnx.load('super_resolution_0.2.onnx')
target = tvm.target.create('llvm')
input_name = '1' # change '1' to '0'
shape_dict = {input_name: (1, 1, 224, 224)}
sym, params = relay.frontend.from_onnx(onnx_model, shape_dict)
print("onnx model files")
zcc@zcc-X10SRA:~/下载/tvm$ python3 testonnx.py
/home/zcc/.local/lib/python3.5/site-packages/xgboost/__init__.py:28: FutureWarning: Python 3.5 support is deprecated; XGBoost will require Python 3.6+ in the near future. Consider upgrading to Python 3.6+.
FutureWarning)
/home/zcc/App/incubator-tvm/incubator-tvm/python/tvm/relay/frontend/onnx.py:1487: UserWarning: Mismatched attribute type in ' : kernel_shape'
==> Context: Bad node spec: input: "1" input: "2" output: "11" op_type: "Conv" attribute { name: "kernel_shape" ints: 5 ints: 5 } attribute { name: "strides" ints: 1 ints: 1 } attribute { name: "pads" ints: 2 ints: 2 ints: 2 ints: 2 } attribute { name: "dilations" ints: 1 ints: 1 } attribute { name: "group" i: 1 }
warnings.warn(str(e))
onnx model files
Below is the failing case, which uses a model of my own. I saw a post suggesting simplifying the ONNX model, which I plan to try.
The simple test code:
import onnx
import numpy as np
import tvm
import tvm.relay as relay
onnx_model = onnx.load('mnas025.onnx')
target = tvm.target.create('llvm')
input_name = 'data'
shape_dict = {input_name: (1, 3, 112, 112)}
sym, params = relay.frontend.from_onnx(onnx_model, shape_dict)
print("onnx model files")
#with relay.build_config(opt_level=2):
# graph, lib, params = relay.build_module.build(sym, target, params=params)
#dtype = 'float32'
#from tvm.contrib import graph_runtime
#print("Output model files")
#libpath = "./test.so"
#lib.export_library(libpath)
#graph_json_path = "./test.json"
#with open(graph_json_path, 'w') as fo:
# fo.write(graph)
#param_path = "./test.params"
#with open(param_path, 'wb') as fo:
# fo.write(relay.save_param_dict(params))
The error:
zcc@zcc-X10SRA:~/下载/tvm$ python3 testonnx.py
/home/zcc/.local/lib/python3.5/site-packages/xgboost/__init__.py:28: FutureWarning: Python 3.5 support is deprecated; XGBoost will require Python 3.6+ in the near future. Consider upgrading to Python 3.6+.
FutureWarning)
WARNING:root:Attribute momentum is ignored in relay.sym.batch_norm
WARNING:root:Attribute spatial is ignored in relay.sym.batch_norm
...
WARNING:root:Attribute spatial is ignored in relay.sym.batch_norm
WARNING:root:Attribute momentum is ignored in relay.sym.batch_norm
WARNING:root:Attribute spatial is ignored in relay.sym.batch_norm
Traceback (most recent call last):
File "testonnx.py", line 13, in <module>
sym, params = relay.frontend.from_onnx(onnx_model, shape_dict)
File "/home/zcc/App/incubator-tvm/incubator-tvm/python/tvm/relay/frontend/onnx.py", line 1497, in from_onnx
mod, params = g.from_onnx(graph, opset)
File "/home/zcc/App/incubator-tvm/incubator-tvm/python/tvm/relay/frontend/onnx.py", line 1344, in from_onnx
return _module.Module.from_expr(func), self._params
File "/home/zcc/App/incubator-tvm/incubator-tvm/python/tvm/relay/module.py", line 233, in from_expr
return _module.Module_FromExpr(expr, funcs, defs)
File "/home/zcc/App/incubator-tvm/incubator-tvm/python/tvm/_ffi/_ctypes/function.py", line 207, in __call__
raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
[bt] (7) /home/zcc/App/incubator-tvm/incubator-tvm/build/libtvm.so(TVMFuncCall+0x61) [0x7f6881245e71]
[bt] (6) /home/zcc/App/incubator-tvm/incubator-tvm/build/libtvm.so(+0xa888a1) [0x7f68811648a1]
[bt] (5) /home/zcc/App/incubator-tvm/incubator-tvm/build/libtvm.so(tvm::relay::ModuleNode::FromExpr(tvm::relay::Expr const&, tvm::Map<tvm::relay::GlobalVar, tvm::relay::Function, void, void> const&, tvm::Map<tvm::relay::GlobalTypeVar, tvm::relay::TypeData, void, void> const&)+0x1d5) [0x7f6881163815]
[bt] (4) /home/zcc/App/incubator-tvm/incubator-tvm/build/libtvm.so(tvm::relay::ModuleNode::Add(tvm::relay::GlobalVar const&, tvm::relay::Function const&, bool)+0x28c) [0x7f68811613bc]
[bt] (3) /home/zcc/App/incubator-tvm/incubator-tvm/build/libtvm.so(tvm::relay::InferType(tvm::relay::Function const&, tvm::relay::Module const&, tvm::relay::GlobalVar const&)+0x1d7) [0x7f6881084a97]
[bt] (2) /home/zcc/App/incubator-tvm/incubator-tvm/build/libtvm.so(tvm::relay::TypeInferencer::Infer(tvm::relay::Expr)+0x86) [0x7f6881084316]
[bt] (1) /home/zcc/App/incubator-tvm/incubator-tvm/build/libtvm.so(tvm::relay::ErrorReporter::RenderErrors(tvm::relay::Module const&, bool)+0x230c) [0x7f688114174c]
[bt] (0) /home/zcc/App/incubator-tvm/incubator-tvm/build/libtvm.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x32) [0x7f6880a6fae2]
File "/home/zcc/App/incubator-tvm/incubator-tvm/src/relay/ir/error.cc", line 132
TVMError:
Error(s) have occurred. The program has been annotated with them:
In `main`:
v0.0.4
fn (%data: Tensor[(1, 3, 112, 112), float32], %scalar_op1: Tensor[(1), float32], %scalar_op2: Tensor[(1), float32], %mnasnet0_stage1_conv0_conv0_weight: Tensor[(8, 3, 3, 3), float32], %mnasnet0_stage1_conv0_batchnorm0_gamma: Tensor[(8), float32], %mnasnet0_stage1_conv0_batchnorm0_beta: Tensor[(8), float32],
...
%mnasnet0_stage5_3_batchnorm0_running_mean: Tensor[(256), float32], %mnasnet0_stage5_3_batchnorm0_running_var: Tensor[(256), float32], %mnasnet0_stage5_3_prelu0_alpha: Tensor[(1), float32], %conv_6dw7_7_conv2d_weight: Tensor[(256, 1, 7, 7), float32], %conv_6dw7_7_batchnorm_gamma: Tensor[(256), float32], %conv_6dw7_7_batchnorm_beta: Tensor[(256), float32], %conv_6dw7_7_batchnorm_moving_mean: Tensor[(256), float32], %conv_6dw7_7_batchnorm_moving_var: Tensor[(256), float32], %pre_fc1_weight: Tensor[(256, 256), float32], %pre_fc1_bias: Tensor[(256), float32], %fc1_gamma: Tensor[(256), float32], %fc1_beta: Tensor[(256), float32], %fc1_moving_mean: Tensor[(256), float32], %fc1_moving_var: Tensor[(256), float32]) {
%0 = subtract(%data, %scalar_op1);
%1 = multiply(%0, %scalar_op2);
%2 = nn.conv2d(%1, %mnasnet0_stage1_conv0_conv0_weight, padding=[1, 1], kernel_size=[3, 3]);
... (the dump continues below)
%3 = nn.batch_norm(%2, %mnasnet0_stage1_conv0_batchnorm0_gamma, %mnasnet0_stage1_conv0_batchnorm0_beta, %mnasnet0_stage1_conv0_batchnorm0_running_mean, %mnasnet0_stage1_conv0_batchnorm0_running_var, epsilon=1e-05f);
%4 = %3.0;
%5 = nn.prelu(%4, %mnasnet0_stage1_conv0_prelu0_alpha) in particular dimension 0 conflicts 8 does not match 1; unable to unify: `Tensor[(8), float32]` and `Tensor[(1), float32]`; ;
%6 = nn.conv2d(%5, %mnasnet0_stage1_sepconv0_conv0_weight, padding=[1, 1], groups=8, kernel_size=[3, 3]);
%7 = nn.batch_norm(%6, %mnasnet0_stage1_sepconv0_batchnorm0_gamma, %mnasnet0_stage1_sepconv0_batchnorm0_beta, %mnasnet0_stage1_sepconv0_batchnorm0_running_mean, %mnasnet0_stage1_sepconv0_batchnorm0_running_var, epsilon=1e-05f);
%8 = %7.0;
%9 = nn.prelu(%8, %mnasnet0_stage1_sepconv0_prelu0_alpha) in particular dimension 0 conflicts 8 does not match 1; unable to unify: `Tensor[(8), float32]` and `Tensor[(1), float32]`; ;
%10 = nn.conv2d(%9, %mnasnet0_stage1_sepconv0_conv1_weight, kernel_size=[1, 1]);
%11 = nn.batch_norm(%10, %mnasnet0_stage1_sepconv0_batchnorm1_gamma, %mnasnet0_stage1_sepconv0_batchnorm1_beta, %mnasnet0_stage1_sepconv0_batchnorm1_running_mean, %mnasnet0_stage1_sepconv0_batchnorm1_running_var, epsilon=1e-05f);
%12 = %11.0;
%13 = nn.prelu(%12, %mnasnet0_stage1_sepconv0_prelu1_alpha) in particular dimension 0 conflicts 4 does not match 1; unable to unify: `Tensor[(4), float32]` and `Tensor[(1), float32]`; ;
%14 = nn.conv2d(%13, %mnasnet0_stage2_expandedconv0_expand_conv0_weight, kernel_size=[1, 1]);
%15 = nn.batch_norm(%14, %mnasnet0_stage2_expandedconv0_expand_batchnorm0_gamma, %mnasnet0_stage2_expandedconv0_expand_batchnorm0_beta, %mnasnet0_stage2_expandedconv0_expand_batchnorm0_running_mean, %mnasnet0_stage2_expandedconv0_expand_batchnorm0_running_var, epsilon=1e-05f);
%16 = %15.0;
%17 = nn.prelu(%16, %mnasnet0_stage2_expandedconv0_expand_prelu0_alpha) in particular dimension 0 conflicts 12 does not match 1; unable to unify: `Tensor[(12), float32]` and `Tensor[(1), float32]`; ;
%18 = nn.conv2d(%17, %mnasnet0_stage2_expandedconv0_dwise_conv0_weight, strides=[2, 2], padding=[1, 1], groups=12, kernel_size=[3, 3]);
%19 = nn.batch_norm(%18, %mnasnet0_stage2_expandedconv0_dwise_batchnorm0_gamma, %mnasnet0_stage2_expandedconv0_dwise_batchnorm0_beta, %mnasnet0_stage2_expandedconv0_dwise_batchnorm0_running_mean, %mnasnet0_stage2_expandedconv0_dwise_batchnorm0_running_var, epsilon=1e-05f);
%20 = %19.0;
%21 = nn.prelu(%20, %mnasnet0_stage2_expandedconv0_dwise_prelu0_alpha) in particular dimension 0 conflicts 12 does not match 1; unable to unify: `Tensor[(12), float32]` and `Tensor[(1), float32]`; ;
%22 = nn.conv2d(%21, %mnasnet0_stage2_expandedconv0_linear_conv0_weight, kernel_size=[1, 1]);
%23 = nn.batch_norm(%22, %mnasnet0_stage2_expandedconv0_linear_batchnorm0_gamma, %mnasnet0_stage2_expandedconv0_linear_batchnorm0_beta, %mnasnet0_stage2_expandedconv0_linear_batchnorm0_running_mean, %mnasnet0_stage2_expandedconv0_linear_batchnorm0_running_var, epsilon=1e-05f);
%24 = %23.0;
%25 = nn.conv2d(%24, %mnasnet0_stage2_expandedconv1_expand_conv0_weight, kernel_size=[1, 1]);
%26 = nn.batch_norm(%25, %mnasnet0_stage2_expandedconv1_expand_batchnorm0_gamma, %mnasnet0_stage2_expandedconv1_expand_batchnorm0_beta, %mnasnet0_stage2_expandedconv1_expand_batchnorm0_running_mean, %mnasnet0_stage2_expandedconv1_expand_batchnorm0_running_var, epsilon=1e-05f);
%27 = %26.0;
%28 = nn.prelu(%27, %mnasnet0_stage2_expandedconv1_expand_prelu0_alpha) in particular dimension 0 conflicts 18 does not match 1; unable to unify: `Tensor[(18), float32]` and `Tensor[(1), float32]`; ;
%29 = nn.conv2d(%28, %mnasnet0_stage2_expandedconv1_dwise_conv0_weight, padding=[1, 1], groups=18, kernel_size=[3, 3]);
%30 = nn.batch_norm(%29, %mnasnet0_stage2_expandedconv1_dwise_batchnorm0_gamma, %mnasnet0_stage2_expandedconv1_dwise_batchnorm0_beta, %mnasnet0_stage2_expandedconv1_dwise_batchnorm0_running_mean, %mnasnet0_stage2_expandedconv1_dwise_batchnorm0_running_var, epsilon=1e-05f);
%31 = %30.0;
%32 = nn.prelu(%31, %mnasnet0_stage2_expandedconv1_dwise_prelu0_alpha) in particular dimension 0 conflicts 18 does not match 1; unable to unify: `Tensor[(18), float32]` and `Tensor[(1), float32]`; ;
%33 = nn.conv2d(%32, %mnasnet0_stage2_expandedconv1_linear_conv0_weight, kernel_size=[1, 1]);
%34 = nn.batch_norm(%33, %mnasnet0_stage2_expandedconv1_linear_batchnorm0_gamma, %mnasnet0_stage2_expandedconv1_linear_batchnorm0_beta, %mnasnet0_stage2_expandedconv1_linear_batchnorm0_running_mean, %mnasnet0_stage2_expandedconv1_linear_batchnorm0_running_var, epsilon=1e-05f);
%35 = %34.0;
%36 = add(%35, %24);
%37 = nn.conv2d(%36, %mnasnet0_stage2_expandedconv2_expand_conv0_weight, kernel_size=[1, 1]);
%38 = nn.batch_norm(%37, %mnasnet0_stage2_expandedconv2_expand_batchnorm0_gamma, %mnasnet0_stage2_expandedconv2_expand_batchnorm0_beta, %mnasnet0_stage2_expandedconv2_expand_batchnorm0_running_mean, %mnasnet0_stage2_expandedconv2_expand_batchnorm0_running_var, epsilon=1e-05f);
%39 = %38.0;
%40 = nn.prelu(%39, %mnasnet0_stage2_expandedconv2_expand_prelu0_alpha) in particular dimension 0 conflicts 18 does not match 1; unable to unify: `Tensor[(18), float32]` and `Tensor[(1), float32]`; ;
%41 = nn.conv2d(%40, %mnasnet0_stage2_expandedconv2_dwise_conv0_weight, padding=[1, 1], groups=18, kernel_size=[3, 3]);
%42 = nn.batch_norm(%41, %mnasnet0_stage2_expandedconv2_dwise_batchnorm0_gamma, %mnasnet0_stage2_expandedconv2_dwise_batchnorm0_beta, %mnasnet0_stage2_expandedconv2_dwise_batchnorm0_running_mean, %mnasnet0_stage2_expandedconv2_dwise_batchnorm0_running_var, epsilon=1e-05f);
%43 = %42.0;
%44 = nn.prelu(%43, %mnasnet0_stage2_expandedconv2_dwise_prelu0_alpha) in particular dimension 0 conflicts 18 does not match 1; unable to unify: `Tensor[(18), float32]` and `Tensor[(1), float32]`; ;
%45 = nn.conv2d(%44, %mnasnet0_stage2_expandedconv2_linear_conv0_weight, kernel_size=[1, 1]);
%46 = nn.batch_norm(%45, %mnasnet0_stage2_expandedconv2_linear_batchnorm0_gamma, %mnasnet0_stage2_expandedconv2_linear_batchnorm0_beta, %mnasnet0_stage2_expandedconv2_linear_batchnorm0_running_mean, %mnasnet0_stage2_expandedconv2_linear_batchnorm0_running_var, epsilon=1e-05f);
%47 = %46.0;
%48 = add(%47, %36);
%49 = nn.conv2d(%48, %mnasnet0_stage3_expandedconv0_expand_conv0_weight, kernel_size=[1, 1]);
%50 = nn.batch_norm(%49, %mnasnet0_stage3_expandedconv0_expand_batchnorm0_gamma, %mnasnet0_stage3_expandedconv0_expand_batchnorm0_beta, %mnasnet0_stage3_expandedconv0_expand_batchnorm0_running_mean, %mnasnet0_stage3_expandedconv0_expand_batchnorm0_running_var, epsilon=1e-05f);
%51 = %50.0;
%52 = nn.prelu(%51, %mnasnet0_stage3_expandedconv0_expand_prelu0_alpha) in particular dimension 0 conflicts 18 does not match 1; unable to unify: `Tensor[(18), float32]` and `Tensor[(1), float32]`; ;
%53 = nn.conv2d(%52, %mnasnet0_stage3_expandedconv0_dwise_conv0_weight, strides=[2, 2], padding=[2, 2], groups=18, kernel_size=[5, 5]);
%54 = nn.batch_norm(%53, %mnasnet0_stage3_expandedconv0_dwise_batchnorm0_gamma, %mnasnet0_stage3_expandedconv0_dwise_batchnorm0_beta, %mnasnet0_stage3_expandedconv0_dwise_batchnorm0_running_mean, %mnasnet0_stage3_expandedconv0_dwise_batchnorm0_running_var, epsilon=1e-05f);
%55 = %54.0;
%56 = nn.prelu(%55, %mnasnet0_stage3_expandedconv0_dwise_prelu0_alpha) in particular dimension 0 conflicts 18 does not match 1; unable to unify: `Tensor[(18), float32]` and `Tensor[(1), float32]`; ;
%57 = nn.conv2d(%56, %mnasnet0_stage3_expandedconv0_linear_conv0_weight, kernel_size=[1, 1]);
%58 = nn.batch_norm(%57, %mnasnet0_stage3_expandedconv0_linear_batchnorm0_gamma, %mnasnet0_stage3_expandedconv0_linear_batchnorm0_beta, %mnasnet0_stage3_expandedconv0_linear_batchnorm0_running_mean, %mnasnet0_stage3_expandedconv0_linear_batchnorm0_running_var, epsilon=1e-05f);
%59 = %58.0;
%60 = nn.conv2d(%59, %mnasnet0_stage3_expandedconv1_expand_conv0_weight, kernel_size=[1, 1]);
%61 = nn.batch_norm(%60, %mnasnet0_stage3_expandedconv1_expand_batchnorm0_gamma, %mnasnet0_stage3_expandedconv1_expand_batchnorm0_beta, %mnasnet0_stage3_expandedconv1_expand_batchnorm0_running_mean, %mnasnet0_stage3_expandedconv1_expand_batchnorm0_running_var, epsilon=1e-05f);
%62 = %61.0;
%63 = nn.prelu(%62, %mnasnet0_stage3_expandedconv1_expand_prelu0_alpha) in particular dimension 0 conflicts 30 does not match 1; unable to unify: `Tensor[(30), float32]` and `Tensor[(1), float32]`; ;
%64 = nn.conv2d(%63, %mnasnet0_stage3_expandedconv1_dwise_conv0_weight, padding=[1, 1], groups=30, kernel_size=[3, 3]);
%65 = nn.batch_norm(%64, %mnasnet0_stage3_expandedconv1_dwise_batchnorm0_gamma, %mnasnet0_stage3_expandedconv1_dwise_batchnorm0_beta, %mnasnet0_stage3_expandedconv1_dwise_batchnorm0_running_mean, %mnasnet0_stage3_expandedconv1_dwise_batchnorm0_running_var, epsilon=1e-05f);
%66 = %65.0;
%67 = nn.prelu(%66, %mnasnet0_stage3_expandedconv1_dwise_prelu0_alpha) in particular dimension 0 conflicts 30 does not match 1; unable to unify: `Tensor[(30), float32]` and `Tensor[(1), float32]`; ;
%68 = nn.conv2d(%67, %mnasnet0_stage3_expandedconv1_linear_conv0_weight, kernel_size=[1, 1]);
%69 = nn.batch_norm(%68, %mnasnet0_stage3_expandedconv1_linear_batchnorm0_gamma, %mnasnet0_stage3_expandedconv1_linear_batchnorm0_beta, %mnasnet0_stage3_expandedconv1_linear_batchnorm0_running_mean, %mnasnet0_stage3_expandedconv1_linear_batchnorm0_running_var, epsilon=1e-05f);
%70 = %69.0;
%71 = add(%70, %59);
%72 = nn.conv2d(%71, %mnasnet0_stage3_expandedconv2_expand_conv0_weight, kernel_size=[1, 1]);
%73 = nn.batch_norm(%72, %mnasnet0_stage3_expandedconv2_expand_batchnorm0_gamma, %mnasnet0_stage3_expandedconv2_expand_batchnorm0_beta, %mnasnet0_stage3_expandedconv2_expand_batchnorm0_running_mean, %mnasnet0_stage3_expandedconv2_expand_batchnorm0_running_var, epsilon=1e-05f);
%74 = %73.0;
%75 = nn.prelu(%74, %mnasnet0_stage3_expandedconv2_expand_prelu0_alpha) in particular dimension 0 conflicts 30 does not match 1; unable to unify: `Tensor[(30), float32]` and `Tensor[(1), float32]`; ;
%76 = nn.conv2d(%75, %mnasnet0_stage3_expandedconv2_dwise_conv0_weight, padding=[1, 1], groups=30, kernel_size=[3, 3]);
%77 = nn.batch_norm(%76, %mnasnet0_stage3_expandedconv2_dwise_batchnorm0_gamma, %mnasnet0_stage3_expandedconv2_dwise_batchnorm0_beta, %mnasnet0_stage3_expandedconv2_dwise_batchnorm0_running_mean, %mnasnet0_stage3_expandedconv2_dwise_batchnorm0_running_var, epsilon=1e-05f);
%78 = %77.0;
%79 = nn.prelu(%78, %mnasnet0_stage3_expandedconv2_dwise_prelu0_alpha) in particular dimension 0 conflicts 30 does not match 1; unable to unify: `Tensor[(30), float32]` and `Tensor[(1), float32]`; ;
%80 = nn.conv2d(%79, %mnasnet0_stage3_expandedconv2_linear_conv0_weight, kernel_size=[1, 1]);
%81 = nn.batch_norm(%80, %mnasnet0_stage3_expandedconv2_linear_batchnorm0_gamma, %mnasnet0_stage3_expandedconv2_linear_batchnorm0_beta, %mnasnet0_stage3_expandedconv2_linear_batchnorm0_running_mean, %mnasnet0_stage3_expandedconv2_linear_batchnorm0_running_var, epsilon=1e-05f);
%82 = %81.0;
%83 = add(%82, %71);
%84 = nn.conv2d(%83, %mnasnet0_stage4_1_expandedconv0_expand_conv0_weight, kernel_size=[1, 1]);
%85 = nn.batch_norm(%84, %mnasnet0_stage4_1_expandedconv0_expand_batchnorm0_gamma, %mnasnet0_stage4_1_expandedconv0_expand_batchnorm0_beta, %mnasnet0_stage4_1_expandedconv0_expand_batchnorm0_running_mean, %mnasnet0_stage4_1_expandedconv0_expand_batchnorm0_running_var, epsilon=1e-05f);
%86 = %85.0;
%87 = nn.prelu(%86, %mnasnet0_stage4_1_expandedconv0_expand_prelu0_alpha) in particular dimension 0 conflicts 60 does not match 1; unable to unify: `Tensor[(60), float32]` and `Tensor[(1), float32]`; ;
%88 = nn.conv2d(%87, %mnasnet0_stage4_1_expandedconv0_dwise_conv0_weight, strides=[2, 2], padding=[2, 2], groups=60, kernel_size=[5, 5]);
%89 = nn.batch_norm(%88, %mnasnet0_stage4_1_expandedconv0_dwise_batchnorm0_gamma, %mnasnet0_stage4_1_expandedconv0_dwise_batchnorm0_beta, %mnasnet0_stage4_1_expandedconv0_dwise_batchnorm0_running_mean, %mnasnet0_stage4_1_expandedconv0_dwise_batchnorm0_running_var, epsilon=1e-05f);
%90 = %89.0;
%91 = nn.prelu(%90, %mnasnet0_stage4_1_expandedconv0_dwise_prelu0_alpha) in particular dimension 0 conflicts 60 does not match 1; unable to unify: `Tensor[(60), float32]` and `Tensor[(1), float32]`; ;
%92 = nn.conv2d(%91, %mnasnet0_stage4_1_expandedconv0_linear_conv0_weight, kernel_size=[1, 1]);
%93 = nn.batch_norm(%92, %mnasnet0_stage4_1_expandedconv0_linear_batchnorm0_gamma, %mnasnet0_stage4_1_expandedconv0_linear_batchnorm0_beta, %mnasnet0_stage4_1_expandedconv0_linear_batchnorm0_running_mean, %mnasnet0_stage4_1_expandedconv0_linear_batchnorm0_running_var, epsilon=1e-05f);
%94 = %93.0;
%95 = nn.conv2d(%94, %mnasnet0_stage4_1_expandedconv1_expand_conv0_weight, kernel_size=[1, 1]);
%96 = nn.batch_norm(%95, %mnasnet0_stage4_1_expandedconv1_expand_batchnorm0_gamma, %mnasnet0_stage4_1_expandedconv1_expand_batchnorm0_beta, %mnasnet0_stage4_1_expandedconv1_expand_batchnorm0_running_mean, %mnasnet0_stage4_1_expandedconv1_expand_batchnorm0_running_var, epsilon=1e-05f);
%97 = %96.0;
%98 = nn.prelu(%97, %mnasnet0_stage4_1_expandedconv1_expand_prelu0_alpha) in particular dimension 0 conflicts 120 does not match 1; unable to unify: `Tensor[(120), float32]` and `Tensor[(1), float32]`; ;
%99 = nn.conv2d(%98, %mnasnet0_stage4_1_expandedconv1_dwise_conv0_weight, padding=[1, 1], groups=120, kernel_size=[3, 3]);
%100 = nn.batch_norm(%99, %mnasnet0_stage4_1_expandedconv1_dwise_batchnorm0_gamma, %mnasnet0_stage4_1_expandedconv1_dwise_batchnorm0_beta, %mnasnet0_stage4_1_expandedconv1_dwise_batchnorm0_running_mean, %mnasnet0_stage4_1_expandedconv1_dwise_batchnorm0_running_var, epsilon=1e-05f);
%101 = %100.0;
%102 = nn.prelu(%101, %mnasnet0_stage4_1_expandedconv1_dwise_prelu0_alpha) in particular dimension 0 conflicts 120 does not match 1; unable to unify: `Tensor[(120), float32]` and `Tensor[(1), float32]`; ;
%103 = nn.conv2d(%102, %mnasnet0_stage4_1_expandedconv1_linear_conv0_weight, kernel_size=[1, 1]);
%104 = nn.batch_norm(%103, %mnasnet0_stage4_1_expandedconv1_linear_batchnorm0_gamma, %mnasnet0_stage4_1_expandedconv1_linear_batchnorm0_beta, %mnasnet0_stage4_1_expandedconv1_linear_batchnorm0_running_mean, %mnasnet0_stage4_1_expandedconv1_linear_batchnorm0_running_var, epsilon=1e-05f);
%105 = %104.0;
%106 = add(%105, %94);
%107 = nn.conv2d(%106, %mnasnet0_stage4_1_expandedconv2_expand_conv0_weight, kernel_size=[1, 1]);
%108 = nn.batch_norm(%107, %mnasnet0_stage4_1_expandedconv2_expand_batchnorm0_gamma, %mnasnet0_stage4_1_expandedconv2_expand_batchnorm0_beta, %mnasnet0_stage4_1_expandedconv2_expand_batchnorm0_running_mean, %mnasnet0_stage4_1_expandedconv2_expand_batchnorm0_running_var, epsilon=1e-05f);
%109 = %108.0;
%110 = nn.prelu(%109, %mnasnet0_stage4_1_expandedconv2_expand_prelu0_alpha) in particular dimension 0 conflicts 120 does not match 1; unable to unify: `Tensor[(120), float32]` and `Tensor[(1), float32]`; ;
%111 = nn.conv2d(%110, %mnasnet0_stage4_1_expandedconv2_dwise_conv0_weight, padding=[1, 1], groups=120, kernel_size=[3, 3]);
%112 = nn.batch_norm(%111, %mnasnet0_stage4_1_expandedconv2_dwise_batchnorm0_gamma, %mnasnet0_stage4_1_expandedconv2_dwise_batchnorm0_beta, %mnasnet0_stage4_1_expandedconv2_dwise_batchnorm0_running_mean, %mnasnet0_stage4_1_expandedconv2_dwise_batchnorm0_running_var, epsilon=1e-05f);
%113 = %112.0;
%114 = nn.prelu(%113, %mnasnet0_stage4_1_expandedconv2_dwise_prelu0_alpha) in particular dimension 0 conflicts 120 does not match 1; unable to unify: `Tensor[(120), float32]` and `Tensor[(1), float32]`; ;
%115 = nn.conv2d(%114, %mnasnet0_stage4_1_expandedconv2_linear_conv0_weight, kernel_size=[1, 1]);
%116 = nn.batch_norm(%115, %mnasnet0_stage4_1_expandedconv2_linear_batchnorm0_gamma, %mnasnet0_stage4_1_expandedconv2_linear_batchnorm0_beta, %mnasnet0_stage4_1_expandedconv2_linear_batchnorm0_running_mean, %mnasnet0_stage4_1_expandedconv2_linear_batchnorm0_running_var, epsilon=1e-05f);
%117 = %116.0;
%118 = add(%117, %106);
%119 = nn.conv2d(%118, %mnasnet0_stage4_2_expandedconv0_expand_conv0_weight, kernel_size=[1, 1]);
%120 = nn.batch_norm(%119, %mnasnet0_stage4_2_expandedconv0_expand_batchnorm0_gamma, %mnasnet0_stage4_2_expandedconv0_expand_batchnorm0_beta, %mnasnet0_stage4_2_expandedconv0_expand_batchnorm0_running_mean, %mnasnet0_stage4_2_expandedconv0_expand_batchnorm0_running_var, epsilon=1e-05f);
%121 = %120.0;
%122 = nn.prelu(%121, %mnasnet0_stage4_2_expandedconv0_expand_prelu0_alpha) in particular dimension 0 conflicts 120 does not match 1; unable to unify: `Tensor[(120), float32]` and `Tensor[(1), float32]`; ;
%123 = nn.conv2d(%122, %mnasnet0_stage4_2_expandedconv0_dwise_conv0_weight, padding=[1, 1], groups=120, kernel_size=[3, 3]);
%124 = nn.batch_norm(%123, %mnasnet0_stage4_2_expandedconv0_dwise_batchnorm0_gamma, %mnasnet0_stage4_2_expandedconv0_dwise_batchnorm0_beta, %mnasnet0_stage4_2_expandedconv0_dwise_batchnorm0_running_mean, %mnasnet0_stage4_2_expandedconv0_dwise_batchnorm0_running_var, epsilon=1e-05f);
%125 = %124.0;
%126 = nn.prelu(%125, %mnasnet0_stage4_2_expandedconv0_dwise_prelu0_alpha) in particular dimension 0 conflicts 120 does not match 1; unable to unify: `Tensor[(120), float32]` and `Tensor[(1), float32]`; ;
%127 = nn.conv2d(%126, %mnasnet0_stage4_2_expandedconv0_linear_conv0_weight, kernel_size=[1, 1]);
%128 = nn.batch_norm(%127, %mnasnet0_stage4_2_expandedconv0_linear_batchnorm0_gamma, %mnasnet0_stage4_2_expandedconv0_linear_batchnorm0_beta, %mnasnet0_stage4_2_expandedconv0_linear_batchnorm0_running_mean, %mnasnet0_stage4_2_expandedconv0_linear_batchnorm0_running_var, epsilon=1e-05f);
%129 = %128.0;
%130 = nn.conv2d(%129, %mnasnet0_stage4_2_expandedconv1_expand_conv0_weight, kernel_size=[1, 1]);
%131 = nn.batch_norm(%130, %mnasnet0_stage4_2_expandedconv1_expand_batchnorm0_gamma, %mnasnet0_stage4_2_expandedconv1_expand_batchnorm0_beta, %mnasnet0_stage4_2_expandedconv1_expand_batchnorm0_running_mean, %mnasnet0_stage4_2_expandedconv1_expand_batchnorm0_running_var, epsilon=1e-05f);
%132 = %131.0;
%133 = nn.prelu(%132, %mnasnet0_stage4_2_expandedconv1_expand_prelu0_alpha) in particular dimension 0 conflicts 144 does not match 1; unable to unify: `Tensor[(144), float32]` and `Tensor[(1), float32]`; ;
%134 = nn.conv2d(%133, %mnasnet0_stage4_2_expandedconv1_dwise_conv0_weight, padding=[1, 1], groups=144, kernel_size=[3, 3]);
%135 = nn.batch_norm(%134, %mnasnet0_stage4_2_expandedconv1_dwise_batchnorm0_gamma, %mnasnet0_stage4_2_expandedconv1_dwise_batchnorm0_beta, %mnasnet0_stage4_2_expandedconv1_dwise_batchnorm0_running_mean, %mnasnet0_stage4_2_expandedconv1_dwise_batchnorm0_running_var, epsilon=1e-05f);
%136 = %135.0;
%137 = nn.prelu(%136, %mnasnet0_stage4_2_expandedconv1_dwise_prelu0_alpha) in particular dimension 0 conflicts 144 does not match 1; unable to unify: `Tensor[(144), float32]` and `Tensor[(1), float32]`; ;
%138 = nn.conv2d(%137, %mnasnet0_stage4_2_expandedconv1_linear_conv0_weight, kernel_size=[1, 1]);
%139 = nn.batch_norm(%138, %mnasnet0_stage4_2_expandedconv1_linear_batchnorm0_gamma, %mnasnet0_stage4_2_expandedconv1_linear_batchnorm0_beta, %mnasnet0_stage4_2_expandedconv1_linear_batchnorm0_running_mean, %mnasnet0_stage4_2_expandedconv1_linear_batchnorm0_running_var, epsilon=1e-05f);
%140 = %139.0;
%141 = add(%140, %129);
%142 = nn.conv2d(%141, %mnasnet0_stage5_1_expandedconv0_expand_conv0_weight, kernel_size=[1, 1]);
%143 = nn.batch_norm(%142, %mnasnet0_stage5_1_expandedconv0_expand_batchnorm0_gamma, %mnasnet0_stage5_1_expandedconv0_expand_batchnorm0_beta, %mnasnet0_stage5_1_expandedconv0_expand_batchnorm0_running_mean, %mnasnet0_stage5_1_expandedconv0_expand_batchnorm0_running_var, epsilon=1e-05f);
%144 = %143.0;
%145 = nn.prelu(%144, %mnasnet0_stage5_1_expandedconv0_expand_prelu0_alpha) in particular dimension 0 conflicts 144 does not match 1; unable to unify: `Tensor[(144), float32]` and `Tensor[(1), float32]`; ;
%146 = nn.conv2d(%145, %mnasnet0_stage5_1_expandedconv0_dwise_conv0_weight, strides=[2, 2], padding=[2, 2], groups=144, kernel_size=[5, 5]);
%147 = nn.batch_norm(%146, %mnasnet0_stage5_1_expandedconv0_dwise_batchnorm0_gamma, %mnasnet0_stage5_1_expandedconv0_dwise_batchnorm0_beta, %mnasnet0_stage5_1_expandedconv0_dwise_batchnorm0_running_mean, %mnasnet0_stage5_1_expandedconv0_dwise_batchnorm0_running_var, epsilon=1e-05f);
%148 = %147.0;
%149 = nn.prelu(%148, %mnasnet0_stage5_1_expandedconv0_dwise_prelu0_alpha) in particular dimension 0 conflicts 144 does not match 1; unable to unify: `Tensor[(144), float32]` and `Tensor[(1), float32]`; ;
%150 = nn.conv2d(%149, %mnasnet0_stage5_1_expandedconv0_linear_conv0_weight, kernel_size=[1, 1]);
%151 = nn.batch_norm(%150, %mnasnet0_stage5_1_expandedconv0_linear_batchnorm0_gamma, %mnasnet0_stage5_1_expandedconv0_linear_batchnorm0_beta, %mnasnet0_stage5_1_expandedconv0_linear_batchnorm0_running_mean, %mnasnet0_stage5_1_expandedconv0_linear_batchnorm0_running_var, epsilon=1e-05f);
%152 = %151.0;
%153 = nn.conv2d(%152, %mnasnet0_stage5_1_expandedconv1_expand_conv0_weight, kernel_size=[1, 1]);
%154 = nn.batch_norm(%153, %mnasnet0_stage5_1_expandedconv1_expand_batchnorm0_gamma, %mnasnet0_stage5_1_expandedconv1_expand_batchnorm0_beta, %mnasnet0_stage5_1_expandedconv1_expand_batchnorm0_running_mean, %mnasnet0_stage5_1_expandedconv1_expand_batchnorm0_running_var, epsilon=1e-05f);
%155 = %154.0;
%156 = nn.prelu(%155, %mnasnet0_stage5_1_expandedconv1_expand_prelu0_alpha) in particular dimension 0 conflicts 288 does not match 1; unable to unify: `Tensor[(288), float32]` and `Tensor[(1), float32]`; ;
%157 = nn.conv2d(%156, %mnasnet0_stage5_1_expandedconv1_dwise_conv0_weight, padding=[1, 1], groups=288, kernel_size=[3, 3]);
%158 = nn.batch_norm(%157, %mnasnet0_stage5_1_expandedconv1_dwise_batchnorm0_gamma, %mnasnet0_stage5_1_expandedconv1_dwise_batchnorm0_beta, %mnasnet0_stage5_1_expandedconv1_dwise_batchnorm0_running_mean, %mnasnet0_stage5_1_expandedconv1_dwise_batchnorm0_running_var, epsilon=1e-05f);
%159 = %158.0;
%160 = nn.prelu(%159, %mnasnet0_stage5_1_expandedconv1_dwise_prelu0_alpha) in particular dimension 0 conflicts 288 does not match 1; unable to unify: `Tensor[(288), float32]` and `Tensor[(1), float32]`; ;
%161 = nn.conv2d(%160, %mnasnet0_stage5_1_expandedconv1_linear_conv0_weight, kernel_size=[1, 1]);
%162 = nn.batch_norm(%161, %mnasnet0_stage5_1_expandedconv1_linear_batchnorm0_gamma, %mnasnet0_stage5_1_expandedconv1_linear_batchnorm0_beta, %mnasnet0_stage5_1_expandedconv1_linear_batchnorm0_running_mean, %mnasnet0_stage5_1_expandedconv1_linear_batchnorm0_running_var, epsilon=1e-05f);
%163 = %162.0;
%164 = add(%163, %152);
%165 = nn.conv2d(%164, %mnasnet0_stage5_1_expandedconv2_expand_conv0_weight, kernel_size=[1, 1]);
%166 = nn.batch_norm(%165, %mnasnet0_stage5_1_expandedconv2_expand_batchnorm0_gamma, %mnasnet0_stage5_1_expandedconv2_expand_batchnorm0_beta, %mnasnet0_stage5_1_expandedconv2_expand_batchnorm0_running_mean, %mnasnet0_stage5_1_expandedconv2_expand_batchnorm0_running_var, epsilon=1e-05f);
%167 = %166.0;
%168 = nn.prelu(%167, %mnasnet0_stage5_1_expandedconv2_expand_prelu0_alpha) in particular dimension 0 conflicts 288 does not match 1; unable to unify: `Tensor[(288), float32]` and `Tensor[(1), float32]`; ;
%169 = nn.conv2d(%168, %mnasnet0_stage5_1_expandedconv2_dwise_conv0_weight, padding=[1, 1], groups=288, kernel_size=[3, 3]);
%170 = nn.batch_norm(%169, %mnasnet0_stage5_1_expandedconv2_dwise_batchnorm0_gamma, %mnasnet0_stage5_1_expandedconv2_dwise_batchnorm0_beta, %mnasnet0_stage5_1_expandedconv2_dwise_batchnorm0_running_mean, %mnasnet0_stage5_1_expandedconv2_dwise_batchnorm0_running_var, epsilon=1e-05f);
%171 = %170.0;
%172 = nn.prelu(%171, %mnasnet0_stage5_1_expandedconv2_dwise_prelu0_alpha) in particular dimension 0 conflicts 288 does not match 1; unable to unify: `Tensor[(288), float32]` and `Tensor[(1), float32]`; ;
%173 = nn.conv2d(%172, %mnasnet0_stage5_1_expandedconv2_linear_conv0_weight, kernel_size=[1, 1]);
%174 = nn.batch_norm(%173, %mnasnet0_stage5_1_expandedconv2_linear_batchnorm0_gamma, %mnasnet0_stage5_1_expandedconv2_linear_batchnorm0_beta, %mnasnet0_stage5_1_expandedconv2_linear_batchnorm0_running_mean, %mnasnet0_stage5_1_expandedconv2_linear_batchnorm0_running_var, epsilon=1e-05f);
%175 = %174.0;
%176 = add(%175, %164);
%177 = nn.conv2d(%176, %mnasnet0_stage5_1_expandedconv3_expand_conv0_weight, kernel_size=[1, 1]);
%178 = nn.batch_norm(%177, %mnasnet0_stage5_1_expandedconv3_expand_batchnorm0_gamma, %mnasnet0_stage5_1_expandedconv3_expand_batchnorm0_beta, %mnasnet0_stage5_1_expandedconv3_expand_batchnorm0_running_mean, %mnasnet0_stage5_1_expandedconv3_expand_batchnorm0_running_var, epsilon=1e-05f);
%179 = %178.0;
%180 = nn.prelu(%179, %mnasnet0_stage5_1_expandedconv3_expand_prelu0_alpha) in particular dimension 0 conflicts 288 does not match 1; unable to unify: `Tensor[(288), float32]` and `Tensor[(1), float32]`; ;
%181 = nn.conv2d(%180, %mnasnet0_stage5_1_expandedconv3_dwise_conv0_weight, padding=[1, 1], groups=288, kernel_size=[3, 3]);
%182 = nn.batch_norm(%181, %mnasnet0_stage5_1_expandedconv3_dwise_batchnorm0_gamma, %mnasnet0_stage5_1_expandedconv3_dwise_batchnorm0_beta, %mnasnet0_stage5_1_expandedconv3_dwise_batchnorm0_running_mean, %mnasnet0_stage5_1_expandedconv3_dwise_batchnorm0_running_var, epsilon=1e-05f);
%183 = %182.0;
%184 = nn.prelu(%183, %mnasnet0_stage5_1_expandedconv3_dwise_prelu0_alpha) in particular dimension 0 conflicts 288 does not match 1; unable to unify: `Tensor[(288), float32]` and `Tensor[(1), float32]`; ;
%185 = nn.conv2d(%184, %mnasnet0_stage5_1_expandedconv3_linear_conv0_weight, kernel_size=[1, 1]);
%186 = nn.batch_norm(%185, %mnasnet0_stage5_1_expandedconv3_linear_batchnorm0_gamma, %mnasnet0_stage5_1_expandedconv3_linear_batchnorm0_beta, %mnasnet0_stage5_1_expandedconv3_linear_batchnorm0_running_mean, %mnasnet0_stage5_1_expandedconv3_linear_batchnorm0_running_var, epsilon=1e-05f);
%187 = %186.0;
%188 = add(%187, %176);
%189 = nn.conv2d(%188, %mnasnet0_stage5_2_expandedconv0_expand_conv0_weight, kernel_size=[1, 1]);
%190 = nn.batch_norm(%189, %mnasnet0_stage5_2_expandedconv0_expand_batchnorm0_gamma, %mnasnet0_stage5_2_expandedconv0_expand_batchnorm0_beta, %mnasnet0_stage5_2_expandedconv0_expand_batchnorm0_running_mean, %mnasnet0_stage5_2_expandedconv0_expand_batchnorm0_running_var, epsilon=1e-05f);
%191 = %190.0;
%192 = nn.prelu(%191, %mnasnet0_stage5_2_expandedconv0_expand_prelu0_alpha) in particular dimension 0 conflicts 288 does not match 1; unable to unify: `Tensor[(288), float32]` and `Tensor[(1), float32]`; ;
%193 = nn.conv2d(%192, %mnasnet0_stage5_2_expandedconv0_dwise_conv0_weight, padding=[1, 1], groups=288, kernel_size=[3, 3]);
%194 = nn.batch_norm(%193, %mnasnet0_stage5_2_expandedconv0_dwise_batchnorm0_gamma, %mnasnet0_stage5_2_expandedconv0_dwise_batchnorm0_beta, %mnasnet0_stage5_2_expandedconv0_dwise_batchnorm0_running_mean, %mnasnet0_stage5_2_expandedconv0_dwise_batchnorm0_running_var, epsilon=1e-05f);
%195 = %194.0;
%196 = nn.prelu(%195, %mnasnet0_stage5_2_expandedconv0_dwise_prelu0_alpha) in particular dimension 0 conflicts 288 does not match 1; unable to unify: `Tensor[(288), float32]` and `Tensor[(1), float32]`; ;
%197 = nn.conv2d(%196, %mnasnet0_stage5_2_expandedconv0_linear_conv0_weight, kernel_size=[1, 1]);
%198 = nn.batch_norm(%197, %mnasnet0_stage5_2_expandedconv0_linear_batchnorm0_gamma, %mnasnet0_stage5_2_expandedconv0_linear_batchnorm0_beta, %mnasnet0_stage5_2_expandedconv0_linear_batchnorm0_running_mean, %mnasnet0_stage5_2_expandedconv0_linear_batchnorm0_running_var, epsilon=1e-05f);
%199 = %198.0;
%200 = nn.conv2d(%199, %mnasnet0_stage5_3_conv0_weight, kernel_size=[1, 1]);
%201 = nn.batch_norm(%200, %mnasnet0_stage5_3_batchnorm0_gamma, %mnasnet0_stage5_3_batchnorm0_beta, %mnasnet0_stage5_3_batchnorm0_running_mean, %mnasnet0_stage5_3_batchnorm0_running_var, epsilon=1e-05f);
%202 = %201.0;
%203 = nn.prelu(%202, %mnasnet0_stage5_3_prelu0_alpha) in particular dimension 0 conflicts 256 does not match 1; unable to unify: `Tensor[(256), float32]` and `Tensor[(1), float32]`; ;
%204 = nn.conv2d(%203, %conv_6dw7_7_conv2d_weight, groups=256, kernel_size=[7, 7]);
%205 = nn.batch_norm(%204, %conv_6dw7_7_batchnorm_gamma, %conv_6dw7_7_batchnorm_beta, %conv_6dw7_7_batchnorm_moving_mean, %conv_6dw7_7_batchnorm_moving_var, epsilon=0.001f);
%206 = %205.0;
%207 = nn.batch_flatten(%206);
%208 = nn.batch_flatten(%207);
%209 = multiply(1f, %208);
%210 = nn.dense(%209, %pre_fc1_weight, units=256);
%211 = multiply(1f, %pre_fc1_bias);
%212 = nn.bias_add(%210, %211);
%213 = nn.batch_norm(%212, %fc1_gamma, %fc1_beta, %fc1_moving_mean, %fc1_moving_var, epsilon=2e-05f);
%213.0
}
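All of the unify failures in the dump have the same shape: Relay's `nn.prelu` wants an alpha with one slope per channel (e.g. `Tensor[(8)]`), while this model stores a single shared slope of shape (1,). Since a shared slope and its per-channel copies are numerically identical, one possible workaround is to tile each PRelu slope initializer out to the channel count before calling `from_onnx`. A minimal NumPy sketch of why that expansion is safe (`prelu` here is my own reference implementation, not TVM's):

```python
import numpy as np

def prelu(x, alpha):
    # Reference PRelu for NCHW input: y = x where x > 0, else alpha * x,
    # with alpha broadcast over the channel axis.
    a = alpha.reshape(1, -1, 1, 1)
    return np.where(x > 0, x, a * x)

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 8, 4, 4)).astype(np.float32)

shared = np.array([0.25], dtype=np.float32)   # shape (1,), as stored in the model
per_channel = np.tile(shared, 8)              # shape (8,), what Relay's type checker expects

# Expanding the slope does not change the operator's result.
assert np.allclose(prelu(x, shared), prelu(x, per_channel))
```

Applying the same `np.tile` to the ONNX slope initializers (via `onnx.numpy_helper`) and re-saving the model should then let type inference pass; this is an assumption on my part, not something I have verified on this exact model.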
---
[Visit Topic](https://discuss.tvm.ai/t/testonnx/6296/2) to respond.