Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2018/11/19 02:11:01 UTC
[GitHub] KellenSunderland edited a comment on issue #12931: How do define onnx path for tensorrt integration?
URL: https://github.com/apache/incubator-mxnet/issues/12931#issuecomment-439750388
@vandanavk #13310 might take some iterating to get right, but this is also addressed in https://github.com/apache/incubator-mxnet/pull/12469 which is ready to be merged if someone can have a look.
@nchafni: First of all, at the moment ONNX's proto client needs to be generated before MXNet is compiled. Make sure you're running something like
```bash
# Build ONNX
pushd .
echo "Installing ONNX."
cd 3rdparty/onnx-tensorrt/third_party/onnx
rm -rf build
mkdir -p build
cd build
cmake \
    -DCMAKE_CXX_FLAGS=-I/usr/include/python${PYVER} \
    -DBUILD_SHARED_LIBS=ON .. \
    -G Ninja
ninja -j 1 -v onnx/onnx.proto
ninja -j 1 -v
export LIBRARY_PATH=`pwd`:`pwd`/onnx/:$LIBRARY_PATH
export CPLUS_INCLUDE_PATH=`pwd`:$CPLUS_INCLUDE_PATH
popd
# Build ONNX-TensorRT
pushd .
cd 3rdparty/onnx-tensorrt/
mkdir -p build
cd build
cmake ..
make -j$(nproc)
export LIBRARY_PATH=`pwd`:$LIBRARY_PATH
popd
```
before compilation.
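If the build succeeded, the generated protobuf header should now be sitting in the ONNX build tree. A quick sanity check (the path below assumes the steps above were run from the MXNet source root):

```bash
# Sanity check: confirm the ONNX proto client was actually generated.
# Path assumes the commands above were run from the MXNet source root.
ONNX_BUILD=3rdparty/onnx-tensorrt/third_party/onnx/build
if [ -f "${ONNX_BUILD}/onnx/onnx.pb.h" ]; then
    echo "onnx.pb.h found - proto client was generated"
else
    echo "onnx.pb.h missing - re-run the ONNX build steps above" >&2
fi
```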
If you've already done that and the onnx.pb.h file is present, you can try adding its directory to your C++ include path so the compiler can pick it up. You could also try building with CMake (just add -DUSE_TENSORRT).
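For the include-path route, a minimal sketch (paths assume the earlier steps were run from the MXNet source root; the exact spelling of the TensorRT flag may differ by MXNet version):

```bash
# Sketch: expose the generated onnx.pb.h to the compiler before rebuilding.
# The build path below is an assumption based on the steps earlier in this thread.
export CPLUS_INCLUDE_PATH=$(pwd)/3rdparty/onnx-tensorrt/third_party/onnx/build:$CPLUS_INCLUDE_PATH
echo "C++ include path now starts with: ${CPLUS_INCLUDE_PATH%%:*}"
# Then reconfigure with TensorRT enabled, e.g.:
#   cmake -DUSE_TENSORRT=1 .. && make -j$(nproc)
# (check your MXNet version's CMakeLists.txt for the exact flag value)
```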
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
With regards,
Apache Git Services