Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2019/02/18 11:11:30 UTC

[GitHub] idealboy opened a new issue #14191: About mxnet-tensorrt's problem between different mxnet versions?

URL: https://github.com/apache/incubator-mxnet/issues/14191
 
 
   Note: Providing complete information in the most concise form is the best way to get help. This issue template serves as the checklist for essential information to most of the technical issues and bug reports. For non-technical issues and feature requests, feel free to present the information in what you believe is the best form.
   
   For Q & A and discussion, please start a discussion thread at https://discuss.mxnet.io 
   
   ## Description
   
   When using MXNet 1.3.0 built with USE_TENSORRT, I found a problem when loading a model generated with 0.9.3: the executor with TensorRT always gives the same output, no matter what the input is.
   
   I re-saved the old model with the MXNet 1.3.0 framework, but inference still behaves the same way.
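
   Roughly how I re-saved it (a minimal sketch; the `old_model`/`resaved_model` prefixes and epoch 0 are placeholders for my real files):

   ```
   import mxnet as mx

   # Load the checkpoint that was originally written by MXNet 0.9.3
   # (prefix and epoch number are placeholders for my real files).
   sym, arg_params, aux_params = mx.model.load_checkpoint('old_model', 0)

   # Write it back out with the 1.3.0 serialization code, so that the
   # -symbol.json and .params files are regenerated by 1.3.0.
   mx.model.save_checkpoint('resaved_model', 0, sym, arg_params, aux_params)
   ```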
   
   ## Environment info (Required)
   
   ```
   What to do:
   1. Download the diagnosis script from https://raw.githubusercontent.com/apache/incubator-mxnet/master/tools/diagnose.py
   2. Run the script using `python diagnose.py` and paste its output here.
   
   ```
   ----------Python Info----------
   ('Version      :', '2.7.15')
   ('Compiler     :', 'GCC 7.3.0')
   ('Build        :', ('default', 'Dec 14 2018 19:04:19'))
   ('Arch         :', ('64bit', ''))
   ------------Pip Info-----------
   ('Version      :', '18.1')
   ('Directory    :', '$ANACONDA_HOME/lib/python2.7/site-packages/pip')
   ----------MXNet Info-----------
   ('Version      :', '1.3.0')
   ('Directory    :', '/**/python2.7/site-packages/mxnet-1.3.0-py2.7.egg/mxnet')
   Hashtag not found. Not installed from pre-built package.
   ----------System Info----------
   ('Platform     :', 'Linux-3.18.6-2.el7.centos.x86_64-x86_64-with-centos-7.3.1611-Core')
   ('system       :', 'Linux')
   ('node         :', '**')
   ('release      :', '3.18.6-2.el7.centos.x86_64')
   ('version      :', '#1 SMP Mon Oct 24 13:01:33 CST 2016')
   ----------Hardware Info----------
   ('machine      :', 'x86_64')
   ('processor    :', 'x86_64')
   Architecture:          x86_64
   CPU op-mode(s):        32-bit, 64-bit
   Byte Order:            Little Endian
   CPU(s):                56
   On-line CPU(s) list:   0-55
   Thread(s) per core:    2
   Core(s) per socket:    14
   Socket(s):             2
   NUMA node(s):          2
   Vendor ID:             GenuineIntel
   CPU family:            6
   Model:                 79
   Model name:            Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz
   Stepping:              1
   CPU MHz:               2599.725
   BogoMIPS:              5205.75
   Virtualization:        VT-x
   L1d cache:             32K
   L1i cache:             32K
   L2 cache:              256K
   L3 cache:              35840K
   NUMA node0 CPU(s):     0-13,28-41
   NUMA node1 CPU(s):     14-27,42-55
   ----------Network Test----------
   
   
   Package used (Python/R/Scala/Julia):
   (I'm using the Python and C++ packages)
   
   For Scala user, please provide:
   1. Java version: (`java -version`)
   2. Maven version: (`mvn -version`)
   3. Scala runtime if applicable: (`scala -version`)
   
   For R user, please provide R `sessionInfo()`:
   
   ## Build info (Required if built from source)
   
   Compiler (gcc/clang/mingw/visual studio): gcc
   
   MXNet commit hash:
   (Paste the output of `git rev-parse HEAD` here.)
   
   Build config:
   (Paste the content of config.mk, or the build command.)
   
   ADD_LDFLAGS=-L/usr/lib64 -lgfortran -L/usr/local/lib -lopenblas
   
   # the additional compile flags you want to add
   ADD_CFLAGS =
   
   #---------------------------------------------
   # matrix computation libraries for CPU/GPU
   #---------------------------------------------
   
   # whether use CUDA during compile
   USE_CUDA = 1
   
   # add the path to CUDA library to link and compile flag
   # if you have already add them to environment variable, leave it as NONE
   # USE_CUDA_PATH = /usr/local/cuda
   USE_CUDA_PATH = /usr/local/cuda
   
   # whether to enable CUDA runtime compilation
   ENABLE_CUDA_RTC = 1
   
   # whether use CuDNN R3 library
   USE_CUDNN = 1
   
   #whether to use NCCL library
   USE_NCCL = 0
   #add the path to NCCL library
   USE_NCCL_PATH = NONE
   
   # whether use opencv during compilation
   # you can disable it, however, you will not able to use
   # imbin iterator
   USE_OPENCV = 1
   
   #whether use libjpeg-turbo for image decode without OpenCV wrapper
   USE_LIBJPEG_TURBO = 0
   #add the path to libjpeg-turbo library
   USE_LIBJPEG_TURBO_PATH = NONE
   
   # use openmp for parallelization
   USE_OPENMP = 1
   USE_OPERATOR_TUNING = 1
   # Use gperftools if found
   USE_GPERFTOOLS = 1
   # path to gperftools (tcmalloc) library in case of a non-standard installation
   USE_GPERFTOOLS_PATH =
   # Link gperftools statically
   USE_GPERFTOOLS_STATIC =
   # Use JEMalloc if found, and not using gperftools
   USE_JEMALLOC = 1
   # path to jemalloc library in case of a non-standard installation
   USE_JEMALLOC_PATH =
   # Link jemalloc statically
   USE_JEMALLOC_STATIC =
   # Create C++ interface package
   USE_CPP_PACKAGE = 1
   
   
   ## Error Message:
   (Paste the complete error message, including stack trace.)
   
   ## Minimum reproducible example
   (If you are using your own code, please provide a short script that reproduces the error. Otherwise, please provide link to the existing example.)
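
   I can't share the real model, but this is roughly what I run (a sketch with placeholder file names and input shape, assuming the `mx.contrib.tensorrt` API as documented for MXNet 1.3.x). The two outputs come back identical even though the inputs differ:

   ```
   import os
   import mxnet as mx
   import numpy as np

   os.environ['MXNET_USE_TENSORRT'] = '1'   # enable the TensorRT graph pass

   batch_shape = (1, 3, 224, 224)           # placeholder input shape
   sym, arg_params, aux_params = mx.model.load_checkpoint('resaved_model', 0)

   # tensorrt_bind expects all parameters in one dict on the GPU
   all_params = dict(arg_params, **aux_params)
   all_params = {k: v.as_in_context(mx.gpu(0)) for k, v in all_params.items()}

   executor = mx.contrib.tensorrt.tensorrt_bind(
       sym, ctx=mx.gpu(0), all_params=all_params,
       data=batch_shape, grad_req='null', force_rebind=True)

   # Two clearly different inputs
   x1 = mx.nd.zeros(batch_shape, ctx=mx.gpu(0))
   x2 = mx.nd.random.uniform(shape=batch_shape, ctx=mx.gpu(0))

   y1 = executor.forward(is_train=False, data=x1)[0].asnumpy()
   y2 = executor.forward(is_train=False, data=x2)[0].asnumpy()

   # With the model converted from 0.9.3 this prints True,
   # i.e. the output does not depend on the input.
   print(np.allclose(y1, y2))
   ```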
   
   ## Steps to reproduce
   (Paste the commands you ran that produced the error.)
   
   1.
   2.
   
   ## What have you tried to solve it?
   
   1. Re-saved the model with MXNet 1.3.0; the problem stays the same.
   2. Re-trained the model with MXNet 1.3.0; then it seems to work.
   3. At first I thought it was my own fault in re-implementing MXPredCreate and MxSetInputEx (the two functions are meant to support TensorRT in the C++ interface), but after debugging and comparing with the result from `pip install mxnet-tensorrt-cu90` I believe they are correct; the check I used is sketched below.
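
   The check for item 3 compares the TensorRT output against a plain `simple_bind` executor on the same input (again a sketch with placeholder names, same API assumptions as the snippet above):

   ```
   import os
   import mxnet as mx
   import numpy as np

   batch_shape = (1, 3, 224, 224)           # placeholder input shape
   sym, arg_params, aux_params = mx.model.load_checkpoint('resaved_model', 0)
   x = mx.nd.random.uniform(shape=batch_shape, ctx=mx.gpu(0))

   # Reference path: ordinary CUDA executor, TensorRT pass switched off
   os.environ['MXNET_USE_TENSORRT'] = '0'
   plain = sym.simple_bind(ctx=mx.gpu(0), data=batch_shape, grad_req='null')
   plain.copy_params_from(arg_params, aux_params)
   y_ref = plain.forward(is_train=False, data=x)[0].asnumpy()

   # TensorRT path: same symbol, parameters and input
   os.environ['MXNET_USE_TENSORRT'] = '1'
   all_params = dict(arg_params, **aux_params)
   all_params = {k: v.as_in_context(mx.gpu(0)) for k, v in all_params.items()}
   trt = mx.contrib.tensorrt.tensorrt_bind(
       sym, ctx=mx.gpu(0), all_params=all_params,
       data=batch_shape, grad_req='null', force_rebind=True)
   y_trt = trt.forward(is_train=False, data=x)[0].asnumpy()

   # I would expect these to agree within TensorRT's usual tolerance;
   # with the 0.9.3-era model they do not, and y_trt never changes with x.
   print(np.allclose(y_ref, y_trt, rtol=1e-3, atol=1e-4))
   ```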
   
   
   Thank you very much for your explanation and review, so that mxnet-tensorrt can have better compatibility with models from older versions. Thank you!
   
