Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2020/07/29 07:59:21 UTC

[GitHub] [incubator-mxnet] Kh4L commented on a change in pull request #18574: Update the onnx-tensorrt submodule

Kh4L commented on a change in pull request #18574:
URL: https://github.com/apache/incubator-mxnet/pull/18574#discussion_r461260930



##########
File path: ci/docker/docker-compose.yml
##########
@@ -108,6 +108,16 @@ services:
         BASE_IMAGE: nvidia/cuda:10.1-cudnn7-devel-ubuntu18.04
       cache_from:
         - ${DOCKER_CACHE_REGISTRY}/build.ubuntu_gpu_cu101:latest
+  ubuntu_gpu_cu102:

Review comment:
       It's safer to move only this test to 10.2 for now; I agreed on this with @ChaiBapchya.
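
   For reference, a new `ubuntu_gpu_cu102` service would presumably mirror the `cu101` entry visible in the hunk above. A minimal sketch (the base-image tag and cache key are assumptions following the existing naming pattern, not taken from the PR):

   ```yaml
     # Sketch only: mirrors the ubuntu_gpu_cu101 service shown above,
     # with the CUDA 10.2 base image substituted in.
     ubuntu_gpu_cu102:
       build:
         args:
           BASE_IMAGE: nvidia/cuda:10.2-cudnn7-devel-ubuntu18.04
         cache_from:
           - ${DOCKER_CACHE_REGISTRY}/build.ubuntu_gpu_cu102:latest
   ```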

##########
File path: ci/docker/Dockerfile.build.ubuntu
##########
@@ -110,6 +108,15 @@ COPY install/requirements /work/
 RUN python3 -m pip install cmake==3.16.6 && \
     python3 -m pip install -r /work/requirements
 
+RUN git clone --recursive -b 3.5.1.1 https://github.com/google/protobuf.git && \
+    cd protobuf && \
+    ./autogen.sh && \
+    ./configure && \
+    make -j$(nproc) install && \
+    cd .. && \
+    rm -rf protobuf && \
+    ldconfig

Review comment:
       I am not sure, but ONNX is failing and the error seems to be related to the protobuf version.
   
   This is the version we are using in our internal containers.
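
   When debugging this kind of mismatch, a quick sanity check is to compare the installed protobuf against the version the image builds from source (3.5.1.1 in the hunk above). The helper below is hypothetical, not part of the PR; it assumes GNU `sort -V` is available, and you would feed it the output of `protoc --version` on the target container:

   ```shell
   # Hypothetical helper: succeeds (exit 0) if version $1 >= version $2.
   # Uses GNU sort's version ordering (-V) to compare dotted versions.
   version_ge() {
       [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
   }

   # Example: assert the protobuf built in the image is at least 3.5.1.
   version_ge "3.5.1.1" "3.5.1" && echo "protobuf version OK"
   ```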

##########
File path: ci/docker/docker-compose.yml
##########
@@ -108,6 +108,16 @@ services:
         BASE_IMAGE: nvidia/cuda:10.1-cudnn7-devel-ubuntu18.04
       cache_from:
         - ${DOCKER_CACHE_REGISTRY}/build.ubuntu_gpu_cu101:latest
+  ubuntu_gpu_cu102:

Review comment:
       OK, if you guys at Amazon agree, I guess it is fine for us! (I actually pinged you on Slack, @leezu.)

##########
File path: ci/docker/Dockerfile.build.ubuntu
##########
@@ -137,17 +137,27 @@ RUN cd /usr/local && \
 # https://docs.docker.com/engine/reference/builder/#understand-how-arg-and-from-interact
 ARG BASE_IMAGE
 RUN export SHORT_CUDA_VERSION=${CUDA_VERSION%.*} && \
+    wget -O nvidia-ml.deb https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64/nvidia-machine-learning-repo-ubuntu1804_1.0.0-1_amd64.deb && \
+    dpkg -i nvidia-ml.deb && \

Review comment:
       This method guarantees that we get the right nvinfer version, no matter what happens to the container in the future.
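
   To make that guarantee explicit, the step after registering NVIDIA's machine-learning repo would typically pin the TensorRT packages to an exact version. A sketch of that pattern follows; the package names and the version string are illustrative assumptions, not taken from the PR:

   ```dockerfile
   # Illustrative only: pin libnvinfer so a future rebuild cannot
   # silently pull in a newer TensorRT from the repo added above.
   ARG TRT_VERSION=7.0.0-1+cuda10.2
   RUN apt-get update && \
       apt-get install -y --allow-downgrades \
           libnvinfer7=${TRT_VERSION} \
           libnvinfer-dev=${TRT_VERSION} && \
       apt-mark hold libnvinfer7 libnvinfer-dev
   ```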




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org