Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2019/05/06 04:54:44 UTC

[GitHub] [incubator-mxnet] stu1130 opened a new pull request #14887: [Dependency Update] CUDA10.1 Support

URL: https://github.com/apache/incubator-mxnet/pull/14887
 
 
   ## Description ##
   Upgrade to CUDA 10.1 with the latest cuDNN **7.5.1** and NCCL **2.4.2**.
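   As a quick sanity check before benchmarking, the rebuilt package can be verified to have GPU support compiled in. A minimal sketch, assuming the `mxnet.runtime` feature API available in recent MXNet builds:
   
   ```python
   # Hedged sketch: confirm the build links against CUDA/cuDNN/NCCL before benchmarking.
   # Assumes the mxnet.runtime feature API (present in recent MXNet versions).
   import mxnet as mx
   from mxnet.runtime import Features
   
   features = Features()
   for name in ("CUDA", "CUDNN", "NCCL"):
       print(name, "enabled:", features.is_enabled(name))
   
   # Quick smoke test on one GPU: allocate a tensor and force computation.
   x = mx.nd.ones((2, 2), ctx=mx.gpu(0))
   print((x + 1).asnumpy())
   ```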
   
   ## Checklist ##
   Ran three models: ResNet50 on ImageNet, LSTM on PTB, and MLP on MNIST.
   Performance is shown below.
   Environment: P3.16xlarge Deep Learning Base AMI
   Codebase: commit 1540a84f1eca937235c51b507ea716c614f40805
   I also applied the #14837 PR change.
   The unit of throughput is **samples per second**.
   Each throughput number is the average of 5 runs.
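   For reference, a minimal sketch of how the numbers below are derived (per-run throughput averaged over 5 runs, plus the relative difference between the two builds); the per-run values are placeholders, not the measured data:
   
   ```python
   # Hedged sketch: average samples/second over 5 runs and compute the relative
   # difference between two builds. The run values below are placeholders.
   def mean_throughput(samples_per_second_runs):
       """Average throughput (samples per second) over repeated runs."""
       return sum(samples_per_second_runs) / len(samples_per_second_runs)
   
   def percent_difference(new, old):
       """Relative throughput difference of the new build vs. the old build, in percent."""
       return (new - old) / old * 100.0
   
   runs = [2850.1, 2852.4, 2855.0, 2853.9, 2858.0]  # hypothetical per-run numbers
   print(round(mean_throughput(runs), 5))
   print(round(percent_difference(2853.89469, 2831.54405), 3))  # ~0.789, matching the ResNet table
   ```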
   ### ResNet ###
   **model**: ResNet50
   **dataset**: ImageNet
   **number of gpu**: 8
   **epochs**: 3 (only to test throughput)
   **preprocess command**: sudo pip install gluoncv==0.2.0b20180625
   **command**: python mxnet_benchmark/train_imagenet.py --use-rec --batch-size 128 --dtype float32 --num-data-workers 40 --num-epochs 3 --gpus 0,1,2,3,4,5,6,7 --lr 0.05 --last-gamma --mode symbolic --model resnet50_v1b --rec-train /home/ubuntu/data/train-passthrough.rec --rec-train-idx /home/ubuntu/data/train-passthrough.idx --rec-val /home/ubuntu/data/val-passthrough.rec --rec-val-idx /home/ubuntu/data/val-passthrough.idx
   **github repo**: https://github.com/rahul003/deep-learning-benchmark-mirror.git
   
   |    |  CUDA 10.1 cuDNN 7.5.1/NCCL 2.4.2     | CUDA 10 cuDNN 7.3.1/NCCL 2.3.4 | Performance Difference |
   |:----------|:------------------------:|:--------------------:|:---------------------:|
   | Throughput | 2853.89469 | 2831.54405 | 0.789%  |
   
   
   ### LSTM ###
   **model**: LSTM
   **dataset**: PTB(Penn Treebank)
   **number of gpu**: 1
   **epochs**: 10
   **command**:
   python2 benchmark_driver.py --framework mxnet --task-name mkl_lstm_ptb_symbolic --num-gpus 1 --epochs 10 --metrics-suffix test --kvstore local
   python word_language_model/lstm_bucketing.py --num-hidden 650 --num-embed 650 --gpus 0 --epochs 10 --kv-store local
   
   |    |  CUDA 10.1 cuDNN 7.5.1/NCCL 2.4.2     | CUDA 10 cuDNN 7.3.1/NCCL 2.3.4 | Performance Difference |
   |:----------|:------------------------:|:--------------------:|:---------------------:|
   | Throughput | 1027.52625 | 847.98222 (performance regression) | 21.173%  |
   
   **CUDA 10 has a performance regression issue; please see #14725 for more details.**
   
   ### MLP ###
   **model**: 3 dense layers with num_hidden=64 and ReLU activation (see the sketch after the table below)
   **dataset**: MNIST
   **number of gpu**: 1
   **epochs**: 10
   **command**:
   python2 benchmark_runner.py --framework mxnet --metrics-policy mlp --task-name mlp --metrics-suffix test --num-gpus 1 --command-to-execute 'python3 mlp.py' --data-set mnist
   
   |    |  CUDA 10.1 cuDNN 7.5.1/NCCL 2.4.2     | CUDA 10 cuDNN 7.3.1/NCCL 2.3.4 | Performance Difference |
   |:----------|:------------------------:|:--------------------:|:---------------------:|
   | Throughput | 4386.67838 | 4192.20685 | 4.639%  |
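   
   For clarity, a minimal sketch of the MLP described above, assuming the Gluon API; the 10-unit output layer and the dummy input shape are assumptions for MNIST, not taken from the benchmark script:
   
   ```python
   # Hedged sketch of the benchmarked MLP: 3 dense layers with num_hidden=64 and ReLU.
   # The 10-unit output layer and the flattened 28x28 input are assumptions for MNIST.
   import mxnet as mx
   from mxnet.gluon import nn
   
   net = nn.HybridSequential()
   net.add(nn.Dense(64, activation='relu'),
           nn.Dense(64, activation='relu'),
           nn.Dense(64, activation='relu'),
           nn.Dense(10))  # assumed output layer for the 10 MNIST classes
   net.initialize(ctx=mx.gpu(0))
   net.hybridize()
   
   # Shape check with a dummy batch of 128 flattened 28x28 images.
   x = mx.nd.random.uniform(shape=(128, 784), ctx=mx.gpu(0))
   print(net(x).shape)  # expected: (128, 10)
   ```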
   
   ## Comments ##
   @szha @lanking520 @eric-haibin-lin 
