Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2017/12/28 01:43:42 UTC

[GitHub] safrooze opened a new issue #9214: CTC example trains very slowly (~250 samples/sec)

URL: https://github.com/apache/incubator-mxnet/issues/9214
 
 
   ## Description
   Running the lstm_ocr.py script in example/ctc doesn't train anywhere near the speed shown in the README logs. I get around 250 samples per second, while the example shows ~4200 samples per second.
   
   ## Environment info (Required)
   ```
   ----------Python Info----------
   Version      : 3.6.3
   Compiler     : GCC 7.2.0
   Build        : ('default', 'Nov 20 2017 20:41:42')
   Arch         : ('64bit', '')
   ------------Pip Info-----------
   Version      : 9.0.1
   Directory    : /home/ubuntu/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages/pip
   ----------MXNet Info-----------
   Version      : 1.0.0
   Directory    : /home/ubuntu/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages/mxnet
   Commit Hash   : 2b67436802b750e15b9fbfdf275958c1000be6a8
   ----------System Info----------
   Platform     : Linux-4.4.0-1044-aws-x86_64-with-debian-stretch-sid
   system       : Linux
   node         : ip-172-31-32-80
   release      : 4.4.0-1044-aws
   version      : #53-Ubuntu SMP Mon Dec 11 13:49:57 UTC 2017
   ----------Hardware Info----------
   machine      : x86_64
   processor    : x86_64
   Architecture:          x86_64
   CPU op-mode(s):        32-bit, 64-bit
   Byte Order:            Little Endian
   CPU(s):                32
   On-line CPU(s) list:   0-31
   Thread(s) per core:    2
   Core(s) per socket:    16
   Socket(s):             1
   NUMA node(s):          1
   Vendor ID:             GenuineIntel
   CPU family:            6
   Model:                 79
   Model name:            Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz
   Stepping:              1
   CPU MHz:               2692.707
   CPU max MHz:           3000.0000
   CPU min MHz:           1200.0000
   BogoMIPS:              4600.08
   Hypervisor vendor:     Xen
   Virtualization type:   full
   L1d cache:             32K
   L1i cache:             32K
   L2 cache:              256K
   L3 cache:              46080K
   NUMA node0 CPU(s):     0-31
   Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch fsgsbase bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx xsaveopt
   ----------Network Test----------
   Setting timeout: 10
   Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0023 sec, LOAD: 0.4079 sec.
   Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.1372 sec, LOAD: 0.1955 sec.
   Timing for Gluon Tutorial(cn): https://zh.gluon.ai, DNS: 0.1189 sec, LOAD: 0.3464 sec.
   Timing for FashionMNIST: https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, DNS: 0.0170 sec, LOAD: 0.3851 sec.
   Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0037 sec, LOAD: 0.2626 sec.
   Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0098 sec, LOAD: 0.0693 sec.
   ```
   
   Package used: Python 3.6
   
   ## Steps to reproduce
   ```
   python lstm_ocr.py
   ```
   
   ## What have you tried to solve it?
   The bottleneck is the captcha image generation. I can reach ~4200 samples per second on a single K80 GPU by feeding the same image repeatedly. By modifying the script to generate images with the multiprocessing library, using 16 processes on a p2.8xlarge EC2 instance (16 physical cores), I can generate ~3000 images per second. A minimal sketch of that multiprocessing approach follows the question below.
   - What configuration was used for the training session shown in the README?
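   
   The sketch below illustrates the kind of parallel captcha generation I tried, assuming the `captcha` package (`captcha.image.ImageCaptcha`) that the example uses. Names such as `gen_one`, `gen_batch`, and `NUM_WORKERS` are mine, not from lstm_ocr.py.
   
   ```python
   # Hedged sketch: generate captcha images in parallel worker processes.
   import random
   import numpy as np
   from multiprocessing import Pool
   from captcha.image import ImageCaptcha
   
   NUM_WORKERS = 16   # roughly one process per physical core on p2.8xlarge
   BATCH_SIZE = 128
   
   # Each forked worker inherits its own copy of this generator.
   # The example script passes a list of ttf fonts; default fonts used here.
   _captcha = ImageCaptcha()
   
   def gen_one(_):
       """Generate one (image, label) pair of 4 random digits."""
       digits = [random.randint(0, 9) for _ in range(4)]
       img = _captcha.generate_image(''.join(str(d) for d in digits))
       img = np.asarray(img.convert('L'), dtype=np.float32) / 255.0
       return img, digits
   
   def gen_batch(pool, batch_size=BATCH_SIZE):
       """Produce one batch of captcha images across the worker pool."""
       return pool.map(gen_one, range(batch_size))
   
   if __name__ == '__main__':
       with Pool(NUM_WORKERS) as pool:
           batch = gen_batch(pool)
           print(len(batch), batch[0][0].shape)
   ```
   
   Even with this change, generation tops out around ~3000 images per second on the CPU, which is still below the ~4200 samples per second the README reports for training.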
   
