Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2019/02/26 18:32:37 UTC

[GitHub] satyakrishnagorti opened a new issue #14256: MXNet hangs when creating an NDArray on gpu in Scala

URL: https://github.com/apache/incubator-mxnet/issues/14256
 
 
   ## Description
   Every time the machine is restarted, MXNet hangs when I create an NDArray on a GPU context using the Scala bindings. The hang goes away as soon as I initialise an NDArray on the GPU using the Python bindings.
   
   Do the Python bindings do anything more (in terms of CUDA initialisation, etc.) than the Scala bindings when an NDArray is created on the GPU?
   
   This issue seems very strange and happens to quite a few people I have come across.
   
   We are using MXNet 1.3.1 installed from source.
   
   ## Environment info (Required)
   
   ```
   ----------Python Info----------
   Version      : 3.7.1
   Compiler     : GCC 7.3.0
   Build        : ('default', 'Dec 14 2018 19:28:38')
   Arch         : ('64bit', '')
   ------------Pip Info-----------
   Version      : 18.1
   Directory    : /home/satya/anaconda3/lib/python3.7/site-packages/pip
   ----------MXNet Info-----------
   Version      : 1.3.1
   Directory    : /home/satya/Documents/workspace/mxnet_1.3.x/python/mxnet
   Hashtag not found. Not installed from pre-built package.
   ----------System Info----------
   Platform     : Linux-4.4.0-142-generic-x86_64-with-debian-stretch-sid
   system       : Linux
   node         : DS5
   release      : 4.4.0-142-generic
   version      : #168-Ubuntu SMP Wed Jan 16 21:00:45 UTC 2019
   ----------Hardware Info----------
   machine      : x86_64
   processor    : x86_64
   Architecture:          x86_64
   CPU op-mode(s):        32-bit, 64-bit
   Byte Order:            Little Endian
   CPU(s):                40
   On-line CPU(s) list:   0-39
   Thread(s) per core:    2
   Core(s) per socket:    10
   Socket(s):             2
   NUMA node(s):          2
   Vendor ID:             GenuineIntel
   CPU family:            6
   Model:                 85
   Model name:            Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz
   Stepping:              4
   CPU MHz:               1201.406
   CPU max MHz:           3000.0000
   CPU min MHz:           800.0000
   BogoMIPS:              4391.30
   Virtualization:        VT-x
   L1d cache:             32K
   L1i cache:             32K
   L2 cache:              1024K
   L3 cache:              14080K
   NUMA node0 CPU(s):     0-9,20-29
   NUMA node1 CPU(s):     10-19,30-39
   Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb invpcid_single intel_pt ssbd ibrs ibpb stibp kaiser tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx avx512f rdseed adx smap clflushopt clwb avx512cd xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku flush_l1d
   ----------Network Test----------
   Setting timeout: 10
   Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0026 sec, LOAD: 0.5413 sec.
   Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.0016 sec, LOAD: 0.3943 sec.
   Timing for Gluon Tutorial(cn): https://zh.gluon.ai, DNS: 0.0019 sec, LOAD: 0.3664 sec.
   Timing for FashionMNIST: https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, DNS: 0.0014 sec, LOAD: 0.5715 sec.
   Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0048 sec, LOAD: 0.1934 sec.
   Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0013 sec, LOAD: 0.0341 sec.
   ```
   
   Package used (Python/R/Scala/Julia): Scala and Python
   
   For Scala user, please provide:
   1. Java version: `1.8.0_201`
   2. Maven version: `3.6.0`
   3. Scala runtime if applicable: `2.11.6`
   
   ## Build info (Required if built from source)
   
   Compiler (gcc/clang/mingw/visual studio):
   
   MXNet commit hash: `96b4b6ef3c60c63644a7c4d672109b97561b839d`
   
   Build MXNet from source command:
   `make -j32 USE_BLAS=mkl USE_CUDA=1 USE_CUDA_PATH=/usr/local/cuda USE_CUDNN=1 USE_OPENCV=1`
   
   Build Scala bindings command:
   `make USE_BLAS=mkl USE_CUDA=1 USE_CUDA_PATH=/usr/local/cuda USE_CUDNN=1 USE_OPENCV=1 scalainstall`
   
   ## Minimum reproducible example
   ```scala
   import org.apache.mxnet.{Context, NDArray, Shape}

   val ctx = Context.gpu(0)
   val arr = NDArray.zeros(ctx = ctx, shape = Shape(2, 2))
   // hangs ...
   ```
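   
   To confirm it is the native allocation call that blocks (and to keep the JVM responsive while it does), here is a minimal diagnostic sketch, assuming the 1.3.x `org.apache.mxnet` package and an illustrative 60-second timeout; the object name is made up:
   ```scala
   import scala.concurrent.{Await, Future}
   import scala.concurrent.ExecutionContext.Implicits.global
   import scala.concurrent.duration._
   import scala.util.{Failure, Success, Try}

   import org.apache.mxnet.{Context, NDArray, Shape}

   object GpuInitCheck {
     def main(args: Array[String]): Unit = {
       val ctx = Context.gpu(0)
       // Run the allocation off the main thread so a hang can be reported
       // instead of blocking the whole program.
       val alloc = Future { NDArray.zeros(ctx = ctx, shape = Shape(2, 2)) }
       Try(Await.result(alloc, 60.seconds)) match {
         case Success(arr) => println(s"GPU allocation succeeded, shape = ${arr.shape}")
         case Failure(e)   => println(s"GPU allocation did not finish within 60s: $e")
       }
     }
   }
   ```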
   
   If I then try something similar in Python:
   ```python
   import mxnet as mx
   arr = mx.ndarray.array([1., 2., 3.], ctx = mx.gpu(0))
   ```
   The above Python code works fine.
   
   When I go back and run the same code in Scala, it now works. This seems very strange; any help is appreciated.
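   
   Since initialising one NDArray from Python unblocks the Scala side, a hedged workaround sketch that shells out to Python once at application startup (it assumes a `python` on the PATH with a GPU-enabled mxnet installed; the object and method names are illustrative):
   ```scala
   import scala.sys.process._

   object PythonGpuWarmup {
     /** Allocate one GPU NDArray through the Python bindings, which (per the
       * behaviour described above) unblocks subsequent Scala GPU allocations. */
     def warmUp(): Int = {
       val snippet = "import mxnet as mx; mx.nd.array([1.0], ctx=mx.gpu(0)).wait_to_read()"
       Seq("python", "-c", snippet).!  // exit code of the python process
     }

     def main(args: Array[String]): Unit = {
       println(s"python warm-up exit code: ${warmUp()}")
     }
   }
   ```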
   
