Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2019/09/01 08:16:34 UTC

[GitHub] [incubator-mxnet] yifeim opened a new issue #16060: [bug] mxnet.ndarray.sparse.norm fallback regression in 1.5.0 and master

URL: https://github.com/apache/incubator-mxnet/issues/16060
 
 
   ## Description
   mxnet.ndarray.sparse.norm triggers a storage-type fallback on CSRNDArray in 1.5.0 and master. Moreover, the fact that this regression passed the unit tests suggests a deeper issue: sparse storage fallbacks happen silently in the background instead of being surfaced to the caller, which makes it difficult to identify the root cause.
   
   ## Environment info (Required)
   
   ```
   ----------Python Info----------
   ('Version      :', '2.7.15')
   ('Compiler     :', 'GCC 7.3.0')
   ('Build        :', ('default', 'Feb 28 2019 04:00:11'))
   ('Arch         :', ('64bit', ''))
   ------------Pip Info-----------
   ('Version      :', '10.0.1')
   ('Directory    :', '/home/ec2-user/anaconda3/envs/mxnet_p27/lib/python2.7/site-packages/pip')
   ----------MXNet Info-----------
   ('Version      :', '1.6.0')
   ('Directory    :', '/home/ec2-user/anaconda3/envs/mxnet_p27/lib/python2.7/site-packages/mxnet')
   ('Commit Hash   :', '3f7b6ee57865b79634c82a8f58e3551fc95e4dda')
   ('Library      :', ['/home/ec2-user/anaconda3/envs/mxnet_p27/lib/python2.7/site-packages/mxnet/libmxnet.so'])
   Build features:
   ✔ CUDA
   ✔ CUDNN
   ✔ NCCL
   ✖ CUDA_RTC
   ✖ TENSORRT
   ✔ CPU_SSE
   ✔ CPU_SSE2
   ✔ CPU_SSE3
   ✔ CPU_SSE4_1
   ✔ CPU_SSE4_2
   ✖ CPU_SSE4A
   ✔ CPU_AVX
   ✖ CPU_AVX2
   ✔ OPENMP
   ✖ SSE
   ✔ F16C
   ✖ JEMALLOC
   ✔ BLAS_OPEN
   ✖ BLAS_ATLAS
   ✖ BLAS_MKL
   ✖ BLAS_APPLE
   ✔ LAPACK
   ✔ MKLDNN
   ✔ OPENCV
   ✖ CAFFE
   ✖ PROFILER
   ✔ DIST_KVSTORE
   ✖ CXX14
   ✖ INT64_TENSOR_SIZE
   ✔ SIGNAL_HANDLER
   ✖ DEBUG
   ✖ TVM_OP
   ----------System Info----------
   ('Platform     :', 'Linux-4.14.133-88.112.amzn1.x86_64-x86_64-with-glibc2.2.5')
   ('system       :', 'Linux')
   ('node         :', 'ip-172-16-12-219')
   ('release      :', '4.14.133-88.112.amzn1.x86_64')
   ('version      :', '#1 SMP Tue Jul 30 21:21:30 UTC 2019')
   ----------Hardware Info----------
   ('machine      :', 'x86_64')
   ('processor    :', 'x86_64')
   Architecture:          x86_64
   CPU op-mode(s):        32-bit, 64-bit
   Byte Order:            Little Endian
   CPU(s):                32
   On-line CPU(s) list:   0-31
   Thread(s) per core:    2
   Core(s) per socket:    16
   Socket(s):             1
   NUMA node(s):          1
   Vendor ID:             GenuineIntel
   CPU family:            6
   Model:                 79
   Model name:            Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz
   Stepping:              1
   CPU MHz:               2709.117
   BogoMIPS:              4600.14
   Hypervisor vendor:     Xen
   Virtualization type:   full
   L1d cache:             32K
   L1i cache:             32K
   L2 cache:              256K
   L3 cache:              46080K
   NUMA node0 CPU(s):     0-31
   ----------Network Test----------
   Setting timeout: 10
   Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0016 sec, LOAD: 0.5764 sec.
   Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0019 sec, LOAD: 0.3843 sec.
   Timing for FashionMNIST: https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, DNS: 0.0115 sec, LOAD: 0.1455 sec.
   Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0112 sec, LOAD: 0.1902 sec.
   Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.1908 sec, LOAD: 0.0881 sec.
   Timing for Gluon Tutorial(cn): https://zh.gluon.ai, DNS: 0.1898 sec, LOAD: 0.0980 sec.
   ----------Environment----------
   ```
   
   Package used (Python/R/Scala/Julia): Python 2.7 and Python 3.6
   
   ## Error Message:
   ```
   [08:08:56] src/operator/contrib/../tensor/./../../common/utils.h:463:
   Storage type fallback detected:
   operator = norm
   input storage types = [csr, ]
   output storage types = [default, ]
   params = {}
   context.dev_mask = gpu
   The operator with default storage type will be dispatched for execution. You're seeing this warning message because the operator above is unable to process the given ndarrays with specified storage types, context and parameter. Temporary dense ndarrays are generated in order to execute the operator. This does not affect the correctness of the programme. You can set environment variable MXNET_STORAGE_FALLBACK_LOG_VERBOSE to 0 to suppress this warning.
   Out[3]:
   
   [0.]
   <NDArray 1 @gpu(0)>
   ```
   
   ## Minimum reproducible example
   ```
   import mxnet as mx
   data = mx.nd.sparse.csr_matrix((3,4), ctx=mx.gpu())
   data.norm()
   ```
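   As a workaround sketch (not a fix for the fallback itself): the L2/Frobenius norm of a CSR matrix depends only on its stored nonzero values, which a `CSRNDArray` exposes via its `data` attribute. A minimal NumPy illustration of the identity (NumPy is used here only to keep the sketch self-contained):
   ```
   import numpy as np

   def csr_frobenius_norm(values):
       # The Frobenius (L2) norm of a sparse matrix equals the L2 norm of
       # its stored nonzero values, so no dense fallback is needed.
       return float(np.sqrt(np.sum(np.square(values))))

   print(csr_frobenius_norm(np.array([3.0, 4.0])))  # 5.0
   ```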
   
   ## Steps to reproduce
   
   1. The function works in mxnet-cu100mkl==1.4.1 (no fallback warning is generated).
   2. The function triggers the fallback in mxnet-cu100mkl==1.5.0 and in the nightly build.
   
   ## What have you tried to solve it?
   Downgrading to mxnet-cu100mkl==1.4.1, where the norm runs on the sparse input without a fallback.
   
