Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2019/02/01 20:40:50 UTC

[GitHub] yifeim opened a new issue #14048: `take` does not support `grad_stype='row_sparse'`

URL: https://github.com/apache/incubator-mxnet/issues/14048
 
 
   ## Description
   The `take` operator currently does not support `grad_stype='row_sparse'`, so its backward pass falls back to dense storage and the performance benefit of sparse gradients is lost. This issue describes the problem and a workaround to use until sparse gradient support is implemented.
   
   ## Environment info (Required)
   
   ```
   ----------Python Info----------
   Version      : 3.6.5
   Compiler     : GCC 7.2.0
   Build        : ('default', 'Apr 29 2018 16:14:56')
   Arch         : ('64bit', '')
   ------------Pip Info-----------
   Version      : 10.0.1
   Directory    : /home/ec2-user/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages/pip
   ----------MXNet Info-----------
   Version      : 1.3.1
   Directory    : /home/ec2-user/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages/mxnet
   Commit Hash   : 19c501680183237d52a862e6ae1dc4ddc296305b
   ----------System Info----------
   Platform     : Linux-4.14.77-70.82.amzn1.x86_64-x86_64-with-glibc2.9
   system       : Linux
   node         : ip-172-16-72-155
   release      : 4.14.77-70.82.amzn1.x86_64
   version      : #1 SMP Mon Dec 3 20:01:27 UTC 2018
   ----------Hardware Info----------
   machine      : x86_64
   processor    : x86_64
   Architecture:          x86_64
   CPU op-mode(s):        32-bit, 64-bit
   Byte Order:            Little Endian
   CPU(s):                8
   On-line CPU(s) list:   0-7
   Thread(s) per core:    2
   Core(s) per socket:    4
   Socket(s):             1
   NUMA node(s):          1
   Vendor ID:             GenuineIntel
   CPU family:            6
   Model:                 79
   Model name:            Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz
   Stepping:              1
   CPU MHz:               2700.116
   BogoMIPS:              4600.14
   Hypervisor vendor:     Xen
   Virtualization type:   full
   L1d cache:             32K
   L1i cache:             32K
   L2 cache:              256K
   L3 cache:              46080K
   NUMA node0 CPU(s):     0-7
   ----------Network Test----------
   Setting timeout: 10
   Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0022 sec, LOAD: 0.6019 sec.
   Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.1398 sec, LOAD: 0.1484 sec.
   Timing for Gluon Tutorial(cn): https://zh.gluon.ai, DNS: 0.4542 sec, LOAD: 0.2610 sec.
   Timing for FashionMNIST: https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, DNS: 0.0148 sec, LOAD: 0.0896 sec.
   Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0104 sec, LOAD: 0.4013 sec.
   Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0102 sec, LOAD: 0.0597 sec.
   ```
   
   Package used (Python/R/Scala/Julia): Python
   
   ## Error Message:
   ```
   Storage type fallback detected:
   operator = _backward_take
   input storage types = [default, default, ]
   output storage types = [row_sparse, default, ]
   params = {}
   context.dev_mask = gpu
   The operator with default storage type will be dispatched for execution. You're seeing this warning message because the operator above is unable to process the given ndarrays with specified storage types, context and parameter. Temporary dense ndarrays are generated in order to execute the operator. This does not affect the correctness of the programme. You can set environment variable MXNET_STORAGE_FALLBACK_LOG_VERBOSE to 0 to suppress this warning.
   ```
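    
    If the dense fallback is acceptable and only the log noise is a concern, the warning can be silenced with the environment variable named in the message itself. A minimal sketch (setting the variable before importing MXNet, which should be the safest point to do so):
    ```
    import os
    
    # Suppress the storage type fallback warning, as suggested in the
    # message above. Set this before importing mxnet so it takes effect.
    os.environ['MXNET_STORAGE_FALLBACK_LOG_VERBOSE'] = '0'
    
    import mxnet as mx
    ```
    Note that this only quiets the log; the temporary dense gradient and its cost remain.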
   
   ## Minimum reproducible example
    This is the original approach, which triggers the warning:
   ```
    import mxnet as mx
    from mxnet import gluon
    
    class Model(gluon.HybridBlock):
        def __init__(self, **kwargs):
            super().__init__(**kwargs)
            with self.name_scope():
                # Request a row_sparse gradient for the lookup table.
                self.weight = self.params.get('weight', shape=(100, 2), grad_stype='row_sparse')
    
        def hybrid_forward(self, F, data, weight):
            # `take` gathers rows of `weight`; its backward currently
            # only produces a dense gradient, hence the fallback.
            return weight.take(data)
    
    model = Model()
    
    model.collect_params().initialize()
    trainer = gluon.Trainer(model.collect_params(), 'sgd')
    
    for _ in range(10):
        data = mx.nd.zeros((20, 2))  # indices into the rows of `weight`
        with mx.autograd.record():
            L = model(data)
        L.backward()  # triggers the storage type fallback warning
        trainer.step(1)
   ```
    This workaround, based on `gluon.nn.Embedding`, preserves `grad_stype='row_sparse'`:
   ```
    import mxnet as mx
    from mxnet import gluon
    
    # gluon.nn.Embedding performs the same row gather as `take`
    # and natively supports row_sparse gradients.
    model = gluon.nn.Embedding(100, 2, sparse_grad=True)
    
    model.collect_params().initialize()
    trainer = gluon.Trainer(model.collect_params(), 'sgd')
    
    for _ in range(10):
        data = mx.nd.zeros((20, 2))  # indices into the embedding table
        with mx.autograd.record():
            L = model(data)
        L.backward()  # no fallback: the gradient stays row_sparse
        trainer.step(1)
   ```
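    
    To confirm the workaround takes effect, the storage type of the parameter's gradient can be inspected after a backward pass. A small sketch, assuming the training loop above has run:
    ```
    # The Embedding parameter's gradient should keep its sparse storage.
    print(model.weight.grad().stype)  # expected: 'row_sparse'
    ```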
   
   ## What have you tried to solve it?
   
    Use the `gluon.nn.Embedding` workaround above whenever `row_sparse` gradients are desired.
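    
    For code that cannot switch to the Gluon block, the same gather can be expressed at the NDArray level with the `Embedding` operator, which also accepts `sparse_grad`. A sketch of this alternative (not from the original report; the shapes mirror the examples above):
    ```
    import mxnet as mx
    
    weight = mx.nd.random.uniform(shape=(100, 2))
    # Ask autograd for a row_sparse gradient on `weight`.
    weight.attach_grad(stype='row_sparse')
    
    data = mx.nd.zeros((20,))  # row indices, like the input to `take`
    with mx.autograd.record():
        # Embedding gathers rows of `weight` the same way `take` does,
        # but supports computing a sparse gradient.
        out = mx.nd.Embedding(data, weight, input_dim=100, output_dim=2,
                              sparse_grad=True)
    out.backward()
    print(weight.grad.stype)  # 'row_sparse', no fallback warning
    ```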
