Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2017/12/29 18:43:04 UTC

[GitHub] dingran commented on issue #9211: mx.io.NDArrayIter with scipy.sparse, last batch becomes dense and with no warning upfront

dingran commented on issue #9211: mx.io.NDArrayIter with scipy.sparse, last batch becomes dense and with no warning upfront
URL: https://github.com/apache/incubator-mxnet/issues/9211#issuecomment-354484501
 
 
   Hi @eric-haibin-lin, by the way, I have a related observation regarding sparse. The sparse operators seem to be slower (less optimized?) than their dense counterparts, at least on GPU.
   
   I don't have a simple example readily available, but you can take a look here https://github.com/dingran/nvdm-mxnet/blob/master/nvdm.ipynb
   
   In cell [3], the flag `use_dense` controls whether sparse or dense matrices are used. Even counting the cost of converting each mini-batch from CSR to dense (a rough sketch of that per-batch conversion is below), training with dense matrices is still about 15-20% faster for this particular example.
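   For context, here is a minimal sketch of what that per-batch conversion looks like. This is not the notebook's exact code: the shapes, density, and batch size are made up, and only the `use_dense` flag name is taken from the comment above; `last_batch_handle='discard'` is used so every batch stays CSR (the padded last batch is what silently densifies otherwise, per this issue).
   
   ```python
   import mxnet as mx
   import numpy as np
   import scipy.sparse as sp
   
   # Hypothetical bag-of-words style input: 10k rows, 2k-word vocab, ~1% non-zeros.
   X = sp.random(10000, 2000, density=0.01, format='csr', dtype=np.float32)
   X_mx = mx.nd.sparse.array(X)  # CSRNDArray on CPU
   
   data_iter = mx.io.NDArrayIter(data=X_mx, batch_size=128,
                                 last_batch_handle='discard')
   
   use_dense = True  # mirrors the flag in the linked notebook
   for batch in data_iter:
       x = batch.data[0]                    # CSRNDArray
       if use_dense:
           x = x.tostype('default')         # extra dense copy, paid once per batch
       # ... feed x to the network ...
   ```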
   
   It would be good to have some benchmarks comparing CSR and dense operators on CPU and GPU as guidelines (do we have these somewhere?). Thanks!
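   In the meantime, a rough micro-benchmark sketch along those lines, assuming `mx.nd.dot` with a CSR lhs is supported on the chosen context; the shapes and density below are arbitrary, and the numbers will of course vary with hardware and sparsity.
   
   ```python
   import time
   import mxnet as mx
   import numpy as np
   import scipy.sparse as sp
   
   ctx = mx.gpu(0)  # switch to mx.cpu() for the CPU comparison
   m, k, n, density = 128, 20000, 512, 0.01
   
   # Random CSR input (built via scipy), plus a densified copy and a dense weight.
   csr = mx.nd.sparse.array(
       sp.random(m, k, density=density, format='csr', dtype=np.float32)
   ).as_in_context(ctx)
   dense = csr.tostype('default')
   weight = mx.nd.random.uniform(shape=(k, n), ctx=ctx)
   
   def bench(lhs, repeat=100):
       mx.nd.waitall()                      # ops are async; sync before timing
       start = time.time()
       for _ in range(repeat):
           out = mx.nd.dot(lhs, weight)
       mx.nd.waitall()                      # sync again so all work is counted
       return (time.time() - start) / repeat
   
   print('csr   dot: %.3f ms' % (1000 * bench(csr)))
   print('dense dot: %.3f ms' % (1000 * bench(dense)))
   ```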

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services