Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2019/03/28 08:52:30 UTC

[GitHub] [incubator-mxnet] triplekings edited a comment on issue #14492: use mkl sparse matrix to improve performance

URL: https://github.com/apache/incubator-mxnet/pull/14492#issuecomment-477504457
 
 
   > Great! Thanks for your contribution!
   > Is there any test case for the sparse matrix?
   > Could you please provide the comparison on performance?
   
   Thanks for your comments.
   
   Test cases:
       tests/python/unittest/test_sparse_ndarray.py
       tests/python/unittest/test_sparse_operator.py
   
   Using MKL, the sparse dot gets about a 1.3X speedup (dot time drops from 724.61 ms to 554.544 ms over the run).
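   For context, the kernel being accelerated here is a CSR sparse-times-dense multiply. A minimal pure-Python sketch of what such a sparse dot computes (illustration only; this is not the actual MXNet/MKL code path, and `csr_dot` is a hypothetical helper):

   ```python
   def csr_dot(data, indices, indptr, x):
       """Multiply a CSR matrix (data, indices, indptr) by a dense vector x."""
       y = []
       for row in range(len(indptr) - 1):
           acc = 0.0
           # Only the stored (non-zero) entries of this row are touched,
           # which is where the win over a dense dot comes from.
           for k in range(indptr[row], indptr[row + 1]):
               acc += data[k] * x[indices[k]]
           y.append(acc)
       return y

   # Example: [[1, 0, 2], [0, 3, 0]] dot [1, 1, 1] -> [3.0, 3.0]
   print(csr_dot([1.0, 2.0, 3.0], [0, 2, 1], [0, 2, 3], [1.0, 1.0, 1.0]))
   ```

   MKL's sparse BLAS implements the same operation with vectorized, multithreaded kernels, which is what the dot-row improvement in the profiles below reflects.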
   
   With mkl sparse
   INFO:logger:Run [7812] Batchs              Speed: 941008.89 samples/sec
   (mxnet_venv) [root@VNNI-SDP86 scripts_zhenlin]# python myProfileParser.py 
   Time of each OP:
   _contrib_quantized_fully_connected,  4162.701 ms,  0.177620 ms/call,  23436 calls,  54.45 %
   Concat                            ,   870.905 ms,  0.111483 ms/call,   7812 calls,  11.39 %
   dot                               ,   554.544 ms,  0.070986 ms/call,   7812 calls,   7.25 %
   elemwise_add                      ,   394.928 ms,  0.050554 ms/call,   7812 calls,   5.17 %
   _contrib_quantize                 ,   321.782 ms,  0.041191 ms/call,   7812 calls,   4.21 %
   SoftmaxOutput                     ,   309.333 ms,  0.039597 ms/call,   7812 calls,   4.05 %
   ParallelEmbedding                 ,   266.090 ms,  0.034062 ms/call,   7812 calls,   3.48 %
   CopyCPU2CPU                       ,   244.173 ms,  0.010419 ms/call,  23436 calls,   3.19 %
   broadcast_add                     ,   177.620 ms,  0.022737 ms/call,   7812 calls,   2.32 %
   SliceChannel                      ,   151.811 ms,  0.019433 ms/call,   7812 calls,   1.99 %
   slice                             ,   124.944 ms,  0.007997 ms/call,  15624 calls,   1.63 %
   DeleteVariable                    ,    66.723 ms,  0.002135 ms/call,  31248 calls,   0.87 %
   
   
   With mxnet sparse
   INFO:logger:Run [7812] Batchs              Speed: 915862.99 samples/sec
   (mxnet_venv) [root@VNNI-SDP86 scripts_zhenlin]# python myProfileParser.py 
   Time of each OP:
   _contrib_quantized_fully_connected,  4235.110 ms,  0.180710 ms/call,  23436 calls,  53.72 %
   Concat                            ,   879.878 ms,  0.112632 ms/call,   7812 calls,  11.16 %
   dot                               ,   724.610 ms,  0.092756 ms/call,   7812 calls,   9.19 %
   elemwise_add                      ,   404.946 ms,  0.051836 ms/call,   7812 calls,   5.14 %
   _contrib_quantize                 ,   332.088 ms,  0.042510 ms/call,   7812 calls,   4.21 %
   SoftmaxOutput                     ,   280.663 ms,  0.035927 ms/call,   7812 calls,   3.56 %
   ParallelEmbedding                 ,   271.335 ms,  0.034733 ms/call,   7812 calls,   3.44 %
   CopyCPU2CPU                       ,   244.757 ms,  0.010444 ms/call,  23436 calls,   3.10 %
   broadcast_add                     ,   181.360 ms,  0.023216 ms/call,   7812 calls,   2.30 %
   SliceChannel                      ,   153.897 ms,  0.019700 ms/call,   7812 calls,   1.95 %
   slice                             ,   124.163 ms,  0.007947 ms/call,  15624 calls,   1.57 %
   DeleteVariable                    ,    50.809 ms,  0.002168 ms/call,  23436 calls,   0.64 %
   
   Total OP Time: 7883.61600000 ms
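   The per-call and percentage columns can be cross-checked from the totals. A small sketch using the dot row from the mxnet-sparse run above (pure Python; this is not the actual myProfileParser.py code):

   ```python
   total_ms = 7883.616            # Total OP Time reported for the mxnet-sparse run
   dot_ms, dot_calls = 724.61, 7812

   per_call = dot_ms / dot_calls     # average ms per call of the dot op
   share = dot_ms / total_ms * 100   # dot's percentage of total OP time

   print(f"{per_call:.6f} ms/call, {share:.2f} %")
   ```

   This reproduces the 0.092756 ms/call and 9.19 % figures in the table; the same arithmetic holds for the other rows.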
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services