Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2018/01/25 08:52:28 UTC

[GitHub] TaoLv commented on issue #9545: Profiling discussion

URL: https://github.com/apache/incubator-mxnet/issues/9545#issuecomment-360400949
 
 
   Hi Chris, thank you for creating this discussion issue. Some questions from my side:
   
   1. Can I get some statistical information from the profiling results, like what my naive tool already does (see the sketch after this list):
   
       * by op: total time consumed by conv/fc/pooling ...
       * by layer: total time consumed by RNN/LSTM (maybe there is no notion of a layer in mxnet)
       * by iteration: total time/memory consumed by one iteration
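
   For context, the way I get these numbers today is to parse the Chrome-trace JSON the profiler dumps and sum durations per op name. A rough sketch (the file name profile.json and the event fields "name"/"ph"/"ts"/"dur" are assumptions about the dump format):

```python
# Sketch only: aggregate per-op wall time from a Chrome-trace JSON dump.
# Assumes each event has "name" and "ph", with "ts" in microseconds for
# "B"/"E" pairs and "dur" for complete "X" events.
import json
from collections import defaultdict

def aggregate_by_op(trace_file):
    with open(trace_file) as f:
        events = json.load(f)["traceEvents"]

    totals = defaultdict(float)   # op name -> total microseconds
    open_begin = {}               # (pid, tid, name) -> begin timestamp

    for ev in events:
        name = ev.get("name")
        ph = ev.get("ph")
        if name is None or ph is None:
            continue
        key = (ev.get("pid"), ev.get("tid"), name)
        if ph == "X":                          # complete event with duration
            totals[name] += ev.get("dur", 0)
        elif ph == "B":                        # begin of a begin/end pair
            open_begin[key] = ev["ts"]
        elif ph == "E" and key in open_begin:  # matching end event
            totals[name] += ev["ts"] - open_begin.pop(key)
    return totals

if __name__ == "__main__":
    for name, usec in sorted(aggregate_by_op("profile.json").items(),
                             key=lambda kv: kv[1], reverse=True):
        print("%-40s %12.3f ms" % (name, usec / 1000.0))
```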
   
   2. Which is the most time-consuming op instance (see the sketch after this list)?
   
       * input shape, parameters, output shape
       * in which iteration?
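
   From the trace alone I can only recover the duration and timestamp of the worst call, not its shapes, parameters, or iteration index, which is exactly what I hope the new profiler can attach. A sketch under the same trace-format assumptions as above:

```python
# Sketch only: find the single longest op call in a Chrome-trace JSON dump
# (same assumptions about "name"/"ph"/"ts" as the aggregation sketch above).
import json

def longest_op_call(trace_file):
    with open(trace_file) as f:
        events = json.load(f)["traceEvents"]

    worst = None      # (duration_us, op name, begin timestamp)
    open_begin = {}   # (pid, tid, name) -> begin timestamp
    for ev in events:
        key = (ev.get("pid"), ev.get("tid"), ev.get("name"))
        if ev.get("ph") == "B":
            open_begin[key] = ev["ts"]
        elif ev.get("ph") == "E" and key in open_begin:
            begin = open_begin.pop(key)
            dur = ev["ts"] - begin
            if worst is None or dur > worst[0]:
                worst = (dur, ev.get("name"), begin)
    return worst

print(longest_op_call("profile.json"))
```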
   
   3. How can I profile memory on CPU (see the sketch after this list)?
   
       * maybe not all memory is allocated through mxnet::storage.
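
   The cross-check I use today is the process resident set size, which also catches allocations that bypass the storage manager (MKL-DNN workspaces, for example). A Linux-only sketch:

```python
# Sketch only (Linux): read the resident set size of the current process as a
# rough cross-check for CPU memory, independent of what mxnet::storage tracks.
def current_rss_mb():
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1]) / 1024.0   # value is reported in kB
    return 0.0

before = current_rss_mb()
# ... run one training iteration here ...
after = current_rss_mb()
print("RSS changed by %+.1f MB during this iteration" % (after - before))
```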
   
   4. Operator tuning: I saw you contributed the operator tuning feature to mxnet. It seems to create many OMP threads before the graph is executed. How does this feature affect the performance of mxnet? If I have set CPU affinity in my environment, those OMP threads will be bound to the cores; then, when the computation graph is executed, many additional OMP threads will be created and bound again. Do you think that will hurt performance? (My affinity setup is sketched below.)
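
   For reference, this is roughly how I pin threads today; the core count and the KMP_AFFINITY string below are only examples for my machine, not recommendations:

```python
# Sketch of my affinity setup (values are examples, not recommendations).
# The environment must be set before the OpenMP runtime is initialized,
# i.e. before importing mxnet.
import os

os.environ["OMP_NUM_THREADS"] = "28"                          # physical cores on my box
os.environ["KMP_AFFINITY"] = "granularity=fine,compact,1,0"   # Intel OpenMP pinning
# With a non-Intel OpenMP runtime the standard knob would be:
# os.environ["OMP_PROC_BIND"] = "true"

import mxnet as mx
# If operator tuning spawns its own OMP team here, and graph execution later
# spawns more threads that get bound again, the two teams end up competing
# for the same pinned cores -- that is the interaction I am asking about.
```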
   
   Thanks.
