Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2018/08/18 10:26:37 UTC

[GitHub] pengzhao-intel edited a comment on issue #12239: Scale to many CPU cores

pengzhao-intel edited a comment on issue #12239: Scale to many CPU cores
URL: https://github.com/apache/incubator-mxnet/issues/12239#issuecomment-414047581
 
 
   One practical approach is to launch multiple instances (of MXNet itself), each using several cores, say 4, 8 or 16, for your inference. This way you can greatly increase the overall throughput. A rough sketch of this approach is shown below.
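   
   Here is a minimal sketch of that idea, assuming Linux and a hypothetical infer_worker.py script that runs your actual MXNet inference loop. Each instance is pinned to its own block of 4 cores with taskset and limited to that many OpenMP threads; the core counts and script name are just placeholders:
   
       # launch_workers.py -- start several independent MXNet inference
       # processes, each restricted to a small, distinct set of CPU cores
       import os
       import subprocess
   
       CORES_PER_WORKER = 4                       # e.g. 4, 8 or 16 cores per instance
       NUM_WORKERS = os.cpu_count() // CORES_PER_WORKER
   
       procs = []
       for i in range(NUM_WORKERS):
           first = i * CORES_PER_WORKER
           last = first + CORES_PER_WORKER - 1
           # limit OpenMP threads to the cores this worker owns
           env = dict(os.environ, OMP_NUM_THREADS=str(CORES_PER_WORKER))
           # taskset pins the whole process to cores first..last
           procs.append(subprocess.Popen(
               ["taskset", "-c", f"{first}-{last}", "python", "infer_worker.py"],
               env=env))
   
       for p in procs:
           p.wait()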
   
   BTW, the MKLDNN backend is much faster now. You need to specify the number of threads and bind the threads to physical cores explicitly.
   
   https://github.com/apache/incubator-mxnet/blob/master/MKLDNN_README.md
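   
   Within a single process, the thread count and core binding are usually controlled through the OpenMP environment variables set before MXNet is imported. A minimal sketch follows; the resnet-50 checkpoint prefix, input shape, and affinity string are only examples, not the required settings:
   
       # run_inference.py -- bind threads to physical cores, then run MXNet (old Module API)
       import os
   
       # one OpenMP thread per physical core in use, pinned to those cores
       os.environ["OMP_NUM_THREADS"] = "4"
       os.environ["KMP_AFFINITY"] = "granularity=fine,compact,1,0"
   
       import mxnet as mx
       import numpy as np
   
       # load a model checkpoint (the "resnet-50" prefix is a placeholder;
       # the .json and .params files must exist locally)
       sym, arg_params, aux_params = mx.model.load_checkpoint("resnet-50", 0)
       mod = mx.mod.Module(symbol=sym, context=mx.cpu(), label_names=None)
       mod.bind(for_training=False, data_shapes=[("data", (1, 3, 224, 224))])
       mod.set_params(arg_params, aux_params)
   
       # run one forward pass on dummy data to check the setup
       batch = mx.io.DataBatch([mx.nd.array(np.random.rand(1, 3, 224, 224))])
       mod.forward(batch)
       print(mod.get_outputs()[0].shape)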
   
   Some slightly out-of-date performance numbers are in the link below; you can see this method works very well. 
   Currently, ResNet-50 inference runs at about 200 images/sec with the MKLDNN backend on the same machine as in the link below. Much better performance is on the way :) 
   
   https://issues.apache.org/jira/browse/MXNET-11?focusedCommentId=16394829&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16394829
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services