Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2019/03/14 22:36:39 UTC

[GitHub] [incubator-mxnet] leleamol commented on issue #14408: Multi-threaded execution leads to high CPU load

leleamol commented on issue #14408: Multi-threaded execution leads to high CPU load
URL: https://github.com/apache/incubator-mxnet/issues/14408#issuecomment-473092272
 
 
   @songziqin 
   This is based on my understanding of the C++ API.
   It is possible to create the Executor in one thread and share it across many threads. However, the Executor will run the forward pass on only one input at a time. That is, for a given input, the following three operations must be performed atomically for correct inference:
   1. Setting the input for the Executor.
   2. Running the forward pass: Executor->Forward().
   3. Retrieving the output from the Executor.
   
   With this approach, your application will be able to process multiple inputs, but the inference operations will be serialized. A minimal sketch of this pattern is shown below.
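   
   The sketch below is one way to serialize those three steps: a std::mutex guards a shared Executor that was bound elsewhere (for example via Symbol::SimpleBind). The helper name Infer and the input argument name "data" are illustrative assumptions, not part of the example.
   
```cpp
// Minimal sketch: serialize the set-input / Forward / read-output sequence
// on a shared Executor with a mutex. The "data" argument name and the Infer
// helper are illustrative assumptions.
#include <mutex>
#include <vector>
#include "mxnet-cpp/MxNetCpp.h"

using namespace mxnet::cpp;

std::mutex exec_mutex;  // guards the shared Executor

// Safe to call from many threads; inferences are serialized by the lock.
std::vector<float> Infer(Executor* exec, const NDArray& input) {
  std::lock_guard<std::mutex> lock(exec_mutex);

  // 1. Set the input for the Executor.
  input.CopyTo(&(exec->arg_dict()["data"]));
  NDArray::WaitAll();

  // 2. Run the forward pass.
  exec->Forward(false);

  // 3. Retrieve the output while the lock is still held.
  std::vector<float> output;
  exec->outputs[0].SyncCopyToCPU(&output);
  return output;
}
```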
   
   With some modifications, the [inception_inference.cpp](https://github.com/apache/incubator-mxnet/blob/master/cpp-package/example/inference/inception_inference.cpp) example can be used to process multiple images. The Predictor object can be created by a single thread and shared across multiple threads. The calls to "PredictImage()" need to be synchronized so that the Executor inside the Predictor object processes one image at a time; a rough sketch of that threading pattern follows.
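   
   The following is only a hypothetical illustration of that pattern, not code from the example: one Predictor is created in the main thread and worker threads call PredictImage() under a mutex. The Predictor declaration here is a stand-in for the class defined in inception_inference.cpp, with its construction parameters omitted.
   
```cpp
// Hypothetical sketch: one Predictor shared by several worker threads, with a
// mutex serializing the PredictImage() calls. The Predictor class below is a
// stand-in for the one defined in inception_inference.cpp.
#include <mutex>
#include <string>
#include <thread>
#include <vector>

class Predictor {
 public:
  // In the example this runs inference on a single image file.
  void PredictImage(const std::string& image_file);
};

std::mutex predict_mutex;  // serializes access to the shared Predictor

void Worker(Predictor* predictor, const std::string& image_file) {
  std::lock_guard<std::mutex> lock(predict_mutex);
  predictor->PredictImage(image_file);  // one image at a time
}

int main() {
  Predictor predictor;  // created once, in a single thread

  std::vector<std::string> files = {"img0.jpg", "img1.jpg", "img2.jpg"};
  std::vector<std::thread> workers;
  for (const auto& f : files) {
    workers.emplace_back(Worker, &predictor, f);
  }
  for (auto& t : workers) {
    t.join();
  }
  return 0;
}
```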
   
   I hope this helps.
   
   @mxnet-label-bot add [Pending Requester Info]
   
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services