Posted to issues@mxnet.apache.org by GitBox <gi...@apache.org> on 2021/04/26 16:51:11 UTC

[GitHub] [incubator-mxnet] andreas-solti opened a new issue #20220: Dynamic Batching during Inference / Runtime

andreas-solti opened a new issue #20220:
URL: https://github.com/apache/incubator-mxnet/issues/20220


   First, thanks for creating this great, high-performance framework! I've looked through the open and closed issues and couldn't find an existing request for this.
   ## Description
   It would be really valuable if the engine could automatically batch inference requests. The feature would dynamically wrap similar-sized inputs into batches (and unwrap the results) based on a configured maximum wait time and preferred batch size.
   - Instead of pushing each item onto the queue for the engine to process individually, the engine would wrap data items into a batch and unwrap the results after computation
   - The API stays unchanged, but configuration settings are exposed to control the batch size and maximum wait time per batching instance (see the sketch below)
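   
   To make the proposal concrete, here is a minimal application-level sketch of the wrap/unwrap behavior described above. All names here (`DynamicBatcher`, `preferred_batch_size`, `max_wait_ms`) are hypothetical, not an existing MXNet API; the request is for the engine to do the equivalent internally, without this user-side plumbing:
   
   ```python
   import queue
   import threading
   import time
   
   import mxnet as mx
   
   
   class DynamicBatcher:
       """Hypothetical sketch: groups concurrent single-item requests into batches."""
   
       def __init__(self, predict_batch, preferred_batch_size=8, max_wait_ms=5):
           # predict_batch: callable taking an NDArray of shape (B, ...) and
           # returning an NDArray with the same leading batch dimension.
           self._predict_batch = predict_batch
           self._batch_size = preferred_batch_size
           self._max_wait = max_wait_ms / 1000.0
           self._requests = queue.Queue()
           threading.Thread(target=self._loop, daemon=True).start()
   
       def predict(self, x):
           # Called concurrently by request handlers; blocks until the result
           # for this single item has been computed as part of a batch.
           done = threading.Event()
           slot = {}
           self._requests.put((x, slot, done))
           done.wait()
           return slot["out"]
   
       def _loop(self):
           while True:
               # Block for the first item, then collect more until either the
               # preferred batch size is reached or max_wait has elapsed.
               pending = [self._requests.get()]
               deadline = time.monotonic() + self._max_wait
               while len(pending) < self._batch_size:
                   remaining = deadline - time.monotonic()
                   if remaining <= 0:
                       break
                   try:
                       pending.append(self._requests.get(timeout=remaining))
                   except queue.Empty:
                       break
               # Wrap: stack the similar-sized inputs into one batch ...
               batch = mx.nd.stack(*[x for x, _, _ in pending], axis=0)
               out = self._predict_batch(batch)
               # ... and unwrap: hand each caller its own slice of the result.
               for i, (_, slot, done) in enumerate(pending):
                   slot["out"] = out[i]
                   done.set()
   ```
   
   With a Gluon model `net`, each request thread would then call something like `DynamicBatcher(net).predict(single_input)`, and concurrent calls would transparently share one forward pass. Implemented inside the engine, the same wait-or-fill logic would apply without any user-side threads.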
   
   ## References
   - Example implementation of dynamic batching in Triton Inference Server: https://github.com/triton-inference-server/server/blob/master/docs/model_configuration.md#dynamic-batcher
   
   ## Expected Value
   A large speedup is expected in practical high-load inference settings, where many users need to be served concurrently.
   With batching implemented directly in the engine, it would be much faster than the currently available (best?) solution based on the multi-model-server. The latter adds the overhead of a Java server + HTTP calls + Python-based batching.
   




[GitHub] [incubator-mxnet] github-actions[bot] commented on issue #20220: Dynamic Batching during Inference / Runtime

Posted by GitBox <gi...@apache.org>.
github-actions[bot] commented on issue #20220:
URL: https://github.com/apache/incubator-mxnet/issues/20220#issuecomment-826994122


   Welcome to Apache MXNet (incubating)! We are on a mission to democratize AI, and we are glad that you are contributing to it by opening this issue.
   Please make sure to include all the relevant context, and one of the @apache/mxnet-committers will be here shortly.
   If you are interested in contributing to our project, let us know! Also, be sure to check out our guide on [contributing to MXNet](https://mxnet.apache.org/community/contribute) and our [development guides wiki](https://cwiki.apache.org/confluence/display/MXNET/Developments).

