Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2018/12/11 20:08:03 UTC

[GitHub] eric-haibin-lin commented on issue #13598: More fine-grained operator implementation dispatch & memory planning flow

URL: https://github.com/apache/incubator-mxnet/issues/13598#issuecomment-446343940
 
 
   @marcoabreu thanks for the comments. It's true that both the existing infer_storage interface and the proposed infer_storage_ex interface require backend-specific logic. What kind of abstraction would you like to see? Suppose each backend provides one implementation that only concerns that backend itself. How would MXNet then provide a general policy for selecting and prioritizing these implementations when it is built with MKLDNN+CUDA+AMDHIP? In what order would you propose invoking these functions, and what happens if one of them conflicts with another backend? How should MXNet resolve such conflicts? 
   I do want to limit the discussion to memory planning itself so that @DickJC123's work on NHWC can be unblocked as soon as possible. 

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services