Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2018/10/16 05:20:54 UTC

[GitHub] azai91 commented on a change in pull request #12746: Add env flag to disable MKLDNN cache (MXNET_MKLDNN_CACHE_ENABLED)

URL: https://github.com/apache/incubator-mxnet/pull/12746#discussion_r225400392
 
 

 ##########
 File path: src/operator/nn/mkldnn/mkldnn_base-inl.h
 ##########
 @@ -147,6 +147,23 @@ static inline bool MKLDNNEnvSet() {
   return is_mkldnn_enabled;
 }
 
+static inline int GetMKLDNNCacheSize() {
+  static int mkldnn_cache_size = dmlc::GetEnv("MXNET_MKLDNN_CACHE_SIZE", -1);
+  return mkldnn_cache_size;
+}
+
+// TODO(alex): (MXNET-1075) Will remove env variable and calculate cache size during runtime
+template<typename S, typename I, typename H>
+static typename std::unordered_map<S, I, H>::iterator AddToCache(
+    std::unordered_map<S, I, H>* cache, const S &key, const I &item) {
+  int mkldnn_cache_size = GetMKLDNNCacheSize();
 +  if (mkldnn_cache_size != -1 && static_cast<int>(cache->size()) > mkldnn_cache_size)
 +    cache->erase(cache->begin());
 
 Review comment:
   I don't mind implementing that. But in response to @ZhennanQin on not clearing the entire cache (which I addressed) and on using LRU: all of these "optimizations" really depend on the training input.
   
   Randomly dropping one element from the cache, as opposed to clearing the entire cache, is only optimal if we expect many of the future inputs to already be in the cache. If future inputs are consistently not in the cache, then we have to remove one element on every insert, which is expensive. See the sketch below.
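   To make that trade-off concrete, here is a minimal standalone sketch (hypothetical names and value types, not the PR code) contrasting the two policies, clear-everything versus evict-one-arbitrary:
   
```cpp
#include <string>
#include <unordered_map>

// Hypothetical cache type; the key/value types are placeholders.
using Cache = std::unordered_map<std::string, int>;

// Policy A: clear the whole cache once it exceeds the limit.
// Cheap per eviction, but discards every cached item, hot or not.
void InsertClearAll(Cache* cache, const std::string& key, int item, size_t limit) {
  if (cache->size() >= limit) cache->clear();
  (*cache)[key] = item;
}

// Policy B (what the diff above does): drop one arbitrary element.
// Keeps most of the cache warm, but if new inputs keep missing, every
// insert at capacity pays an extra erase, which is the recurring cost
// described above.
void InsertEvictOne(Cache* cache, const std::string& key, int item, size_t limit) {
  if (cache->size() >= limit) cache->erase(cache->begin());
  (*cache)[key] = item;
}
```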
   
   Regarding using LRU (over dropping a random item in the cache): that assumes older items in the cache are less likely to be seen in the future than more recent ones. If the input data does not follow that assumption, then we are again creating unnecessary overhead; the LRU sketch below shows where that overhead comes from.
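   For comparison, an LRU variant (again a sketch with made-up names, not code from this PR) has to track recency on every lookup, which is exactly the extra bookkeeping mentioned above:
   
```cpp
#include <list>
#include <string>
#include <unordered_map>
#include <utility>

// Minimal LRU cache sketch; none of these names are MXNet APIs.
class LRUCache {
 public:
  explicit LRUCache(size_t limit) : limit_(limit) {}

  // Even a read must move the entry to the front of the recency list.
  int* Get(const std::string& key) {
    auto it = index_.find(key);
    if (it == index_.end()) return nullptr;
    order_.splice(order_.begin(), order_, it->second);
    return &it->second->second;
  }

  void Put(const std::string& key, int item) {
    auto it = index_.find(key);
    if (it != index_.end()) {
      it->second->second = item;
      order_.splice(order_.begin(), order_, it->second);
      return;
    }
    if (order_.size() >= limit_) {  // evict the least recently used entry
      index_.erase(order_.back().first);
      order_.pop_back();
    }
    order_.emplace_front(key, item);
    index_[key] = order_.begin();
  }

 private:
  using Entry = std::pair<std::string, int>;
  size_t limit_;
  std::list<Entry> order_;  // front = most recently used
  std::unordered_map<std::string, std::list<Entry>::iterator> index_;
};
```
   
   If the inputs have no temporal locality, the splice on every Get buys nothing over dropping an arbitrary element.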
   
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services