Posted to issues@ozone.apache.org by GitBox <gi...@apache.org> on 2020/04/16 01:34:17 UTC

[GitHub] [hadoop-ozone] fapifta commented on issue #726: HDDS-3267. replace ContainerCache in BlockUtils by LoadingCache

fapifta commented on issue #726: HDDS-3267. replace ContainerCache in BlockUtils by LoadingCache
URL: https://github.com/apache/hadoop-ozone/pull/726#issuecomment-614362868
 
 
   I cannot add much to the guava/no-guava question beyond the fact that we already use Guava, and this code seems to be much simpler with LoadingCache.
   
   @elek let me address a few things:
   1. In the current implementation, if you try to run the test you get an NPE. It should throw an exception when the reference for container 4L is obtained, but it does not, and that is only because the ContainerCache code can easily violate the cache's max size constraint...
   The ContainerCache code at line 131 uses this.put() to put the value into the LRUMap, but put() is not overridden in LRUMap; it is addMapping() that properly checks the boundary, and since the ContainerCache code path does not check the boundary itself, the current implementation is effectively an unlimited cache that claims to be limited by maxSize (see the check sketched after this list).
   2. If the current implementation really did stay within its boundaries, then in the cases where eviction poses a problem that needs proper handling, we should already be seeing issues, because the cache would be full and no items could be evicted.
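   
   To make point 1 concrete, here is a small standalone check I would run (a sketch, assuming the commons-collections 3.x LRUMap that ContainerCache extends; the class name and the container/db-handle strings are made up, and ContainerCache's own overrides are deliberately left out, so this only probes what the plain base class does when entries go in via put()):
   
       import org.apache.commons.collections.map.LRUMap;
   
       public class LruBoundCheck {
         public static void main(String[] args) {
           // A bounded map with maxSize = 2, filled through put(), the same call
           // the ContainerCache code at line 131 uses.
           LRUMap map = new LRUMap(2);
           for (int i = 0; i < 5; i++) {
             map.put("container-" + i, "db-handle-" + i);
           }
           // If put() bypasses the boundary check as described in point 1, this
           // prints size=5; if the bound is still enforced, it prints size=2.
           System.out.println("size=" + map.size() + ", maxSize=" + map.maxSize());
         }
       }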
   
   
   I agree, weak values do not solve the problem when the cache is full; in that case we either need to avoid loading a new ReferenceDB, or we should not close the db connection of the evicted ReferenceDB.
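   
   Just to illustrate that, a weak-valued variant would look roughly like this (a sketch with made-up names and numbers, not a proposal): weakValues() only lets the GC drop a value once nothing references it any more, so an in-use connection is never collected, size-based eviction still happens on top of it, and nothing in it closes the db connection of an evicted entry.
   
       import com.google.common.cache.CacheBuilder;
       import com.google.common.cache.CacheLoader;
       import com.google.common.cache.LoadingCache;
   
       public class WeakValuedDbCache {
         private static final LoadingCache<String, Object> CACHE = CacheBuilder.newBuilder()
             .maximumSize(64)   // size-based eviction is independent of weakValues()
             .weakValues()      // only values nobody references any more can be collected
             .build(new CacheLoader<String, Object>() {
               @Override
               public Object load(String containerDbPath) {
                 return new Object();   // stands in for opening the db connection
               }
             });
   
         public static void main(String[] args) {
           // An in-flight user holds a strong reference, so weakValues() never
           // frees this value; and if the entry gets evicted, nothing here
           // closes the underlying connection.
           Object db = CACHE.getUnchecked("/path/to/container.db");
           System.out.println(db);
         }
       }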
   
   The problem to solve, as I understand it now, is the following (a rough sketch follows the list):
   - we want to cache db connections and keep them alive
   - we want to keep the size restriction on the cache
   - we want to close the db connection when an unused cached connection is evicted from the cache
   - if a connection that is still in use gets evicted from the cache, we want to close it only at the end of the current use, or - god forbid - at the end of all uses if it is used on multiple threads.
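   
   Putting those four requirements together, the closest I can get with a plain LoadingCache is a rough sketch like the one below (hypothetical names, not the PR code; DbHandle stands in for the real ReferenceDB/db connection type). The cache keeps the size bound, and the removal listener closes an evicted connection right away only if nobody is using it; an in-use connection is merely marked evicted and the last release() closes it:
   
       import java.util.concurrent.atomic.AtomicBoolean;
       import java.util.concurrent.atomic.AtomicInteger;
   
       import com.google.common.cache.CacheBuilder;
       import com.google.common.cache.CacheLoader;
       import com.google.common.cache.LoadingCache;
   
       // Hypothetical stand-in for the cached db connection / ReferenceDB.
       class DbHandle {
         private final AtomicInteger users = new AtomicInteger();
         private final AtomicBoolean closed = new AtomicBoolean();
         private volatile boolean evicted;
   
         DbHandle acquire() {
           users.incrementAndGet();
           return this;
         }
   
         // Callers release the handle when they are done; the last user of an
         // already-evicted handle is the one that actually closes the connection.
         void release() {
           if (users.decrementAndGet() == 0 && evicted) {
             close();
           }
         }
   
         // Called from the cache's removal listener when the entry is evicted.
         void markEvicted() {
           evicted = true;
           if (users.get() == 0) {
             close();
           }
         }
   
         private void close() {
           if (closed.compareAndSet(false, true)) {
             // close the underlying db connection here, exactly once
           }
         }
       }
   
       class DbHandleCache {
         private final LoadingCache<String, DbHandle> cache = CacheBuilder.newBuilder()
             .maximumSize(64)                      // keep the size restriction
             .removalListener(
                 notification -> ((DbHandle) notification.getValue()).markEvicted())
             .build(new CacheLoader<String, DbHandle>() {
               @Override
               public DbHandle load(String containerDbPath) {
                 return new DbHandle();            // open the db connection here
               }
             });
   
         // Every lookup bumps the reference count; callers must pair it with release().
         DbHandle getDB(String containerDbPath) {
           return cache.getUnchecked(containerDbPath).acquire();
         }
   
         public static void main(String[] args) {
           DbHandleCache dbs = new DbHandleCache();
           DbHandle db = dbs.getDB("/path/to/container.db");
           try {
             // use the db here
           } finally {
             db.release();                         // the close is still on the last user
           }
         }
       }
   
   Even in this shape the acquire()/release() pair is reference counting in all but name, which is why I think we cannot get rid of it just by switching the cache implementation.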
   
   I don't have a good solution for all of these at the moment, and I have been looking at this for a while now. Let me get back to it later this week; in the meantime I am open to ideas and further discussion. For now it seems to me that we cannot do without reference counting somewhere, due to the nature of what we are caching and how we currently use the cached objects.
