Posted to issues@ozone.apache.org by GitBox <gi...@apache.org> on 2020/09/21 03:31:48 UTC

[GitHub] [hadoop-ozone] rakeshadr opened a new pull request #1437: HDDS-4222: [OzoneFS optimization] Provide a mechanism for efficient path lookup

rakeshadr opened a new pull request #1437:
URL: https://github.com/apache/hadoop-ozone/pull/1437


   ## What changes were proposed in this pull request?
   
   With the new file system semantics design (HDDS-2939), traversing the path components in a top-down fashion requires multiple DB lookups. This patch provides a metadata caching service that caches directories for faster lookup. It is developed as an independent module and needs to be integrated once HDDS-2949 is pushed in.
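
   For context, a rough sketch of the idea. The OMCacheKey/OMCacheValue/CacheStore
   names come from this patch and the "parentObjectId/childName" key layout is taken
   from its unit tests; the lookupDirectoryInDb() helper is a hypothetical stand-in
   for the OM DB read, not actual code:

       // Illustrative only: resolve "a/b/c" under a bucket, consulting the
       // directory cache for each path component before reading the DB.
       private long resolvePath(CacheStore<OMCacheKey<String>, OMCacheValue<Long>> dirCache,
                                long bucketObjectId, String relativePath) {
         long parentId = bucketObjectId;
         for (String component : relativePath.split("/")) {
           OMCacheKey<String> key = new OMCacheKey<>(parentId + "/" + component);
           OMCacheValue<Long> cached = dirCache.get(key);
           if (cached != null) {                      // cache hit: skip the DB lookup
             parentId = cached.getCacheValue();
             continue;
           }
           long objectId = lookupDirectoryInDb(parentId, component);  // DB fallback
           dirCache.put(key, new OMCacheValue<>(objectId));           // write-through
           parentId = objectId;
         }
         return parentId;   // object ID of the leaf directory
       }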
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-4222
   
   ## How was this patch tested?
   
   Added unit test cases to verify cache initialization and other operations.
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: ozone-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: ozone-issues-help@hadoop.apache.org


[GitHub] [hadoop-ozone] linyiqun commented on pull request #1437: HDDS-4222: [OzoneFS optimization] Provide a mechanism for efficient path lookup

Posted by GitBox <gi...@apache.org>.
linyiqun commented on pull request #1437:
URL: https://github.com/apache/hadoop-ozone/pull/1437#issuecomment-696588382


   > > Hi @rakeshadr, some initial review comments below.
   > > In addition, one question from me: this is the first task of the dir cache, and there will be further subtasks. But this part of the work depends on [HDDS-2939](https://issues.apache.org/jira/browse/HDDS-2939) being completed. Does that mean the [HDDS-4222](https://issues.apache.org/jira/browse/HDDS-4222) tasks cannot be merged immediately and will be blocked for a long time? How do we plan to coordinate the [HDDS-2939](https://issues.apache.org/jira/browse/HDDS-2939) and [HDDS-4222](https://issues.apache.org/jira/browse/HDDS-4222) feature development work?
   > 
   > Good comment. Yes, the cache should be integrated eventually (case by case) to get the full benefit. But [HDDS-4222](https://issues.apache.org/jira/browse/HDDS-4222) can be upstreamed separately and is not blocked, I feel.
   > 
   > The cache can be integrated once [HDDS-2949](https://issues.apache.org/jira/browse/HDDS-2949) is finished. During dir creation, OM checks [DIR_EXISTS](https://github.com/apache/hadoop-ozone/blob/master/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMDirectoryCreateRequest.java#L185), and the cache can be integrated into this call path (a rough sketch follows this exchange).
   > 
   > Later, while implementing file lookup, delete, rename, etc., the cache will be integrated into those areas.
   
   Got it, sounds good to me.
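
   A minimal sketch of that DIR_EXISTS integration point (only OMCacheKey,
   OMCacheValue, CacheStore and the getOMCacheManager().getDirCache() accessor
   appear in this patch; the surrounding directory-create flow and the
   lookupDirectoryInDb() helper below are hypothetical):

       // Sketch of a directory-existence check that consults the cache first.
       private boolean directoryExists(long parentObjectId, String dirName) {
         CacheStore dirCache = omMetadataManager.getOMCacheManager().getDirCache();
         OMCacheKey<String> key = new OMCacheKey<>(parentObjectId + "/" + dirName);
         if (dirCache.get(key) != null) {
           // Cache hit: the directory is already known to exist, so the
           // DIR_EXISTS DB lookup is skipped entirely.
           return true;
         }
         // Cache miss: fall back to the existing DB check, then populate the
         // cache (write-through) so later lookups are served from memory.
         OMCacheValue<Long> objectId = lookupDirectoryInDb(parentObjectId, dirName);
         if (objectId != null) {
           dirCache.put(key, objectId);
           return true;
         }
         return false;
       }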
   



[GitHub] [hadoop-ozone] rakeshadr edited a comment on pull request #1437: HDDS-4222: [OzoneFS optimization] Provide a mechanism for efficient path lookup

Posted by GitBox <gi...@apache.org>.
rakeshadr edited a comment on pull request #1437:
URL: https://github.com/apache/hadoop-ozone/pull/1437#issuecomment-696600100


   Thanks a lot @linyiqun for the useful review comments!
   Updated another commit addressing the comments.



[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #1437: HDDS-4222: [OzoneFS optimization] Provide a mechanism for efficient path lookup

Posted by GitBox <gi...@apache.org>.
rakeshadr commented on a change in pull request #1437:
URL: https://github.com/apache/hadoop-ozone/pull/1437#discussion_r492480005



##########
File path: hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/OMConfigKeys.java
##########
@@ -246,4 +246,15 @@ private OMConfigKeys() {
       "ozone.om.enable.filesystem.paths";
   public static final boolean OZONE_OM_ENABLE_FILESYSTEM_PATHS_DEFAULT =
       false;
+
+  public static final String OZONE_OM_CACHE_DIR_POLICY =
+          "ozone.om.metadata.cache.directory";
+  public static final String OZONE_OM_CACHE_DIR_DEFAULT = "DIR_LRU";

Review comment:
       OK, will update.

##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/cache/CacheEntity.java
##########
@@ -0,0 +1,48 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.hadoop.ozone.om.cache;
+
+/**
+ * Entities that are to be cached.
+ */
+public enum CacheEntity {
+
+  DIR("directory");
+  // This is extendable and one can add more entities for
+  // caching based on demand. For example, define new entities like FILE
+  // ("file"), LISTING("listing") cache etc.
+
+  CacheEntity(String entity) {
+    this.entityName = entity;
+  }
+
+  private String entityName;
+
+  public String getName() {
+    return entityName;
+  }
+
+  public static CacheEntity getEntity(String entityStr) {
+    for (CacheEntity entity : CacheEntity.values()) {
+      if (entityStr.equalsIgnoreCase(entity.getName())) {

Review comment:
       OK, will update.

##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/cache/CacheStore.java
##########
@@ -0,0 +1,72 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.hadoop.ozone.om.cache;
+
+/**
+ * Cache used for traversing path components from parent node to the leaf node.
+ * <p>
+ * Basically, it is a write-through cache and ensures that there are no stale
+ * entries in the cache.
+ * <p>
+ * TODO: can define specific 'CacheLoader' to handle the OM restart and
+ *       define cache loading strategies. It can be NullLoader, LazyLoader,
+ *       LevelLoader etc.
+ *
+ * @param <CACHEKEY>
+ * @param <CACHEVALUE>
+ */
+public interface CacheStore<CACHEKEY extends OMCacheKey,

Review comment:
       Good point. I thought of this as well; how about doing it via a follow-up task?
   For now, I will add a TODO item. Hope that's fine with you.

##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/cache/DirectoryLRUCacheStore.java
##########
@@ -0,0 +1,87 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.hadoop.ozone.om.cache;
+
+import com.google.common.cache.Cache;
+import com.google.common.cache.CacheBuilder;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Directory LRUCache: cache directories based on LRU (Least Recently Used)
+ * cache eviction strategy, wherein if the cache size has reached the maximum
+ * allocated capacity, the least recently used objects in the cache will be
+ * evicted.
+ * <p>
+ * TODO: Add cache metrics - occupancy, hit, miss, evictions etc
+ */
+public class DirectoryLRUCacheStore implements CacheStore {
+
+  private static final Logger LOG =
+          LoggerFactory.getLogger(DirectoryLRUCacheStore.class);
+
+  // Initialises Guava based LRU cache.
+  private Cache<OMCacheKey, OMCacheValue> mCache;
+
+  /**
+   * @param configuration ozone config
+   */
+  public DirectoryLRUCacheStore(OzoneConfiguration configuration) {
+    LOG.info("Initializing DirectoryLRUCacheStore..");
+    // defaulting to 100,000
+    int initSize = configuration.getInt(
+            OMConfigKeys.OZONE_OM_CACHE_DIR_INIT_CAPACITY,
+            OMConfigKeys.OZONE_OM_CACHE_DIR_INIT_CAPACITY_DEFAULT);
+    // defaulting to 5,000,000
+    long maxSize = configuration.getLong(
+            OMConfigKeys.OZONE_OM_CACHE_DIR_MAX_CAPACITY,
+            OMConfigKeys.OZONE_OM_CACHE_DIR_MAX_CAPACITY_DEFAULT);
+    LOG.info("Configured {} with {}",
+            OMConfigKeys.OZONE_OM_CACHE_DIR_MAX_CAPACITY, maxSize);
+    mCache = CacheBuilder.newBuilder()
+            .initialCapacity(initSize)
+            .maximumSize(maxSize)
+            .build();
+  }
+
+  @Override
+  public void put(OMCacheKey key, OMCacheValue value) {
+    mCache.put(key, value);

Review comment:
       Sure, will update it

##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/cache/DirectoryLRUCacheStore.java
##########
@@ -0,0 +1,87 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.hadoop.ozone.om.cache;
+
+import com.google.common.cache.Cache;
+import com.google.common.cache.CacheBuilder;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Directory LRUCache: cache directories based on LRU (Least Recently Used)
+ * cache eviction strategy, wherein if the cache size has reached the maximum
+ * allocated capacity, the least recently used objects in the cache will be
+ * evicted.
+ * <p>
+ * TODO: Add cache metrics - occupancy, hit, miss, evictions etc
+ */
+public class DirectoryLRUCacheStore implements CacheStore {
+
+  private static final Logger LOG =
+          LoggerFactory.getLogger(DirectoryLRUCacheStore.class);
+
+  // Initialises Guava based LRU cache.
+  private Cache<OMCacheKey, OMCacheValue> mCache;
+
+  /**
+   * @param configuration ozone config
+   */
+  public DirectoryLRUCacheStore(OzoneConfiguration configuration) {
+    LOG.info("Initializing DirectoryLRUCacheStore..");
+    // defaulting to 100,000
+    int initSize = configuration.getInt(
+            OMConfigKeys.OZONE_OM_CACHE_DIR_INIT_CAPACITY,
+            OMConfigKeys.OZONE_OM_CACHE_DIR_INIT_CAPACITY_DEFAULT);
+    // defaulting to 5,000,000
+    long maxSize = configuration.getLong(
+            OMConfigKeys.OZONE_OM_CACHE_DIR_MAX_CAPACITY,
+            OMConfigKeys.OZONE_OM_CACHE_DIR_MAX_CAPACITY_DEFAULT);
+    LOG.info("Configured {} with {}",
+            OMConfigKeys.OZONE_OM_CACHE_DIR_MAX_CAPACITY, maxSize);
+    mCache = CacheBuilder.newBuilder()
+            .initialCapacity(initSize)
+            .maximumSize(maxSize)
+            .build();
+  }
+
+  @Override
+  public void put(OMCacheKey key, OMCacheValue value) {
+    mCache.put(key, value);
+  }
+
+  @Override
+  public OMCacheValue get(OMCacheKey key) {
+    return mCache.getIfPresent(key);
+  }
+
+  @Override
+  public void remove(OMCacheKey key) {

Review comment:
       OK

##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/cache/OMMetadataCacheFactory.java
##########
@@ -0,0 +1,120 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.hadoop.ozone.om.cache;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Provides different caching policies for cache entities. This can be
+ * extended by adding more entities and their caching policies into it.
+ * <p>
+ * For example, for the directory cache the user has to configure the following
+ * property with a cache type. OM will create the specific cache store for the
+ * directory based on the configured cache policy.
+ * ozone.om.metadata.cache.directory = DIR_LRU
+ * <p>
+ * One can add a new directory policy to OM by defining a new cache type, say
+ * "DIR_LFU", and implementing a new CacheStore such as DirectoryLFUCacheStore.
+ * <p>
+ * One can add a new entity to OM, say files to be cached, by configuring a
+ * property like the one below and implementing a specific provider to
+ * instantiate the fileCacheStore.
+ * ozone.om.metadata.cache.file = FILE_LRU
+ */
+public final class OMMetadataCacheFactory {
+  private static final Logger LOG =
+          LoggerFactory.getLogger(OMMetadataCacheFactory.class);
+
+  /**
+   * Private constructor, class is not meant to be initialized.
+   */
+  private OMMetadataCacheFactory() {
+  }
+
+  public static CacheStore getCache(String configuredCachePolicy,
+                                    String defaultValue,
+                                    OzoneConfiguration config) {
+    String cachePolicy = config.get(configuredCachePolicy, defaultValue);
+    LOG.info("Configured {} with {}", configuredCachePolicy, cachePolicy);
+    CacheEntity entity = getCacheEntity(configuredCachePolicy);
+
+    switch (entity) {
+    case DIR:
+      OMMetadataCacheProvider provider = new OMDirectoryCacheProvider(config,
+              cachePolicy);
+      if (LOG.isDebugEnabled()) {
+        LOG.debug("CacheStore initialized with {}", provider.getEntity());
+      }
+      return provider.getCache();
+    default:
+      return null;

Review comment:
       OK, will add UTs to cover this case.

##########
File path: hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/cache/TestOMMetadataCache.java
##########
@@ -0,0 +1,276 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.om.cache;
+
+import org.apache.hadoop.hdds.HddsConfigKeys;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OmMetadataManagerImpl;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.Timeout;
+
+import java.io.File;
+import java.io.IOException;
+
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+
+/**
+ * Testing OMMetadata cache policy class.
+ */
+public class TestOMMetadataCache {
+
+  private OzoneConfiguration conf;
+  private OMMetadataManager omMetadataManager;
+
+  /**
+   * Set a timeout for each test.
+   */
+  @Rule
+  public Timeout timeout = new Timeout(100000);
+
+  @Before
+  public void setup() {
+    //initialize config
+    conf = new OzoneConfiguration();
+  }
+
+  @Test
+  public void testVerifyDirCachePolicies() {
+    //1. Verify disabling cache
+    conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+            CachePolicy.DIR_NOCACHE.getPolicy());
+    CacheStore dirCacheStore = OMMetadataCacheFactory.getCache(
+            OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+            OMConfigKeys.OZONE_OM_CACHE_DIR_DEFAULT, conf);
+    Assert.assertEquals("Cache Policy mismatches!", CachePolicy.DIR_NOCACHE,
+            dirCacheStore.getCachePolicy());
+
+    //2. Invalid cache policy
+    conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY, "InvalidCachePolicy");
+    dirCacheStore = OMMetadataCacheFactory.getCache(
+            OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+            OMConfigKeys.OZONE_OM_CACHE_DIR_DEFAULT, conf);
+    Assert.assertEquals("Expected NullCache for an invalid CachePolicy",
+            CachePolicy.DIR_NOCACHE, dirCacheStore.getCachePolicy());
+
+    //3. Directory LRU cache policy
+    conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+            OMConfigKeys.OZONE_OM_CACHE_DIR_DEFAULT);
+    dirCacheStore = OMMetadataCacheFactory.getCache(
+            OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+            OMConfigKeys.OZONE_OM_CACHE_DIR_DEFAULT, conf);
+    Assert.assertEquals("Cache Type mismatches!", CachePolicy.DIR_LRU,
+            dirCacheStore.getCachePolicy());
+  }
+
+  @Test
+  public void testLRUCacheDirectoryPolicy() throws IOException {
+    conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+            CachePolicy.DIR_LRU.getPolicy());
+    conf.setInt(OMConfigKeys.OZONE_OM_CACHE_DIR_INIT_CAPACITY, 1);
+    conf.setLong(OMConfigKeys.OZONE_OM_CACHE_DIR_MAX_CAPACITY, 2);
+
+    File testDir = GenericTestUtils.getRandomizedTestDir();
+    conf.set(HddsConfigKeys.OZONE_METADATA_DIRS,
+            testDir.toString());
+
+    omMetadataManager = new OmMetadataManagerImpl(conf);
+    CacheStore dirCacheStore =
+            omMetadataManager.getOMCacheManager().getDirCache();
+    Assert.assertEquals("CachePolicy Mismatches!", CachePolicy.DIR_LRU,
+            dirCacheStore.getCachePolicy());
+
+    OMCacheKey<String> dirA = new OMCacheKey<>("512/a");
+    OMCacheValue<Long> dirAObjID = new OMCacheValue<>(1025L);
+    OMCacheKey<String> dirB = new OMCacheKey<>(dirAObjID + "/b");
+    OMCacheValue<Long> dirBObjID = new OMCacheValue<>(1026L);
+    dirCacheStore.put(dirA, dirAObjID);
+    dirCacheStore.put(dirB, dirBObjID);
+    // Step1. Cached Entries => {a, b}
+    Assert.assertEquals("Unexpected Cache Value",
+            dirAObjID.getCacheValue(), dirCacheStore.get(dirA).getCacheValue());
+    Assert.assertEquals("Unexpected Cache Value",
+            dirBObjID.getCacheValue(), dirCacheStore.get(dirB).getCacheValue());
+
+    // Step2. Verify eviction
+    // Cached Entries {frontEntry, rearEntry} => {c, b}
+    OMCacheKey<String> dirC = new OMCacheKey<>(dirBObjID + "/c");
+    OMCacheValue<Long> dirCObjID = new OMCacheValue<>(1027L);
+    dirCacheStore.put(dirC, dirCObjID);
+    Assert.assertEquals("Unexpected Cache Value",
+            dirCObjID.getCacheValue(), dirCacheStore.get(dirC).getCacheValue());
+    Assert.assertNull("Unexpected Cache Value", dirCacheStore.get(dirA));
+
+    // Step3. Adding 'a' again. Now 'b' will be evicted.
+    dirCacheStore.put(dirA, dirAObjID);
+    // Cached Entries {frontEntry, rearEntry} => {a, c}
+    Assert.assertEquals("Unexpected Cache Value",
+            dirAObjID.getCacheValue(), dirCacheStore.get(dirA).getCacheValue());
+    Assert.assertNull("Unexpected Cache Value", dirCacheStore.get(dirB));
+
+    // Step4. Cached Entries {frontEntry, rearEntry} => {c, a}
+    // Access 'c' so that the recently used entry will be 'c'. Now the entry
+    // eligible for eviction will be 'a'.
+    Assert.assertEquals("Unexpected Cache Value",
+            dirCObjID.getCacheValue(), dirCacheStore.get(dirC).getCacheValue());
+
+    // Step4. Recently accessed entry will be retained.
+    dirCacheStore.put(dirB, dirBObjID);
+    // Cached Entries {frontEntry, rearEntry} => {b, c}
+    Assert.assertEquals("Unexpected Cache Value",
+            dirBObjID.getCacheValue(), dirCacheStore.get(dirB).getCacheValue());
+    Assert.assertEquals("Unexpected Cache Value",
+            dirCObjID.getCacheValue(), dirCacheStore.get(dirC).getCacheValue());
+    Assert.assertNull("Unexpected Cache Value", dirCacheStore.get(dirA));
+
+    // Step5. Add duplicate entries shouldn't make any eviction.
+    dirCacheStore.put(dirB, dirBObjID);
+    Assert.assertEquals("Unexpected Cache Value",
+            dirBObjID.getCacheValue(), dirCacheStore.get(dirB).getCacheValue());
+    Assert.assertEquals("Unexpected Cache Value",
+            dirCObjID.getCacheValue(), dirCacheStore.get(dirC).getCacheValue());
+    Assert.assertEquals("Incorrect cache size", 2, dirCacheStore.size());
+
+    // Step6. Verify entry removal. Remove recently accessed entry.
+    dirCacheStore.remove(dirC);
+    // duplicate removal shouldn't cause any issues
+    dirCacheStore.remove(dirC);
+    Assert.assertEquals("Unexpected Cache Value",
+            dirBObjID.getCacheValue(), dirCacheStore.get(dirB).getCacheValue());
+    Assert.assertNull("Unexpected Cache Value", dirCacheStore.get(dirC));
+    Assert.assertEquals("Incorrect cache size", 1, dirCacheStore.size());
+
+    // Step7. Make it empty
+    dirCacheStore.remove(dirB);
+    Assert.assertEquals("Incorrect cache size", 0, dirCacheStore.size());
+  }
+
+  @Test
+  public void testNullKeysAndValuesToLRUCacheDirectoryPolicy() throws IOException {

Review comment:
       Done!

##########
File path: hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/cache/TestOMMetadataCache.java
##########
@@ -0,0 +1,276 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.om.cache;
+
+import org.apache.hadoop.hdds.HddsConfigKeys;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OmMetadataManagerImpl;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.Timeout;
+
+import java.io.File;
+import java.io.IOException;
+
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+
+/**
+ * Testing OMMetadata cache policy class.
+ */
+public class TestOMMetadataCache {
+
+  private OzoneConfiguration conf;
+  private OMMetadataManager omMetadataManager;
+
+  /**
+   * Set a timeout for each test.
+   */
+  @Rule
+  public Timeout timeout = new Timeout(100000);
+
+  @Before
+  public void setup() {
+    //initialize config
+    conf = new OzoneConfiguration();
+  }
+
+  @Test
+  public void testVerifyDirCachePolicies() {
+    //1. Verify disabling cache
+    conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+            CachePolicy.DIR_NOCACHE.getPolicy());
+    CacheStore dirCacheStore = OMMetadataCacheFactory.getCache(
+            OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+            OMConfigKeys.OZONE_OM_CACHE_DIR_DEFAULT, conf);
+    Assert.assertEquals("Cache Policy mismatches!", CachePolicy.DIR_NOCACHE,
+            dirCacheStore.getCachePolicy());
+
+    //2. Invalid cache policy
+    conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY, "InvalidCachePolicy");
+    dirCacheStore = OMMetadataCacheFactory.getCache(
+            OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+            OMConfigKeys.OZONE_OM_CACHE_DIR_DEFAULT, conf);
+    Assert.assertEquals("Expected NullCache for an invalid CachePolicy",
+            CachePolicy.DIR_NOCACHE, dirCacheStore.getCachePolicy());
+
+    //3. Directory LRU cache policy
+    conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+            OMConfigKeys.OZONE_OM_CACHE_DIR_DEFAULT);
+    dirCacheStore = OMMetadataCacheFactory.getCache(
+            OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+            OMConfigKeys.OZONE_OM_CACHE_DIR_DEFAULT, conf);
+    Assert.assertEquals("Cache Type mismatches!", CachePolicy.DIR_LRU,
+            dirCacheStore.getCachePolicy());
+  }
+
+  @Test
+  public void testLRUCacheDirectoryPolicy() throws IOException {
+    conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+            CachePolicy.DIR_LRU.getPolicy());
+    conf.setInt(OMConfigKeys.OZONE_OM_CACHE_DIR_INIT_CAPACITY, 1);
+    conf.setLong(OMConfigKeys.OZONE_OM_CACHE_DIR_MAX_CAPACITY, 2);
+
+    File testDir = GenericTestUtils.getRandomizedTestDir();
+    conf.set(HddsConfigKeys.OZONE_METADATA_DIRS,
+            testDir.toString());
+
+    omMetadataManager = new OmMetadataManagerImpl(conf);
+    CacheStore dirCacheStore =
+            omMetadataManager.getOMCacheManager().getDirCache();
+    Assert.assertEquals("CachePolicy Mismatches!", CachePolicy.DIR_LRU,
+            dirCacheStore.getCachePolicy());
+
+    OMCacheKey<String> dirA = new OMCacheKey<>("512/a");
+    OMCacheValue<Long> dirAObjID = new OMCacheValue<>(1025L);
+    OMCacheKey<String> dirB = new OMCacheKey<>(dirAObjID + "/b");
+    OMCacheValue<Long> dirBObjID = new OMCacheValue<>(1026L);
+    dirCacheStore.put(dirA, dirAObjID);
+    dirCacheStore.put(dirB, dirBObjID);
+    // Step1. Cached Entries => {a, b}
+    Assert.assertEquals("Unexpected Cache Value",
+            dirAObjID.getCacheValue(), dirCacheStore.get(dirA).getCacheValue());
+    Assert.assertEquals("Unexpected Cache Value",
+            dirBObjID.getCacheValue(), dirCacheStore.get(dirB).getCacheValue());
+
+    // Step2. Verify eviction
+    // Cached Entries {frontEntry, rearEntry} => {c, b}
+    OMCacheKey<String> dirC = new OMCacheKey<>(dirBObjID + "/c");
+    OMCacheValue<Long> dirCObjID = new OMCacheValue<>(1027L);
+    dirCacheStore.put(dirC, dirCObjID);
+    Assert.assertEquals("Unexpected Cache Value",
+            dirCObjID.getCacheValue(), dirCacheStore.get(dirC).getCacheValue());
+    Assert.assertNull("Unexpected Cache Value", dirCacheStore.get(dirA));
+
+    // Step3. Adding 'a' again. Now 'b' will be evicted.
+    dirCacheStore.put(dirA, dirAObjID);
+    // Cached Entries {frontEntry, rearEntry} => {a, c}
+    Assert.assertEquals("Unexpected Cache Value",
+            dirAObjID.getCacheValue(), dirCacheStore.get(dirA).getCacheValue());
+    Assert.assertNull("Unexpected Cache Value", dirCacheStore.get(dirB));
+
+    // Step4. Cached Entries {frontEntry, rearEntry} => {c, a}
+    // Access 'c' so that the recently used entry will be 'c'. Now the entry
+    // eligible for eviction will be 'a'.
+    Assert.assertEquals("Unexpected Cache Value",
+            dirCObjID.getCacheValue(), dirCacheStore.get(dirC).getCacheValue());
+
+    // Step4. Recently accessed entry will be retained.
+    dirCacheStore.put(dirB, dirBObjID);
+    // Cached Entries {frontEntry, rearEntry} => {b, c}
+    Assert.assertEquals("Unexpected Cache Value",
+            dirBObjID.getCacheValue(), dirCacheStore.get(dirB).getCacheValue());
+    Assert.assertEquals("Unexpected Cache Value",
+            dirCObjID.getCacheValue(), dirCacheStore.get(dirC).getCacheValue());
+    Assert.assertNull("Unexpected Cache Value", dirCacheStore.get(dirA));
+
+    // Step5. Add duplicate entries shouldn't make any eviction.
+    dirCacheStore.put(dirB, dirBObjID);
+    Assert.assertEquals("Unexpected Cache Value",
+            dirBObjID.getCacheValue(), dirCacheStore.get(dirB).getCacheValue());
+    Assert.assertEquals("Unexpected Cache Value",
+            dirCObjID.getCacheValue(), dirCacheStore.get(dirC).getCacheValue());
+    Assert.assertEquals("Incorrect cache size", 2, dirCacheStore.size());
+
+    // Step6. Verify entry removal. Remove recently accessed entry.
+    dirCacheStore.remove(dirC);
+    // duplicate removal shouldn't cause any issues
+    dirCacheStore.remove(dirC);
+    Assert.assertEquals("Unexpected Cache Value",
+            dirBObjID.getCacheValue(), dirCacheStore.get(dirB).getCacheValue());
+    Assert.assertNull("Unexpected Cache Value", dirCacheStore.get(dirC));
+    Assert.assertEquals("Incorrect cache size", 1, dirCacheStore.size());
+
+    // Step7. Make it empty
+    dirCacheStore.remove(dirB);
+    Assert.assertEquals("Incorrect cache size", 0, dirCacheStore.size());
+  }
+
+  @Test
+  public void testNullKeysAndValuesToLRUCacheDirectoryPolicy() throws IOException {
+    conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+            CachePolicy.DIR_LRU.getPolicy());
+    conf.setInt(OMConfigKeys.OZONE_OM_CACHE_DIR_INIT_CAPACITY, 1);
+    conf.setLong(OMConfigKeys.OZONE_OM_CACHE_DIR_MAX_CAPACITY, 2);
+
+    File testDir = GenericTestUtils.getRandomizedTestDir();
+    conf.set(HddsConfigKeys.OZONE_METADATA_DIRS,
+            testDir.toString());
+
+    omMetadataManager = new OmMetadataManagerImpl(conf);
+    CacheStore dirCacheStore =
+            omMetadataManager.getOMCacheManager().getDirCache();
+    Assert.assertEquals("CachePolicy Mismatches!", CachePolicy.DIR_LRU,
+            dirCacheStore.getCachePolicy());
+
+    OMCacheKey<String> dirA = new OMCacheKey<>("512/a");
+    OMCacheValue<Long> dirAObjID = new OMCacheValue<>(1026L);
+    // shouldn't throw NPE, just skip null key
+    dirCacheStore.put(null, dirAObjID);
+    dirCacheStore.put(dirA, null);
+
+    // shouldn't throw NPE, just skip null key
+    Assert.assertNull("Unexpected value!", dirCacheStore.get(null));
+
+    // shouldn't throw NPE, just skip null key
+    dirCacheStore.remove(null);
+  }
+
+  @Test
+  public void testNoCacheDirectoryPolicy() throws IOException {
+    conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+            CachePolicy.DIR_NOCACHE.getPolicy());
+
+    File testDir = GenericTestUtils.getRandomizedTestDir();
+    conf.set(HddsConfigKeys.OZONE_METADATA_DIRS,
+            testDir.toString());
+
+    omMetadataManager = new OmMetadataManagerImpl(conf);
+    CacheStore dirCacheStore =
+            omMetadataManager.getOMCacheManager().getDirCache();
+    Assert.assertEquals("CachePolicy Mismatches!", CachePolicy.DIR_NOCACHE,
+            dirCacheStore.getCachePolicy());
+
+    // Verify caching
+    OMCacheKey<String> dirA = new OMCacheKey<>("512/a");
+    OMCacheValue<Long> dirAObjID = new OMCacheValue<>(1025L);
+    dirCacheStore.put(dirA, dirAObjID);
+    Assert.assertNull("Unexpected Cache Value", dirCacheStore.get(dirA));
+  }
+
+  @Test
+  public void testDefaultCacheDirectoryPolicy() throws IOException {
+    File testDir = GenericTestUtils.getRandomizedTestDir();
+    conf.set(HddsConfigKeys.OZONE_METADATA_DIRS,
+            testDir.toString());
+
+    //1. Verify default dir cache policy. Defaulting to DIR_LRU
+    conf.unset(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY);
+    Assert.assertNull("Unexpected CachePolicy, it should be null!",
+            conf.get(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY));
+
+    omMetadataManager = new OmMetadataManagerImpl(conf);
+    CacheStore dirCacheStore =
+            omMetadataManager.getOMCacheManager().getDirCache();
+    Assert.assertEquals("CachePolicy Mismatches!", CachePolicy.DIR_LRU,
+            dirCacheStore.getCachePolicy());
+  }
+
+  @Test
+  public void testInvalidCacheDirectoryPolicyValue() throws IOException {
+    File testDir = GenericTestUtils.getRandomizedTestDir();
+    conf.set(HddsConfigKeys.OZONE_METADATA_DIRS,
+            testDir.toString());
+
+    //1. Verify default dir cache policy. Defaulting to DIR_NOCACHE
+    conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY, "INVALID");
+    Assert.assertEquals("Unexpected CachePolicy!", "INVALID",
+            conf.get(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY));
+
+    omMetadataManager = new OmMetadataManagerImpl(conf);
+    CacheStore dirCacheStore =
+            omMetadataManager.getOMCacheManager().getDirCache();
+    Assert.assertEquals("CachePolicy Mismatches!", CachePolicy.DIR_NOCACHE,
+            dirCacheStore.getCachePolicy());
+  }
+
+  @Test
+  public void testInvalidCacheDirectoryPolicyConfigurationName() throws IOException {

Review comment:
       Done!

##########
File path: hadoop-hdds/common/src/main/resources/ozone-default.xml
##########
@@ -2521,4 +2521,32 @@
       filesystem semantics.
     </description>
   </property>
+
+  <property>
+    <name>ozone.om.metadata.cache.directory</name>

Review comment:
       Fixed test failure





[GitHub] [hadoop-ozone] rakeshadr commented on pull request #1437: HDDS-4222: [OzoneFS optimization] Provide a mechanism for efficient path lookup

Posted by GitBox <gi...@apache.org>.
rakeshadr commented on pull request #1437:
URL: https://github.com/apache/hadoop-ozone/pull/1437#issuecomment-696600100


   Thanks a lot @linyiqun for the useful review comments!



[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #1437: HDDS-4222: [OzoneFS optimization] Provide a mechanism for efficient path lookup

Posted by GitBox <gi...@apache.org>.
rakeshadr commented on a change in pull request #1437:
URL: https://github.com/apache/hadoop-ozone/pull/1437#discussion_r492481300



##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/cache/CacheStore.java
##########
@@ -0,0 +1,72 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.hadoop.ozone.om.cache;
+
+/**
+ * Cache used for traversing path components from parent node to the leaf node.
+ * <p>
+ * Basically, it is a write-through cache and ensures that there are no stale
+ * entries in the cache.
+ * <p>
+ * TODO: can define specific 'CacheLoader' to handle the OM restart and
+ *       define cache loading strategies. It can be NullLoader, LazyLoader,
+ *       LevelLoader etc.
+ *
+ * @param <CACHEKEY>
+ * @param <CACHEVALUE>
+ */
+public interface CacheStore<CACHEKEY extends OMCacheKey,

Review comment:
       Good point. I thought of this as well; how about doing it via a follow-up task?
   For now, I will add a TODO item. Hope that's fine with you.






[GitHub] [hadoop-ozone] linyiqun commented on a change in pull request #1437: HDDS-4222: [OzoneFS optimization] Provide a mechanism for efficient path lookup

Posted by GitBox <gi...@apache.org>.
linyiqun commented on a change in pull request #1437:
URL: https://github.com/apache/hadoop-ozone/pull/1437#discussion_r492061634



##########
File path: hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/OMConfigKeys.java
##########
@@ -246,4 +246,15 @@ private OMConfigKeys() {
       "ozone.om.enable.filesystem.paths";
   public static final boolean OZONE_OM_ENABLE_FILESYSTEM_PATHS_DEFAULT =
       false;
+
+  public static final String OZONE_OM_CACHE_DIR_POLICY =
+          "ozone.om.metadata.cache.directory";
+  public static final String OZONE_OM_CACHE_DIR_DEFAULT = "DIR_LRU";

Review comment:
       I prefer renaming these two, as sketched below:
   
   * ozone.om.metadata.cache.directory -> ozone.om.metadata.cache.directory.policy
   * OZONE_OM_CACHE_DIR_DEFAULT  -> OZONE_OM_CACHE_DIR_POLICY_DEFAULT
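
   With that rename, the constants in OMConfigKeys would read roughly as follows
   (a sketch of the proposal, not the merged code):

       public static final String OZONE_OM_CACHE_DIR_POLICY =
               "ozone.om.metadata.cache.directory.policy";
       public static final String OZONE_OM_CACHE_DIR_POLICY_DEFAULT = "DIR_LRU";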

##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/cache/CacheEntity.java
##########
@@ -0,0 +1,48 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.hadoop.ozone.om.cache;
+
+/**
+ * Entities that are to be cached.
+ */
+public enum CacheEntity {
+
+  DIR("directory");
+  // This is extendable and one can add more entities for
+  // caching based on demand. For example, define new entities like FILE
+  // ("file"), LISTING("listing") cache etc.
+
+  CacheEntity(String entity) {
+    this.entityName = entity;
+  }
+
+  private String entityName;
+
+  public String getName() {
+    return entityName;
+  }
+
+  public static CacheEntity getEntity(String entityStr) {
+    for (CacheEntity entity : CacheEntity.values()) {
+      if (entityStr.equalsIgnoreCase(entity.getName())) {

Review comment:
       Can you change the order of the comparison (to **entity.getName().equalsIgnoreCase(entityStr)**)? If entityStr is passed as null, the current order will lead to an NPE.
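
   A sketch of the reordered comparison (assuming the method falls back to
   returning null for an unknown name, which the visible diff does not show):

       public static CacheEntity getEntity(String entityStr) {
         for (CacheEntity entity : CacheEntity.values()) {
           // getName() is never null for an enum constant, and
           // String#equalsIgnoreCase(null) simply returns false, so a null
           // entityStr can no longer trigger an NPE here.
           if (entity.getName().equalsIgnoreCase(entityStr)) {
             return entity;
           }
         }
         return null; // assumption: unknown names resolve to no entity
       }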
   

##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/cache/CacheStore.java
##########
@@ -0,0 +1,72 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.hadoop.ozone.om.cache;
+
+/**
+ * Cache used for traversing path components from parent node to the leaf node.
+ * <p>
+ * Basically, it is a write-through cache and ensures that there are no stale
+ * entries in the cache.
+ * <p>
+ * TODO: can define specific 'CacheLoader' to handle the OM restart and
+ *       define cache loading strategies. It can be NullLoader, LazyLoader,
+ *       LevelLoader etc.
+ *
+ * @param <CACHEKEY>
+ * @param <CACHEVALUE>
+ */
+public interface CacheStore<CACHEKEY extends OMCacheKey,

Review comment:
       Besides the basic put/get/remove interface, we should also define other interfaces for the cache store in the future, such as cache hit/miss counts.
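
   One possible shape for that future extension (a hedged sketch; the method
   names below are made up, though Guava already exposes equivalent counters via
   CacheBuilder.recordStats() and Cache.stats()):

       // Hypothetical statistics accessors that an implementation such as
       // DirectoryLRUCacheStore could satisfy by delegating to Guava's CacheStats.
       public interface CacheStoreStats {
         long hitCount();      // lookups served from the cache
         long missCount();     // lookups that fell through to the DB
         long evictionCount(); // entries removed by the eviction policy
       }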

##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/cache/OMMetadataCacheFactory.java
##########
@@ -0,0 +1,120 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.hadoop.ozone.om.cache;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Provides different caching policies for cache entities. This can be
+ * extended by adding more entities and their caching policies into it.
+ * <p>
+ * For example, for the directory cache the user has to configure the following
+ * property with a cache type. OM will create the specific cache store for the
+ * directory based on the configured cache policy.
+ * ozone.om.metadata.cache.directory = DIR_LRU
+ * <p>
+ * One can add a new directory policy to OM by defining a new cache type, say
+ * "DIR_LFU", and implementing a new CacheStore such as DirectoryLFUCacheStore.
+ * <p>
+ * One can add a new entity to OM, say files to be cached, by configuring a
+ * property like the one below and implementing a specific provider to
+ * instantiate the fileCacheStore.
+ * ozone.om.metadata.cache.file = FILE_LRU
+ */
+public final class OMMetadataCacheFactory {
+  private static final Logger LOG =
+          LoggerFactory.getLogger(OMMetadataCacheFactory.class);
+
+  /**
+   * Private constructor, class is not meant to be initialized.
+   */
+  private OMMetadataCacheFactory() {
+  }
+
+  public static CacheStore getCache(String configuredCachePolicy,
+                                    String defaultValue,
+                                    OzoneConfiguration config) {
+    String cachePolicy = config.get(configuredCachePolicy, defaultValue);
+    LOG.info("Configured {} with {}", configuredCachePolicy, cachePolicy);
+    CacheEntity entity = getCacheEntity(configuredCachePolicy);
+
+    switch (entity) {
+    case DIR:
+      OMMetadataCacheProvider provider = new OMDirectoryCacheProvider(config,
+              cachePolicy);
+      if (LOG.isDebugEnabled()) {
+        LOG.debug("CacheStore initialized with {}", provider.getEntity());
+      }
+      return provider.getCache();
+    default:
+      return null;

Review comment:
       How about throwing an error when the cache store cannot be initialized with the given cache policy? We had better not return the cache store as null: if a null cache store is returned, every operation on it will fail, and otherwise we have to add a null check for each operation wherever the cache store is used.
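
   A minimal sketch of that fail-fast alternative in the factory (illustrative
   only; the tests added later in this thread instead fall back to DIR_NOCACHE
   for an invalid policy value):

       public static CacheStore getCache(String configuredCachePolicy,
                                         String defaultValue,
                                         OzoneConfiguration config) {
         String cachePolicy = config.get(configuredCachePolicy, defaultValue);
         CacheEntity entity = getCacheEntity(configuredCachePolicy);
         if (entity == null) {
           // Fail fast: a null CacheStore would force null checks on every caller.
           throw new IllegalArgumentException(
               "Unknown cache entity for config key: " + configuredCachePolicy);
         }
         switch (entity) {
         case DIR:
           return new OMDirectoryCacheProvider(config, cachePolicy).getCache();
         default:
           throw new IllegalArgumentException(
               "No cache provider registered for entity: " + entity);
         }
       }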

##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/cache/DirectoryLRUCacheStore.java
##########
@@ -0,0 +1,87 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.hadoop.ozone.om.cache;
+
+import com.google.common.cache.Cache;
+import com.google.common.cache.CacheBuilder;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Directory LRUCache: cache directories based on LRU (Least Recently Used)
+ * cache eviction strategy, wherein if the cache size has reached the maximum
+ * allocated capacity, the least recently used objects in the cache will be
+ * evicted.
+ * <p>
+ * TODO: Add cache metrics - occupancy, hit, miss, evictions etc
+ */
+public class DirectoryLRUCacheStore implements CacheStore {
+
+  private static final Logger LOG =
+          LoggerFactory.getLogger(DirectoryLRUCacheStore.class);
+
+  // Initialises Guava based LRU cache.
+  private Cache<OMCacheKey, OMCacheValue> mCache;
+
+  /**
+   * @param configuration ozone config
+   */
+  public DirectoryLRUCacheStore(OzoneConfiguration configuration) {
+    LOG.info("Initializing DirectoryLRUCacheStore..");
+    // defaulting to 100,000
+    int initSize = configuration.getInt(
+            OMConfigKeys.OZONE_OM_CACHE_DIR_INIT_CAPACITY,
+            OMConfigKeys.OZONE_OM_CACHE_DIR_INIT_CAPACITY_DEFAULT);
+    // defaulting to 5,000,000
+    long maxSize = configuration.getLong(
+            OMConfigKeys.OZONE_OM_CACHE_DIR_MAX_CAPACITY,
+            OMConfigKeys.OZONE_OM_CACHE_DIR_MAX_CAPACITY_DEFAULT);
+    LOG.info("Configured {} with {}",
+            OMConfigKeys.OZONE_OM_CACHE_DIR_MAX_CAPACITY, maxSize);
+    mCache = CacheBuilder.newBuilder()
+            .initialCapacity(initSize)
+            .maximumSize(maxSize)
+            .build();
+  }
+
+  @Override
+  public void put(OMCacheKey key, OMCacheValue value) {
+    mCache.put(key, value);
+  }
+
+  @Override
+  public OMCacheValue get(OMCacheKey key) {
+    return mCache.getIfPresent(key);
+  }
+
+  @Override
+  public void remove(OMCacheKey key) {

Review comment:
       Same comment as above.

##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/cache/DirectoryLRUCacheStore.java
##########
@@ -0,0 +1,87 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.hadoop.ozone.om.cache;
+
+import com.google.common.cache.Cache;
+import com.google.common.cache.CacheBuilder;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Directory LRUCache: cache directories based on LRU (Least Recently Used)
+ * cache eviction strategy, wherein if the cache size has reached the maximum
+ * allocated capacity, the least recently used objects in the cache will be
+ * evicted.
+ * <p>
+ * TODO: Add cache metrics - occupancy, hit, miss, evictions etc
+ */
+public class DirectoryLRUCacheStore implements CacheStore {
+
+  private static final Logger LOG =
+          LoggerFactory.getLogger(DirectoryLRUCacheStore.class);
+
+  // Initialises Guava based LRU cache.
+  private Cache<OMCacheKey, OMCacheValue> mCache;
+
+  /**
+   * @param configuration ozone config
+   */
+  public DirectoryLRUCacheStore(OzoneConfiguration configuration) {
+    LOG.info("Initializing DirectoryLRUCacheStore..");
+    // defaulting to 100,000
+    int initSize = configuration.getInt(
+            OMConfigKeys.OZONE_OM_CACHE_DIR_INIT_CAPACITY,
+            OMConfigKeys.OZONE_OM_CACHE_DIR_INIT_CAPACITY_DEFAULT);
+    // defaulting to 5,000,000
+    long maxSize = configuration.getLong(
+            OMConfigKeys.OZONE_OM_CACHE_DIR_MAX_CAPACITY,
+            OMConfigKeys.OZONE_OM_CACHE_DIR_MAX_CAPACITY_DEFAULT);
+    LOG.info("Configured {} with {}",
+            OMConfigKeys.OZONE_OM_CACHE_DIR_MAX_CAPACITY, maxSize);
+    mCache = CacheBuilder.newBuilder()
+            .initialCapacity(initSize)
+            .maximumSize(maxSize)
+            .build();
+  }
+
+  @Override
+  public void put(OMCacheKey key, OMCacheValue value) {
+    mCache.put(key, value);

Review comment:
       Can we add a null check for the key before putting it into the cache?
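
   The null handling that the updated unit tests exercise (null keys and values
   are silently skipped) could look roughly like this; a sketch, not the exact
   patch code:

       @Override
       public void put(OMCacheKey key, OMCacheValue value) {
         // Guava's Cache#put rejects null keys or values with a NullPointerException,
         // so skip them here; the unit tests expect nulls to be ignored silently.
         if (key == null || value == null) {
           return;
         }
         mCache.put(key, value);
       }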





[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #1437: HDDS-4222: [OzoneFS optimization] Provide a mechanism for efficient path lookup

Posted by GitBox <gi...@apache.org>.
rakeshadr commented on a change in pull request #1437:
URL: https://github.com/apache/hadoop-ozone/pull/1437#discussion_r492582620



##########
File path: hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/cache/TestOMMetadataCache.java
##########
@@ -0,0 +1,276 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.om.cache;
+
+import org.apache.hadoop.hdds.HddsConfigKeys;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OmMetadataManagerImpl;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.Timeout;
+
+import java.io.File;
+import java.io.IOException;
+
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+
+/**
+ * Testing OMMetadata cache policy class.
+ */
+public class TestOMMetadataCache {
+
+  private OzoneConfiguration conf;
+  private OMMetadataManager omMetadataManager;
+
+  /**
+   * Set a timeout for each test.
+   */
+  @Rule
+  public Timeout timeout = new Timeout(100000);
+
+  @Before
+  public void setup() {
+    //initialize config
+    conf = new OzoneConfiguration();
+  }
+
+  @Test
+  public void testVerifyDirCachePolicies() {
+    //1. Verify disabling cache
+    conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+            CachePolicy.DIR_NOCACHE.getPolicy());
+    CacheStore dirCacheStore = OMMetadataCacheFactory.getCache(
+            OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+            OMConfigKeys.OZONE_OM_CACHE_DIR_DEFAULT, conf);
+    Assert.assertEquals("Cache Policy mismatches!", CachePolicy.DIR_NOCACHE,
+            dirCacheStore.getCachePolicy());
+
+    //2. Invalid cache policy
+    conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY, "InvalidCachePolicy");
+    dirCacheStore = OMMetadataCacheFactory.getCache(
+            OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+            OMConfigKeys.OZONE_OM_CACHE_DIR_DEFAULT, conf);
+    Assert.assertEquals("Expected NullCache for an invalid CachePolicy",
+            CachePolicy.DIR_NOCACHE, dirCacheStore.getCachePolicy());
+
+    //3. Directory LRU cache policy
+    conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+            OMConfigKeys.OZONE_OM_CACHE_DIR_DEFAULT);
+    dirCacheStore = OMMetadataCacheFactory.getCache(
+            OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+            OMConfigKeys.OZONE_OM_CACHE_DIR_DEFAULT, conf);
+    Assert.assertEquals("Cache Type mismatches!", CachePolicy.DIR_LRU,
+            dirCacheStore.getCachePolicy());
+  }
+
+  @Test
+  public void testLRUCacheDirectoryPolicy() throws IOException {
+    conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+            CachePolicy.DIR_LRU.getPolicy());
+    conf.setInt(OMConfigKeys.OZONE_OM_CACHE_DIR_INIT_CAPACITY, 1);
+    conf.setLong(OMConfigKeys.OZONE_OM_CACHE_DIR_MAX_CAPACITY, 2);
+
+    File testDir = GenericTestUtils.getRandomizedTestDir();
+    conf.set(HddsConfigKeys.OZONE_METADATA_DIRS,
+            testDir.toString());
+
+    omMetadataManager = new OmMetadataManagerImpl(conf);
+    CacheStore dirCacheStore =
+            omMetadataManager.getOMCacheManager().getDirCache();
+    Assert.assertEquals("CachePolicy Mismatches!", CachePolicy.DIR_LRU,
+            dirCacheStore.getCachePolicy());
+
+    OMCacheKey<String> dirA = new OMCacheKey<>("512/a");
+    OMCacheValue<Long> dirAObjID = new OMCacheValue<>(1025L);
+    OMCacheKey<String> dirB = new OMCacheKey<>(dirAObjID + "/b");
+    OMCacheValue<Long> dirBObjID = new OMCacheValue<>(1026L);
+    dirCacheStore.put(dirA, dirAObjID);
+    dirCacheStore.put(dirB, dirBObjID);
+    // Step1. Cached Entries => {a, b}
+    Assert.assertEquals("Unexpected Cache Value",
+            dirAObjID.getCacheValue(), dirCacheStore.get(dirA).getCacheValue());
+    Assert.assertEquals("Unexpected Cache Value",
+            dirBObjID.getCacheValue(), dirCacheStore.get(dirB).getCacheValue());
+
+    // Step2. Verify eviction
+    // Cached Entries {frontEntry, rearEntry} => {c, b}
+    OMCacheKey<String> dirC = new OMCacheKey<>(dirBObjID + "/c");
+    OMCacheValue<Long> dirCObjID = new OMCacheValue<>(1027L);
+    dirCacheStore.put(dirC, dirCObjID);
+    Assert.assertEquals("Unexpected Cache Value",
+            dirCObjID.getCacheValue(), dirCacheStore.get(dirC).getCacheValue());
+    Assert.assertNull("Unexpected Cache Value", dirCacheStore.get(dirA));
+
+    // Step3. Adding 'a' again. Now 'b' will be evicted.
+    dirCacheStore.put(dirA, dirAObjID);
+    // Cached Entries {frontEntry, rearEntry} => {a, c}
+    Assert.assertEquals("Unexpected Cache Value",
+            dirAObjID.getCacheValue(), dirCacheStore.get(dirA).getCacheValue());
+    Assert.assertNull("Unexpected Cache Value", dirCacheStore.get(dirB));
+
+    // Step4. Cached Entries {frontEntry, rearEntry} => {c, a}
+    // Access 'c' so that the recently used entry will be 'c'. Now the entry
+    // eligible for eviction will be 'a'.
+    Assert.assertEquals("Unexpected Cache Value",
+            dirCObjID.getCacheValue(), dirCacheStore.get(dirC).getCacheValue());
+
+    // Step4. Recently accessed entry will be retained.
+    dirCacheStore.put(dirB, dirBObjID);
+    // Cached Entries {frontEntry, rearEntry} => {b, c}
+    Assert.assertEquals("Unexpected Cache Value",
+            dirBObjID.getCacheValue(), dirCacheStore.get(dirB).getCacheValue());
+    Assert.assertEquals("Unexpected Cache Value",
+            dirCObjID.getCacheValue(), dirCacheStore.get(dirC).getCacheValue());
+    Assert.assertNull("Unexpected Cache Value", dirCacheStore.get(dirA));
+
+    // Step5. Add duplicate entries shouldn't make any eviction.
+    dirCacheStore.put(dirB, dirBObjID);
+    Assert.assertEquals("Unexpected Cache Value",
+            dirBObjID.getCacheValue(), dirCacheStore.get(dirB).getCacheValue());
+    Assert.assertEquals("Unexpected Cache Value",
+            dirCObjID.getCacheValue(), dirCacheStore.get(dirC).getCacheValue());
+    Assert.assertEquals("Incorrect cache size", 2, dirCacheStore.size());
+
+    // Step6. Verify entry removal. Remove recently accessed entry.
+    dirCacheStore.remove(dirC);
+    // duplicate removal shouldn't cause any issues
+    dirCacheStore.remove(dirC);
+    Assert.assertEquals("Unexpected Cache Value",
+            dirBObjID.getCacheValue(), dirCacheStore.get(dirB).getCacheValue());
+    Assert.assertNull("Unexpected Cache Value", dirCacheStore.get(dirC));
+    Assert.assertEquals("Incorrect cache size", 1, dirCacheStore.size());
+
+    // Step7. Make it empty
+    dirCacheStore.remove(dirB);
+    Assert.assertEquals("Incorrect cache size", 0, dirCacheStore.size());
+  }
+
+  @Test
+  public void testNullKeysAndValuesToLRUCacheDirectoryPolicy() throws IOException {
+    conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+            CachePolicy.DIR_LRU.getPolicy());
+    conf.setInt(OMConfigKeys.OZONE_OM_CACHE_DIR_INIT_CAPACITY, 1);
+    conf.setLong(OMConfigKeys.OZONE_OM_CACHE_DIR_MAX_CAPACITY, 2);
+
+    File testDir = GenericTestUtils.getRandomizedTestDir();
+    conf.set(HddsConfigKeys.OZONE_METADATA_DIRS,
+            testDir.toString());
+
+    omMetadataManager = new OmMetadataManagerImpl(conf);
+    CacheStore dirCacheStore =
+            omMetadataManager.getOMCacheManager().getDirCache();
+    Assert.assertEquals("CachePolicy Mismatches!", CachePolicy.DIR_LRU,
+            dirCacheStore.getCachePolicy());
+
+    OMCacheKey<String> dirA = new OMCacheKey<>("512/a");
+    OMCacheValue<Long> dirAObjID = new OMCacheValue<>(1026L);
+    // shouldn't throw NPE, just skip null key
+    dirCacheStore.put(null, dirAObjID);
+    dirCacheStore.put(dirA, null);
+
+    // shouldn't throw NPE, just skip null key
+    Assert.assertNull("Unexpected value!", dirCacheStore.get(null));
+
+    // shouldn't throw NPE, just skip null key
+    dirCacheStore.remove(null);
+  }
+
+  @Test
+  public void testNoCacheDirectoryPolicy() throws IOException {
+    conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+            CachePolicy.DIR_NOCACHE.getPolicy());
+
+    File testDir = GenericTestUtils.getRandomizedTestDir();
+    conf.set(HddsConfigKeys.OZONE_METADATA_DIRS,
+            testDir.toString());
+
+    omMetadataManager = new OmMetadataManagerImpl(conf);
+    CacheStore dirCacheStore =
+            omMetadataManager.getOMCacheManager().getDirCache();
+    Assert.assertEquals("CachePolicy Mismatches!", CachePolicy.DIR_NOCACHE,
+            dirCacheStore.getCachePolicy());
+
+    // Verify caching
+    OMCacheKey<String> dirA = new OMCacheKey<>("512/a");
+    OMCacheValue<Long> dirAObjID = new OMCacheValue<>(1025L);
+    dirCacheStore.put(dirA, dirAObjID);
+    Assert.assertNull("Unexpected Cache Value", dirCacheStore.get(dirA));
+  }
+
+  @Test
+  public void testDefaultCacheDirectoryPolicy() throws IOException {
+    File testDir = GenericTestUtils.getRandomizedTestDir();
+    conf.set(HddsConfigKeys.OZONE_METADATA_DIRS,
+            testDir.toString());
+
+    //1. Verify default dir cache policy. Defaulting to DIR_LRU
+    conf.unset(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY);
+    Assert.assertNull("Unexpected CachePolicy, it should be null!",
+            conf.get(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY));
+
+    omMetadataManager = new OmMetadataManagerImpl(conf);
+    CacheStore dirCacheStore =
+            omMetadataManager.getOMCacheManager().getDirCache();
+    Assert.assertEquals("CachePolicy Mismatches!", CachePolicy.DIR_LRU,
+            dirCacheStore.getCachePolicy());
+  }
+
+  @Test
+  public void testInvalidCacheDirectoryPolicyValue() throws IOException {
+    File testDir = GenericTestUtils.getRandomizedTestDir();
+    conf.set(HddsConfigKeys.OZONE_METADATA_DIRS,
+            testDir.toString());
+
+    //1. Verify default dir cache policy. Defaulting to DIR_NOCACHE
+    conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY, "INVALID");
+    Assert.assertEquals("Unexpected CachePolicy!", "INVALID",
+            conf.get(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY));
+
+    omMetadataManager = new OmMetadataManagerImpl(conf);
+    CacheStore dirCacheStore =
+            omMetadataManager.getOMCacheManager().getDirCache();
+    Assert.assertEquals("CachePolicy Mismatches!", CachePolicy.DIR_NOCACHE,
+            dirCacheStore.getCachePolicy());
+  }
+
+  @Test
+  public void testInvalidCacheDirectoryPolicyConfigurationName() throws IOException {

Review comment:
       Done!






[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #1437: HDDS-4222: [OzoneFS optimization] Provide a mechanism for efficient path lookup

Posted by GitBox <gi...@apache.org>.
rakeshadr commented on a change in pull request #1437:
URL: https://github.com/apache/hadoop-ozone/pull/1437#discussion_r492481705



##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/cache/OMMetadataCacheFactory.java
##########
@@ -0,0 +1,120 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.hadoop.ozone.om.cache;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Provides different caching policies for cache entities. This can be
+ * extended by adding more entities and their caching policies into it.
+ * <p>
+ * For example, for the directory cache the user has to configure the
+ * following property with a cache type. OM will create the specific cache
+ * store for the directory based on the configured cache policy:
+ * ozone.om.metadata.cache.directory = DIR_LRU
+ * <p>
+ * One can add a new directory policy to OM by defining a new cache type,
+ * say "DIR_LFU", and implementing a new CacheStore such as
+ * DirectoryLFUCacheStore.
+ * <p>
+ * One can add a new entity to OM, say a file, to be cached by configuring a
+ * property like the one below and implementing a specific provider to
+ * instantiate the fileCacheStore.
+ * ozone.om.metadata.cache.file = FILE_LRU
+ */
+public final class OMMetadataCacheFactory {
+  private static final Logger LOG =
+          LoggerFactory.getLogger(OMMetadataCacheFactory.class);
+
+  /**
+   * Private constructor, class is not meant to be initialized.
+   */
+  private OMMetadataCacheFactory() {
+  }
+
+  public static CacheStore getCache(String configuredCachePolicy,
+                                    String defaultValue,
+                                    OzoneConfiguration config) {
+    String cachePolicy = config.get(configuredCachePolicy, defaultValue);
+    LOG.info("Configured {} with {}", configuredCachePolicy, cachePolicy);
+    CacheEntity entity = getCacheEntity(configuredCachePolicy);
+
+    switch (entity) {
+    case DIR:
+      OMMetadataCacheProvider provider = new OMDirectoryCacheProvider(config,
+              cachePolicy);
+      if (LOG.isDebugEnabled()) {
+        LOG.debug("CacheStore initialized with {}:" + provider.getEntity());
+      }
+      return provider.getCache();
+    default:
+      return null;

Review comment:
       OK, will add UTs to cover this case.






[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #1437: HDDS-4222: [OzoneFS optimization] Provide a mechanism for efficient path lookup

Posted by GitBox <gi...@apache.org>.
rakeshadr commented on a change in pull request #1437:
URL: https://github.com/apache/hadoop-ozone/pull/1437#discussion_r492480005



##########
File path: hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/OMConfigKeys.java
##########
@@ -246,4 +246,15 @@ private OMConfigKeys() {
       "ozone.om.enable.filesystem.paths";
   public static final boolean OZONE_OM_ENABLE_FILESYSTEM_PATHS_DEFAULT =
       false;
+
+  public static final String OZONE_OM_CACHE_DIR_POLICY =
+          "ozone.om.metadata.cache.directory";
+  public static final String OZONE_OM_CACHE_DIR_DEFAULT = "DIR_LRU";

Review comment:
       OK, will update.

##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/cache/CacheEntity.java
##########
@@ -0,0 +1,48 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.hadoop.ozone.om.cache;
+
+/**
+ * Entities that are to be cached.
+ */
+public enum CacheEntity {
+
+  DIR("directory");
+  // This is extendable and one can add more entities for
+  // caching based on demand. For example, define new entities like FILE
+  // ("file"), LISTING("listing") cache etc.
+
+  CacheEntity(String entity) {
+    this.entityName = entity;
+  }
+
+  private String entityName;
+
+  public String getName() {
+    return entityName;
+  }
+
+  public static CacheEntity getEntity(String entityStr) {
+    for (CacheEntity entity : CacheEntity.values()) {
+      if (entityStr.equalsIgnoreCase(entity.getName())) {

Review comment:
       OK, will update.






[GitHub] [hadoop-ozone] rakeshadr edited a comment on pull request #1437: HDDS-4222: [OzoneFS optimization] Provide a mechanism for efficient path lookup

Posted by GitBox <gi...@apache.org>.
rakeshadr edited a comment on pull request #1437:
URL: https://github.com/apache/hadoop-ozone/pull/1437#issuecomment-696600100


   Thanks a lot @linyiqun for the useful review comments!
   Pushed another commit addressing the comments.





[GitHub] [hadoop-ozone] linyiqun commented on a change in pull request #1437: HDDS-4222: [OzoneFS optimization] Provide a mechanism for efficient path lookup

Posted by GitBox <gi...@apache.org>.
linyiqun commented on a change in pull request #1437:
URL: https://github.com/apache/hadoop-ozone/pull/1437#discussion_r492061634



##########
File path: hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/OMConfigKeys.java
##########
@@ -246,4 +246,15 @@ private OMConfigKeys() {
       "ozone.om.enable.filesystem.paths";
   public static final boolean OZONE_OM_ENABLE_FILESYSTEM_PATHS_DEFAULT =
       false;
+
+  public static final String OZONE_OM_CACHE_DIR_POLICY =
+          "ozone.om.metadata.cache.directory";
+  public static final String OZONE_OM_CACHE_DIR_DEFAULT = "DIR_LRU";

Review comment:
       I'd prefer to rename these two (see the sketch below):
   
   * ozone.om.metadata.cache.directory -> ozone.om.metadata.cache.directory.policy
   * OZONE_OM_CACHE_DIR_DEFAULT  -> OZONE_OM_CACHE_DIR_POLICY_DEFAULT
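
   For illustration, the renamed keys in OMConfigKeys could then read (sketch only):

       public static final String OZONE_OM_CACHE_DIR_POLICY =
               "ozone.om.metadata.cache.directory.policy";
       public static final String OZONE_OM_CACHE_DIR_POLICY_DEFAULT = "DIR_LRU";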

##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/cache/CacheEntity.java
##########
@@ -0,0 +1,48 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.hadoop.ozone.om.cache;
+
+/**
+ * Entities that are to be cached.
+ */
+public enum CacheEntity {
+
+  DIR("directory");
+  // This is extendable and one can add more entities for
+  // caching based on demand. For example, define new entities like FILE
+  // ("file"), LISTING("listing") cache etc.
+
+  CacheEntity(String entity) {
+    this.entityName = entity;
+  }
+
+  private String entityName;
+
+  public String getName() {
+    return entityName;
+  }
+
+  public static CacheEntity getEntity(String entityStr) {
+    for (CacheEntity entity : CacheEntity.values()) {
+      if (entityStr.equalsIgnoreCase(entity.getName())) {

Review comment:
       Can you change the order of the comparison (to **entity.getName().equalsIgnoreCase(entityStr)**)? If entityStr is passed as null, the current order will lead to an NPE.
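
   For illustration, a null-safe version of the lookup could look like this (sketch; String#equalsIgnoreCase simply returns false for a null argument):

       public static CacheEntity getEntity(String entityStr) {
         for (CacheEntity entity : CacheEntity.values()) {
           // Comparing against the enum's own name first means a null
           // entityStr simply fails the match instead of throwing an NPE.
           if (entity.getName().equalsIgnoreCase(entityStr)) {
             return entity;
           }
         }
         return null; // unknown or null entity name
       }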
   

##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/cache/CacheStore.java
##########
@@ -0,0 +1,72 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.hadoop.ozone.om.cache;
+
+/**
+ * Cache used for traversing path components from parent node to the leaf node.
+ * <p>
+ * Basically, it is a write-through cache, which ensures that there are no
+ * stale entries in the cache.
+ * <p>
+ * TODO: can define specific 'CacheLoader' to handle the OM restart and
+ *       define cache loading strategies. It can be NullLoader, LazyLoader,
+ *       LevelLoader etc.
+ *
+ * @param <CACHEKEY>
+ * @param <CACHEVALUE>
+ */
+public interface CacheStore<CACHEKEY extends OMCacheKey,

Review comment:
       Besides the basic put/get/remove interface, we should also define other interfaces for the cache store in the future, e.g. cache hit/miss counters.
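
   For illustration, such an extension could add read-only counters to the CacheStore interface; a Guava-backed implementation could serve them from CacheBuilder#recordStats() and Cache#stats(). The method names below are hypothetical:

       // Possible future additions to CacheStore for cache metrics.
       long getHitCount();   // lookups that found a cached entry
       long getMissCount();  // lookups that found nothing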

##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/cache/OMMetadataCacheFactory.java
##########
@@ -0,0 +1,120 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.hadoop.ozone.om.cache;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Provides different caching policies for cache entities. This can be
+ * extended by adding more entities and their caching policies into it.
+ * <p>
+ * For example, for the directory cache the user has to configure the
+ * following property with a cache type. OM will create the specific cache
+ * store for the directory based on the configured cache policy:
+ * ozone.om.metadata.cache.directory = DIR_LRU
+ * <p>
+ * One can add a new directory policy to OM by defining a new cache type,
+ * say "DIR_LFU", and implementing a new CacheStore such as
+ * DirectoryLFUCacheStore.
+ * <p>
+ * One can add a new entity to OM, say a file, to be cached by configuring a
+ * property like the one below and implementing a specific provider to
+ * instantiate the fileCacheStore.
+ * ozone.om.metadata.cache.file = FILE_LRU
+ */
+public final class OMMetadataCacheFactory {
+  private static final Logger LOG =
+          LoggerFactory.getLogger(OMMetadataCacheFactory.class);
+
+  /**
+   * Private constructor, class is not meant to be initialized.
+   */
+  private OMMetadataCacheFactory() {
+  }
+
+  public static CacheStore getCache(String configuredCachePolicy,
+                                    String defaultValue,
+                                    OzoneConfiguration config) {
+    String cachePolicy = config.get(configuredCachePolicy, defaultValue);
+    LOG.info("Configured {} with {}", configuredCachePolicy, cachePolicy);
+    CacheEntity entity = getCacheEntity(configuredCachePolicy);
+
+    switch (entity) {
+    case DIR:
+      OMMetadataCacheProvider provider = new OMDirectoryCacheProvider(config,
+              cachePolicy);
+      if (LOG.isDebugEnabled()) {
+        LOG.debug("CacheStore initialized with {}:" + provider.getEntity());
+      }
+      return provider.getCache();
+    default:
+      return null;

Review comment:
       How about throwing an error when the cache store cannot be initialized with the given cache policy? It would be better not to return a null cache store: if null is returned, every operation on that cache store will fail; otherwise, we have to add a null check for each operation wherever the cache store is used.
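
   For illustration, a minimal sketch of that alternative in the factory's default branch (the exception type here is only an example):

       switch (entity) {
       case DIR:
         // ... existing DIR handling ...
         return provider.getCache();
       default:
         // Fail fast instead of handing callers a null cache store.
         throw new IllegalArgumentException(
                 "No cache store registered for cache entity: " + entity);
       }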

##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/cache/DirectoryLRUCacheStore.java
##########
@@ -0,0 +1,87 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.hadoop.ozone.om.cache;
+
+import com.google.common.cache.Cache;
+import com.google.common.cache.CacheBuilder;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Directory LRUCache: cache directories based on LRU (Least Recently Used)
+ * cache eviction strategy, wherein if the cache size has reached the maximum
+ * allocated capacity, the least recently used objects in the cache will be
+ * evicted.
+ * <p>
+ * TODO: Add cache metrics - occupancy, hit, miss, evictions etc
+ */
+public class DirectoryLRUCacheStore implements CacheStore {
+
+  private static final Logger LOG =
+          LoggerFactory.getLogger(DirectoryLRUCacheStore.class);
+
+  // Initialises Guava based LRU cache.
+  private Cache<OMCacheKey, OMCacheValue> mCache;
+
+  /**
+   * @param configuration ozone config
+   */
+  public DirectoryLRUCacheStore(OzoneConfiguration configuration) {
+    LOG.info("Initializing DirectoryLRUCacheStore..");
+    // defaults to 100,000
+    int initSize = configuration.getInt(
+            OMConfigKeys.OZONE_OM_CACHE_DIR_INIT_CAPACITY,
+            OMConfigKeys.OZONE_OM_CACHE_DIR_INIT_CAPACITY_DEFAULT);
+    // defaults to 5,000,000
+    long maxSize = configuration.getLong(
+            OMConfigKeys.OZONE_OM_CACHE_DIR_MAX_CAPACITY,
+            OMConfigKeys.OZONE_OM_CACHE_DIR_MAX_CAPACITY_DEFAULT);
+    LOG.info("Configured {} with {}",
+            OMConfigKeys.OZONE_OM_CACHE_DIR_MAX_CAPACITY, maxSize);
+    mCache = CacheBuilder.newBuilder()
+            .initialCapacity(initSize)
+            .maximumSize(maxSize)
+            .build();
+  }
+
+  @Override
+  public void put(OMCacheKey key, OMCacheValue value) {
+    mCache.put(key, value);
+  }
+
+  @Override
+  public OMCacheValue get(OMCacheKey key) {
+    return mCache.getIfPresent(key);
+  }
+
+  @Override
+  public void remove(OMCacheKey key) {

Review comment:
       Same comment as above.

##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/cache/DirectoryLRUCacheStore.java
##########
@@ -0,0 +1,87 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.hadoop.ozone.om.cache;
+
+import com.google.common.cache.Cache;
+import com.google.common.cache.CacheBuilder;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Directory LRUCache: cache directories based on LRU (Least Recently Used)
+ * cache eviction strategy, wherein if the cache size has reached the maximum
+ * allocated capacity, the least recently used objects in the cache will be
+ * evicted.
+ * <p>
+ * TODO: Add cache metrics - occupancy, hit, miss, evictions etc
+ */
+public class DirectoryLRUCacheStore implements CacheStore {
+
+  private static final Logger LOG =
+          LoggerFactory.getLogger(DirectoryLRUCacheStore.class);
+
+  // Initialises Guava based LRU cache.
+  private Cache<OMCacheKey, OMCacheValue> mCache;
+
+  /**
+   * @param configuration ozone config
+   */
+  public DirectoryLRUCacheStore(OzoneConfiguration configuration) {
+    LOG.info("Initializing DirectoryLRUCacheStore..");
+    // defaults to 100,000
+    int initSize = configuration.getInt(
+            OMConfigKeys.OZONE_OM_CACHE_DIR_INIT_CAPACITY,
+            OMConfigKeys.OZONE_OM_CACHE_DIR_INIT_CAPACITY_DEFAULT);
+    // defaults to 5,000,000
+    long maxSize = configuration.getLong(
+            OMConfigKeys.OZONE_OM_CACHE_DIR_MAX_CAPACITY,
+            OMConfigKeys.OZONE_OM_CACHE_DIR_MAX_CAPACITY_DEFAULT);
+    LOG.info("Configured {} with {}",
+            OMConfigKeys.OZONE_OM_CACHE_DIR_MAX_CAPACITY, maxSize);
+    mCache = CacheBuilder.newBuilder()
+            .initialCapacity(initSize)
+            .maximumSize(maxSize)
+            .build();
+  }
+
+  @Override
+  public void put(OMCacheKey key, OMCacheValue value) {
+    mCache.put(key, value);

Review comment:
       Can we add a null check for the key before putting it into the cache?






[GitHub] [hadoop-ozone] linyiqun commented on a change in pull request #1437: HDDS-4222: [OzoneFS optimization] Provide a mechanism for efficient path lookup

Posted by GitBox <gi...@apache.org>.
linyiqun commented on a change in pull request #1437:
URL: https://github.com/apache/hadoop-ozone/pull/1437#discussion_r492562181



##########
File path: hadoop-hdds/common/src/main/resources/ozone-default.xml
##########
@@ -2521,4 +2521,32 @@
       filesystem semantics.
     </description>
   </property>
+
+  <property>
+    <name>ozone.om.metadata.cache.directory</name>

Review comment:
       This name also needs to be updated; the current unit test is broken by it:
   > TestOzoneConfigurationFields>TestConfigurationFieldsBase.testCompareConfigurationClassAgainstXml:493 class org.apache.hadoop.ozone.OzoneConfigKeys class org.apache.hadoop.hdds.scm.ScmConfigKeys class org.apache.hadoop.ozone.om.OMConfigKeys class org.apache.hadoop.hdds.HddsConfigKeys class org.apache.hadoop.ozone.recon.ReconServerConfigKeys class org.apache.hadoop.ozone.s3.S3GatewayConfigKeys class org.apache.hadoop.hdds.scm.server.SCMHTTPServerConfig has 1 variables missing in ozone-default.xml Entries:   ozone.om.metadata.cache.directory.policy expected:<0> but was:<1>
   [ERROR]   TestOzoneConfigurationFields>TestConfigurationFieldsBase.testCompareXmlAgainstConfigurationClass

##########
File path: hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/cache/TestOMMetadataCache.java
##########
@@ -0,0 +1,276 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.om.cache;
+
+import org.apache.hadoop.hdds.HddsConfigKeys;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OmMetadataManagerImpl;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.Timeout;
+
+import java.io.File;
+import java.io.IOException;
+
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+
+/**
+ * Testing OMMetadata cache policy class.
+ */
+public class TestOMMetadataCache {
+
+  private OzoneConfiguration conf;
+  private OMMetadataManager omMetadataManager;
+
+  /**
+   * Set a timeout for each test.
+   */
+  @Rule
+  public Timeout timeout = new Timeout(100000);
+
+  @Before
+  public void setup() {
+    //initialize config
+    conf = new OzoneConfiguration();
+  }
+
+  @Test
+  public void testVerifyDirCachePolicies() {
+    //1. Verify disabling cache
+    conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+            CachePolicy.DIR_NOCACHE.getPolicy());
+    CacheStore dirCacheStore = OMMetadataCacheFactory.getCache(
+            OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+            OMConfigKeys.OZONE_OM_CACHE_DIR_DEFAULT, conf);
+    Assert.assertEquals("Cache Policy mismatches!", CachePolicy.DIR_NOCACHE,
+            dirCacheStore.getCachePolicy());
+
+    //2. Invalid cache policy
+    conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY, "InvalidCachePolicy");
+    dirCacheStore = OMMetadataCacheFactory.getCache(
+            OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+            OMConfigKeys.OZONE_OM_CACHE_DIR_DEFAULT, conf);
+    Assert.assertEquals("Expected NullCache for an invalid CachePolicy",
+            CachePolicy.DIR_NOCACHE, dirCacheStore.getCachePolicy());
+
+    //3. Directory LRU cache policy
+    conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+            OMConfigKeys.OZONE_OM_CACHE_DIR_DEFAULT);
+    dirCacheStore = OMMetadataCacheFactory.getCache(
+            OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+            OMConfigKeys.OZONE_OM_CACHE_DIR_DEFAULT, conf);
+    Assert.assertEquals("Cache Type mismatches!", CachePolicy.DIR_LRU,
+            dirCacheStore.getCachePolicy());
+  }
+
+  @Test
+  public void testLRUCacheDirectoryPolicy() throws IOException {
+    conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+            CachePolicy.DIR_LRU.getPolicy());
+    conf.setInt(OMConfigKeys.OZONE_OM_CACHE_DIR_INIT_CAPACITY, 1);
+    conf.setLong(OMConfigKeys.OZONE_OM_CACHE_DIR_MAX_CAPACITY, 2);
+
+    File testDir = GenericTestUtils.getRandomizedTestDir();
+    conf.set(HddsConfigKeys.OZONE_METADATA_DIRS,
+            testDir.toString());
+
+    omMetadataManager = new OmMetadataManagerImpl(conf);
+    CacheStore dirCacheStore =
+            omMetadataManager.getOMCacheManager().getDirCache();
+    Assert.assertEquals("CachePolicy Mismatches!", CachePolicy.DIR_LRU,
+            dirCacheStore.getCachePolicy());
+
+    OMCacheKey<String> dirA = new OMCacheKey<>("512/a");
+    OMCacheValue<Long> dirAObjID = new OMCacheValue<>(1025L);
+    OMCacheKey<String> dirB = new OMCacheKey<>(dirAObjID + "/b");
+    OMCacheValue<Long> dirBObjID = new OMCacheValue<>(1026L);
+    dirCacheStore.put(dirA, dirAObjID);
+    dirCacheStore.put(dirB, dirBObjID);
+    // Step1. Cached Entries => {a, b}
+    Assert.assertEquals("Unexpected Cache Value",
+            dirAObjID.getCacheValue(), dirCacheStore.get(dirA).getCacheValue());
+    Assert.assertEquals("Unexpected Cache Value",
+            dirBObjID.getCacheValue(), dirCacheStore.get(dirB).getCacheValue());
+
+    // Step2. Verify eviction
+    // Cached Entries {frontEntry, rearEntry} => {c, b}
+    OMCacheKey<String> dirC = new OMCacheKey<>(dirBObjID + "/c");
+    OMCacheValue<Long> dirCObjID = new OMCacheValue<>(1027L);
+    dirCacheStore.put(dirC, dirCObjID);
+    Assert.assertEquals("Unexpected Cache Value",
+            dirCObjID.getCacheValue(), dirCacheStore.get(dirC).getCacheValue());
+    Assert.assertNull("Unexpected Cache Value", dirCacheStore.get(dirA));
+
+    // Step3. Adding 'a' again. Now 'b' will be evicted.
+    dirCacheStore.put(dirA, dirAObjID);
+    // Cached Entries {frontEntry, rearEntry} => {a, c}
+    Assert.assertEquals("Unexpected Cache Value",
+            dirAObjID.getCacheValue(), dirCacheStore.get(dirA).getCacheValue());
+    Assert.assertNull("Unexpected Cache Value", dirCacheStore.get(dirB));
+
+    // Step4. Cached Entries {frontEntry, rearEntry} => {c, a}
+    // Access 'c' so that the recently used entry will be 'c'. Now the entry
+    // eligible for eviction will be 'a'.
+    Assert.assertEquals("Unexpected Cache Value",
+            dirCObjID.getCacheValue(), dirCacheStore.get(dirC).getCacheValue());
+
+    // Step4. Recently accessed entry will be retained.
+    dirCacheStore.put(dirB, dirBObjID);
+    // Cached Entries {frontEntry, rearEntry} => {b, c}
+    Assert.assertEquals("Unexpected Cache Value",
+            dirBObjID.getCacheValue(), dirCacheStore.get(dirB).getCacheValue());
+    Assert.assertEquals("Unexpected Cache Value",
+            dirCObjID.getCacheValue(), dirCacheStore.get(dirC).getCacheValue());
+    Assert.assertNull("Unexpected Cache Value", dirCacheStore.get(dirA));
+
+    // Step5. Add duplicate entries shouldn't make any eviction.
+    dirCacheStore.put(dirB, dirBObjID);
+    Assert.assertEquals("Unexpected Cache Value",
+            dirBObjID.getCacheValue(), dirCacheStore.get(dirB).getCacheValue());
+    Assert.assertEquals("Unexpected Cache Value",
+            dirCObjID.getCacheValue(), dirCacheStore.get(dirC).getCacheValue());
+    Assert.assertEquals("Incorrect cache size", 2, dirCacheStore.size());
+
+    // Step6. Verify entry removal. Remove recently accessed entry.
+    dirCacheStore.remove(dirC);
+    // duplicate removal shouldn't cause any issues
+    dirCacheStore.remove(dirC);
+    Assert.assertEquals("Unexpected Cache Value",
+            dirBObjID.getCacheValue(), dirCacheStore.get(dirB).getCacheValue());
+    Assert.assertNull("Unexpected Cache Value", dirCacheStore.get(dirC));
+    Assert.assertEquals("Incorrect cache size", 1, dirCacheStore.size());
+
+    // Step7. Make it empty
+    dirCacheStore.remove(dirB);
+    Assert.assertEquals("Incorrect cache size", 0, dirCacheStore.size());
+  }
+
+  @Test
+  public void testNullKeysAndValuesToLRUCacheDirectoryPolicy() throws IOException {

Review comment:
       Checkstyle issue: the line exceeds the 80-character limit:
   >[ERROR] src/test/java/org/apache/hadoop/ozone/om/cache/TestOMMetadataCache.java:[170] (sizes) LineLength: Line is longer than 80 characters (found 83).

##########
File path: hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/cache/TestOMMetadataCache.java
##########
@@ -0,0 +1,276 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.om.cache;
+
+import org.apache.hadoop.hdds.HddsConfigKeys;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OmMetadataManagerImpl;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.Timeout;
+
+import java.io.File;
+import java.io.IOException;
+
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+
+/**
+ * Testing OMMetadata cache policy class.
+ */
+public class TestOMMetadataCache {
+
+  private OzoneConfiguration conf;
+  private OMMetadataManager omMetadataManager;
+
+  /**
+   * Set a timeout for each test.
+   */
+  @Rule
+  public Timeout timeout = new Timeout(100000);
+
+  @Before
+  public void setup() {
+    //initialize config
+    conf = new OzoneConfiguration();
+  }
+
+  @Test
+  public void testVerifyDirCachePolicies() {
+    //1. Verify disabling cache
+    conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+            CachePolicy.DIR_NOCACHE.getPolicy());
+    CacheStore dirCacheStore = OMMetadataCacheFactory.getCache(
+            OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+            OMConfigKeys.OZONE_OM_CACHE_DIR_DEFAULT, conf);
+    Assert.assertEquals("Cache Policy mismatches!", CachePolicy.DIR_NOCACHE,
+            dirCacheStore.getCachePolicy());
+
+    //2. Invalid cache policy
+    conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY, "InvalidCachePolicy");
+    dirCacheStore = OMMetadataCacheFactory.getCache(
+            OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+            OMConfigKeys.OZONE_OM_CACHE_DIR_DEFAULT, conf);
+    Assert.assertEquals("Expected NullCache for an invalid CachePolicy",
+            CachePolicy.DIR_NOCACHE, dirCacheStore.getCachePolicy());
+
+    //3. Directory LRU cache policy
+    conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+            OMConfigKeys.OZONE_OM_CACHE_DIR_DEFAULT);
+    dirCacheStore = OMMetadataCacheFactory.getCache(
+            OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+            OMConfigKeys.OZONE_OM_CACHE_DIR_DEFAULT, conf);
+    Assert.assertEquals("Cache Type mismatches!", CachePolicy.DIR_LRU,
+            dirCacheStore.getCachePolicy());
+  }
+
+  @Test
+  public void testLRUCacheDirectoryPolicy() throws IOException {
+    conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+            CachePolicy.DIR_LRU.getPolicy());
+    conf.setInt(OMConfigKeys.OZONE_OM_CACHE_DIR_INIT_CAPACITY, 1);
+    conf.setLong(OMConfigKeys.OZONE_OM_CACHE_DIR_MAX_CAPACITY, 2);
+
+    File testDir = GenericTestUtils.getRandomizedTestDir();
+    conf.set(HddsConfigKeys.OZONE_METADATA_DIRS,
+            testDir.toString());
+
+    omMetadataManager = new OmMetadataManagerImpl(conf);
+    CacheStore dirCacheStore =
+            omMetadataManager.getOMCacheManager().getDirCache();
+    Assert.assertEquals("CachePolicy Mismatches!", CachePolicy.DIR_LRU,
+            dirCacheStore.getCachePolicy());
+
+    OMCacheKey<String> dirA = new OMCacheKey<>("512/a");
+    OMCacheValue<Long> dirAObjID = new OMCacheValue<>(1025L);
+    OMCacheKey<String> dirB = new OMCacheKey<>(dirAObjID + "/b");
+    OMCacheValue<Long> dirBObjID = new OMCacheValue<>(1026L);
+    dirCacheStore.put(dirA, dirAObjID);
+    dirCacheStore.put(dirB, dirBObjID);
+    // Step1. Cached Entries => {a, b}
+    Assert.assertEquals("Unexpected Cache Value",
+            dirAObjID.getCacheValue(), dirCacheStore.get(dirA).getCacheValue());
+    Assert.assertEquals("Unexpected Cache Value",
+            dirBObjID.getCacheValue(), dirCacheStore.get(dirB).getCacheValue());
+
+    // Step2. Verify eviction
+    // Cached Entries {frontEntry, rearEntry} => {c, b}
+    OMCacheKey<String> dirC = new OMCacheKey<>(dirBObjID + "/c");
+    OMCacheValue<Long> dirCObjID = new OMCacheValue<>(1027L);
+    dirCacheStore.put(dirC, dirCObjID);
+    Assert.assertEquals("Unexpected Cache Value",
+            dirCObjID.getCacheValue(), dirCacheStore.get(dirC).getCacheValue());
+    Assert.assertNull("Unexpected Cache Value", dirCacheStore.get(dirA));
+
+    // Step3. Adding 'a' again. Now 'b' will be evicted.
+    dirCacheStore.put(dirA, dirAObjID);
+    // Cached Entries {frontEntry, rearEntry} => {a, c}
+    Assert.assertEquals("Unexpected Cache Value",
+            dirAObjID.getCacheValue(), dirCacheStore.get(dirA).getCacheValue());
+    Assert.assertNull("Unexpected Cache Value", dirCacheStore.get(dirB));
+
+    // Step4. Cached Entries {frontEntry, rearEntry} => {c, a}
+    // Access 'c' so that the recently used entry will be 'c'. Now the entry
+    // eligible for eviction will be 'a'.
+    Assert.assertEquals("Unexpected Cache Value",
+            dirCObjID.getCacheValue(), dirCacheStore.get(dirC).getCacheValue());
+
+    // Step4. Recently accessed entry will be retained.
+    dirCacheStore.put(dirB, dirBObjID);
+    // Cached Entries {frontEntry, rearEntry} => {b, c}
+    Assert.assertEquals("Unexpected Cache Value",
+            dirBObjID.getCacheValue(), dirCacheStore.get(dirB).getCacheValue());
+    Assert.assertEquals("Unexpected Cache Value",
+            dirCObjID.getCacheValue(), dirCacheStore.get(dirC).getCacheValue());
+    Assert.assertNull("Unexpected Cache Value", dirCacheStore.get(dirA));
+
+    // Step5. Add duplicate entries shouldn't make any eviction.
+    dirCacheStore.put(dirB, dirBObjID);
+    Assert.assertEquals("Unexpected Cache Value",
+            dirBObjID.getCacheValue(), dirCacheStore.get(dirB).getCacheValue());
+    Assert.assertEquals("Unexpected Cache Value",
+            dirCObjID.getCacheValue(), dirCacheStore.get(dirC).getCacheValue());
+    Assert.assertEquals("Incorrect cache size", 2, dirCacheStore.size());
+
+    // Step6. Verify entry removal. Remove recently accessed entry.
+    dirCacheStore.remove(dirC);
+    // duplicate removal shouldn't cause any issues
+    dirCacheStore.remove(dirC);
+    Assert.assertEquals("Unexpected Cache Value",
+            dirBObjID.getCacheValue(), dirCacheStore.get(dirB).getCacheValue());
+    Assert.assertNull("Unexpected Cache Value", dirCacheStore.get(dirC));
+    Assert.assertEquals("Incorrect cache size", 1, dirCacheStore.size());
+
+    // Step7. Make it empty
+    dirCacheStore.remove(dirB);
+    Assert.assertEquals("Incorrect cache size", 0, dirCacheStore.size());
+  }
+
+  @Test
+  public void testNullKeysAndValuesToLRUCacheDirectoryPolicy() throws IOException {
+    conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+            CachePolicy.DIR_LRU.getPolicy());
+    conf.setInt(OMConfigKeys.OZONE_OM_CACHE_DIR_INIT_CAPACITY, 1);
+    conf.setLong(OMConfigKeys.OZONE_OM_CACHE_DIR_MAX_CAPACITY, 2);
+
+    File testDir = GenericTestUtils.getRandomizedTestDir();
+    conf.set(HddsConfigKeys.OZONE_METADATA_DIRS,
+            testDir.toString());
+
+    omMetadataManager = new OmMetadataManagerImpl(conf);
+    CacheStore dirCacheStore =
+            omMetadataManager.getOMCacheManager().getDirCache();
+    Assert.assertEquals("CachePolicy Mismatches!", CachePolicy.DIR_LRU,
+            dirCacheStore.getCachePolicy());
+
+    OMCacheKey<String> dirA = new OMCacheKey<>("512/a");
+    OMCacheValue<Long> dirAObjID = new OMCacheValue<>(1026L);
+    // shouldn't throw NPE, just skip null key
+    dirCacheStore.put(null, dirAObjID);
+    dirCacheStore.put(dirA, null);
+
+    // shouldn't throw NPE, just skip null key
+    Assert.assertNull("Unexpected value!", dirCacheStore.get(null));
+
+    // shouldn't throw NPE, just skip null key
+    dirCacheStore.remove(null);
+  }
+
+  @Test
+  public void testNoCacheDirectoryPolicy() throws IOException {
+    conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+            CachePolicy.DIR_NOCACHE.getPolicy());
+
+    File testDir = GenericTestUtils.getRandomizedTestDir();
+    conf.set(HddsConfigKeys.OZONE_METADATA_DIRS,
+            testDir.toString());
+
+    omMetadataManager = new OmMetadataManagerImpl(conf);
+    CacheStore dirCacheStore =
+            omMetadataManager.getOMCacheManager().getDirCache();
+    Assert.assertEquals("CachePolicy Mismatches!", CachePolicy.DIR_NOCACHE,
+            dirCacheStore.getCachePolicy());
+
+    // Verify caching
+    OMCacheKey<String> dirA = new OMCacheKey<>("512/a");
+    OMCacheValue<Long> dirAObjID = new OMCacheValue<>(1025L);
+    dirCacheStore.put(dirA, dirAObjID);
+    Assert.assertNull("Unexpected Cache Value", dirCacheStore.get(dirA));
+  }
+
+  @Test
+  public void testDefaultCacheDirectoryPolicy() throws IOException {
+    File testDir = GenericTestUtils.getRandomizedTestDir();
+    conf.set(HddsConfigKeys.OZONE_METADATA_DIRS,
+            testDir.toString());
+
+    //1. Verify default dir cache policy. Defaulting to DIR_LRU
+    conf.unset(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY);
+    Assert.assertNull("Unexpected CachePolicy, it should be null!",
+            conf.get(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY));
+
+    omMetadataManager = new OmMetadataManagerImpl(conf);
+    CacheStore dirCacheStore =
+            omMetadataManager.getOMCacheManager().getDirCache();
+    Assert.assertEquals("CachePolicy Mismatches!", CachePolicy.DIR_LRU,
+            dirCacheStore.getCachePolicy());
+  }
+
+  @Test
+  public void testInvalidCacheDirectoryPolicyValue() throws IOException {
+    File testDir = GenericTestUtils.getRandomizedTestDir();
+    conf.set(HddsConfigKeys.OZONE_METADATA_DIRS,
+            testDir.toString());
+
+    //1. Verify default dir cache policy. Defaulting to DIR_NOCACHE
+    conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY, "INVALID");
+    Assert.assertEquals("Unexpected CachePolicy!", "INVALID",
+            conf.get(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY));
+
+    omMetadataManager = new OmMetadataManagerImpl(conf);
+    CacheStore dirCacheStore =
+            omMetadataManager.getOMCacheManager().getDirCache();
+    Assert.assertEquals("CachePolicy Mismatches!", CachePolicy.DIR_NOCACHE,
+            dirCacheStore.getCachePolicy());
+  }
+
+  @Test
+  public void testInvalidCacheDirectoryPolicyConfigurationName() throws IOException {

Review comment:
       Checkstyle issue: the line exceeds the 80-character limit
   >[ERROR] src/test/java/org/apache/hadoop/ozone/om/cache/TestOMMetadataCache.java:[258] (sizes) LineLength: Line is longer than 80 characters (found 85).

##########
File path: hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/OMConfigKeys.java
##########
@@ -246,4 +246,15 @@ private OMConfigKeys() {
       "ozone.om.enable.filesystem.paths";
   public static final boolean OZONE_OM_ENABLE_FILESYSTEM_PATHS_DEFAULT =
       false;
+
+  public static final String OZONE_OM_CACHE_DIR_POLICY =
+          "ozone.om.metadata.cache.directory";
+  public static final String OZONE_OM_CACHE_DIR_DEFAULT = "DIR_LRU";

Review comment:
       Looks like OZONE_OM_CACHE_DIR_DEFAULT was not updated to OZONE_OM_CACHE_DIR_POLICY_DEFAULT.






[GitHub] [hadoop-ozone] linyiqun commented on a change in pull request #1437: HDDS-4222: [OzoneFS optimization] Provide a mechanism for efficient path lookup

Posted by GitBox <gi...@apache.org>.
linyiqun commented on a change in pull request #1437:
URL: https://github.com/apache/hadoop-ozone/pull/1437#discussion_r492562181



##########
File path: hadoop-hdds/common/src/main/resources/ozone-default.xml
##########
@@ -2521,4 +2521,32 @@
       filesystem semantics.
     </description>
   </property>
+
+  <property>
+    <name>ozone.om.metadata.cache.directory</name>

Review comment:
       This name also needs to be updated; the current unit test is broken by it:
   > TestOzoneConfigurationFields>TestConfigurationFieldsBase.testCompareConfigurationClassAgainstXml:493 class org.apache.hadoop.ozone.OzoneConfigKeys class org.apache.hadoop.hdds.scm.ScmConfigKeys class org.apache.hadoop.ozone.om.OMConfigKeys class org.apache.hadoop.hdds.HddsConfigKeys class org.apache.hadoop.ozone.recon.ReconServerConfigKeys class org.apache.hadoop.ozone.s3.S3GatewayConfigKeys class org.apache.hadoop.hdds.scm.server.SCMHTTPServerConfig has 1 variables missing in ozone-default.xml Entries:   ozone.om.metadata.cache.directory.policy expected:<0> but was:<1>
   [ERROR]   TestOzoneConfigurationFields>TestConfigurationFieldsBase.testCompareXmlAgainstConfigurationClass

##########
File path: hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/cache/TestOMMetadataCache.java
##########
@@ -0,0 +1,276 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.om.cache;
+
+import org.apache.hadoop.hdds.HddsConfigKeys;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OmMetadataManagerImpl;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.Timeout;
+
+import java.io.File;
+import java.io.IOException;
+
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+
+/**
+ * Testing OMMetadata cache policy class.
+ */
+public class TestOMMetadataCache {
+
+  private OzoneConfiguration conf;
+  private OMMetadataManager omMetadataManager;
+
+  /**
+   * Set a timeout for each test.
+   */
+  @Rule
+  public Timeout timeout = new Timeout(100000);
+
+  @Before
+  public void setup() {
+    //initialize config
+    conf = new OzoneConfiguration();
+  }
+
+  @Test
+  public void testVerifyDirCachePolicies() {
+    //1. Verify disabling cache
+    conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+            CachePolicy.DIR_NOCACHE.getPolicy());
+    CacheStore dirCacheStore = OMMetadataCacheFactory.getCache(
+            OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+            OMConfigKeys.OZONE_OM_CACHE_DIR_DEFAULT, conf);
+    Assert.assertEquals("Cache Policy mismatches!", CachePolicy.DIR_NOCACHE,
+            dirCacheStore.getCachePolicy());
+
+    //2. Invalid cache policy
+    conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY, "InvalidCachePolicy");
+    dirCacheStore = OMMetadataCacheFactory.getCache(
+            OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+            OMConfigKeys.OZONE_OM_CACHE_DIR_DEFAULT, conf);
+    Assert.assertEquals("Expected NullCache for an invalid CachePolicy",
+            CachePolicy.DIR_NOCACHE, dirCacheStore.getCachePolicy());
+
+    //3. Directory LRU cache policy
+    conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+            OMConfigKeys.OZONE_OM_CACHE_DIR_DEFAULT);
+    dirCacheStore = OMMetadataCacheFactory.getCache(
+            OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+            OMConfigKeys.OZONE_OM_CACHE_DIR_DEFAULT, conf);
+    Assert.assertEquals("Cache Type mismatches!", CachePolicy.DIR_LRU,
+            dirCacheStore.getCachePolicy());
+  }
+
+  @Test
+  public void testLRUCacheDirectoryPolicy() throws IOException {
+    conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+            CachePolicy.DIR_LRU.getPolicy());
+    conf.setInt(OMConfigKeys.OZONE_OM_CACHE_DIR_INIT_CAPACITY, 1);
+    conf.setLong(OMConfigKeys.OZONE_OM_CACHE_DIR_MAX_CAPACITY, 2);
+
+    File testDir = GenericTestUtils.getRandomizedTestDir();
+    conf.set(HddsConfigKeys.OZONE_METADATA_DIRS,
+            testDir.toString());
+
+    omMetadataManager = new OmMetadataManagerImpl(conf);
+    CacheStore dirCacheStore =
+            omMetadataManager.getOMCacheManager().getDirCache();
+    Assert.assertEquals("CachePolicy Mismatches!", CachePolicy.DIR_LRU,
+            dirCacheStore.getCachePolicy());
+
+    OMCacheKey<String> dirA = new OMCacheKey<>("512/a");
+    OMCacheValue<Long> dirAObjID = new OMCacheValue<>(1025L);
+    OMCacheKey<String> dirB = new OMCacheKey<>(dirAObjID + "/b");
+    OMCacheValue<Long> dirBObjID = new OMCacheValue<>(1026L);
+    dirCacheStore.put(dirA, dirAObjID);
+    dirCacheStore.put(dirB, dirBObjID);
+    // Step1. Cached Entries => {a, b}
+    Assert.assertEquals("Unexpected Cache Value",
+            dirAObjID.getCacheValue(), dirCacheStore.get(dirA).getCacheValue());
+    Assert.assertEquals("Unexpected Cache Value",
+            dirBObjID.getCacheValue(), dirCacheStore.get(dirB).getCacheValue());
+
+    // Step2. Verify eviction
+    // Cached Entries {frontEntry, rearEntry} => {c, b}
+    OMCacheKey<String> dirC = new OMCacheKey<>(dirBObjID + "/c");
+    OMCacheValue<Long> dirCObjID = new OMCacheValue<>(1027L);
+    dirCacheStore.put(dirC, dirCObjID);
+    Assert.assertEquals("Unexpected Cache Value",
+            dirCObjID.getCacheValue(), dirCacheStore.get(dirC).getCacheValue());
+    Assert.assertNull("Unexpected Cache Value", dirCacheStore.get(dirA));
+
+    // Step3. Adding 'a' again. Now 'b' will be evicted.
+    dirCacheStore.put(dirA, dirAObjID);
+    // Cached Entries {frontEntry, rearEntry} => {a, c}
+    Assert.assertEquals("Unexpected Cache Value",
+            dirAObjID.getCacheValue(), dirCacheStore.get(dirA).getCacheValue());
+    Assert.assertNull("Unexpected Cache Value", dirCacheStore.get(dirB));
+
+    // Step4. Cached Entries {frontEntry, rearEntry} => {c, a}
+    // Access 'c' so that the recently used entry will be 'c'. Now the entry
+    // eligible for eviction will be 'a'.
+    Assert.assertEquals("Unexpected Cache Value",
+            dirCObjID.getCacheValue(), dirCacheStore.get(dirC).getCacheValue());
+
+    // Step4. Recently accessed entry will be retained.
+    dirCacheStore.put(dirB, dirBObjID);
+    // Cached Entries {frontEntry, rearEntry} => {b, c}
+    Assert.assertEquals("Unexpected Cache Value",
+            dirBObjID.getCacheValue(), dirCacheStore.get(dirB).getCacheValue());
+    Assert.assertEquals("Unexpected Cache Value",
+            dirCObjID.getCacheValue(), dirCacheStore.get(dirC).getCacheValue());
+    Assert.assertNull("Unexpected Cache Value", dirCacheStore.get(dirA));
+
+    // Step5. Add duplicate entries shouldn't make any eviction.
+    dirCacheStore.put(dirB, dirBObjID);
+    Assert.assertEquals("Unexpected Cache Value",
+            dirBObjID.getCacheValue(), dirCacheStore.get(dirB).getCacheValue());
+    Assert.assertEquals("Unexpected Cache Value",
+            dirCObjID.getCacheValue(), dirCacheStore.get(dirC).getCacheValue());
+    Assert.assertEquals("Incorrect cache size", 2, dirCacheStore.size());
+
+    // Step6. Verify entry removal. Remove recently accessed entry.
+    dirCacheStore.remove(dirC);
+    // duplicate removal shouldn't cause any issues
+    dirCacheStore.remove(dirC);
+    Assert.assertEquals("Unexpected Cache Value",
+            dirBObjID.getCacheValue(), dirCacheStore.get(dirB).getCacheValue());
+    Assert.assertNull("Unexpected Cache Value", dirCacheStore.get(dirC));
+    Assert.assertEquals("Incorrect cache size", 1, dirCacheStore.size());
+
+    // Step7. Make it empty
+    dirCacheStore.remove(dirB);
+    Assert.assertEquals("Incorrect cache size", 0, dirCacheStore.size());
+  }
+
+  @Test
+  public void testNullKeysAndValuesToLRUCacheDirectoryPolicy() throws IOException {

Review comment:
       Checkstyle flags this line for exceeding the 80-character limit:
   >[ERROR] src/test/java/org/apache/hadoop/ozone/om/cache/TestOMMetadataCache.java:[170] (sizes) LineLength: Line is longer than 80 characters (found 83).

##########
File path: hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/cache/TestOMMetadataCache.java
##########
@@ -0,0 +1,276 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.om.cache;
+
+import org.apache.hadoop.hdds.HddsConfigKeys;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OmMetadataManagerImpl;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.Timeout;
+
+import java.io.File;
+import java.io.IOException;
+
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+
+/**
+ * Testing OMMetadata cache policy class.
+ */
+public class TestOMMetadataCache {
+
+  private OzoneConfiguration conf;
+  private OMMetadataManager omMetadataManager;
+
+  /**
+   * Set a timeout for each test.
+   */
+  @Rule
+  public Timeout timeout = new Timeout(100000);
+
+  @Before
+  public void setup() {
+    //initialize config
+    conf = new OzoneConfiguration();
+  }
+
+  @Test
+  public void testVerifyDirCachePolicies() {
+    //1. Verify disabling cache
+    conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+            CachePolicy.DIR_NOCACHE.getPolicy());
+    CacheStore dirCacheStore = OMMetadataCacheFactory.getCache(
+            OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+            OMConfigKeys.OZONE_OM_CACHE_DIR_DEFAULT, conf);
+    Assert.assertEquals("Cache Policy mismatches!", CachePolicy.DIR_NOCACHE,
+            dirCacheStore.getCachePolicy());
+
+    //2. Invalid cache policy
+    conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY, "InvalidCachePolicy");
+    dirCacheStore = OMMetadataCacheFactory.getCache(
+            OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+            OMConfigKeys.OZONE_OM_CACHE_DIR_DEFAULT, conf);
+    Assert.assertEquals("Expected NullCache for an invalid CachePolicy",
+            CachePolicy.DIR_NOCACHE, dirCacheStore.getCachePolicy());
+
+    //3. Directory LRU cache policy
+    conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+            OMConfigKeys.OZONE_OM_CACHE_DIR_DEFAULT);
+    dirCacheStore = OMMetadataCacheFactory.getCache(
+            OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+            OMConfigKeys.OZONE_OM_CACHE_DIR_DEFAULT, conf);
+    Assert.assertEquals("Cache Type mismatches!", CachePolicy.DIR_LRU,
+            dirCacheStore.getCachePolicy());
+  }
+
+  @Test
+  public void testLRUCacheDirectoryPolicy() throws IOException {
+    conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+            CachePolicy.DIR_LRU.getPolicy());
+    conf.setInt(OMConfigKeys.OZONE_OM_CACHE_DIR_INIT_CAPACITY, 1);
+    conf.setLong(OMConfigKeys.OZONE_OM_CACHE_DIR_MAX_CAPACITY, 2);
+
+    File testDir = GenericTestUtils.getRandomizedTestDir();
+    conf.set(HddsConfigKeys.OZONE_METADATA_DIRS,
+            testDir.toString());
+
+    omMetadataManager = new OmMetadataManagerImpl(conf);
+    CacheStore dirCacheStore =
+            omMetadataManager.getOMCacheManager().getDirCache();
+    Assert.assertEquals("CachePolicy Mismatches!", CachePolicy.DIR_LRU,
+            dirCacheStore.getCachePolicy());
+
+    OMCacheKey<String> dirA = new OMCacheKey<>("512/a");
+    OMCacheValue<Long> dirAObjID = new OMCacheValue<>(1025L);
+    OMCacheKey<String> dirB = new OMCacheKey<>(dirAObjID + "/b");
+    OMCacheValue<Long> dirBObjID = new OMCacheValue<>(1026L);
+    dirCacheStore.put(dirA, dirAObjID);
+    dirCacheStore.put(dirB, dirBObjID);
+    // Step1. Cached Entries => {a, b}
+    Assert.assertEquals("Unexpected Cache Value",
+            dirAObjID.getCacheValue(), dirCacheStore.get(dirA).getCacheValue());
+    Assert.assertEquals("Unexpected Cache Value",
+            dirBObjID.getCacheValue(), dirCacheStore.get(dirB).getCacheValue());
+
+    // Step2. Verify eviction
+    // Cached Entries {frontEntry, rearEntry} => {c, b}
+    OMCacheKey<String> dirC = new OMCacheKey<>(dirBObjID + "/c");
+    OMCacheValue<Long> dirCObjID = new OMCacheValue<>(1027L);
+    dirCacheStore.put(dirC, dirCObjID);
+    Assert.assertEquals("Unexpected Cache Value",
+            dirCObjID.getCacheValue(), dirCacheStore.get(dirC).getCacheValue());
+    Assert.assertNull("Unexpected Cache Value", dirCacheStore.get(dirA));
+
+    // Step3. Adding 'a' again. Now 'b' will be evicted.
+    dirCacheStore.put(dirA, dirAObjID);
+    // Cached Entries {frontEntry, rearEntry} => {a, c}
+    Assert.assertEquals("Unexpected Cache Value",
+            dirAObjID.getCacheValue(), dirCacheStore.get(dirA).getCacheValue());
+    Assert.assertNull("Unexpected Cache Value", dirCacheStore.get(dirB));
+
+    // Step4. Cached Entries {frontEntry, rearEntry} => {c, a}
+    // Access 'c' so that the recently used entry will be 'c'. Now the entry
+    // eligible for eviction will be 'a'.
+    Assert.assertEquals("Unexpected Cache Value",
+            dirCObjID.getCacheValue(), dirCacheStore.get(dirC).getCacheValue());
+
+    // Step4. Recently accessed entry will be retained.
+    dirCacheStore.put(dirB, dirBObjID);
+    // Cached Entries {frontEntry, rearEntry} => {b, c}
+    Assert.assertEquals("Unexpected Cache Value",
+            dirBObjID.getCacheValue(), dirCacheStore.get(dirB).getCacheValue());
+    Assert.assertEquals("Unexpected Cache Value",
+            dirCObjID.getCacheValue(), dirCacheStore.get(dirC).getCacheValue());
+    Assert.assertNull("Unexpected Cache Value", dirCacheStore.get(dirA));
+
+    // Step5. Add duplicate entries shouldn't make any eviction.
+    dirCacheStore.put(dirB, dirBObjID);
+    Assert.assertEquals("Unexpected Cache Value",
+            dirBObjID.getCacheValue(), dirCacheStore.get(dirB).getCacheValue());
+    Assert.assertEquals("Unexpected Cache Value",
+            dirCObjID.getCacheValue(), dirCacheStore.get(dirC).getCacheValue());
+    Assert.assertEquals("Incorrect cache size", 2, dirCacheStore.size());
+
+    // Step6. Verify entry removal. Remove recently accessed entry.
+    dirCacheStore.remove(dirC);
+    // duplicate removal shouldn't cause any issues
+    dirCacheStore.remove(dirC);
+    Assert.assertEquals("Unexpected Cache Value",
+            dirBObjID.getCacheValue(), dirCacheStore.get(dirB).getCacheValue());
+    Assert.assertNull("Unexpected Cache Value", dirCacheStore.get(dirC));
+    Assert.assertEquals("Incorrect cache size", 1, dirCacheStore.size());
+
+    // Step7. Make it empty
+    dirCacheStore.remove(dirB);
+    Assert.assertEquals("Incorrect cache size", 0, dirCacheStore.size());
+  }
+
+  @Test
+  public void testNullKeysAndValuesToLRUCacheDirectoryPolicy() throws IOException {
+    conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+            CachePolicy.DIR_LRU.getPolicy());
+    conf.setInt(OMConfigKeys.OZONE_OM_CACHE_DIR_INIT_CAPACITY, 1);
+    conf.setLong(OMConfigKeys.OZONE_OM_CACHE_DIR_MAX_CAPACITY, 2);
+
+    File testDir = GenericTestUtils.getRandomizedTestDir();
+    conf.set(HddsConfigKeys.OZONE_METADATA_DIRS,
+            testDir.toString());
+
+    omMetadataManager = new OmMetadataManagerImpl(conf);
+    CacheStore dirCacheStore =
+            omMetadataManager.getOMCacheManager().getDirCache();
+    Assert.assertEquals("CachePolicy Mismatches!", CachePolicy.DIR_LRU,
+            dirCacheStore.getCachePolicy());
+
+    OMCacheKey<String> dirA = new OMCacheKey<>("512/a");
+    OMCacheValue<Long> dirAObjID = new OMCacheValue<>(1026L);
+    // shouldn't throw NPE, just skip null key
+    dirCacheStore.put(null, dirAObjID);
+    dirCacheStore.put(dirA, null);
+
+    // shouldn't throw NPE, just skip null key
+    Assert.assertNull("Unexpected value!", dirCacheStore.get(null));
+
+    // shouldn't throw NPE, just skip null key
+    dirCacheStore.remove(null);
+  }
+
+  @Test
+  public void testNoCacheDirectoryPolicy() throws IOException {
+    conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+            CachePolicy.DIR_NOCACHE.getPolicy());
+
+    File testDir = GenericTestUtils.getRandomizedTestDir();
+    conf.set(HddsConfigKeys.OZONE_METADATA_DIRS,
+            testDir.toString());
+
+    omMetadataManager = new OmMetadataManagerImpl(conf);
+    CacheStore dirCacheStore =
+            omMetadataManager.getOMCacheManager().getDirCache();
+    Assert.assertEquals("CachePolicy Mismatches!", CachePolicy.DIR_NOCACHE,
+            dirCacheStore.getCachePolicy());
+
+    // Verify caching
+    OMCacheKey<String> dirA = new OMCacheKey<>("512/a");
+    OMCacheValue<Long> dirAObjID = new OMCacheValue<>(1025L);
+    dirCacheStore.put(dirA, dirAObjID);
+    Assert.assertNull("Unexpected Cache Value", dirCacheStore.get(dirA));
+  }
+
+  @Test
+  public void testDefaultCacheDirectoryPolicy() throws IOException {
+    File testDir = GenericTestUtils.getRandomizedTestDir();
+    conf.set(HddsConfigKeys.OZONE_METADATA_DIRS,
+            testDir.toString());
+
+    //1. Verify default dir cache policy. Defaulting to DIR_LRU
+    conf.unset(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY);
+    Assert.assertNull("Unexpected CachePolicy, it should be null!",
+            conf.get(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY));
+
+    omMetadataManager = new OmMetadataManagerImpl(conf);
+    CacheStore dirCacheStore =
+            omMetadataManager.getOMCacheManager().getDirCache();
+    Assert.assertEquals("CachePolicy Mismatches!", CachePolicy.DIR_LRU,
+            dirCacheStore.getCachePolicy());
+  }
+
+  @Test
+  public void testInvalidCacheDirectoryPolicyValue() throws IOException {
+    File testDir = GenericTestUtils.getRandomizedTestDir();
+    conf.set(HddsConfigKeys.OZONE_METADATA_DIRS,
+            testDir.toString());
+
+    //1. Verify default dir cache policy. Defaulting to DIR_NOCACHE
+    conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY, "INVALID");
+    Assert.assertEquals("Unexpected CachePolicy!", "INVALID",
+            conf.get(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY));
+
+    omMetadataManager = new OmMetadataManagerImpl(conf);
+    CacheStore dirCacheStore =
+            omMetadataManager.getOMCacheManager().getDirCache();
+    Assert.assertEquals("CachePolicy Mismatches!", CachePolicy.DIR_NOCACHE,
+            dirCacheStore.getCachePolicy());
+  }
+
+  @Test
+  public void testInvalidCacheDirectoryPolicyConfigurationName() throws IOException {

Review comment:
       Checkstyle flags this line for exceeding the 80-character limit:
   >[ERROR] src/test/java/org/apache/hadoop/ozone/om/cache/TestOMMetadataCache.java:[258] (sizes) LineLength: Line is longer than 80 characters (found 85).

##########
File path: hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/OMConfigKeys.java
##########
@@ -246,4 +246,15 @@ private OMConfigKeys() {
       "ozone.om.enable.filesystem.paths";
   public static final boolean OZONE_OM_ENABLE_FILESYSTEM_PATHS_DEFAULT =
       false;
+
+  public static final String OZONE_OM_CACHE_DIR_POLICY =
+          "ozone.om.metadata.cache.directory";
+  public static final String OZONE_OM_CACHE_DIR_DEFAULT = "DIR_LRU";

Review comment:
       Looks like OZONE_OM_CACHE_DIR_DEFAULT was not updated to OZONE_OM_CACHE_DIR_POLICY_DEFAULT.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: ozone-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: ozone-issues-help@hadoop.apache.org


[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #1437: HDDS-4222: [OzoneFS optimization] Provide a mechanism for efficient path lookup

Posted by GitBox <gi...@apache.org>.
rakeshadr commented on a change in pull request #1437:
URL: https://github.com/apache/hadoop-ozone/pull/1437#discussion_r492481338



##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/cache/DirectoryLRUCacheStore.java
##########
@@ -0,0 +1,87 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.hadoop.ozone.om.cache;
+
+import com.google.common.cache.Cache;
+import com.google.common.cache.CacheBuilder;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Directory LRUCache: cache directories based on LRU (Least Recently Used)
+ * cache eviction strategy, wherein if the cache size has reached the maximum
+ * allocated capacity, the least recently used objects in the cache will be
+ * evicted.
+ * <p>
+ * TODO: Add cache metrics - occupancy, hit, miss, evictions etc
+ */
+public class DirectoryLRUCacheStore implements CacheStore {
+
+  private static final Logger LOG =
+          LoggerFactory.getLogger(DirectoryLRUCacheStore.class);
+
+  // Initialises Guava based LRU cache.
+  private Cache<OMCacheKey, OMCacheValue> mCache;
+
+  /**
+   * @param configuration ozone config
+   */
+  public DirectoryLRUCacheStore(OzoneConfiguration configuration) {
+    LOG.info("Initializing DirectoryLRUCacheStore..");
+    // defaulting to 1000,00
+    int initSize = configuration.getInt(
+            OMConfigKeys.OZONE_OM_CACHE_DIR_INIT_CAPACITY,
+            OMConfigKeys.OZONE_OM_CACHE_DIR_INIT_CAPACITY_DEFAULT);
+    // defaulting to 5000,000
+    long maxSize = configuration.getLong(
+            OMConfigKeys.OZONE_OM_CACHE_DIR_MAX_CAPACITY,
+            OMConfigKeys.OZONE_OM_CACHE_DIR_MAX_CAPACITY_DEFAULT);
+    LOG.info("Configured {} with {}",
+            OMConfigKeys.OZONE_OM_CACHE_DIR_MAX_CAPACITY, maxSize);
+    mCache = CacheBuilder.newBuilder()
+            .initialCapacity(initSize)
+            .maximumSize(maxSize)
+            .build();
+  }
+
+  @Override
+  public void put(OMCacheKey key, OMCacheValue value) {
+    mCache.put(key, value);

Review comment:
       Sure, will update it
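
   For reference, a minimal sketch of what the updated put() could look like, assuming the Guava-backed mCache stays in place; the null guard mirrors the expectation in testNullKeysAndValuesToLRUCacheDirectoryPolicy quoted above.

     @Override
     public void put(OMCacheKey key, OMCacheValue value) {
       // Guava's Cache#put rejects null keys/values with an NPE, so skip
       // them silently, as the null-key unit test expects.
       if (key == null || value == null) {
         return;
       }
       mCache.put(key, value);
     }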

##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/cache/DirectoryLRUCacheStore.java
##########
@@ -0,0 +1,87 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.hadoop.ozone.om.cache;
+
+import com.google.common.cache.Cache;
+import com.google.common.cache.CacheBuilder;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Directory LRUCache: cache directories based on LRU (Least Recently Used)
+ * cache eviction strategy, wherein if the cache size has reached the maximum
+ * allocated capacity, the least recently used objects in the cache will be
+ * evicted.
+ * <p>
+ * TODO: Add cache metrics - occupancy, hit, miss, evictions etc
+ */
+public class DirectoryLRUCacheStore implements CacheStore {
+
+  private static final Logger LOG =
+          LoggerFactory.getLogger(DirectoryLRUCacheStore.class);
+
+  // Initialises Guava based LRU cache.
+  private Cache<OMCacheKey, OMCacheValue> mCache;
+
+  /**
+   * @param configuration ozone config
+   */
+  public DirectoryLRUCacheStore(OzoneConfiguration configuration) {
+    LOG.info("Initializing DirectoryLRUCacheStore..");
+    // defaulting to 1000,00
+    int initSize = configuration.getInt(
+            OMConfigKeys.OZONE_OM_CACHE_DIR_INIT_CAPACITY,
+            OMConfigKeys.OZONE_OM_CACHE_DIR_INIT_CAPACITY_DEFAULT);
+    // defaulting to 5000,000
+    long maxSize = configuration.getLong(
+            OMConfigKeys.OZONE_OM_CACHE_DIR_MAX_CAPACITY,
+            OMConfigKeys.OZONE_OM_CACHE_DIR_MAX_CAPACITY_DEFAULT);
+    LOG.info("Configured {} with {}",
+            OMConfigKeys.OZONE_OM_CACHE_DIR_MAX_CAPACITY, maxSize);
+    mCache = CacheBuilder.newBuilder()
+            .initialCapacity(initSize)
+            .maximumSize(maxSize)
+            .build();
+  }
+
+  @Override
+  public void put(OMCacheKey key, OMCacheValue value) {
+    mCache.put(key, value);
+  }
+
+  @Override
+  public OMCacheValue get(OMCacheKey key) {
+    return mCache.getIfPresent(key);
+  }
+
+  @Override
+  public void remove(OMCacheKey key) {

Review comment:
       OK
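
   Likewise for get() and remove(): a sketch assuming the same Guava cache, since getIfPresent() and invalidate() also reject null keys.

     @Override
     public OMCacheValue get(OMCacheKey key) {
       // Treat a null key as a cache miss instead of letting Guava throw.
       return key == null ? null : mCache.getIfPresent(key);
     }

     @Override
     public void remove(OMCacheKey key) {
       // Ignore null keys; removing an absent key stays harmless.
       if (key != null) {
         mCache.invalidate(key);
       }
     }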




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: ozone-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: ozone-issues-help@hadoop.apache.org


[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #1437: HDDS-4222: [OzoneFS optimization] Provide a mechanism for efficient path lookup

Posted by GitBox <gi...@apache.org>.
rakeshadr commented on a change in pull request #1437:
URL: https://github.com/apache/hadoop-ozone/pull/1437#discussion_r492582953



##########
File path: hadoop-hdds/common/src/main/resources/ozone-default.xml
##########
@@ -2521,4 +2521,32 @@
       filesystem semantics.
     </description>
   </property>
+
+  <property>
+    <name>ozone.om.metadata.cache.directory</name>

Review comment:
       Fixed the test failure.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: ozone-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: ozone-issues-help@hadoop.apache.org