Posted to pr@cassandra.apache.org by GitBox <gi...@apache.org> on 2021/12/08 19:01:51 UTC

[GitHub] [cassandra] nvharikrishna commented on a change in pull request #1290: [CASSANDRA-14898] Fix performance issue of keycache not loading fast.

nvharikrishna commented on a change in pull request #1290:
URL: https://github.com/apache/cassandra/pull/1290#discussion_r765072125



##########
File path: test/unit/org/apache/cassandra/db/KeyCacheTest.java
##########
@@ -231,6 +258,111 @@ public void testKeyCache() throws ExecutionException, InterruptedException
         assertKeyCacheSize(noEarlyOpen ? 4 : 2, KEYSPACE1, COLUMN_FAMILY1);
     }
 
+    @Test
+    public void testKeyCacheLoadNegativeCacheLoadTime() throws Exception
+    {
+        DatabaseDescriptor.setMaxCacheLoadTime(-1);
+        String cf = COLUMN_FAMILY7;
+
+        createAndInvalidateCache(Collections.singletonList(Pair.create(KEYSPACE1, cf)), 100);
+
+        CacheService.instance.keyCache.loadSaved();
+
+        // Here the max time to load the cache is negative, which means there is no time left to load it.
+        // So the keyCache size should be zero after loadSaved().
+        assertKeyCacheSize(0, KEYSPACE1, cf);
+        assertEquals(0, CacheService.instance.keyCache.size());
+    }
+
+    @Test
+    public void testKeyCacheLoadTwoTablesTime() throws Exception
+    {
+        DatabaseDescriptor.setMaxCacheLoadTime(60);
+        String columnFamily1 = COLUMN_FAMILY8;
+        String columnFamily2 = COLUMN_FAMILY_K2_1;
+        int numberOfRows = 100;
+        List<Pair<String, String>> tables = new ArrayList<>(2);
+        tables.add(Pair.create(KEYSPACE1, columnFamily1));
+        tables.add(Pair.create(KEYSPACE2, columnFamily2));
+
+        createAndInvalidateCache(tables, numberOfRows);
+
+        CacheService.instance.keyCache.loadSaved();
+
+        // Here the max time to load the cache is 60 seconds, which is plenty of time to load both tables.
+        // So the keyCache should contain all rows of both tables after load.
+        assertKeyCacheSize(numberOfRows, KEYSPACE1, columnFamily1);
+        assertKeyCacheSize(numberOfRows, KEYSPACE2, columnFamily2);
+        assertEquals(numberOfRows * tables.size(), CacheService.instance.keyCache.size());
+    }
+
+    @SuppressWarnings({ "unchecked", "rawtypes" })
+    @Test
+    public void testKeyCacheLoadCacheLoadTimeExceedingLimit() throws Exception
+    {
+        DatabaseDescriptor.setMaxCacheLoadTime(2);
+        int delayMillis = 1000;
+        int numberOfRows = 100;
+
+        String cf = COLUMN_FAMILY9;
+
+        createAndInvalidateCache(Collections.singletonList(Pair.create(KEYSPACE1, cf)), numberOfRows);
+
+        // Testing cache load. A custom-built AutoSavingCache instance is used here because simulating a delay is
+        // not possible with 'CacheService.instance.keyCache'. 'AutoSavingCache.loadSaved()' returns the number of
+        // entries loaded, so a mocked ICache is sufficient and no real ICache implementation is needed.
+        CacheService.KeyCacheSerializer keyCacheSerializer = new CacheService.KeyCacheSerializer();
+        CacheService.KeyCacheSerializer keyCacheSerializerSpy = Mockito.spy(keyCacheSerializer);
+        AutoSavingCache autoSavingCache = new AutoSavingCache(mock(ICache.class),
+                                                              CacheService.CacheType.KEY_CACHE,
+                                                              keyCacheSerializerSpy);
+
+        doAnswer(new AnswersWithDelay(delayMillis, answer -> keyCacheSerializer.deserialize(answer.getArgument(0),
+                                                                                            answer.getArgument(1)) ))
+               .when(keyCacheSerializerSpy).deserialize(any(DataInputPlus.class), any(ColumnFamilyStore.class));
+
+        long maxExpectedKeyCache = Math.min(numberOfRows,
+                                            1 + TimeUnit.SECONDS.toMillis(DatabaseDescriptor.getMaxCacheLoadTime()) / delayMillis);
+
+        long keysLoaded = autoSavingCache.loadSaved();

Review comment:
       done.
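
For readers skimming the hunk above: the test spies the real KeyCacheSerializer and wraps
deserialize() in Mockito's AnswersWithDelay, so every deserialized entry costs delayMillis
(1000 ms). With a 2 second cache-load limit that means at most 1 + 2000/1000 = 3 entries can be
loaded, which is what maxExpectedKeyCache computes. Below is a minimal standalone sketch of the
same spy-with-delay pattern, assuming only Mockito's public AdditionalAnswers API; the
SlowLoadingSimulation and Deserializer names are hypothetical stand-ins, not Cassandra code.

    import static org.mockito.AdditionalAnswers.answersWithDelay;
    import static org.mockito.ArgumentMatchers.anyString;
    import static org.mockito.Mockito.doAnswer;
    import static org.mockito.Mockito.spy;

    import org.mockito.stubbing.Answer;

    public class SlowLoadingSimulation
    {
        // Hypothetical stand-in for the real serializer whose deserialize() is being slowed down.
        public static class Deserializer
        {
            public int deserialize(String input)
            {
                return input.length();
            }
        }

        public static void main(String[] args)
        {
            Deserializer real = new Deserializer();
            Deserializer slowSpy = spy(real);

            // Each stubbed call sleeps 1000 ms and then delegates to the real implementation,
            // mirroring how the test above forces loadSaved() to run into its time limit.
            Answer<Integer> slowDeserialize =
                    answersWithDelay(1000, invocation -> real.deserialize(invocation.getArgument(0)));
            doAnswer(slowDeserialize).when(slowSpy).deserialize(anyString());

            long startNanos = System.nanoTime();
            int length = slowSpy.deserialize("key");
            System.out.printf("length=%d, took=%d ms%n", length, (System.nanoTime() - startNanos) / 1_000_000);
        }
    }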

##########
File path: conf/cassandra.yaml
##########
@@ -286,7 +286,11 @@ counter_cache_save_period: 7200
 # If not set, the default directory is $CASSANDRA_HOME/data/saved_caches.
 # saved_caches_directory: /var/lib/cassandra/saved_caches
 
-# commitlog_sync may be either "periodic" or "batch." 
+# Max amount of time in seconds the server will wait for the cache to load while starting the Cassandra process.

Review comment:
       This one sounds better. Updated.

##########
File path: conf/cassandra.yaml
##########
@@ -286,7 +286,11 @@ counter_cache_save_period: 7200
 # If not set, the default directory is $CASSANDRA_HOME/data/saved_caches.
 # saved_caches_directory: /var/lib/cassandra/saved_caches
 
-# commitlog_sync may be either "periodic" or "batch." 
+# Max amount of time in seconds the server will wait for the cache to load while starting the Cassandra process.
+# The cache will not be loaded (during startup) if this value is negative, as there is no time left to load it.
+# max_cache_load_time: 30

Review comment:
       I am fine with both names. Picking the shorter one 'cache_load_timeout_seconds'.
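
For context on the behaviour described in the hunk above (nothing is loaded when the configured
value is negative), here is a minimal sketch, not the Cassandra implementation, of how a
load-time limit can bound a startup cache load: the deadline is computed once up front and
loading stops as soon as it is passed. The DeadlineBoundedLoader and loadWithTimeLimit names
are hypothetical.

    import java.util.Arrays;
    import java.util.Iterator;
    import java.util.List;
    import java.util.concurrent.TimeUnit;

    public class DeadlineBoundedLoader
    {
        static int loadWithTimeLimit(Iterator<String> savedKeys, int limitSeconds)
        {
            long deadlineNanos = System.nanoTime() + TimeUnit.SECONDS.toNanos(limitSeconds);
            int loaded = 0;
            // With a negative limit the deadline is already in the past, so the loop exits
            // immediately and nothing is loaded, matching the documented behaviour.
            while (savedKeys.hasNext() && System.nanoTime() < deadlineNanos)
            {
                savedKeys.next();   // stand-in for deserializing one saved key-cache entry
                loaded++;
            }
            return loaded;
        }

        public static void main(String[] args)
        {
            List<String> keys = Arrays.asList("k1", "k2", "k3");
            System.out.println(loadWithTimeLimit(keys.iterator(), 30));  // prints 3: all entries load
            System.out.println(loadWithTimeLimit(keys.iterator(), -1)); // prints 0: no time left
        }
    }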




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: pr-unsubscribe@cassandra.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: pr-unsubscribe@cassandra.apache.org
For additional commands, e-mail: pr-help@cassandra.apache.org