Posted to jira@kafka.apache.org by "vcrfxia (via GitHub)" <gi...@apache.org> on 2023/02/02 01:50:05 UTC

[GitHub] [kafka] vcrfxia commented on a diff in pull request #13143: KAFKA-14491: [3/N] Add logical key value segments

vcrfxia commented on code in PR #13143:
URL: https://github.com/apache/kafka/pull/13143#discussion_r1093924959


##########
streams/src/test/java/org/apache/kafka/streams/state/internals/LogicalKeyValueSegmentTest.java:
##########
@@ -0,0 +1,270 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.streams.state.internals;
+
+import static java.nio.charset.StandardCharsets.UTF_8;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertThrows;
+import static org.junit.Assert.assertTrue;
+
+import java.util.ArrayList;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.stream.Collectors;
+import org.apache.kafka.common.serialization.Deserializer;
+import org.apache.kafka.common.serialization.Serdes;
+import org.apache.kafka.common.serialization.Serializer;
+import org.apache.kafka.common.serialization.StringDeserializer;
+import org.apache.kafka.common.serialization.StringSerializer;
+import org.apache.kafka.common.utils.Bytes;
+import org.apache.kafka.common.utils.Utils;
+import org.apache.kafka.streams.KeyValue;
+import org.apache.kafka.streams.StreamsConfig;
+import org.apache.kafka.streams.errors.InvalidStateStoreException;
+import org.apache.kafka.streams.processor.StateStoreContext;
+import org.apache.kafka.streams.state.KeyValueIterator;
+import org.apache.kafka.streams.state.internals.metrics.RocksDBMetricsRecorder;
+import org.apache.kafka.test.InternalMockProcessorContext;
+import org.apache.kafka.test.StreamsTestUtils;
+import org.apache.kafka.test.TestUtils;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+
+public class LogicalKeyValueSegmentTest {
+
+    private static final String STORE_NAME = "physical-rocks";
+    private static final String METRICS_SCOPE = "metrics-scope";
+    private static final String DB_FILE_DIR = "rocksdb";
+    private static final Serializer<String> STRING_SERIALIZER = new StringSerializer();
+    private static final Deserializer<String> STRING_DESERIALIZER = new StringDeserializer();
+
+    private RocksDBStore physicalStore;
+
+    private LogicalKeyValueSegment segment1;
+    private LogicalKeyValueSegment segment2;
+
+    @Before
+    public void setUp() {
+        physicalStore = new RocksDBStore(STORE_NAME, DB_FILE_DIR, new RocksDBMetricsRecorder(METRICS_SCOPE, STORE_NAME), false);
+        physicalStore.init((StateStoreContext) new InternalMockProcessorContext<>(
+            TestUtils.tempDirectory(),
+            Serdes.String(),
+            Serdes.String(),
+            new StreamsConfig(StreamsTestUtils.getStreamsConfig())
+        ), physicalStore);
+
+        segment1 = new LogicalKeyValueSegment(1, "segment-1", physicalStore);
+        segment2 = new LogicalKeyValueSegment(2, "segment-2", physicalStore);
+    }
+
+    @After
+    public void tearDown() {
+        segment1.close();
+        segment2.close();
+        physicalStore.close();
+    }
+
+    @Test
+    public void shouldPut() {
+        final KeyValue<String, String> kv0 = new KeyValue<>("1", "a");
+        final KeyValue<String, String> kv1 = new KeyValue<>("2", "b");
+
+        segment1.put(new Bytes(kv0.key.getBytes(UTF_8)), kv0.value.getBytes(UTF_8));
+        segment1.put(new Bytes(kv1.key.getBytes(UTF_8)), kv1.value.getBytes(UTF_8));
+        segment2.put(new Bytes(kv0.key.getBytes(UTF_8)), kv0.value.getBytes(UTF_8));
+        segment2.put(new Bytes(kv1.key.getBytes(UTF_8)), kv1.value.getBytes(UTF_8));
+
+        assertEquals("a", getAndDeserialize(segment1, "1"));

Review Comment:
   I was on the fence about this because checking the physical store directly would mean testing the internals of the class (specifically, how the segment prefixes are serialized) rather than just the public-facing methods. In the end I opted to test indirectly, by inserting the same keys into different segments and checking that their values do not collide.
   
   If you prefer checking the contents of the physical store itself, I can make the update. 
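   For concreteness, here's a minimal sketch of the indirect check, reusing the fixtures and the `getAndDeserialize` helper from this test file:
   
   ```java
   // same key in both segments, different values: the segment prefixes
   // must keep the entries from colliding in the shared physical store
   segment1.put(new Bytes("shared-key".getBytes(UTF_8)), "value-1".getBytes(UTF_8));
   segment2.put(new Bytes("shared-key".getBytes(UTF_8)), "value-2".getBytes(UTF_8));
   
   assertEquals("value-1", getAndDeserialize(segment1, "shared-key"));
   assertEquals("value-2", getAndDeserialize(segment2, "shared-key"));
   ```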



##########
streams/src/test/java/org/apache/kafka/streams/state/internals/LogicalKeyValueSegmentTest.java:
##########
@@ -0,0 +1,270 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.streams.state.internals;
+
+import static java.nio.charset.StandardCharsets.UTF_8;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertThrows;
+import static org.junit.Assert.assertTrue;
+
+import java.util.ArrayList;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.stream.Collectors;
+import org.apache.kafka.common.serialization.Deserializer;
+import org.apache.kafka.common.serialization.Serdes;
+import org.apache.kafka.common.serialization.Serializer;
+import org.apache.kafka.common.serialization.StringDeserializer;
+import org.apache.kafka.common.serialization.StringSerializer;
+import org.apache.kafka.common.utils.Bytes;
+import org.apache.kafka.common.utils.Utils;
+import org.apache.kafka.streams.KeyValue;
+import org.apache.kafka.streams.StreamsConfig;
+import org.apache.kafka.streams.errors.InvalidStateStoreException;
+import org.apache.kafka.streams.processor.StateStoreContext;
+import org.apache.kafka.streams.state.KeyValueIterator;
+import org.apache.kafka.streams.state.internals.metrics.RocksDBMetricsRecorder;
+import org.apache.kafka.test.InternalMockProcessorContext;
+import org.apache.kafka.test.StreamsTestUtils;
+import org.apache.kafka.test.TestUtils;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+
+public class LogicalKeyValueSegmentTest {
+
+    private static final String STORE_NAME = "physical-rocks";
+    private static final String METRICS_SCOPE = "metrics-scope";
+    private static final String DB_FILE_DIR = "rocksdb";
+    private static final Serializer<String> STRING_SERIALIZER = new StringSerializer();
+    private static final Deserializer<String> STRING_DESERIALIZER = new StringDeserializer();
+
+    private RocksDBStore physicalStore;
+
+    private LogicalKeyValueSegment segment1;
+    private LogicalKeyValueSegment segment2;
+
+    @Before
+    public void setUp() {
+        physicalStore = new RocksDBStore(STORE_NAME, DB_FILE_DIR, new RocksDBMetricsRecorder(METRICS_SCOPE, STORE_NAME), false);
+        physicalStore.init((StateStoreContext) new InternalMockProcessorContext<>(
+            TestUtils.tempDirectory(),
+            Serdes.String(),
+            Serdes.String(),
+            new StreamsConfig(StreamsTestUtils.getStreamsConfig())
+        ), physicalStore);
+
+        segment1 = new LogicalKeyValueSegment(1, "segment-1", physicalStore);
+        segment2 = new LogicalKeyValueSegment(2, "segment-2", physicalStore);
+    }
+
+    @After
+    public void tearDown() {
+        segment1.close();
+        segment2.close();
+        physicalStore.close();
+    }
+
+    @Test
+    public void shouldPut() {
+        final KeyValue<String, String> kv0 = new KeyValue<>("1", "a");
+        final KeyValue<String, String> kv1 = new KeyValue<>("2", "b");
+
+        segment1.put(new Bytes(kv0.key.getBytes(UTF_8)), kv0.value.getBytes(UTF_8));

Review Comment:
   No particular reason. I copied from RocksDBStoreTest.java which also uses both.
   
   `StringSerializer` handles nulls while `getBytes` doesn't, so if we unify on one of them it'll have to be `StringSerializer`. Calling `StringSerializer` directly is more verbose and harder to read, though, so let me pull it into a helper method to preserve readability.
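   Something like this (helper names are just illustrative):
   
   ```java
   // wrap StringSerializer so call sites stay short and nulls are
   // handled consistently; passing null for the topic is fine here
   private static Bytes serializeKey(final String key) {
       return new Bytes(STRING_SERIALIZER.serialize(null, key));
   }
   
   private static byte[] serializeValue(final String value) {
       return STRING_SERIALIZER.serialize(null, value);
   }
   ```
   
   That would turn the put above into `segment1.put(serializeKey(kv0.key), serializeValue(kv0.value))`.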



##########
streams/src/test/java/org/apache/kafka/streams/state/internals/LogicalKeyValueSegmentsTest.java:
##########
@@ -0,0 +1,168 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.streams.state.internals;
+
+import static org.hamcrest.CoreMatchers.is;
+import static org.hamcrest.CoreMatchers.nullValue;
+import static org.hamcrest.MatcherAssert.assertThat;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertTrue;
+
+import java.io.File;
+import java.util.List;
+import org.apache.kafka.common.metrics.Metrics;
+import org.apache.kafka.common.serialization.Serdes;
+import org.apache.kafka.common.utils.LogContext;
+import org.apache.kafka.streams.processor.internals.MockStreamsMetrics;
+import org.apache.kafka.streams.state.internals.metrics.RocksDBMetricsRecorder;
+import org.apache.kafka.test.InternalMockProcessorContext;
+import org.apache.kafka.test.MockRecordCollector;
+import org.apache.kafka.test.TestUtils;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+
+public class LogicalKeyValueSegmentsTest {
+
+    private static final long SEGMENT_INTERVAL = 100L;
+    private static final long RETENTION_PERIOD = 4 * SEGMENT_INTERVAL;
+    private static final String STORE_NAME = "logical-segments";
+    private static final String METRICS_SCOPE = "metrics-scope";
+    private static final String DB_FILE_DIR = "rocksdb";
+
+    private InternalMockProcessorContext context;
+
+    private LogicalKeyValueSegments segments;
+
+    @Before
+    public void setUp() {
+        context = new InternalMockProcessorContext<>(
+            TestUtils.tempDirectory(),
+            Serdes.String(),
+            Serdes.Long(),
+            new MockRecordCollector(),
+            new ThreadCache(new LogContext("testCache "), 0, new MockStreamsMetrics(new Metrics()))
+        );
+        segments = new LogicalKeyValueSegments(
+            STORE_NAME,
+            DB_FILE_DIR,
+            RETENTION_PERIOD,
+            SEGMENT_INTERVAL,
+            new RocksDBMetricsRecorder(METRICS_SCOPE, STORE_NAME)
+        );
+        segments.openExisting(context, -1L);
+    }
+
+    @After
+    public void tearDown() {
+        segments.close();
+    }
+
+    @Test
+    public void shouldGetSegmentIdsFromTimestamp() {
+        assertEquals(0, segments.segmentId(0));
+        assertEquals(1, segments.segmentId(SEGMENT_INTERVAL));
+        assertEquals(2, segments.segmentId(2 * SEGMENT_INTERVAL));
+        assertEquals(3, segments.segmentId(3 * SEGMENT_INTERVAL));
+    }
+
+    @Test
+    public void shouldCreateSegments() {
+        final LogicalKeyValueSegment segment1 = segments.getOrCreateSegmentIfLive(0, context, -1L);
+        final LogicalKeyValueSegment segment2 = segments.getOrCreateSegmentIfLive(1, context, -1L);
+        final LogicalKeyValueSegment segment3 = segments.getOrCreateSegmentIfLive(2, context, -1L);
+
+        final File rocksdbDir = new File(new File(context.stateDir(), DB_FILE_DIR), STORE_NAME);
+        assertTrue(rocksdbDir.isDirectory());
+
+        assertTrue(segment1.isOpen());
+        assertTrue(segment2.isOpen());
+        assertTrue(segment3.isOpen());
+    }
+
+    @Test
+    public void shouldNotCreateSegmentThatIsAlreadyExpired() {
+        final long streamTime = updateStreamTimeAndCreateSegment(7);
+        assertNull(segments.getOrCreateSegmentIfLive(0, context, streamTime));
+    }
+
+    @Test
+    public void shouldCleanupSegmentsThatHaveExpired() {
+        final LogicalKeyValueSegment segment1 = segments.getOrCreateSegmentIfLive(0, context, 0);
+        final LogicalKeyValueSegment segment2 = segments.getOrCreateSegmentIfLive(0, context, SEGMENT_INTERVAL * 2L);
+        final LogicalKeyValueSegment segment3 = segments.getOrCreateSegmentIfLive(3, context, SEGMENT_INTERVAL * 3L);
+        final LogicalKeyValueSegment segment4 = segments.getOrCreateSegmentIfLive(7, context, SEGMENT_INTERVAL * 7L);
+
+        final List<LogicalKeyValueSegment> allSegments = segments.allSegments(true);
+        assertEquals(2, allSegments.size());
+        assertEquals(segment3, allSegments.get(0));
+        assertEquals(segment4, allSegments.get(1));
+    }
+
+    @Test
+    public void shouldGetSegmentForTimestamp() {
+        final LogicalKeyValueSegment segment = segments.getOrCreateSegmentIfLive(0, context, -1L);

Review Comment:
   Fixed. FWIW the existing KeyValueSegmentsTest.java does the same thing 🤷 



##########
streams/src/test/java/org/apache/kafka/streams/state/internals/LogicalKeyValueSegmentsTest.java:
##########
@@ -0,0 +1,168 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.streams.state.internals;
+
+import static org.hamcrest.CoreMatchers.is;
+import static org.hamcrest.CoreMatchers.nullValue;
+import static org.hamcrest.MatcherAssert.assertThat;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertTrue;
+
+import java.io.File;
+import java.util.List;
+import org.apache.kafka.common.metrics.Metrics;
+import org.apache.kafka.common.serialization.Serdes;
+import org.apache.kafka.common.utils.LogContext;
+import org.apache.kafka.streams.processor.internals.MockStreamsMetrics;
+import org.apache.kafka.streams.state.internals.metrics.RocksDBMetricsRecorder;
+import org.apache.kafka.test.InternalMockProcessorContext;
+import org.apache.kafka.test.MockRecordCollector;
+import org.apache.kafka.test.TestUtils;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+
+public class LogicalKeyValueSegmentsTest {
+
+    private static final long SEGMENT_INTERVAL = 100L;
+    private static final long RETENTION_PERIOD = 4 * SEGMENT_INTERVAL;
+    private static final String STORE_NAME = "logical-segments";
+    private static final String METRICS_SCOPE = "metrics-scope";
+    private static final String DB_FILE_DIR = "rocksdb";
+
+    private InternalMockProcessorContext context;
+
+    private LogicalKeyValueSegments segments;
+
+    @Before
+    public void setUp() {
+        context = new InternalMockProcessorContext<>(
+            TestUtils.tempDirectory(),
+            Serdes.String(),
+            Serdes.Long(),
+            new MockRecordCollector(),
+            new ThreadCache(new LogContext("testCache "), 0, new MockStreamsMetrics(new Metrics()))
+        );
+        segments = new LogicalKeyValueSegments(
+            STORE_NAME,
+            DB_FILE_DIR,
+            RETENTION_PERIOD,
+            SEGMENT_INTERVAL,
+            new RocksDBMetricsRecorder(METRICS_SCOPE, STORE_NAME)
+        );
+        segments.openExisting(context, -1L);
+    }
+
+    @After
+    public void tearDown() {
+        segments.close();
+    }
+
+    @Test
+    public void shouldGetSegmentIdsFromTimestamp() {
+        assertEquals(0, segments.segmentId(0));
+        assertEquals(1, segments.segmentId(SEGMENT_INTERVAL));
+        assertEquals(2, segments.segmentId(2 * SEGMENT_INTERVAL));
+        assertEquals(3, segments.segmentId(3 * SEGMENT_INTERVAL));
+    }
+
+    @Test
+    public void shouldCreateSegments() {
+        final LogicalKeyValueSegment segment1 = segments.getOrCreateSegmentIfLive(0, context, -1L);
+        final LogicalKeyValueSegment segment2 = segments.getOrCreateSegmentIfLive(1, context, -1L);
+        final LogicalKeyValueSegment segment3 = segments.getOrCreateSegmentIfLive(2, context, -1L);
+
+        final File rocksdbDir = new File(new File(context.stateDir(), DB_FILE_DIR), STORE_NAME);
+        assertTrue(rocksdbDir.isDirectory());
+
+        assertTrue(segment1.isOpen());
+        assertTrue(segment2.isOpen());
+        assertTrue(segment3.isOpen());
+    }
+
+    @Test
+    public void shouldNotCreateSegmentThatIsAlreadyExpired() {
+        final long streamTime = updateStreamTimeAndCreateSegment(7);
+        assertNull(segments.getOrCreateSegmentIfLive(0, context, streamTime));
+    }
+
+    @Test
+    public void shouldCleanupSegmentsThatHaveExpired() {
+        final LogicalKeyValueSegment segment1 = segments.getOrCreateSegmentIfLive(0, context, 0);
+        final LogicalKeyValueSegment segment2 = segments.getOrCreateSegmentIfLive(0, context, SEGMENT_INTERVAL * 2L);
+        final LogicalKeyValueSegment segment3 = segments.getOrCreateSegmentIfLive(3, context, SEGMENT_INTERVAL * 3L);
+        final LogicalKeyValueSegment segment4 = segments.getOrCreateSegmentIfLive(7, context, SEGMENT_INTERVAL * 7L);
+
+        final List<LogicalKeyValueSegment> allSegments = segments.allSegments(true);
+        assertEquals(2, allSegments.size());
+        assertEquals(segment3, allSegments.get(0));
+        assertEquals(segment4, allSegments.get(1));
+    }
+
+    @Test
+    public void shouldGetSegmentForTimestamp() {
+        final LogicalKeyValueSegment segment = segments.getOrCreateSegmentIfLive(0, context, -1L);
+        segments.getOrCreateSegmentIfLive(1, context, -1L);

Review Comment:
   This line differs from the line above because it creates a different segment. The test checks that we get the expected segment, and not just the only one that exists. Let me rewrite this to clarify, and also add upper and lower bounds tests as you suggested.
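   Roughly along these lines (a sketch; the exact assertions may shift in the rewrite):
   
   ```java
   final LogicalKeyValueSegment segment1 = segments.getOrCreateSegmentIfLive(0, context, -1L);
   final LogicalKeyValueSegment segment2 = segments.getOrCreateSegmentIfLive(1, context, -1L);
   
   // lower and upper bounds of segment 0's interval
   assertEquals(segment1, segments.getSegmentForTimestamp(0));
   assertEquals(segment1, segments.getSegmentForTimestamp(SEGMENT_INTERVAL - 1));
   // lower bound of segment 1's interval: must return segment2, not segment1
   assertEquals(segment2, segments.getSegmentForTimestamp(SEGMENT_INTERVAL));
   ```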



##########
streams/src/test/java/org/apache/kafka/streams/state/internals/LogicalKeyValueSegmentsTest.java:
##########
@@ -0,0 +1,168 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.streams.state.internals;
+
+import static org.hamcrest.CoreMatchers.is;
+import static org.hamcrest.CoreMatchers.nullValue;
+import static org.hamcrest.MatcherAssert.assertThat;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertTrue;
+
+import java.io.File;
+import java.util.List;
+import org.apache.kafka.common.metrics.Metrics;
+import org.apache.kafka.common.serialization.Serdes;
+import org.apache.kafka.common.utils.LogContext;
+import org.apache.kafka.streams.processor.internals.MockStreamsMetrics;
+import org.apache.kafka.streams.state.internals.metrics.RocksDBMetricsRecorder;
+import org.apache.kafka.test.InternalMockProcessorContext;
+import org.apache.kafka.test.MockRecordCollector;
+import org.apache.kafka.test.TestUtils;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+
+public class LogicalKeyValueSegmentsTest {
+
+    private static final long SEGMENT_INTERVAL = 100L;
+    private static final long RETENTION_PERIOD = 4 * SEGMENT_INTERVAL;
+    private static final String STORE_NAME = "logical-segments";
+    private static final String METRICS_SCOPE = "metrics-scope";
+    private static final String DB_FILE_DIR = "rocksdb";
+
+    private InternalMockProcessorContext context;
+
+    private LogicalKeyValueSegments segments;
+
+    @Before
+    public void setUp() {
+        context = new InternalMockProcessorContext<>(
+            TestUtils.tempDirectory(),
+            Serdes.String(),
+            Serdes.Long(),
+            new MockRecordCollector(),
+            new ThreadCache(new LogContext("testCache "), 0, new MockStreamsMetrics(new Metrics()))
+        );
+        segments = new LogicalKeyValueSegments(
+            STORE_NAME,
+            DB_FILE_DIR,
+            RETENTION_PERIOD,
+            SEGMENT_INTERVAL,
+            new RocksDBMetricsRecorder(METRICS_SCOPE, STORE_NAME)
+        );
+        segments.openExisting(context, -1L);
+    }
+
+    @After
+    public void tearDown() {
+        segments.close();
+    }
+
+    @Test
+    public void shouldGetSegmentIdsFromTimestamp() {
+        assertEquals(0, segments.segmentId(0));
+        assertEquals(1, segments.segmentId(SEGMENT_INTERVAL));
+        assertEquals(2, segments.segmentId(2 * SEGMENT_INTERVAL));
+        assertEquals(3, segments.segmentId(3 * SEGMENT_INTERVAL));
+    }
+
+    @Test
+    public void shouldCreateSegments() {
+        final LogicalKeyValueSegment segment1 = segments.getOrCreateSegmentIfLive(0, context, -1L);
+        final LogicalKeyValueSegment segment2 = segments.getOrCreateSegmentIfLive(1, context, -1L);
+        final LogicalKeyValueSegment segment3 = segments.getOrCreateSegmentIfLive(2, context, -1L);
+
+        final File rocksdbDir = new File(new File(context.stateDir(), DB_FILE_DIR), STORE_NAME);
+        assertTrue(rocksdbDir.isDirectory());
+
+        assertTrue(segment1.isOpen());
+        assertTrue(segment2.isOpen());
+        assertTrue(segment3.isOpen());
+    }
+
+    @Test
+    public void shouldNotCreateSegmentThatIsAlreadyExpired() {
+        final long streamTime = updateStreamTimeAndCreateSegment(7);
+        assertNull(segments.getOrCreateSegmentIfLive(0, context, streamTime));
+    }
+
+    @Test
+    public void shouldCleanupSegmentsThatHaveExpired() {

Review Comment:
   You're right that these tests exercise logic from AbstractSegments rather than anything specific to LogicalKeyValueSegments. The thing is, AbstractSegments doesn't have its own test file at the moment (I assume because it's abstract). If you think it's worth it, I can remove these tests from here and from KeyValueSegmentsTest.java, create a dummy AbstractSegments implementation, and add an AbstractSegmentsTest.java. I'd like to do that as a follow-up PR rather than as part of this change, though.
   
   (Also, for this specific test, I'd like to keep it here because I plan to refactor the cleanup logic in AbstractSegments in a follow-up PR. The current approach, where cleanup happens as part of `getOrCreateSegmentIfLive()`, is not very efficient for the versioned store use case because that method is called multiple times during a single put operation. It would be better to perform cleanup only once per put.)
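   To illustrate the planned refactor, here's a toy model (not the actual AbstractSegments code) of moving cleanup out of the per-segment lookup so it runs once per put:
   
   ```java
   import java.util.TreeMap;
   
   class ToySegments {
       private final long retentionPeriod;
       private final long segmentInterval;
       private final TreeMap<Long, Object> segments = new TreeMap<>();
   
       ToySegments(final long retentionPeriod, final long segmentInterval) {
           this.retentionPeriod = retentionPeriod;
           this.segmentInterval = segmentInterval;
       }
   
       long segmentId(final long timestamp) {
           return timestamp / segmentInterval;
       }
   
       // lookup/creation no longer triggers cleanup on every call
       Object getOrCreateSegmentIfLive(final long segmentId, final long streamTime) {
           final long minLiveId = segmentId(streamTime - retentionPeriod + 1);
           if (segmentId < minLiveId) {
               return null; // already expired
           }
           return segments.computeIfAbsent(segmentId, id -> new Object());
       }
   
       // invoked once per put, after all segment lookups for that put
       void cleanupExpiredSegments(final long streamTime) {
           final long minLiveId = segmentId(streamTime - retentionPeriod + 1);
           segments.headMap(minLiveId).clear();
       }
   }
   ```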



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscribe@kafka.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org