Posted to jira@kafka.apache.org by GitBox <gi...@apache.org> on 2020/04/28 04:36:40 UTC

[GitHub] [kafka] vvcephei commented on a change in pull request #8254: KIP-557: Add Emit On Change Support

vvcephei commented on a change in pull request #8254:
URL: https://github.com/apache/kafka/pull/8254#discussion_r416313567



##########
File path: streams/src/main/java/org/apache/kafka/streams/kstream/internals/KTableSource.java
##########
@@ -108,7 +126,9 @@ public void process(final K key, final V value) {
             }
 
             if (queryableName != null) {
-                final ValueAndTimestamp<V> oldValueAndTimestamp = store.get(key);
+                final RawAndDeserializedValue<V> tuple = store.getWithBinary(key);
+                System.out.println("Old value found to be: " + tuple.value);

Review comment:
       Ah, we'd better get rid of all the printlns before merging.

##########
File path: streams/src/test/java/org/apache/kafka/streams/processor/internals/metrics/ProcessorNodeMetricsTest.java
##########
@@ -97,6 +97,27 @@ public void shouldGetSuppressionEmitSensor() {
             () -> ProcessorNodeMetrics.suppressionEmitSensor(THREAD_ID, TASK_ID, PROCESSOR_NODE_ID, streamsMetrics));
     }
 
+    @Test
+    public void shouldGetIdempotentUpdateSkipSensor() {
+        final String metricNamePrefix = "idempotent-update-skip";
+        final String descriptionOfCount = "The total number of skipped idempotent updates";
+        final String descriptionOfRate = "The average number of skipped idempotent updates per second";
+        expect(streamsMetrics.nodeLevelSensor(THREAD_ID, TASK_ID, PROCESSOR_NODE_ID, metricNamePrefix, RecordingLevel.DEBUG))
+            .andReturn(expectedSensor);
+        expect(streamsMetrics.nodeLevelTagMap(THREAD_ID, TASK_ID, PROCESSOR_NODE_ID)).andReturn(tagMap);
+        StreamsMetricsImpl.addInvocationRateAndCountToSensor(
+            expectedSensor,
+            StreamsMetricsImpl.PROCESSOR_NODE_LEVEL_GROUP,
+            tagMap,
+            metricNamePrefix,
+            descriptionOfRate,
+            descriptionOfCount
+        );
+
+        verifySensor(
+            () -> ProcessorNodeMetrics.skippedIdempotentUpdatesSensor(THREAD_ID, TASK_ID, PROCESSOR_NODE_ID, streamsMetrics));

Review comment:
       That last `);` should go on a new line.

##########
File path: streams/src/test/java/org/apache/kafka/streams/state/internals/MeteredTimestampedKeyValueStoreTest.java
##########
@@ -181,6 +183,41 @@ public void shouldWriteBytesToInnerStoreAndRecordPutMetric() {
         verify(inner);
     }
 
+    @Test
+    public void shouldGetWithBinary() {
+        expect(inner.get(keyBytes)).andReturn(valueAndTimestampBytes);
+
+        inner.put(eq(keyBytes), aryEq(valueAndTimestampBytes));
+        expectLastCall();
+        init();
+
+        metered.put(key, valueAndTimestamp);

Review comment:
       Since you mocked the inner store `get`, you shouldn't need to actually do a `put`, right?
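
To illustrate the reviewer's point with a standalone sketch (not the actual EasyMock-based test): once the inner store's `get` is stubbed to answer canned bytes, the read path can be exercised without a prior `put`. The `StubbedGetSketch` class and its `InnerStore` interface below are hypothetical stand-ins, not Kafka APIs.

```java
import java.util.Arrays;

// Hypothetical hand-rolled stub: if the inner store's get() already returns
// the canned serialized bytes, no put() is needed before reading.
public class StubbedGetSketch {

    // Minimal stand-in for the inner store's read path.
    interface InnerStore {
        byte[] get(byte[] key);
    }

    // Returns true if the stubbed get() answers the canned bytes with no prior put().
    static boolean readWithoutPut() {
        final byte[] keyBytes = "key".getBytes();
        final byte[] valueAndTimestampBytes = "payload".getBytes();

        // Stub: always answers the canned bytes for the expected key.
        final InnerStore inner =
            key -> Arrays.equals(key, keyBytes) ? valueAndTimestampBytes : null;

        return Arrays.equals(inner.get(keyBytes), valueAndTimestampBytes);
    }

    public static void main(final String[] args) {
        System.out.println(readWithoutPut()); // true
    }
}
```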

##########
File path: streams/src/test/java/org/apache/kafka/streams/state/internals/MeteredTimestampedKeyValueStoreTest.java
##########
@@ -181,6 +183,41 @@ public void shouldWriteBytesToInnerStoreAndRecordPutMetric() {
         verify(inner);
     }
 
+    @Test
+    public void shouldGetWithBinary() {
+        expect(inner.get(keyBytes)).andReturn(valueAndTimestampBytes);
+
+        inner.put(eq(keyBytes), aryEq(valueAndTimestampBytes));
+        expectLastCall();
+        init();
+
+        metered.put(key, valueAndTimestamp);
+
+        final RawAndDeserializedValue<String> valueWithBinary = metered.getWithBinary(key);
+        assertEquals(valueWithBinary.value, valueAndTimestamp);
+        assertEquals(valueWithBinary.serializedValue, valueAndTimestampBytes);
+    }
+
+    @SuppressWarnings("resource")
+    @Test
+    public void shouldPutIfDifferentValues() {
+        inner.put(eq(keyBytes), aryEq(valueAndTimestampBytes));

Review comment:
       It's kind of hard to read this test, since it depends partly on externally constructed data and partly on data (like `newValueAndTimestamp`) created in the method itself.
   
   Since there were other comments that need to be addressed, I'll go ahead and also add a couple of nits, if you don't mind...
   
   Instead of testing two cases in one test method, can you split it into two test methods? I.e., one for L213 and another for L218. Also, when you do that, can you create all the values and serializedValues you need in the test itself?
   
   Thanks! 
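
As a rough illustration of the suggested split, here is a self-contained sketch with one focused scenario per test method and all data built locally. The `EmitOnChangeStoreSketch` class and `putIfDifferent` method are hypothetical stand-ins for the real metered store, chosen only to show the shape of the two tests:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical in-memory stand-in for the metered store, illustrating
// emit-on-change: a put is forwarded only when the value actually changes.
public class EmitOnChangeStoreSketch {
    private final Map<String, String> inner = new HashMap<>();
    int puts = 0; // counts writes that actually reached the inner store

    // Only forward the put when the new value differs from the stored one.
    void putIfDifferent(final String key, final String newValue) {
        final String oldValue = inner.get(key);
        if (oldValue == null || !oldValue.equals(newValue)) {
            inner.put(key, newValue);
            puts++;
        }
    }

    // Test 1: a genuinely different value is written through.
    static boolean shouldPutIfDifferentValues() {
        final EmitOnChangeStoreSketch store = new EmitOnChangeStoreSketch();
        store.putIfDifferent("key", "value");
        store.putIfDifferent("key", "newValue");
        return store.puts == 2;
    }

    // Test 2: an identical value is skipped (the idempotent update).
    static boolean shouldSkipPutIfSameValues() {
        final EmitOnChangeStoreSketch store = new EmitOnChangeStoreSketch();
        store.putIfDifferent("key", "value");
        store.putIfDifferent("key", "value");
        return store.puts == 1;
    }

    public static void main(final String[] args) {
        System.out.println(shouldPutIfDifferentValues()); // true
        System.out.println(shouldSkipPutIfSameValues());  // true
    }
}
```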

##########
File path: streams/src/test/java/org/apache/kafka/streams/state/internals/ValueAndTimestampSerializerTest.java
##########
@@ -50,6 +52,21 @@ public void shouldSerializeNonNullDataUsingTheInternalSerializer() {
         assertThat(deserialized, is(valueAndTimestamp));
     }
 
+    @Test
+    public void shouldCompareSerializedValuesWithoutTimestamp() {
+        final String value = "food";
+
+        final ValueAndTimestamp<String> oldValueAndTimestamp = ValueAndTimestamp.make(value, TIMESTAMP);
+        final byte[] oldSerializedValue = STRING_SERDE.serializer().serialize(TOPIC, oldValueAndTimestamp);
+        final ValueAndTimestamp<String> newValueAndTimestamp = ValueAndTimestamp.make(value, TIMESTAMP + 1);
+        final byte[] newSerializedValue = STRING_SERDE.serializer().serialize(TOPIC, newValueAndTimestamp);
+        assertTrue(ValueAndTimestampSerializer.maskTimestampAndCompareValues(oldSerializedValue, newSerializedValue));
+
+        final ValueAndTimestamp<String> outOfOrderValueAndTimestamp = ValueAndTimestamp.make(value, TIMESTAMP - 1);
+        final byte[] outOfOrderSerializedValue = STRING_SERDE.serializer().serialize(TOPIC, outOfOrderValueAndTimestamp);
+        assertFalse(ValueAndTimestampSerializer.maskTimestampAndCompareValues(oldSerializedValue, outOfOrderSerializedValue));

Review comment:
       Can you also split this out into a separate method, please? Thanks!
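
The two scenarios being split here can be sketched as a standalone illustration (not the actual Kafka implementation). This assumes the serialized layout is an 8-byte timestamp followed by the value bytes, and mirrors the test's semantics: same value with a newer timestamp compares equal, but an out-of-order (older) timestamp does not, so that update is not skipped. `serialize` and the comparison method here are hypothetical re-creations for illustration:

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

// Hypothetical stand-in for ValueAndTimestampSerializer's comparison,
// assuming an 8-byte timestamp prefix followed by the value bytes.
public class MaskTimestampSketch {

    static byte[] serialize(final long timestamp, final String value) {
        final byte[] raw = value.getBytes();
        return ByteBuffer.allocate(Long.BYTES + raw.length)
                .putLong(timestamp)
                .put(raw)
                .array();
    }

    static boolean maskTimestampAndCompareValues(final byte[] oldBytes, final byte[] newBytes) {
        final long oldTs = ByteBuffer.wrap(oldBytes).getLong();
        final long newTs = ByteBuffer.wrap(newBytes).getLong();
        // An out-of-order (older) timestamp must not be treated as an idempotent update.
        if (newTs < oldTs) {
            return false;
        }
        // Compare only the value portion, ignoring the leading timestamp bytes.
        return Arrays.equals(
                Arrays.copyOfRange(oldBytes, Long.BYTES, oldBytes.length),
                Arrays.copyOfRange(newBytes, Long.BYTES, newBytes.length));
    }

    public static void main(final String[] args) {
        final byte[] oldBytes = serialize(10L, "food");
        final byte[] newer    = serialize(11L, "food");
        final byte[] older    = serialize(9L, "food");

        System.out.println(maskTimestampAndCompareValues(oldBytes, newer)); // true: same value, time increasing
        System.out.println(maskTimestampAndCompareValues(oldBytes, older)); // false: out-of-order timestamp
    }
}
```

Splitting along this line, each test method would cover exactly one of the two `main` cases above.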




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org