Posted to commits@camel.apache.org by ac...@apache.org on 2020/10/29 18:57:06 UTC

[camel] 05/13: CAMEL-15770 - Kafka serialize/deserialize properties are inconsistently named - serializerClass test

This is an automated email from the ASF dual-hosted git repository.

acosentino pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/camel.git

commit 7bc1fc1fcb5ea0f33aed99fe10b1906a4defb8c9
Author: Andrea Cosentino <an...@gmail.com>
AuthorDate: Thu Oct 29 18:52:54 2020 +0100

    CAMEL-15770 - Kafka serialize/deserialize properties are inconsistently named - serializerClass test
---
 .../resources/org/apache/camel/catalog/docs/kafka-component.adoc  | 4 ++--
 components/camel-kafka/src/main/docs/kafka-component.adoc         | 4 ++--
 .../java/org/apache/camel/component/kafka/KafkaComponentTest.java | 2 +-
 .../camel/builder/component/dsl/KafkaComponentBuilderFactory.java | 8 ++++----
 .../camel/builder/endpoint/dsl/KafkaEndpointBuilderFactory.java   | 6 +++---
 docs/components/modules/ROOT/pages/kafka-component.adoc           | 4 ++--
 6 files changed, 14 insertions(+), 14 deletions(-)
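
For reference, a minimal sketch of what a producer route using the renamed option looks like via the endpoint URI (the topic name and broker address below are placeholders, not part of this commit):

    import org.apache.camel.builder.RouteBuilder;

    public class KafkaValueSerializerUriRoute extends RouteBuilder {
        @Override
        public void configure() {
            // Placeholder topic and broker; valueSerializer replaces the old serializerClass query parameter.
            from("direct:start")
                .to("kafka:mytopic?brokers=localhost:9092"
                    + "&valueSerializer=org.apache.kafka.common.serialization.StringSerializer");
        }
    }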

diff --git a/catalog/camel-catalog/src/generated/resources/org/apache/camel/catalog/docs/kafka-component.adoc b/catalog/camel-catalog/src/generated/resources/org/apache/camel/catalog/docs/kafka-component.adoc
index 57badeb..9cabbeb 100644
--- a/catalog/camel-catalog/src/generated/resources/org/apache/camel/catalog/docs/kafka-component.adoc
+++ b/catalog/camel-catalog/src/generated/resources/org/apache/camel/catalog/docs/kafka-component.adoc
@@ -113,7 +113,7 @@ The Kafka component supports 97 options, which are listed below.
 | *retries* (producer) | Setting a value greater than zero will cause the client to resend any record whose send fails with a potentially transient error. Note that this retry is no different than if the client resent the record upon receiving the error. Allowing retries will potentially change the ordering of records because if two records are sent to a single partition, and the first fails and is retried but the second succeeds, then the second record may appear first. | 0 | Integer
 | *retryBackoffMs* (producer) | Before each retry, the producer refreshes the metadata of relevant topics to see if a new leader has been elected. Since leader election takes a bit of time, this property specifies the amount of time that the producer waits before refreshing the metadata. | 100 | Integer
 | *sendBufferBytes* (producer) | Socket write buffer size | 131072 | Integer
-| *serializerClass* (producer) | The serializer class for messages. | org.apache.kafka.common.serialization.StringSerializer | String
+| *valueSerializer* (producer) | The serializer class for messages. | org.apache.kafka.common.serialization.StringSerializer | String
 | *workerPool* (producer) | To use a custom worker pool for continue routing Exchange after kafka server has acknowledge the message that was sent to it from KafkaProducer using asynchronous non-blocking processing. If using this option then you must handle the lifecycle of the thread pool to shut the pool down when no longer needed. |  | ExecutorService
 | *workerPoolCoreSize* (producer) | Number of core threads for the worker pool for continue routing Exchange after kafka server has acknowledge the message that was sent to it from KafkaProducer using asynchronous non-blocking processing. | 10 | Integer
 | *workerPoolMaxSize* (producer) | Maximum number of threads for the worker pool for continue routing Exchange after kafka server has acknowledge the message that was sent to it from KafkaProducer using asynchronous non-blocking processing. | 20 | Integer
@@ -240,7 +240,7 @@ with the following path and query parameters:
 | *retries* (producer) | Setting a value greater than zero will cause the client to resend any record whose send fails with a potentially transient error. Note that this retry is no different than if the client resent the record upon receiving the error. Allowing retries will potentially change the ordering of records because if two records are sent to a single partition, and the first fails and is retried but the second succeeds, then the second record may appear first. | 0 | Integer
 | *retryBackoffMs* (producer) | Before each retry, the producer refreshes the metadata of relevant topics to see if a new leader has been elected. Since leader election takes a bit of time, this property specifies the amount of time that the producer waits before refreshing the metadata. | 100 | Integer
 | *sendBufferBytes* (producer) | Socket write buffer size | 131072 | Integer
-| *serializerClass* (producer) | The serializer class for messages. | org.apache.kafka.common.serialization.StringSerializer | String
+| *valueSerializer* (producer) | The serializer class for messages. | org.apache.kafka.common.serialization.StringSerializer | String
 | *workerPool* (producer) | To use a custom worker pool for continue routing Exchange after kafka server has acknowledge the message that was sent to it from KafkaProducer using asynchronous non-blocking processing. If using this option then you must handle the lifecycle of the thread pool to shut the pool down when no longer needed. |  | ExecutorService
 | *workerPoolCoreSize* (producer) | Number of core threads for the worker pool for continue routing Exchange after kafka server has acknowledge the message that was sent to it from KafkaProducer using asynchronous non-blocking processing. | 10 | Integer
 | *workerPoolMaxSize* (producer) | Maximum number of threads for the worker pool for continue routing Exchange after kafka server has acknowledge the message that was sent to it from KafkaProducer using asynchronous non-blocking processing. | 20 | Integer
diff --git a/components/camel-kafka/src/main/docs/kafka-component.adoc b/components/camel-kafka/src/main/docs/kafka-component.adoc
index 57badeb..9cabbeb 100644
--- a/components/camel-kafka/src/main/docs/kafka-component.adoc
+++ b/components/camel-kafka/src/main/docs/kafka-component.adoc
@@ -113,7 +113,7 @@ The Kafka component supports 97 options, which are listed below.
 | *retries* (producer) | Setting a value greater than zero will cause the client to resend any record whose send fails with a potentially transient error. Note that this retry is no different than if the client resent the record upon receiving the error. Allowing retries will potentially change the ordering of records because if two records are sent to a single partition, and the first fails and is retried but the second succeeds, then the second record may appear first. | 0 | Integer
 | *retryBackoffMs* (producer) | Before each retry, the producer refreshes the metadata of relevant topics to see if a new leader has been elected. Since leader election takes a bit of time, this property specifies the amount of time that the producer waits before refreshing the metadata. | 100 | Integer
 | *sendBufferBytes* (producer) | Socket write buffer size | 131072 | Integer
-| *serializerClass* (producer) | The serializer class for messages. | org.apache.kafka.common.serialization.StringSerializer | String
+| *valueSerializer* (producer) | The serializer class for messages. | org.apache.kafka.common.serialization.StringSerializer | String
 | *workerPool* (producer) | To use a custom worker pool for continue routing Exchange after kafka server has acknowledge the message that was sent to it from KafkaProducer using asynchronous non-blocking processing. If using this option then you must handle the lifecycle of the thread pool to shut the pool down when no longer needed. |  | ExecutorService
 | *workerPoolCoreSize* (producer) | Number of core threads for the worker pool for continue routing Exchange after kafka server has acknowledge the message that was sent to it from KafkaProducer using asynchronous non-blocking processing. | 10 | Integer
 | *workerPoolMaxSize* (producer) | Maximum number of threads for the worker pool for continue routing Exchange after kafka server has acknowledge the message that was sent to it from KafkaProducer using asynchronous non-blocking processing. | 20 | Integer
@@ -240,7 +240,7 @@ with the following path and query parameters:
 | *retries* (producer) | Setting a value greater than zero will cause the client to resend any record whose send fails with a potentially transient error. Note that this retry is no different than if the client resent the record upon receiving the error. Allowing retries will potentially change the ordering of records because if two records are sent to a single partition, and the first fails and is retried but the second succeeds, then the second record may appear first. | 0 | Integer
 | *retryBackoffMs* (producer) | Before each retry, the producer refreshes the metadata of relevant topics to see if a new leader has been elected. Since leader election takes a bit of time, this property specifies the amount of time that the producer waits before refreshing the metadata. | 100 | Integer
 | *sendBufferBytes* (producer) | Socket write buffer size | 131072 | Integer
-| *serializerClass* (producer) | The serializer class for messages. | org.apache.kafka.common.serialization.StringSerializer | String
+| *valueSerializer* (producer) | The serializer class for messages. | org.apache.kafka.common.serialization.StringSerializer | String
 | *workerPool* (producer) | To use a custom worker pool for continue routing Exchange after kafka server has acknowledge the message that was sent to it from KafkaProducer using asynchronous non-blocking processing. If using this option then you must handle the lifecycle of the thread pool to shut the pool down when no longer needed. |  | ExecutorService
 | *workerPoolCoreSize* (producer) | Number of core threads for the worker pool for continue routing Exchange after kafka server has acknowledge the message that was sent to it from KafkaProducer using asynchronous non-blocking processing. | 10 | Integer
 | *workerPoolMaxSize* (producer) | Maximum number of threads for the worker pool for continue routing Exchange after kafka server has acknowledge the message that was sent to it from KafkaProducer using asynchronous non-blocking processing. | 20 | Integer
diff --git a/components/camel-kafka/src/test/java/org/apache/camel/component/kafka/KafkaComponentTest.java b/components/camel-kafka/src/test/java/org/apache/camel/component/kafka/KafkaComponentTest.java
index b47e3bf..6af3393 100644
--- a/components/camel-kafka/src/test/java/org/apache/camel/component/kafka/KafkaComponentTest.java
+++ b/components/camel-kafka/src/test/java/org/apache/camel/component/kafka/KafkaComponentTest.java
@@ -129,7 +129,7 @@ public class KafkaComponentTest extends CamelTestSupport {
                 endpoint.getConfiguration().getMetricReporters());
         assertEquals(Integer.valueOf(3), endpoint.getConfiguration().getNoOfMetricsSample());
         assertEquals(Integer.valueOf(12344), endpoint.getConfiguration().getMetricsSampleWindowMs());
-        assertEquals(KafkaConstants.KAFKA_DEFAULT_SERIALIZER, endpoint.getConfiguration().getSerializerClass());
+        assertEquals(KafkaConstants.KAFKA_DEFAULT_SERIALIZER, endpoint.getConfiguration().getValueSerializer());
         assertEquals(KafkaConstants.KAFKA_DEFAULT_SERIALIZER, endpoint.getConfiguration().getKeySerializer());
         assertEquals("testing", endpoint.getConfiguration().getSslKeyPassword());
         assertEquals("/abc", endpoint.getConfiguration().getSslKeystoreLocation());
diff --git a/core/camel-componentdsl/src/generated/java/org/apache/camel/builder/component/dsl/KafkaComponentBuilderFactory.java b/core/camel-componentdsl/src/generated/java/org/apache/camel/builder/component/dsl/KafkaComponentBuilderFactory.java
index 061fcc1..0f38264 100644
--- a/core/camel-componentdsl/src/generated/java/org/apache/camel/builder/component/dsl/KafkaComponentBuilderFactory.java
+++ b/core/camel-componentdsl/src/generated/java/org/apache/camel/builder/component/dsl/KafkaComponentBuilderFactory.java
@@ -1090,9 +1090,9 @@ public interface KafkaComponentBuilderFactory {
          * Default: org.apache.kafka.common.serialization.StringSerializer
          * Group: producer
          */
-        default KafkaComponentBuilder serializerClass(
-                java.lang.String serializerClass) {
-            doSetProperty("serializerClass", serializerClass);
+        default KafkaComponentBuilder valueSerializer(
+                java.lang.String valueSerializer) {
+            doSetProperty("valueSerializer", valueSerializer);
             return this;
         }
         /**
@@ -1630,7 +1630,7 @@ public interface KafkaComponentBuilderFactory {
             case "retries": getOrCreateConfiguration((KafkaComponent) component).setRetries((java.lang.Integer) value); return true;
             case "retryBackoffMs": getOrCreateConfiguration((KafkaComponent) component).setRetryBackoffMs((java.lang.Integer) value); return true;
             case "sendBufferBytes": getOrCreateConfiguration((KafkaComponent) component).setSendBufferBytes((java.lang.Integer) value); return true;
-            case "serializerClass": getOrCreateConfiguration((KafkaComponent) component).setSerializerClass((java.lang.String) value); return true;
+            case "valueSerializer": getOrCreateConfiguration((KafkaComponent) component).setValueSerializer((java.lang.String) value); return true;
             case "workerPool": getOrCreateConfiguration((KafkaComponent) component).setWorkerPool((java.util.concurrent.ExecutorService) value); return true;
             case "workerPoolCoreSize": getOrCreateConfiguration((KafkaComponent) component).setWorkerPoolCoreSize((java.lang.Integer) value); return true;
             case "workerPoolMaxSize": getOrCreateConfiguration((KafkaComponent) component).setWorkerPoolMaxSize((java.lang.Integer) value); return true;
diff --git a/core/camel-endpointdsl/src/generated/java/org/apache/camel/builder/endpoint/dsl/KafkaEndpointBuilderFactory.java b/core/camel-endpointdsl/src/generated/java/org/apache/camel/builder/endpoint/dsl/KafkaEndpointBuilderFactory.java
index 27f6a8f..67be3f5 100644
--- a/core/camel-endpointdsl/src/generated/java/org/apache/camel/builder/endpoint/dsl/KafkaEndpointBuilderFactory.java
+++ b/core/camel-endpointdsl/src/generated/java/org/apache/camel/builder/endpoint/dsl/KafkaEndpointBuilderFactory.java
@@ -2527,9 +2527,9 @@ public interface KafkaEndpointBuilderFactory {
          * Default: org.apache.kafka.common.serialization.StringSerializer
          * Group: producer
          */
-        default KafkaEndpointProducerBuilder serializerClass(
-                String serializerClass) {
-            doSetProperty("serializerClass", serializerClass);
+        default KafkaEndpointProducerBuilder valueSerializer(
+                String valueSerializer) {
+            doSetProperty("valueSerializer", valueSerializer);
             return this;
         }
         /**
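
And a minimal sketch of the endpoint DSL equivalent (topic and broker values are placeholders; kafka(...) is assumed to come from the endpoint DSL builder factory):

    import org.apache.camel.builder.endpoint.EndpointRouteBuilder;

    public class KafkaValueSerializerDslRoute extends EndpointRouteBuilder {
        @Override
        public void configure() {
            // valueSerializer(...) is the renamed producer builder method shown in the hunk above.
            from("direct:start")
                .to(kafka("mytopic")
                        .brokers("localhost:9092")
                        .valueSerializer("org.apache.kafka.common.serialization.StringSerializer"));
        }
    }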
diff --git a/docs/components/modules/ROOT/pages/kafka-component.adoc b/docs/components/modules/ROOT/pages/kafka-component.adoc
index 9e2b92f..e307248 100644
--- a/docs/components/modules/ROOT/pages/kafka-component.adoc
+++ b/docs/components/modules/ROOT/pages/kafka-component.adoc
@@ -115,7 +115,7 @@ The Kafka component supports 97 options, which are listed below.
 | *retries* (producer) | Setting a value greater than zero will cause the client to resend any record whose send fails with a potentially transient error. Note that this retry is no different than if the client resent the record upon receiving the error. Allowing retries will potentially change the ordering of records because if two records are sent to a single partition, and the first fails and is retried but the second succeeds, then the second record may appear first. | 0 | Integer
 | *retryBackoffMs* (producer) | Before each retry, the producer refreshes the metadata of relevant topics to see if a new leader has been elected. Since leader election takes a bit of time, this property specifies the amount of time that the producer waits before refreshing the metadata. | 100 | Integer
 | *sendBufferBytes* (producer) | Socket write buffer size | 131072 | Integer
-| *serializerClass* (producer) | The serializer class for messages. | org.apache.kafka.common.serialization.StringSerializer | String
+| *valueSerializer* (producer) | The serializer class for messages. | org.apache.kafka.common.serialization.StringSerializer | String
 | *workerPool* (producer) | To use a custom worker pool for continue routing Exchange after kafka server has acknowledge the message that was sent to it from KafkaProducer using asynchronous non-blocking processing. If using this option then you must handle the lifecycle of the thread pool to shut the pool down when no longer needed. |  | ExecutorService
 | *workerPoolCoreSize* (producer) | Number of core threads for the worker pool for continue routing Exchange after kafka server has acknowledge the message that was sent to it from KafkaProducer using asynchronous non-blocking processing. | 10 | Integer
 | *workerPoolMaxSize* (producer) | Maximum number of threads for the worker pool for continue routing Exchange after kafka server has acknowledge the message that was sent to it from KafkaProducer using asynchronous non-blocking processing. | 20 | Integer
@@ -242,7 +242,7 @@ with the following path and query parameters:
 | *retries* (producer) | Setting a value greater than zero will cause the client to resend any record whose send fails with a potentially transient error. Note that this retry is no different than if the client resent the record upon receiving the error. Allowing retries will potentially change the ordering of records because if two records are sent to a single partition, and the first fails and is retried but the second succeeds, then the second record may appear first. | 0 | Integer
 | *retryBackoffMs* (producer) | Before each retry, the producer refreshes the metadata of relevant topics to see if a new leader has been elected. Since leader election takes a bit of time, this property specifies the amount of time that the producer waits before refreshing the metadata. | 100 | Integer
 | *sendBufferBytes* (producer) | Socket write buffer size | 131072 | Integer
-| *serializerClass* (producer) | The serializer class for messages. | org.apache.kafka.common.serialization.StringSerializer | String
+| *valueSerializer* (producer) | The serializer class for messages. | org.apache.kafka.common.serialization.StringSerializer | String
 | *workerPool* (producer) | To use a custom worker pool for continue routing Exchange after kafka server has acknowledge the message that was sent to it from KafkaProducer using asynchronous non-blocking processing. If using this option then you must handle the lifecycle of the thread pool to shut the pool down when no longer needed. |  | ExecutorService
 | *workerPoolCoreSize* (producer) | Number of core threads for the worker pool for continue routing Exchange after kafka server has acknowledge the message that was sent to it from KafkaProducer using asynchronous non-blocking processing. | 10 | Integer
 | *workerPoolMaxSize* (producer) | Maximum number of threads for the worker pool for continue routing Exchange after kafka server has acknowledge the message that was sent to it from KafkaProducer using asynchronous non-blocking processing. | 20 | Integer