Posted to commits@camel.apache.org by ac...@apache.org on 2020/10/22 09:59:21 UTC

[camel-kafka-connector-examples] 01/03: [SQSSource] Rephrase some parts of the readme + fix mistakes

This is an automated email from the ASF dual-hosted git repository.

acosentino pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/camel-kafka-connector-examples.git

commit 75543f4f34459e9394062c0ef7c1a31724353f22
Author: Andrej Vano <av...@redhat.com>
AuthorDate: Thu Oct 22 09:46:11 2020 +0200

    [SQSSource] Rephrase some parts of the readme + fix mistakes
---
 aws2-sqs/aws2-sqs-source/README.adoc               | 205 ++++++++++++---------
 .../openshift/aws2-sqs-source-connector.yaml       |   6 +-
 2 files changed, 118 insertions(+), 93 deletions(-)

diff --git a/aws2-sqs/aws2-sqs-source/README.adoc b/aws2-sqs/aws2-sqs-source/README.adoc
index 7ae5e62..80dd3c4 100644
--- a/aws2-sqs/aws2-sqs-source/README.adoc
+++ b/aws2-sqs/aws2-sqs-source/README.adoc
@@ -1,42 +1,52 @@
-# Camel-Kafka-connector AWS2 SQS Source
+= Camel-Kafka-connector AWS2 SQS Source
 
-This is an example for Camel-Kafka-connector AW2-SQS
+This is an example for Camel-Kafka-connector AWS2-SQS
 
-## Standalone 
+== Standalone
 
-### What is needed
+=== What is needed
 
 - An AWS SQS queue
 
-### Running Kafka
+=== Running Kafka
 
-```
-$KAFKA_HOME/bin/zookeeper-server-start.sh config/zookeeper.properties
-$KAFKA_HOME/bin/kafka-server-start.sh config/server.properties
+[source]
+----
+$KAFKA_HOME/bin/zookeeper-server-start.sh $KAFKA_HOME/config/zookeeper.properties
+$KAFKA_HOME/bin/kafka-server-start.sh $KAFKA_HOME/config/server.properties
 $KAFKA_HOME/bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic mytopic
-```
-
-### Setting up the needed bits and running the example
-
-You'll need to setup the plugin.path property in your kafka
-
-Open the `$KAFKA_HOME/config/connect-standalone.properties`
+----
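Optionally, before moving on, you can verify the topic was created (a quick sanity check, assuming the broker started above is listening on localhost:9092):

```shell
# Describe the topic created above; it should report 1 partition
# and a replication factor of 1.
$KAFKA_HOME/bin/kafka-topics.sh --describe --bootstrap-server localhost:9092 --topic mytopic
```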
 
-and set the `plugin.path` property to your choosen location
+=== Download the connector package
 
-In this example we'll use `/home/oscerd/connectors/`
+Download the connector package zip and extract its content to a directory. In this example we'll use `/home/oscerd/connectors/`
 
-```
+[source]
+----
 > cd /home/oscerd/connectors/
 > wget https://repo1.maven.org/maven2/org/apache/camel/kafkaconnector/camel-aws2-sqs-kafka-connector/0.5.0/camel-aws2-sqs-kafka-connector-0.5.0-package.zip
 > unzip camel-aws2-sqs-kafka-connector-0.5.0-package.zip
-```
+----
+
+=== Configuring Kafka Connect
+
+You'll need to set up the `plugin.path` property in your Kafka Connect configuration
 
-Now it's time to setup the connectors
+Open the `$KAFKA_HOME/config/connect-standalone.properties` and set the `plugin.path` property to your chosen location
 
-Open the AWS2 SQS configuration file
+[source]
+----
+...
+plugin.path=/home/oscerd/connectors
+...
+----
+
+=== Set up the connector
 
-```
+Open the AWS2 SQS configuration file at `$EXAMPLES/aws2-sqs/aws2-sqs-source/config/CamelAWS2SQSSourceConnector.properties`
+
+[source]
+----
 name=CamelAWS2SQSSourceConnector
 connector.class=org.apache.camel.kafkaconnector.aws2sqs.CamelAws2sqsSourceConnector
 key.converter=org.apache.kafka.connect.storage.StringConverter
@@ -46,39 +56,45 @@ camel.source.maxPollDuration=10000
 
 topics=mytopic
 
-camel.source.url=aws2-sqs://camel-1?deleteAfterRead=false&deleteIfFiltered=true
+camel.source.path.queueNameOrArn=camel-1
+camel.source.endpoint.deleteAfterRead=false
 
 camel.component.aws2-sqs.access-key=xxxx
 camel.component.aws2-sqs.secret-key=yyyy
 camel.component.aws2-sqs.region=eu-west-1
-```
+----
 
 and add the correct credentials for AWS.
 
-Now you can run the example
+=== Running the example
+
+Run the kafka connect with the SQS Source connector:
 
-```
-$KAFKA_HOME/bin/connect-standalone.sh $KAFKA_HOME/config/connect-standalone.properties config/CamelAWSS3SourceConnector.properties config/CamelAWS2SQSSourceConnector.properties
-```
+[source]
+----
+$KAFKA_HOME/bin/connect-standalone.sh $KAFKA_HOME/config/connect-standalone.properties $EXAMPLES/aws2-sqs/aws2-sqs-source/config/CamelAWS2SQSSourceConnector.properties
+----
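Once Connect is up, you can optionally confirm that the connector loaded and started via the Kafka Connect REST API (8083 is the default port in standalone mode; the connector name comes from the properties file above):

```shell
# List registered connectors, then query the status of the SQS source connector.
curl -s http://localhost:8083/connectors
curl -s http://localhost:8083/connectors/CamelAWS2SQSSourceConnector/status
```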
 
-Just connect to your AWS Console and send message to the camel-1 queue, through the AWS Console.
+Connect to your AWS Console and send a message to the `camel-1` queue.
 
 On a different terminal, run the kafka-console-consumer and you should see the messages from the SQS queue arriving through the Kafka broker.
 
-```
-bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic mytopic --from-beginning
-SQS to Kafka through Camel
-SQS to Kafka through Camel
-```
+[source]
+----
+$KAFKA_HOME/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic mytopic --from-beginning
+<your message 1>
+<your message 2>
+...
+----
 
-## Openshift
+== Openshift
 
-### What is needed
+=== What is needed
 
 - An AWS SQS queue
 - An Openshift instance
 
-### Running Kafka using Strimzi Operator
+=== Running Kafka using Strimzi Operator
 
 First we install the Strimzi operator and use it to deploy the Kafka broker and Kafka Connect into our OpenShift project.
 We need to create security objects as part of the installation, so it is necessary to switch to the admin user.
@@ -111,7 +127,7 @@ We can now install the Strimzi operator into this project:
 oc apply -f https://github.com/strimzi/strimzi-kafka-operator/releases/download/0.19.0/strimzi-cluster-operator-0.19.0.yaml
 ----
 
-Next we will deploy a Kafka broker cluster and a Kafka Connect cluster and then create a Kafka Connect image with the Debezium connectors installed:
+Next we will deploy a Kafka broker cluster and a Kafka Connect cluster and then create a Kafka Connect image with the SQS connectors installed:
 
 [source,bash,options="nowrap",subs="attributes"]
 ----
@@ -122,29 +138,30 @@ oc apply -f https://github.com/strimzi/strimzi-kafka-operator/raw/0.19.0/example
 oc apply -f https://github.com/strimzi/strimzi-kafka-operator/raw/0.19.0/examples/connect/kafka-connect-s2i-single-node-kafka.yaml
 ----
 
-Optionally enable the possibility to instantiate Kafka Connectors through specific custom resource:
+In the OpenShift environment, you can instantiate the Kafka Connectors in two ways, either using the Kafka Connect API, or through an OpenShift custom resource.
+
+If you want to use the custom resources, you need to add the following annotation to the Kafka Connect S2I custom resource:
 [source,bash,options="nowrap"]
 ----
 oc annotate kafkaconnects2is my-connect-cluster strimzi.io/use-connector-resources=true
 ----
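To confirm the annotation took effect, one possible check (using the cluster name from the command above):

```shell
# Print the annotation value; it should output "true" if the annotate command succeeded.
oc get kafkaconnects2i my-connect-cluster \
  -o jsonpath='{.metadata.annotations.strimzi\.io/use-connector-resources}'
```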
 
-### Add Camel Kafka connector binaries
+=== Add Camel Kafka connector binaries
 
 Strimzi uses `Source2Image` builds to allow users to add their own connectors to the existing Strimzi Docker images.
-We now need to build the connectors and add them to the image,
-if you have built the whole project (`mvn clean package`) decompress the connectors you need in a folder (i.e. like `my-connectors/`)
+We now need to build the connectors and add them to the image.
+If you have built the whole `Camel Kafka Connector` project (`mvn clean package`), decompress the connectors you need in a folder (e.g. `my-connectors/`)
 so that each one is in its own subfolder
 (alternatively you can download the latest officially released and packaged connectors from maven):
 
-So we need to do something like this:
-
-```
+[source]
+----
 > cd my-connectors/
 > wget https://repo1.maven.org/maven2/org/apache/camel/kafkaconnector/camel-aws2-sqs-kafka-connector/0.5.0/camel-aws2-sqs-kafka-connector-0.5.0-package.zip
 > unzip camel-aws2-sqs-kafka-connector-0.5.0-package.zip
-```
+----
 
-Now we can start the build 
+Now we can start the build
 
 [source,bash,options="nowrap"]
 ----
@@ -169,16 +186,20 @@ You should see something like this:
 [{"class":"org.apache.camel.kafkaconnector.CamelSinkConnector","type":"sink","version":"0.5.0"},{"class":"org.apache.camel.kafkaconnector.CamelSourceConnector","type":"source","version":"0.5.0"},{"class":"org.apache.camel.kafkaconnector.aws2sqs.CamelAws2sqsSinkConnector","type":"sink","version":"0.5.0"},{"class":"org.apache.camel.kafkaconnector.aws2sqs.CamelAws2sqsSourceConnector","type":"source","version":"0.5.0"},{"class":"org.apache.kafka.connect.file.FileStreamSinkConnector","type":" [...]
 ----
 
-### Set the AWS credential as secret (optional)
+=== Set the AWS credentials as OpenShift secret (optional)
+
+Credentials to your AWS account can be specified directly in the connector instance definition in plain text, or you can create an OpenShift secret object beforehand and then reference the secret.
 
-You can also set the aws creds option as secret, you'll need to edit the file config/aws2-sqs-cred.properties with the correct credentials and then execute the following command
+If you want to use the secret, you'll need to edit the file `$EXAMPLES/aws2-sqs/aws2-sqs-source/config/openshift/aws2-sqs-cred.properties` with the correct credentials and then create the secret with the following command:
 
 [source,bash,options="nowrap"]
 ----
-oc create secret generic aws2-sqs --from-file=config/openshift/aws2-sqs-cred.properties
+oc create secret generic aws2-sqs --from-file=$EXAMPLES/aws2-sqs/aws2-sqs-source/config/openshift/aws2-sqs-cred.properties
 ----
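You can double-check the secret before referencing it, for example:

```shell
# Show the secret's metadata and the keys it contains
# (values are not printed in clear text by `describe`).
oc describe secret aws2-sqs
```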
 
-Now we need to edit KafkaConnectS2I custom resource to reference the secret. For example:
+Then you need to edit the KafkaConnectS2I custom resource to reference the secret. You can do that either in the OpenShift console or using the `oc edit KafkaConnectS2I` command.
+
+Add the following configuration to the custom resource:
 
 [source,bash,options="nowrap"]
 ----
@@ -195,37 +216,11 @@ spec:
           secretName: aws2-sqs
 ----
 
-In this way the secret aws2-sqs will be mounted as volume with path /opt/kafka/external-configuration/aws-credentials/
-
-### Create connector instance
+In this way the secret `aws2-sqs` will be mounted as a volume at the path `/opt/kafka/external-configuration/aws-credentials/`.
 
-Now we can create some instance of the AWS2 SQS source connector:
-
-[source,bash,options="nowrap"]
-----
-oc exec -i `oc get pods --field-selector status.phase=Running -l strimzi.io/name=my-connect-cluster-connect -o=jsonpath='{.items[0].metadata.name}'` -- curl -X POST \
-    -H "Accept:application/json" \
-    -H "Content-Type:application/json" \
-    http://my-connect-cluster-connect-api:8083/connectors -d @- <<'EOF'
-{
-  "name": "sqs-source-connector",
-  "config": {
-    "connector.class": "org.apache.camel.kafkaconnector.aws2sqs.CamelAws2sqsSourceConnector",
-    "tasks.max": "1",
-    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
-    "value.converter": "org.apache.kafka.connect.storage.StringConverter",
-    "topics": "sqs-topic",
-    "camel.source.path.queueNameOrArn": "camel-connector-test",
-    "camel.source.maxPollDuration": 10000,
-    "camel.component.aws2-sqs.accessKey": "xxx",
-    "camel.component.aws2-sqs.secretKey": "xxx",
-    "camel.component.aws2-sqs.region": "xxx"
-  }
-}
-EOF
-----
+=== Create connector instance
 
-Altenatively, if have enabled `use-connector-resources`, you can create the connector instance by creating a specific custom resource:
+If you have enabled the connector custom resources using the `use-connector-resources` annotation, you can create the connector instance by creating a specific custom resource:
 
 [source,bash,options="nowrap"]
 ----
@@ -246,36 +241,66 @@ spec:
     topics: sqs-topic
     camel.source.path.queueNameOrArn: camel-connector-test
     camel.source.maxPollDuration: 10000
-    camel.component.aws2-sqs.accessKey: xxxx
-    camel.component.aws2-sqs.secretKey: yyyy
-    camel.component.aws2-sqs.region: region
+    camel.component.aws2-sqs.accessKey: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-sqs-cred.properties:accessKey}
+    camel.component.aws2-sqs.secretKey: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-sqs-cred.properties:secretKey}
+    camel.component.aws2-sqs.region: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-sqs-cred.properties:region}
 EOF
 ----
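Note that the `${file:...}` placeholders in the connector configuration above are resolved by Kafka's `FileConfigProvider`. If your Kafka Connect cluster does not already have a config provider enabled, a sketch of the additional `KafkaConnectS2I` configuration (shown here under the same `spec` as the volume above) would look like:

```yaml
spec:
  config:
    # Register a file-based config provider named "file",
    # so ${file:<path>:<key>} references can be resolved at runtime.
    config.providers: file
    config.providers.file.class: org.apache.kafka.common.config.provider.FileConfigProvider
```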
 
-If you followed the optional step for secret credentials you can run the following command:
+If you don't want to use the OpenShift secret for storing the credentials, replace the placeholder properties in the custom resource with the actual values,
+otherwise you can now create the custom resource using:
+
+[source]
+----
+oc apply -f $EXAMPLES/aws2-sqs/aws2-sqs-source/config/openshift/aws2-sqs-source-connector.yaml
+----
+
+The other option, if you are not using the custom resources, is to create an instance of the AWS2 SQS source connector through the Kafka Connect API:
 
 [source,bash,options="nowrap"]
 ----
-oc apply -f config/openshift/aws2-sqs-source-connector.yaml
+oc exec -i `oc get pods --field-selector status.phase=Running -l strimzi.io/name=my-connect-cluster-connect -o=jsonpath='{.items[0].metadata.name}'` -- curl -X POST \
+    -H "Accept:application/json" \
+    -H "Content-Type:application/json" \
+    http://my-connect-cluster-connect-api:8083/connectors -d @- <<'EOF'
+{
+  "name": "sqs-source-connector",
+  "config": {
+    "connector.class": "org.apache.camel.kafkaconnector.aws2sqs.CamelAws2sqsSourceConnector",
+    "tasks.max": "1",
+    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
+    "value.converter": "org.apache.kafka.connect.storage.StringConverter",
+    "topics": "sqs-topic",
+    "camel.source.path.queueNameOrArn": "camel-connector-test",
+    "camel.source.maxPollDuration": 10000,
+    "camel.component.aws2-sqs.accessKey": "${file:/opt/kafka/external-configuration/aws-credentials/aws2-sqs-cred.properties:accessKey}",
+    "camel.component.aws2-sqs.secretKey": "${file:/opt/kafka/external-configuration/aws-credentials/aws2-sqs-cred.properties:secretKey}",
+    "camel.component.aws2-sqs.region": "${file:/opt/kafka/external-configuration/aws-credentials/aws2-sqs-cred.properties:region}"
+  }
+}
+EOF
 ----
 
-You can check the status of the connector using
+Again, if you don't use the OpenShift secret, replace the properties with your actual AWS credentials.
+
+You can check the status of the connector using:
 
 [source,bash,options="nowrap"]
 ----
 oc exec -i `oc get pods --field-selector status.phase=Running -l strimzi.io/name=my-connect-cluster-connect -o=jsonpath='{.items[0].metadata.name}'` -- curl -s http://my-connect-cluster-connect-api:8083/connectors/sqs-source-connector/status
 ----
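If you have `jq` available locally, you can pull just the states out of the status JSON (same pod-lookup command as above):

```shell
# Summarize connector and task states from the Connect REST API response.
oc exec -i `oc get pods --field-selector status.phase=Running -l strimzi.io/name=my-connect-cluster-connect -o=jsonpath='{.items[0].metadata.name}'` -- \
  curl -s http://my-connect-cluster-connect-api:8083/connectors/sqs-source-connector/status \
  | jq '{connector: .connector.state, tasks: [.tasks[].state]}'
```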
 
-Just connect to your AWS Console and send message to the camel-connector-test, through the AWS Console.
+Then you can connect to your AWS Console and send a message to the `camel-connector-test` queue.
 
-### Check received messages
+=== Check received messages
 
 You can also run the Kafka console consumer to see the messages received from the topic:
 
 [source,bash,options="nowrap"]
 ----
 oc exec -i -c kafka my-cluster-kafka-0 -- bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic sqs-topic --from-beginning
-SQS to Kafka through Camel
-SQS to Kafka through Camel
+<your message 1>
+<your message 2>
+...
 ----
 
diff --git a/aws2-sqs/aws2-sqs-source/config/openshift/aws2-sqs-source-connector.yaml b/aws2-sqs/aws2-sqs-source/config/openshift/aws2-sqs-source-connector.yaml
index d6f92cd..8cb3de1 100644
--- a/aws2-sqs/aws2-sqs-source/config/openshift/aws2-sqs-source-connector.yaml
+++ b/aws2-sqs/aws2-sqs-source/config/openshift/aws2-sqs-source-connector.yaml
@@ -14,6 +14,6 @@ spec:
     topics: sqs-topic
     camel.source.path.queueNameOrArn: camel-connector-test
     camel.source.maxPollDuration: 10000
-    camel.component.aws2-s3.accessKey: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-sqs-cred.properties:accessKey}
-    camel.component.aws2-s3.secretKey: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-sqs-cred.properties:secretKey}
-    camel.component.aws2-s3.region: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-sqs-cred.properties:region}
+    camel.component.aws2-sqs.accessKey: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-sqs-cred.properties:accessKey}
+    camel.component.aws2-sqs.secretKey: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-sqs-cred.properties:secretKey}
+    camel.component.aws2-sqs.region: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-sqs-cred.properties:region}