Posted to commits@camel.apache.org by ac...@apache.org on 2020/10/22 09:59:20 UTC

[camel-kafka-connector-examples] branch master updated (01ee272 -> 7572a93)

This is an automated email from the ASF dual-hosted git repository.

acosentino pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/camel-kafka-connector-examples.git.


    from 01ee272  Telegram Sink Example: Better Naming for Bot Authorization Token
     new 75543f4  [SQSSource] Rephrase some parts of the readme + fix mistakes
     new a23ebcc  [SQSSink] Rephrase some parts of the readme + fix mistakes
     new 7572a93  [SNSSink] Rephrase some parts of the readme + fix mistakes

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 aws2-sns/aws2-sns-sink/README.adoc                 | 196 +++++++++++---------
 .../config/openshift/aws2-sns-sink-connector.yaml  |   6 +-
 aws2-sqs/aws2-sqs-sink/README.adoc                 | 192 ++++++++++---------
 .../config/openshift/aws2-sqs-sink-connector.yaml  |   6 +-
 aws2-sqs/aws2-sqs-source/README.adoc               | 205 ++++++++++++---------
 .../openshift/aws2-sqs-source-connector.yaml       |   6 +-
 6 files changed, 344 insertions(+), 267 deletions(-)


[camel-kafka-connector-examples] 02/03: [SQSSink] Rephrase some parts of the readme + fix mistakes

Posted by ac...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

acosentino pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/camel-kafka-connector-examples.git

commit a23ebcc2a552912a858942838a84eb0ba3216150
Author: Andrej Vano <av...@redhat.com>
AuthorDate: Thu Oct 22 10:58:59 2020 +0200

    [SQSSink] Rephrase some parts of the readme + fix mistakes
---
 aws2-sqs/aws2-sqs-sink/README.adoc                 | 192 ++++++++++++---------
 .../config/openshift/aws2-sqs-sink-connector.yaml  |   6 +-
 2 files changed, 111 insertions(+), 87 deletions(-)

diff --git a/aws2-sqs/aws2-sqs-sink/README.adoc b/aws2-sqs/aws2-sqs-sink/README.adoc
index fcab2e9..1c25900 100644
--- a/aws2-sqs/aws2-sqs-sink/README.adoc
+++ b/aws2-sqs/aws2-sqs-sink/README.adoc
@@ -1,42 +1,52 @@
-# Camel-Kafka-connector AWS2 SQS Sink
+= Camel-Kafka-connector AWS2 SQS Sink
 
-This is an example for Camel-Kafka-connector AW2-SQS Sink
+This is an example for Camel-Kafka-connector AWS2-SQS Sink
 
-## Standalone
+== Standalone
 
-### What is needed
+=== What is needed
 
 - An AWS SQS queue
 
-### Running Kafka
+=== Running Kafka
 
-```
-$KAFKA_HOME/bin/zookeeper-server-start.sh config/zookeeper.properties
-$KAFKA_HOME/bin/kafka-server-start.sh config/server.properties
+[source]
+----
+$KAFKA_HOME/bin/zookeeper-server-start.sh $KAFKA_HOME/config/zookeeper.properties
+$KAFKA_HOME/bin/kafka-server-start.sh $KAFKA_HOME/config/server.properties
 $KAFKA_HOME/bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic mytopic
-```
-
-### Setting up the needed bits and running the example
-
-You'll need to setup the plugin.path property in your kafka
-
-Open the `$KAFKA_HOME/config/connect-standalone.properties`
+----
 
-and set the `plugin.path` property to your choosen location
+=== Download the connector package
 
-In this example we'll use `/home/oscerd/connectors/`
+Download the connector package zip and extract the content to a directory. In this example we'll use `/home/oscerd/connectors/`
 
-```
+[source]
+----
 > cd /home/oscerd/connectors/
 > wget https://repo1.maven.org/maven2/org/apache/camel/kafkaconnector/camel-aws2-sqs-kafka-connector/0.5.0/camel-aws2-sqs-kafka-connector-0.5.0-package.zip
 > unzip camel-aws2-sqs-kafka-connector-0.5.0-package.zip
-```
+----
+
+=== Configuring Kafka Connect
+
+You'll need to set up the `plugin.path` property in your Kafka Connect configuration.
 
-Now it's time to setup the connectors
+Open the `$KAFKA_HOME/config/connect-standalone.properties` and set the `plugin.path` property to your chosen location:
 
-Open the AWS2 SQS configuration file
+[source]
+----
+...
+plugin.path=/home/oscerd/connectors
+...
+----
+
+=== Set up the connectors
 
-```
+Open the AWS2 SQS configuration file at `$EXAMPLES/aws2-sqs/aws2-sqs-sink/config/CamelAWS2SQSSinkConnector.properties`:
+
+[source]
+----
 name=CamelAWS2SQSSinkConnector
 connector.class=org.apache.camel.kafkaconnector.aws2sqs.CamelAws2sqsSinkConnector
 key.converter=org.apache.kafka.connect.storage.StringConverter
@@ -49,36 +59,39 @@ camel.sink.path.queueNameOrArn=camel-1
 camel.component.aws2-sqs.access-key=xxxx
 camel.component.aws2-sqs.secret-key=yyyy
 camel.component.aws2-sqs.region=eu-west-1
-```
+----
 
 and add the correct credentials for AWS.
 
-Now you can run the example
+=== Running the example
 
-```
-$KAFKA_HOME/bin/connect-standalone.sh $KAFKA_HOME/config/connect-standalone.properties config/CamelAWS2SQSSinkConnector.properties
-```
+Run Kafka Connect with the SQS sink connector:
 
-Just connect to your AWS Console and poll message on the SQS Queue Camel-1
+[source]
+----
+$KAFKA_HOME/bin/connect-standalone.sh $KAFKA_HOME/config/connect-standalone.properties $EXAMPLES/aws2-sqs/aws2-sqs-sink/config/CamelAWS2SQSSinkConnector.properties
+----
 
-On a different terminal run the kafka-producer and send messages to your Kafka Broker.
+On a different terminal run the kafka-producer and send messages to your Kafka Broker:
 
-```
+[source]
+----
 bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic mytopic
 Kafka to SQS message 1
 Kafka to SQS message 2
-```
+----
 
-You shold see the messages enqueued in the camel-1 SQS queue.
+You should see the messages enqueued in the `camel-1` SQS queue.
 
-## Openshift
+== OpenShift
 
-### What is needed
+=== What is needed
 
 - An AWS SQS queue
 - An Openshift instance
 
-### Running Kafka using Strimzi Operator
+=== Running Kafka using Strimzi Operator
 
 First we install the Strimzi operator and use it to deploy the Kafka broker and Kafka Connect into our OpenShift project.
 We need to create security objects as part of installation so it is necessary to switch to admin user.
@@ -122,29 +135,32 @@ oc apply -f https://github.com/strimzi/strimzi-kafka-operator/raw/0.19.0/example
 oc apply -f https://github.com/strimzi/strimzi-kafka-operator/raw/0.19.0/examples/connect/kafka-connect-s2i-single-node-kafka.yaml
 ----
 
-Optionally enable the possibility to instantiate Kafka Connectors through specific custom resource:
+In the OpenShift environment, you can instantiate the Kafka Connectors in two ways: either through the Kafka Connect API, or through an OpenShift custom resource.
+
+If you want to use the custom resources, you need to add the following annotation to the Kafka Connect S2I custom resource:
 [source,bash,options="nowrap"]
 ----
 oc annotate kafkaconnects2is my-connect-cluster strimzi.io/use-connector-resources=true
 ----
 
-### Add Camel Kafka connector binaries
+=== Add Camel Kafka connector binaries
 
 Strimzi uses `Source2Image` builds to allow users to add their own connectors to the existing Strimzi Docker images.
-We now need to build the connectors and add them to the image,
-if you have built the whole project (`mvn clean package`) decompress the connectors you need in a folder (i.e. like `my-connectors/`)
+We now need to build the connectors and add them to the image.
+If you have built the whole `Camel Kafka Connector` project (`mvn clean package`), decompress the connectors you need into a folder (e.g. `my-connectors/`)
 so that each one is in its own subfolder
 (alternatively you can download the latest officially released and packaged connectors from maven):
 
 So we need to do something like this:
 
-```
+[source]
+----
 > cd my-connectors/
 > wget https://repo1.maven.org/maven2/org/apache/camel/kafkaconnector/camel-aws2-sqs-kafka-connector/0.5.0/camel-aws2-sqs-kafka-connector-0.5.0-package.zip
 > unzip camel-aws2-sqs-kafka-connector-0.5.0-package.zip
-```
+----
 
-Now we can start the build 
+Now we can start the build
 
 [source,bash,options="nowrap"]
 ----
@@ -169,16 +185,20 @@ You should see something like this:
 [{"class":"org.apache.camel.kafkaconnector.CamelSinkConnector","type":"sink","version":"0.5.0"},{"class":"org.apache.camel.kafkaconnector.CamelSourceConnector","type":"source","version":"0.5.0"},{"class":"org.apache.camel.kafkaconnector.aws2sqs.CamelAws2sqsSinkConnector","type":"sink","version":"0.5.0"},{"class":"org.apache.camel.kafkaconnector.aws2sqs.CamelAws2sqsSourceConnector","type":"source","version":"0.5.0"},{"class":"org.apache.kafka.connect.file.FileStreamSinkConnector","type":" [...]
 ----
 
-### Set the AWS credential as secret (optional)
+=== Set the AWS credentials as an OpenShift secret (optional)
 
-You can also set the aws creds option as secret, you'll need to edit the file config/aws2-sqs-cred.properties with the correct credentials and then execute the following command
+Credentials for your AWS account can be specified directly in the connector instance definition in plain text, or you can create an OpenShift secret object beforehand and then reference the secret.
+
+If you want to use the secret, you'll need to edit the file `$EXAMPLES/aws2-sqs/aws2-sqs-sink/config/openshift/aws2-sqs-cred.properties` with the correct credentials and then create the secret with the following command:
 
 [source,bash,options="nowrap"]
 ----
-oc create secret generic aws2-sqs --from-file=config/openshift/aws2-sqs-cred.properties
+oc create secret generic aws2-sqs --from-file=$EXAMPLES/aws2-sqs/aws2-sqs-sink/config/openshift/aws2-sqs-cred.properties
 ----
 
-Now we need to edit KafkaConnectS2I custom resource to reference the secret. For example:
+Then you need to edit the KafkaConnectS2I custom resource to reference the secret. You can do that either in the OpenShift console or using the `oc edit KafkaConnectS2I` command.
+
+Add the following configuration to the custom resource:
 
 [source,bash,options="nowrap"]
 ----
@@ -195,36 +215,11 @@ spec:
           secretName: aws2-sqs
 ----
 
-In this way the secret aws2-sqs will be mounted as volume with path /opt/kafka/external-configuration/aws-credentials/
+In this way the secret `aws2-sqs` will be mounted as a volume with the path `/opt/kafka/external-configuration/aws-credentials/`.
 
-### Create connector instance
+=== Create connector instance
 
-Now we can create some instance of AWS2 SQS Sink connector
-
-[source,bash,options="nowrap"]
-----
-oc exec -i `oc get pods --field-selector status.phase=Running -l strimzi.io/name=my-connect-cluster-connect -o=jsonpath='{.items[0].metadata.name}'` -- curl -X POST \
-    -H "Accept:application/json" \
-    -H "Content-Type:application/json" \
-    http://my-connect-cluster-connect-api:8083/connectors -d @- <<'EOF'
-{
-  "name": "sqs-sink-connector",
-  "config": {
-    "connector.class": "org.apache.camel.kafkaconnector.aws2sqs.CamelAws2sqsSinkConnector",
-    "tasks.max": "1",
-    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
-    "value.converter": "org.apache.kafka.connect.storage.StringConverter",
-    "topics": "sqs-topic",
-    "camel.sink.path.queueNameOrArn": "camel-connector-test,
-    "camel.component.aws2-sqs.accessKey": "xxx",
-    "camel.component.aws2-sqs.secretKey": "xxx",
-    "camel.component.aws2-sqs.region": "xxx"
-  }
-}
-EOF
-----
-
-Altenatively, if you have enabled `use-connector-resources`, you can create the connector instance by creating a specific custom resource:
+If you have enabled the connector custom resources using the `use-connector-resources` annotation, you can create the connector instance by creating a specific custom resource:
 
 [source,bash,options="nowrap"]
 ----
@@ -244,35 +239,64 @@ spec:
     value.converter: org.apache.kafka.connect.storage.StringConverter
     topics: sqs-topic
     camel.sink.path.queueNameOrArn: camel-connector-test
-    camel.component.aws2-sqs.accessKey: xxxx
-    camel.component.aws2-sqs.secretKey: yyyy
-    camel.component.aws2-sqs.region: region
+    camel.component.aws2-sqs.accessKey: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-sqs-cred.properties:accessKey}
+    camel.component.aws2-sqs.secretKey: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-sqs-cred.properties:secretKey}
+    camel.component.aws2-sqs.region: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-sqs-cred.properties:region}
 EOF
 ----
 
-If you followed the optional step for secret credentials you can run the following command:
+If you don't want to use the OpenShift secret for storing the credentials, replace the placeholders in the custom resource with the actual values;
+otherwise you can now create the custom resource using:
+
+[source]
+----
+oc apply -f $EXAMPLES/aws2-sqs/aws2-sqs-sink/config/openshift/aws2-sqs-sink-connector.yaml
+----
+
+The other option, if you are not using the custom resources, is to create an instance of the AWS2 SQS sink connector through the Kafka Connect API:
 
 [source,bash,options="nowrap"]
 ----
-oc apply -f config/openshift/aws2-sqs-sink-connector.yaml
+oc exec -i `oc get pods --field-selector status.phase=Running -l strimzi.io/name=my-connect-cluster-connect -o=jsonpath='{.items[0].metadata.name}'` -- curl -X POST \
+    -H "Accept:application/json" \
+    -H "Content-Type:application/json" \
+    http://my-connect-cluster-connect-api:8083/connectors -d @- <<'EOF'
+{
+  "name": "sqs-sink-connector",
+  "config": {
+    "connector.class": "org.apache.camel.kafkaconnector.aws2sqs.CamelAws2sqsSinkConnector",
+    "tasks.max": "1",
+    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
+    "value.converter": "org.apache.kafka.connect.storage.StringConverter",
+    "topics": "sqs-topic",
+    "camel.sink.path.queueNameOrArn": "camel-connector-test,
+    "camel.component.aws2-sqs.accessKey": "${file:/opt/kafka/external-configuration/aws-credentials/aws2-sqs-cred.properties:accessKey}",
+    "camel.component.aws2-sqs.secretKey": "${file:/opt/kafka/external-configuration/aws-credentials/aws2-sqs-cred.properties:secretKey}",
+    "camel.component.aws2-sqs.region": "${file:/opt/kafka/external-configuration/aws-credentials/aws2-sqs-cred.properties:region}"
+  }
+}
+EOF
 ----
 
-You can check the status of the connector using
+Again, if you don't use the OpenShift secret, replace the properties with your actual AWS credentials.
+
+You can check the status of the connector using:
 
 [source,bash,options="nowrap"]
 ----
 oc exec -i `oc get pods --field-selector status.phase=Running -l strimzi.io/name=my-connect-cluster-connect -o=jsonpath='{.items[0].metadata.name}'` -- curl -s http://my-connect-cluster-connect-api:8083/connectors/sqs-sink-connector/status
 ----
 
-### Check enqueued messages
+=== Check enqueued messages
 
-Just connect to your AWS Console and poll message on the SQS Queue Camel-1
+Just connect to your AWS Console and poll messages on the SQS queue `camel-connector-test`.
 
 Run the kafka-producer and send messages to your Kafka Broker.
 
-```
+[source]
+----
 oc exec -i -c kafka my-cluster-kafka-0 -- bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic sqs-topic
 Kafka to SQS message 1
 Kafka to SQS message 2
-```
+----
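
Note on the `${file:...}` placeholders used in the connector configs above: they are resolved by Kafka Connect's standard `org.apache.kafka.common.config.provider.FileConfigProvider`, which has to be enabled on the Connect cluster. A minimal sketch of the relevant part of the KafkaConnectS2I spec, assuming the Strimzi 0.19 schema and the volume name implied by the mount path `/opt/kafka/external-configuration/aws-credentials/`:

[source,yaml]
----
spec:
  config:
    # enable file-based placeholder resolution (the provider ships with Kafka)
    config.providers: file
    config.providers.file.class: org.apache.kafka.common.config.provider.FileConfigProvider
  externalConfiguration:
    volumes:
      # Strimzi mounts each volume at /opt/kafka/external-configuration/<name>/
      - name: aws-credentials
        secret:
          secretName: aws2-sqs
----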
 
diff --git a/aws2-sqs/aws2-sqs-sink/config/openshift/aws2-sqs-sink-connector.yaml b/aws2-sqs/aws2-sqs-sink/config/openshift/aws2-sqs-sink-connector.yaml
index 2bd8528..35279b8 100644
--- a/aws2-sqs/aws2-sqs-sink/config/openshift/aws2-sqs-sink-connector.yaml
+++ b/aws2-sqs/aws2-sqs-sink/config/openshift/aws2-sqs-sink-connector.yaml
@@ -13,6 +13,6 @@ spec:
     value.converter: org.apache.kafka.connect.storage.StringConverter
     topics: sqs-topic
     camel.sink.path.queueNameOrArn: camel-connector-test
-    camel.component.aws2-s3.accessKey: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-sqs-cred.properties:accessKey}
-    camel.component.aws2-s3.secretKey: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-sqs-cred.properties:secretKey}
-    camel.component.aws2-s3.region: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-sqs-cred.properties:region}
+    camel.component.aws2-sqs.accessKey: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-sqs-cred.properties:accessKey}
+    camel.component.aws2-sqs.secretKey: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-sqs-cred.properties:secretKey}
+    camel.component.aws2-sqs.region: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-sqs-cred.properties:region}
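
For reference, the `aws2-sqs-cred.properties` file read by those placeholders is a plain Java properties file; the key names follow from the `${file:...:key}` references. A sketch with illustrative values only:

[source]
----
accessKey=AKIAXXXXXXXXXXXXXXXX
secretKey=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
region=eu-west-1
----

If you prefer the AWS CLI to the console for polling the sink queue, something along these lines should work (the queue URL is an assumption; look yours up with `aws sqs get-queue-url --queue-name camel-connector-test`):

[source,bash]
----
aws sqs receive-message \
    --queue-url https://sqs.eu-west-1.amazonaws.com/<account-id>/camel-connector-test
----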


[camel-kafka-connector-examples] 03/03: [SNSSink] Rephrase some parts of the readme + fix mistakes

Posted by ac...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

acosentino pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/camel-kafka-connector-examples.git

commit 7572a93127916584e0764e72d169499c9690ca8b
Author: Andrej Vano <av...@redhat.com>
AuthorDate: Thu Oct 22 11:27:14 2020 +0200

    [SNSSink] Rephrase some parts of the readme + fix mistakes
---
 aws2-sns/aws2-sns-sink/README.adoc                 | 196 ++++++++++++---------
 .../config/openshift/aws2-sns-sink-connector.yaml  |   6 +-
 2 files changed, 115 insertions(+), 87 deletions(-)

diff --git a/aws2-sns/aws2-sns-sink/README.adoc b/aws2-sns/aws2-sns-sink/README.adoc
index 8753571..8bf380b 100644
--- a/aws2-sns/aws2-sns-sink/README.adoc
+++ b/aws2-sns/aws2-sns-sink/README.adoc
@@ -1,42 +1,52 @@
-# Camel-Kafka-connector AWS2 SNS Sink
+= Camel-Kafka-connector AWS2 SNS Sink
 
-This is an example for Camel-Kafka-connector AWS2-SNS Sink 
+This is an example for Camel-Kafka-connector AWS2-SNS Sink
 
-## Standalone
+== Standalone
 
-### What is needed
+=== What is needed
 
-- An AWS SNS queue
+- An AWS SNS topic
 
-### Running Kafka
+=== Running Kafka
 
-```
-$KAFKA_HOME/bin/zookeeper-server-start.sh config/zookeeper.properties
-$KAFKA_HOME/bin/kafka-server-start.sh config/server.properties
+[source]
+----
+$KAFKA_HOME/bin/zookeeper-server-start.sh $KAFKA_HOME/config/zookeeper.properties
+$KAFKA_HOME/bin/kafka-server-start.sh $KAFKA_HOME/config/server.properties
 $KAFKA_HOME/bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic mytopic
-```
-
-## Setting up the needed bits and running the example
-
-You'll need to setup the plugin.path property in your kafka
-
-Open the `$KAFKA_HOME/config/connect-standalone.properties`
+----
 
-and set the `plugin.path` property to your choosen location
+=== Download the connector package
 
-In this example we'll use `/home/oscerd/connectors/`
+Download the connector package zip and extract the content to a directory. In this example we'll use `/home/oscerd/connectors/`
 
-```
+[source]
+----
 > cd /home/oscerd/connectors/
 > wget https://repo1.maven.org/maven2/org/apache/camel/kafkaconnector/camel-aws2-sns-kafka-connector/0.5.0/camel-aws2-sns-kafka-connector-0.5.0-package.zip
 > unzip camel-aws2-sns-kafka-connector-0.5.0-package.zip
-```
+----
+
+=== Configuring Kafka Connect
+
+You'll need to set up the `plugin.path` property in your Kafka Connect configuration.
 
-Now it's time to setup the connectors
+Open the `$KAFKA_HOME/config/connect-standalone.properties` and set the `plugin.path` property to your chosen location:
 
-Open the AWS2 SNS configuration file
+[source]
+----
+...
+plugin.path=/home/oscerd/connectors
+...
+----
+
+=== Set up the connectors
 
-```
+Open the AWS2 SNS configuration file at `$EXAMPLES/aws2-sns/aws2-sns-sink/config/CamelAWS2SNSSinkConnector.properties`:
+
+[source]
+----
 name=CamelAWS2SNSSinkConnector
 connector.class=org.apache.camel.kafkaconnector.aws2sns.CamelAws2snsSinkConnector
 key.converter=org.apache.kafka.connect.storage.StringConverter
@@ -44,41 +54,43 @@ value.converter=org.apache.kafka.connect.storage.StringConverter
 
 topics=mytopic
 
-camel.sink.path.topicNameOrArn=topic-1
+camel.sink.path.topicNameOrArn=camel-1
 
 camel.component.aws2-sns.access-key=xxxx
 camel.component.aws2-sns.secret-key=yyyy
 camel.component.aws2-sns.region=eu-west-1
-```
+----
 
 and add the correct credentials for AWS.
 
-Now you can run the example
+=== Running the example
 
-```
-$KAFKA_HOME/bin/connect-standalone.sh $KAFKA_HOME/config/connect-standalone.properties config/CamelAWS2SNSSinkConnector.properties
-```
+Run Kafka Connect with the SNS sink connector:
 
-Just connect to your AWS Console and poll message on the SNS Topic Camel-1
+[source]
+----
+$KAFKA_HOME/bin/connect-standalone.sh $KAFKA_HOME/config/connect-standalone.properties $EXAMPLES/aws2-sns/aws2-sns-sink/config/CamelAWS2SNSSinkConnector.properties
+----
 
 On a different terminal run the kafka-producer and send messages to your Kafka Broker.
 
-```
-bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic mytopic
+[source]
+----
+$KAFKA_HOME/bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic mytopic
 Kafka to SNS message 1
 Kafka to SNS message 2
-```
+----
 
-You shold see the messages enqueued in the topic-1 SNS Topic, through your subscription.
+Connect to your AWS Console and create a subscription for the `camel-1` topic; you should then receive the messages on the chosen subscriber.
 
-## Openshift
+== OpenShift
 
-### What is needed
+=== What is needed
 
-- An AWS SQS queue
+- An AWS SNS topic
 - An Openshift instance
 
-### Running Kafka using Strimzi Operator
+=== Running Kafka using Strimzi Operator
 
 First we install the Strimzi operator and use it to deploy the Kafka broker and Kafka Connect into our OpenShift project.
 We need to create security objects as part of installation so it is necessary to switch to admin user.
@@ -128,21 +140,22 @@ Optionally enable the possibility to instantiate Kafka Connectors through specif
 oc annotate kafkaconnects2is my-connect-cluster strimzi.io/use-connector-resources=true
 ----
 
-### Add Camel Kafka connector binaries
+=== Add Camel Kafka connector binaries
 
 Strimzi uses `Source2Image` builds to allow users to add their own connectors to the existing Strimzi Docker images.
-We now need to build the connectors and add them to the image,
-if you have built the whole project (`mvn clean package`) decompress the connectors you need in a folder (i.e. like `my-connectors/`)
+We now need to build the connectors and add them to the image.
+If you have built the whole `Camel Kafka Connector` project (`mvn clean package`), decompress the connectors you need into a folder (e.g. `my-connectors/`)
 so that each one is in its own subfolder
 (alternatively you can download the latest officially released and packaged connectors from maven):
 
 So we need to do something like this:
 
-```
+[source]
+----
 > cd my-connectors/
 > wget https://repo1.maven.org/maven2/org/apache/camel/kafkaconnector/camel-aws2-sns-kafka-connector/0.5.0/camel-aws2-sns-kafka-connector-0.5.0-package.zip
 > unzip camel-aws2-sns-kafka-connector-0.5.0-package.zip
-```
+----
 
 Now we can start the build 
 
@@ -169,16 +182,20 @@ You should see something like this:
 [{"class":"org.apache.camel.kafkaconnector.CamelSinkConnector","type":"sink","version":"0.5.0"},{"class":"org.apache.camel.kafkaconnector.CamelSourceConnector","type":"source","version":"0.5.0"},{"class":"org.apache.camel.kafkaconnector.aws2sns.CamelAws2snsSinkConnector","type":"sink","version":"0.5.0"},{"class":"org.apache.camel.kafkaconnector.aws2sns.CamelAws2snsSourceConnector","type":"source","version":"0.5.0"},{"class":"org.apache.kafka.connect.file.FileStreamSinkConnector","type":" [...]
 ----
 
-### Set the AWS credential as secret (optional)
+=== Set the AWS credentials as an OpenShift secret (optional)
 
-You can also set the aws creds option as secret, you'll need to edit the file config/aws2-sns-cred.properties with the correct credentials and then execute the following command
+Credentials for your AWS account can be specified directly in the connector instance definition in plain text, or you can create an OpenShift secret object beforehand and then reference the secret.
+
+If you want to use the secret, you'll need to edit the file `$EXAMPLES/aws2-sns/aws2-sns-sink/config/openshift/aws2-sns-cred.properties` with the correct credentials and then create the secret with the following command:
 
 [source,bash,options="nowrap"]
 ----
-oc create secret generic aws2-sns --from-file=config/openshift/aws2-sns-cred.properties
+oc create secret generic aws2-sns --from-file=$EXAMPLES/aws2-sns/aws2-sns-sink/config/openshift/aws2-sns-cred.properties
 ----
 
-Now we need to edit KafkaConnectS2I custom resource to reference the secret. For example:
+Then you need to edit the KafkaConnectS2I custom resource to reference the secret. You can do that either in the OpenShift console or using the `oc edit KafkaConnectS2I` command.
+
+Add the following configuration to the custom resource:
 
 [source,bash,options="nowrap"]
 ----
@@ -195,36 +212,11 @@ spec:
           secretName: aws2-sns
 ----
 
-In this way the secret aws2-sns will be mounted as volume with path /opt/kafka/external-configuration/aws-credentials/
+In this way the secret `aws2-sns` will be mounted as a volume with the path `/opt/kafka/external-configuration/aws-credentials/`.
 
-### Create connector instance
-
-Now we can create some instance of a AWS2-SNS sink connector:
-
-[source,bash,options="nowrap"]
-----
-oc exec -i `oc get pods --field-selector status.phase=Running -l strimzi.io/name=my-connect-cluster-connect -o=jsonpath='{.items[0].metadata.name}'` -- curl -X POST \
-    -H "Accept:application/json" \
-    -H "Content-Type:application/json" \
-    http://my-connect-cluster-connect-api:8083/connectors -d @- <<'EOF'
-{
-  "name": "sns-sink-connector",
-  "config": {
-    "connector.class": "org.apache.camel.kafkaconnector.aws2sns.CamelAws2snsSinkConnector",
-    "tasks.max": "1",
-    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
-    "value.converter": "org.apache.kafka.connect.storage.StringConverter",
-    "topics": "sqs-topic",
-    "camel.sink.path.topicNameOrArn": "camel-connector-test",
-    "camel.component.aws2-sns.accessKey": "xxx",
-    "camel.component.aws2-sns.secretKey": "xxx",
-    "camel.component.aws2-sns.region": "xxx"
-  }
-}
-EOF
-----
+=== Create connector instance
 
-Altenatively, if you have enabled `use-connector-resources`, you can create the connector instance by creating a specific custom resource:
+If you have enabled the connector custom resources using the `use-connector-resources` annotation, you can create the connector instance by creating a specific custom resource:
 
 [source,bash,options="nowrap"]
 ----
@@ -244,28 +236,64 @@ spec:
     value.converter: org.apache.kafka.connect.storage.StringConverter
     topics: sns-topic
     camel.sink.path.topicNameOrArn: camel-connector-test
-    camel.component.aws2-sns.accessKey: xxxx
-    camel.component.aws2-sns.secretKey: yyyy
-    camel.component.aws2-sns.region: region
+    camel.component.aws2-sns.accessKey: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-sns-cred.properties:accessKey}
+    camel.component.aws2-sns.secretKey: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-sns-cred.properties:secretKey}
+    camel.component.aws2-sns.region: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-sns-cred.properties:region}
 EOF
 ----
 
-You can check the status of the connector using
+If you don't want to use the OpenShift secret for storing the credentials, replace the placeholders in the custom resource with the actual values;
+otherwise you can now create the custom resource using:
+
+[source]
+----
+oc apply -f $EXAMPLES/aws2-sns/aws2-sns-sink/config/openshift/aws2-sns-sink-connector.yaml
+----
+
+The other option, if you are not using the custom resources, is to create an instance of the AWS2 SNS sink connector through the Kafka Connect API:
 
 [source,bash,options="nowrap"]
 ----
-oc exec -i `oc get pods --field-selector status.phase=Running -l strimzi.io/name=my-connect-cluster-connect -o=jsonpath='{.items[0].metadata.name}'` -- curl -s http://my-connect-cluster-connect-api:8083/connectors/sqs-sink-connector/status
+oc exec -i `oc get pods --field-selector status.phase=Running -l strimzi.io/name=my-connect-cluster-connect -o=jsonpath='{.items[0].metadata.name}'` -- curl -X POST \
+    -H "Accept:application/json" \
+    -H "Content-Type:application/json" \
+    http://my-connect-cluster-connect-api:8083/connectors -d @- <<'EOF'
+{
+  "name": "sns-sink-connector",
+  "config": {
+    "connector.class": "org.apache.camel.kafkaconnector.aws2sns.CamelAws2snsSinkConnector",
+    "tasks.max": "1",
+    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
+    "value.converter": "org.apache.kafka.connect.storage.StringConverter",
+    "topics": "sns-topic",
+    "camel.sink.path.topicNameOrArn": "camel-connector-test",
+    "camel.component.aws2-sns.accessKey": "${file:/opt/kafka/external-configuration/aws-credentials/aws2-sns-cred.properties:accessKey}",
+    "camel.component.aws2-sns.secretKey": "${file:/opt/kafka/external-configuration/aws-credentials/aws2-sns-cred.properties:secretKey}",
+    "camel.component.aws2-sns.region": "${file:/opt/kafka/external-configuration/aws-credentials/aws2-sns-cred.properties:region}"
+  }
+}
+EOF
 ----
 
-### Check enqueued messages
+Again, if you don't use the OpenShift secret, replace the properties with your actual AWS credentials.
 
-Just connect to your AWS Console and for the camel-connector-test topic create a subscription, you should received messages on the chosen subscriber.
+You can check the status of the connector using:
+
+[source,bash,options="nowrap"]
+----
+oc exec -i `oc get pods --field-selector status.phase=Running -l strimzi.io/name=my-connect-cluster-connect -o=jsonpath='{.items[0].metadata.name}'` -- curl -s http://my-connect-cluster-connect-api:8083/connectors/sns-sink-connector/status
+----
+
+=== Check enqueued messages
+
+Connect to your AWS Console and create a subscription for the `camel-connector-test` topic; you should then receive the messages on the chosen subscriber.
 
 Run the kafka-producer and send messages to your Kafka Broker.
 
-```
+[source]
+----
 oc exec -i -c kafka my-cluster-kafka-0 -- bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic sns-topic
 Kafka to SNS message 1
 Kafka to SNS message 2
-```
+----
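
The subscription step can also be done from the AWS CLI instead of the console; a sketch, where the topic ARN and the endpoint are assumptions to substitute with your own:

[source,bash]
----
aws sns subscribe \
    --topic-arn arn:aws:sns:eu-west-1:<account-id>:camel-connector-test \
    --protocol email \
    --notification-endpoint you@example.com
----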
 
diff --git a/aws2-sns/aws2-sns-sink/config/openshift/aws2-sns-sink-connector.yaml b/aws2-sns/aws2-sns-sink/config/openshift/aws2-sns-sink-connector.yaml
index c9696f2..e961005 100644
--- a/aws2-sns/aws2-sns-sink/config/openshift/aws2-sns-sink-connector.yaml
+++ b/aws2-sns/aws2-sns-sink/config/openshift/aws2-sns-sink-connector.yaml
@@ -13,6 +13,6 @@ spec:
     value.converter: org.apache.kafka.connect.storage.StringConverter
     topics: sns-topic
     camel.sink.path.topicNameOrArn: camel-connector-test
-    camel.component.aws2-s3.accessKey: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-sqs-cred.properties:accessKey}
-    camel.component.aws2-s3.secretKey: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-sqs-cred.properties:secretKey}
-    camel.component.aws2-s3.region: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-sqs-cred.properties:region}
+    camel.component.aws2-sns.accessKey: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-sns-cred.properties:accessKey}
+    camel.component.aws2-sns.secretKey: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-sns-cred.properties:secretKey}
+    camel.component.aws2-sns.region: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-sns-cred.properties:region}
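
The status endpoint used in the README above returns a small JSON document; a healthy sink connector should report something along these lines (worker IDs will differ in your cluster):

[source,json]
----
{
  "name": "sns-sink-connector",
  "connector": { "state": "RUNNING", "worker_id": "my-connect-cluster-connect:8083" },
  "tasks": [ { "id": 0, "state": "RUNNING", "worker_id": "my-connect-cluster-connect:8083" } ],
  "type": "sink"
}
----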


[camel-kafka-connector-examples] 01/03: [SQSSource] Rephrase some parts of the readme + fix mistakes

Posted by ac...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

acosentino pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/camel-kafka-connector-examples.git

commit 75543f4f34459e9394062c0ef7c1a31724353f22
Author: Andrej Vano <av...@redhat.com>
AuthorDate: Thu Oct 22 09:46:11 2020 +0200

    [SQSSource] Rephrase some parts of the readme + fix mistakes
---
 aws2-sqs/aws2-sqs-source/README.adoc               | 205 ++++++++++++---------
 .../openshift/aws2-sqs-source-connector.yaml       |   6 +-
 2 files changed, 118 insertions(+), 93 deletions(-)

diff --git a/aws2-sqs/aws2-sqs-source/README.adoc b/aws2-sqs/aws2-sqs-source/README.adoc
index 7ae5e62..80dd3c4 100644
--- a/aws2-sqs/aws2-sqs-source/README.adoc
+++ b/aws2-sqs/aws2-sqs-source/README.adoc
@@ -1,42 +1,52 @@
-# Camel-Kafka-connector AWS2 SQS Source
+= Camel-Kafka-connector AWS2 SQS Source
 
-This is an example for Camel-Kafka-connector AW2-SQS
+This is an example for Camel-Kafka-connector AWS2-SQS Source
 
-## Standalone 
+== Standalone
 
-### What is needed
+=== What is needed
 
 - An AWS SQS queue
 
-### Running Kafka
+=== Running Kafka
 
-```
-$KAFKA_HOME/bin/zookeeper-server-start.sh config/zookeeper.properties
-$KAFKA_HOME/bin/kafka-server-start.sh config/server.properties
+[source]
+----
+$KAFKA_HOME/bin/zookeeper-server-start.sh $KAFKA_HOME/config/zookeeper.properties
+$KAFKA_HOME/bin/kafka-server-start.sh $KAFKA_HOME/config/server.properties
 $KAFKA_HOME/bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic mytopic
-```
-
-### Setting up the needed bits and running the example
-
-You'll need to setup the plugin.path property in your kafka
-
-Open the `$KAFKA_HOME/config/connect-standalone.properties`
+----
 
-and set the `plugin.path` property to your choosen location
+=== Download the connector package
 
-In this example we'll use `/home/oscerd/connectors/`
+Download the connector package zip and extract the content to a directory. In this example we'll use `/home/oscerd/connectors/`
 
-```
+[source]
+----
 > cd /home/oscerd/connectors/
 > wget https://repo1.maven.org/maven2/org/apache/camel/kafkaconnector/camel-aws2-sqs-kafka-connector/0.5.0/camel-aws2-sqs-kafka-connector-0.5.0-package.zip
 > unzip camel-aws2-sqs-kafka-connector-0.5.0-package.zip
-```
+----
+
+=== Configuring Kafka Connect
+
+You'll need to set up the `plugin.path` property in your Kafka Connect configuration.
 
-Now it's time to setup the connectors
+Open the `$KAFKA_HOME/config/connect-standalone.properties` and set the `plugin.path` property to your chosen location:
 
-Open the AWS2 SQS configuration file
+[source]
+----
+...
+plugin.path=/home/oscerd/connectors
+...
+----
+
+=== Set up the connectors
 
-```
+Open the AWS2 SQS configuration file at `$EXAMPLES/aws2-sqs/aws2-sqs-source/config/CamelAWS2SQSSourceConnector.properties`:
+
+[source]
+----
 name=CamelAWS2SQSSourceConnector
 connector.class=org.apache.camel.kafkaconnector.aws2sqs.CamelAws2sqsSourceConnector
 key.converter=org.apache.kafka.connect.storage.StringConverter
@@ -46,39 +56,45 @@ camel.source.maxPollDuration=10000
 
 topics=mytopic
 
-camel.source.url=aws2-sqs://camel-1?deleteAfterRead=false&deleteIfFiltered=true
+camel.source.path.queueNameOrArn=camel-1
+camel.source.endpoint.deleteAfterRead=false
 
 camel.component.aws2-sqs.access-key=xxxx
 camel.component.aws2-sqs.secret-key=yyyy
 camel.component.aws2-sqs.region=eu-west-1
-```
+----
 
 and add the correct credentials for AWS.
 
-Now you can run the example
+=== Running the example
+
+Run Kafka Connect with the SQS source connector:
 
-```
-$KAFKA_HOME/bin/connect-standalone.sh $KAFKA_HOME/config/connect-standalone.properties config/CamelAWSS3SourceConnector.properties config/CamelAWS2SQSSourceConnector.properties
-```
+[source]
+----
+$KAFKA_HOME/bin/connect-standalone.sh $KAFKA_HOME/config/connect-standalone.properties $EXAMPLES/aws2-sqs/aws2-sqs-source/config/CamelAWS2SQSSourceConnector.properties
+----
 
-Just connect to your AWS Console and send message to the camel-1 queue, through the AWS Console.
+Just connect to your AWS Console and send a message to the `camel-1` queue.
 
 On a different terminal run the kafka-consumer and you should see messages from the SQS queue arriving through Kafka Broker.
 
-```
-bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic mytopic --from-beginning
-SQS to Kafka through Camel
-SQS to Kafka through Camel
-```
+[source]
+----
+$KAFKA_HOME/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic mytopic --from-beginning
+<your message 1>
+<your message 2>
+...
+----
 
-## Openshift
+== OpenShift
 
-### What is needed
+=== What is needed
 
 - An AWS SQS queue
 - An Openshift instance
 
-### Running Kafka using Strimzi Operator
+=== Running Kafka using Strimzi Operator
 
 First we install the Strimzi operator and use it to deploy the Kafka broker and Kafka Connect into our OpenShift project.
 We need to create security objects as part of installation so it is necessary to switch to admin user.
@@ -111,7 +127,7 @@ We can now install the Strimzi operator into this project:
 oc apply -f https://github.com/strimzi/strimzi-kafka-operator/releases/download/0.19.0/strimzi-cluster-operator-0.19.0.yaml
 ----
 
-Next we will deploy a Kafka broker cluster and a Kafka Connect cluster and then create a Kafka Connect image with the Debezium connectors installed:
+Next we will deploy a Kafka broker cluster and a Kafka Connect cluster and then create a Kafka Connect image with the SQS connectors installed:
 
 [source,bash,options="nowrap",subs="attributes"]
 ----
@@ -122,29 +138,30 @@ oc apply -f https://github.com/strimzi/strimzi-kafka-operator/raw/0.19.0/example
 oc apply -f https://github.com/strimzi/strimzi-kafka-operator/raw/0.19.0/examples/connect/kafka-connect-s2i-single-node-kafka.yaml
 ----
 
-Optionally enable the possibility to instantiate Kafka Connectors through specific custom resource:
+In the OpenShift environment, you can instantiate the Kafka Connectors in two ways: either through the Kafka Connect API, or through an OpenShift custom resource.
+
+If you want to use the custom resources, you need to add the following annotation to the Kafka Connect S2I custom resource:
 [source,bash,options="nowrap"]
 ----
 oc annotate kafkaconnects2is my-connect-cluster strimzi.io/use-connector-resources=true
 ----
 
-### Add Camel Kafka connector binaries
+=== Add Camel Kafka connector binaries
 
 Strimzi uses `Source2Image` builds to allow users to add their own connectors to the existing Strimzi Docker images.
-We now need to build the connectors and add them to the image,
-if you have built the whole project (`mvn clean package`) decompress the connectors you need in a folder (i.e. like `my-connectors/`)
+We now need to build the connectors and add them to the image.
+If you have built the whole `Camel Kafka Connector` project (`mvn clean package`), decompress the connectors you need into a folder (e.g. `my-connectors/`)
 so that each one is in its own subfolder
 (alternatively you can download the latest officially released and packaged connectors from maven):
 
-So we need to do something like this:
-
-```
+[source]
+----
 > cd my-connectors/
 > wget https://repo1.maven.org/maven2/org/apache/camel/kafkaconnector/camel-aws2-sqs-kafka-connector/0.5.0/camel-aws2-sqs-kafka-connector-0.5.0-package.zip
 > unzip camel-aws2-sqs-kafka-connector-0.5.0-package.zip
-```
+----
 
-Now we can start the build 
+Now we can start the build
 
 [source,bash,options="nowrap"]
 ----
@@ -169,16 +186,20 @@ You should see something like this:
 [{"class":"org.apache.camel.kafkaconnector.CamelSinkConnector","type":"sink","version":"0.5.0"},{"class":"org.apache.camel.kafkaconnector.CamelSourceConnector","type":"source","version":"0.5.0"},{"class":"org.apache.camel.kafkaconnector.aws2sqs.CamelAws2sqsSinkConnector","type":"sink","version":"0.5.0"},{"class":"org.apache.camel.kafkaconnector.aws2sqs.CamelAws2sqsSourceConnector","type":"source","version":"0.5.0"},{"class":"org.apache.kafka.connect.file.FileStreamSinkConnector","type":" [...]
 ----
 
-### Set the AWS credential as secret (optional)
+=== Set the AWS credentials as an OpenShift secret (optional)
+
+Credentials for your AWS account can be specified directly in the connector instance definition in plain text, or you can create an OpenShift secret object beforehand and then reference the secret.
 
-You can also set the aws creds option as secret, you'll need to edit the file config/aws2-sqs-cred.properties with the correct credentials and then execute the following command
+If you want to use the secret, you'll need to edit the file `$EXAMPLES/aws2-sqs/aws2-sqs-source/config/openshift/aws2-sqs-cred.properties` with the correct credentials and then create the secret with the following command:
 
 [source,bash,options="nowrap"]
 ----
-oc create secret generic aws2-sqs --from-file=config/openshift/aws2-sqs-cred.properties
+oc create secret generic aws2-sqs --from-file=$EXAMPLES/aws2-sqs/aws2-sqs-source/config/openshift/aws2-sqs-cred.properties
 ----
 
-Now we need to edit KafkaConnectS2I custom resource to reference the secret. For example:
+Then you need to edit the KafkaConnectS2I custom resource to reference the secret. You can do that either in the OpenShift console or using the `oc edit KafkaConnectS2I` command.
+
+Add the following configuration to the custom resource:
 
 [source,bash,options="nowrap"]
 ----
@@ -195,37 +216,11 @@ spec:
           secretName: aws2-sqs
 ----
 
-In this way the secret aws2-sqs will be mounted as volume with path /opt/kafka/external-configuration/aws-credentials/
-
-### Create connector instance
+In this way the secret `aws2-sqs` will be mounted as a volume with the path `/opt/kafka/external-configuration/aws-credentials/`.
 
-Now we can create some instance of the AWS2 SQS source connector:
-
-[source,bash,options="nowrap"]
-----
-oc exec -i `oc get pods --field-selector status.phase=Running -l strimzi.io/name=my-connect-cluster-connect -o=jsonpath='{.items[0].metadata.name}'` -- curl -X POST \
-    -H "Accept:application/json" \
-    -H "Content-Type:application/json" \
-    http://my-connect-cluster-connect-api:8083/connectors -d @- <<'EOF'
-{
-  "name": "sqs-source-connector",
-  "config": {
-    "connector.class": "org.apache.camel.kafkaconnector.aws2sqs.CamelAws2sqsSourceConnector",
-    "tasks.max": "1",
-    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
-    "value.converter": "org.apache.kafka.connect.storage.StringConverter",
-    "topics": "sqs-topic",
-    "camel.source.path.queueNameOrArn": "camel-connector-test",
-    "camel.source.maxPollDuration": 10000,
-    "camel.component.aws2-sqs.accessKey": "xxx",
-    "camel.component.aws2-sqs.secretKey": "xxx",
-    "camel.component.aws2-sqs.region": "xxx"
-  }
-}
-EOF
-----
+=== Create connector instance
 
-Altenatively, if have enabled `use-connector-resources`, you can create the connector instance by creating a specific custom resource:
+If you have enabled the connector custom resources using the `use-connector-resources` annotation, you can create the connector instance by creating a specific custom resource:
 
 [source,bash,options="nowrap"]
 ----
@@ -246,36 +241,66 @@ spec:
     topics: sqs-topic
     camel.source.path.queueNameOrArn: camel-connector-test
     camel.source.maxPollDuration: 10000
-    camel.component.aws2-sqs.accessKey: xxxx
-    camel.component.aws2-sqs.secretKey: yyyy
-    camel.component.aws2-sqs.region: region
+    camel.component.aws2-sqs.accessKey: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-sqs-cred.properties:accessKey}
+    camel.component.aws2-sqs.secretKey: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-sqs-cred.properties:secretKey}
+    camel.component.aws2-sqs.region: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-sqs-cred.properties:region}
 EOF
 ----
 
-If you followed the optional step for secret credentials you can run the following command:
+If you don't want to use the OpenShift secret for storing the credentials, replace the placeholders in the custom resource with the actual values;
+otherwise you can now create the custom resource using:
+
+[source]
+----
+oc apply -f $EXAMPLES/aws2-sqs/aws2-sqs-source/config/openshift/aws2-sqs-source-connector.yaml
+----
+
+The other option, if you are not using the custom resources, is to create an instance of the AWS2 SQS source connector through the Kafka Connect API:
 
 [source,bash,options="nowrap"]
 ----
-oc apply -f config/openshift/aws2-sqs-source-connector.yaml
+oc exec -i `oc get pods --field-selector status.phase=Running -l strimzi.io/name=my-connect-cluster-connect -o=jsonpath='{.items[0].metadata.name}'` -- curl -X POST \
+    -H "Accept:application/json" \
+    -H "Content-Type:application/json" \
+    http://my-connect-cluster-connect-api:8083/connectors -d @- <<'EOF'
+{
+  "name": "sqs-source-connector",
+  "config": {
+    "connector.class": "org.apache.camel.kafkaconnector.aws2sqs.CamelAws2sqsSourceConnector",
+    "tasks.max": "1",
+    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
+    "value.converter": "org.apache.kafka.connect.storage.StringConverter",
+    "topics": "sqs-topic",
+    "camel.source.path.queueNameOrArn": "camel-connector-test",
+    "camel.source.maxPollDuration": 10000,
+    "camel.component.aws2-sqs.accessKey": "${file:/opt/kafka/external-configuration/aws-credentials/aws2-sqs-cred.properties:accessKey}",
+    "camel.component.aws2-sqs.secretKey": "${file:/opt/kafka/external-configuration/aws-credentials/aws2-sqs-cred.properties:secretKey}",
+    "camel.component.aws2-sqs.region": "${file:/opt/kafka/external-configuration/aws-credentials/aws2-sqs-cred.properties:region}"
+  }
+}
+EOF
 ----
 
-You can check the status of the connector using
+Again, if you don't use the OpenShift secret, replace the properties with your actual AWS credentials.
+
+You can check the status of the connector using:
 
 [source,bash,options="nowrap"]
 ----
 oc exec -i `oc get pods --field-selector status.phase=Running -l strimzi.io/name=my-connect-cluster-connect -o=jsonpath='{.items[0].metadata.name}'` -- curl -s http://my-connect-cluster-connect-api:8083/connectors/sqs-source-connector/status
 ----
 
-Just connect to your AWS Console and send message to the camel-connector-test, through the AWS Console.
+Then you can connect to your AWS Console and send a message to the `camel-connector-test` queue.
 
-### Check received messages
+=== Check received messages
 
 You can also run the Kafka console consumer to see the messages received from the topic:
 
 [source,bash,options="nowrap"]
 ----
 oc exec -i -c kafka my-cluster-kafka-0 -- bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic sqs-topic --from-beginning
-SQS to Kafka through Camel
-SQS to Kafka through Camel
+<your message 1>
+<your message 2>
+...
 ----
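
As with the sink examples, the console step of sending a test message can be done from the AWS CLI instead (the queue URL is an assumption; look yours up with `aws sqs get-queue-url --queue-name camel-connector-test`):

[source,bash]
----
aws sqs send-message \
    --queue-url https://sqs.eu-west-1.amazonaws.com/<account-id>/camel-connector-test \
    --message-body "SQS to Kafka through Camel"
----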
 
diff --git a/aws2-sqs/aws2-sqs-source/config/openshift/aws2-sqs-source-connector.yaml b/aws2-sqs/aws2-sqs-source/config/openshift/aws2-sqs-source-connector.yaml
index d6f92cd..8cb3de1 100644
--- a/aws2-sqs/aws2-sqs-source/config/openshift/aws2-sqs-source-connector.yaml
+++ b/aws2-sqs/aws2-sqs-source/config/openshift/aws2-sqs-source-connector.yaml
@@ -14,6 +14,6 @@ spec:
     topics: sqs-topic
     camel.source.path.queueNameOrArn: camel-connector-test
     camel.source.maxPollDuration: 10000
-    camel.component.aws2-s3.accessKey: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-sqs-cred.properties:accessKey}
-    camel.component.aws2-s3.secretKey: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-sqs-cred.properties:secretKey}
-    camel.component.aws2-s3.region: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-sqs-cred.properties:region}
+    camel.component.aws2-sqs.accessKey: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-sqs-cred.properties:accessKey}
+    camel.component.aws2-sqs.secretKey: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-sqs-cred.properties:secretKey}
+    camel.component.aws2-sqs.region: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-sqs-cred.properties:region}
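
A quick way to confirm, in the standalone examples, that the extracted connector package was picked up from `plugin.path` is to query the Kafka Connect REST API (8083 is the default port):

[source,bash]
----
curl -s http://localhost:8083/connector-plugins | grep -o 'CamelAws2sqs[A-Za-z]*Connector'
----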