Posted to commits@camel.apache.org by ac...@apache.org on 2020/09/21 05:41:24 UTC
[camel-kafka-connector-examples] 01/01: AWS2-S3 Move After Read example: Added steps for secret credentials
This is an automated email from the ASF dual-hosted git repository.
acosentino pushed a commit to branch aws2-s3-move-after-read-sec
in repository https://gitbox.apache.org/repos/asf/camel-kafka-connector-examples.git
commit 3ebe5d49bf9e49af2c288446c05592345dca1fe3
Author: Andrea Cosentino <an...@gmail.com>
AuthorDate: Mon Sep 21 07:40:01 2020 +0200
AWS2-S3 Move After Read example: Added steps for secret credentials
---
aws2-s3/aws2-s3-move-after-read/README.adoc | 35 ++++++++++++++++++++++
.../config/openshift/aws2-s3-cred.properties | 3 ++
.../config/openshift/aws2-s3-source-connector.yaml | 21 +++++++++++++
3 files changed, 59 insertions(+)
diff --git a/aws2-s3/aws2-s3-move-after-read/README.adoc b/aws2-s3/aws2-s3-move-after-read/README.adoc
index 3aa7818..b783894 100644
--- a/aws2-s3/aws2-s3-move-after-read/README.adoc
+++ b/aws2-s3/aws2-s3-move-after-read/README.adoc
@@ -172,6 +172,34 @@ You should see something like this:
[{"class":"org.apache.camel.kafkaconnector.CamelSinkConnector","type":"sink","version":"0.5.0"},{"class":"org.apache.camel.kafkaconnector.CamelSourceConnector","type":"source","version":"0.5.0"},{"class":"org.apache.camel.kafkaconnector.aws2s3.CamelAws2s3SinkConnector","type":"sink","version":"0.5.0"},{"class":"org.apache.camel.kafkaconnector.aws2s3.CamelAws2s3SourceConnector","type":"source","version":"0.5.0"},{"class":"org.apache.kafka.connect.file.FileStreamSinkConnector","type":"sink [...]
----
+### Set the AWS credential as secret (optional)
+
+You can also provide the AWS credentials as a secret. To do so, edit the file config/openshift/aws2-s3-cred.properties with the correct credentials and then run the following command:
+
+[source,bash,options="nowrap"]
+----
+oc create secret generic aws2-s3 --from-file=config/openshift/aws2-s3-cred.properties
+----
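For reference, `--from-file` stores the file content base64-encoded in the secret's `data` map, keyed by the file name. A minimal Python sketch of that encoding (the credential values are the placeholders from this example, not real keys):

```python
import base64

# Hypothetical contents of config/openshift/aws2-s3-cred.properties
properties = "accessKey=xxxx\nsecretKey=yyyy\nregion=region\n"

# `oc create secret generic aws2-s3 --from-file=...` stores the file
# base64-encoded under a key equal to the file name.
secret_data = {
    "aws2-s3-cred.properties": base64.b64encode(properties.encode()).decode()
}

# Decoding the stored value recovers the original file verbatim.
decoded = base64.b64decode(secret_data["aws2-s3-cred.properties"]).decode()
print(decoded == properties)  # True
```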
+
+Now we need to edit the KafkaConnectS2I custom resource to reference the secret. For example:
+
+[source,yaml,options="nowrap"]
+----
+spec:
+ # ...
+ config:
+ config.providers: file
+ config.providers.file.class: org.apache.kafka.common.config.provider.FileConfigProvider
+ #...
+ externalConfiguration:
+ volumes:
+ - name: aws-credentials
+ secret:
+ secretName: aws2-s3
+----
+
+In this way the secret aws2-s3 will be mounted as a volume at /opt/kafka/external-configuration/aws-credentials/
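The `${file:<path>:<key>}` placeholders used below are resolved at runtime by `FileConfigProvider`, which reads the named properties file from that mounted volume and substitutes the value of the requested key. A rough Python sketch of that lookup, using the path and keys from this example (the parsing here is a simplification, not the actual Kafka implementation):

```python
import re

def parse_properties(text):
    """Parse simple key=value lines, as in a Java .properties file."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            props[key.strip()] = value.strip()
    return props

def resolve(value, files):
    """Replace ${file:<path>:<key>} placeholders, as FileConfigProvider does."""
    def sub(match):
        path, key = match.group(1), match.group(2)
        return parse_properties(files[path])[key]
    return re.sub(r"\$\{file:([^:}]+):([^}]+)\}", sub, value)

# The mounted secret, keyed by its in-container path.
files = {
    "/opt/kafka/external-configuration/aws-credentials/aws2-s3-cred.properties":
        "accessKey=xxxx\nsecretKey=yyyy\nregion=region\n"
}

print(resolve(
    "${file:/opt/kafka/external-configuration/aws-credentials/aws2-s3-cred.properties:accessKey}",
    files))  # xxxx
```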
+
### Create connector instance
Now we can create an instance of the AWS2 S3 source connector:
@@ -229,6 +257,13 @@ spec:
EOF
----
+If you followed the optional step for secret credentials, you can instead create the connector instance with the following command:
+
+[source,bash,options="nowrap"]
+----
+oc apply -f config/openshift/aws2-s3-source-connector.yaml
+----
+
You can check the status of the connector using
[source,bash,options="nowrap"]
diff --git a/aws2-s3/aws2-s3-move-after-read/config/openshift/aws2-s3-cred.properties b/aws2-s3/aws2-s3-move-after-read/config/openshift/aws2-s3-cred.properties
new file mode 100644
index 0000000..d1596a1
--- /dev/null
+++ b/aws2-s3/aws2-s3-move-after-read/config/openshift/aws2-s3-cred.properties
@@ -0,0 +1,3 @@
+accessKey=xxxx
+secretKey=yyyy
+region=region
diff --git a/aws2-s3/aws2-s3-move-after-read/config/openshift/aws2-s3-source-connector.yaml b/aws2-s3/aws2-s3-move-after-read/config/openshift/aws2-s3-source-connector.yaml
new file mode 100644
index 0000000..977aaf1
--- /dev/null
+++ b/aws2-s3/aws2-s3-move-after-read/config/openshift/aws2-s3-source-connector.yaml
@@ -0,0 +1,21 @@
+apiVersion: kafka.strimzi.io/v1alpha1
+kind: KafkaConnector
+metadata:
+ name: s3-source-connector
+ namespace: myproject
+ labels:
+ strimzi.io/cluster: my-connect-cluster
+spec:
+ class: org.apache.camel.kafkaconnector.aws2s3.CamelAws2s3SourceConnector
+ tasksMax: 1
+ config:
+ key.converter: org.apache.kafka.connect.storage.StringConverter
+ value.converter: org.apache.kafka.connect.storage.StringConverter
+ topics: s3-topic
+ camel.source.path.bucketNameOrArn: camel-kafka-connector
+ camel.source.maxPollDuration: 10000
+ camel.source.endpoint.moveAfterRead: true
+ camel.source.endpoint.destinationBucket: camel-1
+ camel.component.aws2-s3.accessKey: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-s3-cred.properties:accessKey}
+ camel.component.aws2-s3.secretKey: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-s3-cred.properties:secretKey}
+ camel.component.aws2-s3.region: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-s3-cred.properties:region}
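As a sanity check on the connector configuration above, every placeholder must point under the secret's mount path and at a key that actually exists in aws2-s3-cred.properties; a mismatched file name or key is a common cause of unresolved placeholders. A small hypothetical checker in Python:

```python
import re

MOUNT = "/opt/kafka/external-configuration/aws-credentials/"
KEYS = {"accessKey", "secretKey", "region"}  # keys in aws2-s3-cred.properties

# The placeholder-bearing entries from the connector config above.
config = {
    "camel.component.aws2-s3.accessKey":
        "${file:/opt/kafka/external-configuration/aws-credentials/aws2-s3-cred.properties:accessKey}",
    "camel.component.aws2-s3.secretKey":
        "${file:/opt/kafka/external-configuration/aws-credentials/aws2-s3-cred.properties:secretKey}",
    "camel.component.aws2-s3.region":
        "${file:/opt/kafka/external-configuration/aws-credentials/aws2-s3-cred.properties:region}",
}

pattern = re.compile(r"\$\{file:([^:}]+):([^}]+)\}")
for name, value in config.items():
    for path, key in pattern.findall(value):
        assert path.startswith(MOUNT), f"{name}: {path} not under mount path"
        assert key in KEYS, f"{name}: unknown key {key}"
print("all placeholders consistent")
```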