Posted to commits@camel.apache.org by ac...@apache.org on 2020/09/21 06:06:47 UTC

[camel-kafka-connector-examples] branch master updated: AWS2-S3 Sink with aggregation example: Added steps for secret credentials (#76)

This is an automated email from the ASF dual-hosted git repository.

acosentino pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/camel-kafka-connector-examples.git


The following commit(s) were added to refs/heads/master by this push:
     new 12c770b  AWS2-S3 Sink with aggregation example: Added steps for secret credentials (#76)
12c770b is described below

commit 12c770b5a597b7666a539494d344b2342b2c75ea
Author: Andrea Cosentino <an...@gmail.com>
AuthorDate: Mon Sep 21 08:06:41 2020 +0200

    AWS2-S3 Sink with aggregation example: Added steps for secret credentials (#76)
---
 aws2-s3/aws2-s3-sink-with-aggregation/README.adoc  | 35 ++++++++++++++++++++++
 .../config/openshift/aws2-s3-cred.properties       |  3 ++
 .../openshift/aws2-s3-sink-with-aggregation.yaml   | 22 ++++++++++++++
 3 files changed, 60 insertions(+)

diff --git a/aws2-s3/aws2-s3-sink-with-aggregation/README.adoc b/aws2-s3/aws2-s3-sink-with-aggregation/README.adoc
index 72bf4d9..758e2f5 100644
--- a/aws2-s3/aws2-s3-sink-with-aggregation/README.adoc
+++ b/aws2-s3/aws2-s3-sink-with-aggregation/README.adoc
@@ -186,6 +186,34 @@ You should see something like this:
 [{"class":"org.apache.camel.kafkaconnector.CamelSinkConnector","type":"sink","version":"0.5.0"},{"class":"org.apache.camel.kafkaconnector.CamelSourceConnector","type":"source","version":"0.5.0"},{"class":"org.apache.camel.kafkaconnector.aws2s3.CamelAws2s3SinkConnector","type":"sink","version":"0.5.0"},{"class":"org.apache.camel.kafkaconnector.aws2s3.CamelAws2s3SourceConnector","type":"source","version":"0.5.0"},{"class":"org.apache.kafka.connect.file.FileStreamSinkConnector","type":"sink [...]
 ----
 
+### Set the AWS credentials as a secret (optional)
+
+You can also provide the AWS credentials as a secret. To do so, edit the file config/openshift/aws2-s3-cred.properties with the correct credentials and then run the following command:
+
+[source,bash,options="nowrap"]
+----
+oc create secret generic aws2-s3 --from-file=config/openshift/aws2-s3-cred.properties
+----
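+
+As a quick check, you can verify that the secret exists and contains the properties file:
+
+[source,bash,options="nowrap"]
+----
+oc get secret aws2-s3 -o yaml
+----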
+
+Now we need to edit the KafkaConnectS2I custom resource to reference the secret. For example:
+
+[source,yaml,options="nowrap"]
+----
+spec:
+  # ...
+  config:
+    config.providers: file
+    config.providers.file.class: org.apache.kafka.common.config.provider.FileConfigProvider
+  #...
+  externalConfiguration:
+    volumes:
+      - name: aws-credentials
+        secret:
+          secretName: aws2-s3
+----
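+
+One way to apply this change is to edit the resource in place (assuming your Kafka Connect cluster is named my-connect-cluster, as in the connector example below):
+
+[source,bash,options="nowrap"]
+----
+oc edit kafkaconnects2i my-connect-cluster
+----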
+
+In this way the secret aws2-s3 will be mounted as a volume at the path /opt/kafka/external-configuration/aws-credentials/.
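+
+Connector options can then reference keys from the mounted properties file through the FileConfigProvider placeholder syntax, for example:
+
+[source,text,options="nowrap"]
+----
+camel.component.aws2-s3.accessKey: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-s3-cred.properties:accessKey}
+----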
+
 ### Create connector instance
 
 Now we can create an instance of the AWS2 S3 sink connector:
@@ -247,6 +275,13 @@ spec:
 EOF
 ----
 
+If you followed the optional step for secret credentials, you can instead create the connector instance with the following command:
+
+[source,bash,options="nowrap"]
+----
+oc apply -f config/openshift/aws2-s3-sink-with-aggregation.yaml
+----
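+
+This creates a KafkaConnector resource named s3-sink-connector; one way to inspect it is:
+
+[source,bash,options="nowrap"]
+----
+oc get kafkaconnector s3-sink-connector -o yaml
+----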
+
 You can check the status of the connector using
 
 [source,bash,options="nowrap"]
diff --git a/aws2-s3/aws2-s3-sink-with-aggregation/config/openshift/aws2-s3-cred.properties b/aws2-s3/aws2-s3-sink-with-aggregation/config/openshift/aws2-s3-cred.properties
new file mode 100644
index 0000000..d1596a1
--- /dev/null
+++ b/aws2-s3/aws2-s3-sink-with-aggregation/config/openshift/aws2-s3-cred.properties
@@ -0,0 +1,3 @@
+accessKey=xxxx
+secretKey=yyyy
+region=region
diff --git a/aws2-s3/aws2-s3-sink-with-aggregation/config/openshift/aws2-s3-sink-with-aggregation.yaml b/aws2-s3/aws2-s3-sink-with-aggregation/config/openshift/aws2-s3-sink-with-aggregation.yaml
new file mode 100644
index 0000000..2b1aa0e
--- /dev/null
+++ b/aws2-s3/aws2-s3-sink-with-aggregation/config/openshift/aws2-s3-sink-with-aggregation.yaml
@@ -0,0 +1,22 @@
+apiVersion: kafka.strimzi.io/v1alpha1
+kind: KafkaConnector
+metadata:
+  name: s3-sink-connector
+  namespace: myproject
+  labels:
+    strimzi.io/cluster: my-connect-cluster
+spec:
+  class: org.apache.camel.kafkaconnector.aws2s3.CamelAws2s3SinkConnector
+  tasksMax: 1
+  config:
+    key.converter: org.apache.kafka.connect.storage.StringConverter
+    value.converter: org.apache.kafka.connect.storage.StringConverter
+    topics: s3-topic
+    camel.sink.path.bucketNameOrArn: camel-kafka-connector
+    camel.sink.endpoint.keyName: ${date:now:yyyyMMdd-HHmmssSSS}-${exchangeId}
+    camel.beans.aggregate: #class:org.apache.camel.kafkaconnector.aggregator.StringAggregator
+    camel.beans.aggregation.size: 10
+    camel.beans.aggregation.timeout: 5000
+    camel.component.aws2-s3.accessKey: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-s3-cred.properties:accessKey}
+    camel.component.aws2-s3.secretKey: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-s3-cred.properties:secretKey}
+    camel.component.aws2-s3.region: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-s3-cred.properties:region}