Posted to commits@camel.apache.org by ac...@apache.org on 2020/09/21 06:06:33 UTC

[camel-kafka-connector-examples] branch aws2-s3-sink-aggregation created (now 328a6eb)

This is an automated email from the ASF dual-hosted git repository.

acosentino pushed a change to branch aws2-s3-sink-aggregation
in repository https://gitbox.apache.org/repos/asf/camel-kafka-connector-examples.git.


      at 328a6eb  AWS2-S3 Sink with aggregation example: Added steps for secret credentials

This branch includes the following new commits:

     new 328a6eb  AWS2-S3 Sink with aggregation example: Added steps for secret credentials

The 1 revision listed above as "new" is entirely new to this
repository and will be described in a separate email.  Revisions
listed as "add" were already present in the repository and have only
been added to this reference.



[camel-kafka-connector-examples] 01/01: AWS2-S3 Sink with aggregation example: Added steps for secret credentials

Posted by ac...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

acosentino pushed a commit to branch aws2-s3-sink-aggregation
in repository https://gitbox.apache.org/repos/asf/camel-kafka-connector-examples.git

commit 328a6eb60c83fe9f01889b0011ce74793b3f2794
Author: Andrea Cosentino <an...@gmail.com>
AuthorDate: Mon Sep 21 08:05:46 2020 +0200

    AWS2-S3 Sink with aggregation example: Added steps for secret credentials
---
 aws2-s3/aws2-s3-sink-with-aggregation/README.adoc  | 35 ++++++++++++++++++++++
 .../config/openshift/aws2-s3-cred.properties       |  3 ++
 .../openshift/aws2-s3-sink-with-aggregation.yaml   | 22 ++++++++++++++
 3 files changed, 60 insertions(+)

diff --git a/aws2-s3/aws2-s3-sink-with-aggregation/README.adoc b/aws2-s3/aws2-s3-sink-with-aggregation/README.adoc
index 72bf4d9..758e2f5 100644
--- a/aws2-s3/aws2-s3-sink-with-aggregation/README.adoc
+++ b/aws2-s3/aws2-s3-sink-with-aggregation/README.adoc
@@ -186,6 +186,34 @@ You should see something like this:
 [{"class":"org.apache.camel.kafkaconnector.CamelSinkConnector","type":"sink","version":"0.5.0"},{"class":"org.apache.camel.kafkaconnector.CamelSourceConnector","type":"source","version":"0.5.0"},{"class":"org.apache.camel.kafkaconnector.aws2s3.CamelAws2s3SinkConnector","type":"sink","version":"0.5.0"},{"class":"org.apache.camel.kafkaconnector.aws2s3.CamelAws2s3SourceConnector","type":"source","version":"0.5.0"},{"class":"org.apache.kafka.connect.file.FileStreamSinkConnector","type":"sink [...]
 ----
 
+### Set the AWS credentials as a secret (optional)
+
+You can also provide the AWS credentials as a secret. To do so, edit the file config/openshift/aws2-s3-cred.properties with the correct credentials and then run the following command:
+
+[source,bash,options="nowrap"]
+----
+oc create secret generic aws2-s3 --from-file=config/openshift/aws2-s3-cred.properties
+----
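+
+As a quick sanity check, you can verify that the secret has been created (assuming you are logged into the project where the Kafka Connect cluster runs):
+
+[source,bash,options="nowrap"]
+----
+oc get secret aws2-s3 -o yaml
+----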
+
+Now we need to edit the KafkaConnectS2I custom resource to reference the secret. For example:
+
+[source,yaml,options="nowrap"]
+----
+spec:
+  # ...
+  config:
+    config.providers: file
+    config.providers.file.class: org.apache.kafka.common.config.provider.FileConfigProvider
+  # ...
+  externalConfiguration:
+    volumes:
+      - name: aws-credentials
+        secret:
+          secretName: aws2-s3
+----
+
+This way the secret aws2-s3 will be mounted as a volume at the path /opt/kafka/external-configuration/aws-credentials/.
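+
+Connector options can then reference individual keys of the mounted properties file through FileConfigProvider placeholders, as the connector configuration shown below does. For example:
+
+[source,yaml,options="nowrap"]
+----
+camel.component.aws2-s3.accessKey: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-s3-cred.properties:accessKey}
+----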
+
 ### Create connector instance
 
 Now we can create an instance of the AWS2 S3 sink connector:
@@ -247,6 +275,13 @@ spec:
 EOF
 ----
 
+If you followed the optional step for secret credentials, you can create the connector instance by running the following command instead:
+
+[source,bash,options="nowrap"]
+----
+oc apply -f config/openshift/aws2-s3-sink-with-aggregation.yaml
+----
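+
+You can then inspect the created KafkaConnector resource; the resource name s3-sink-connector and the namespace myproject come from the YAML file shown below:
+
+[source,bash,options="nowrap"]
+----
+oc get kafkaconnector s3-sink-connector -n myproject -o yaml
+----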
+
 You can check the status of the connector using
 
 [source,bash,options="nowrap"]
diff --git a/aws2-s3/aws2-s3-sink-with-aggregation/config/openshift/aws2-s3-cred.properties b/aws2-s3/aws2-s3-sink-with-aggregation/config/openshift/aws2-s3-cred.properties
new file mode 100644
index 0000000..d1596a1
--- /dev/null
+++ b/aws2-s3/aws2-s3-sink-with-aggregation/config/openshift/aws2-s3-cred.properties
@@ -0,0 +1,3 @@
+accessKey=xxxx
+secretKey=yyyy
+region=region
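+# NOTE: the values above are placeholders; replace them with your real AWS
+# access key, secret key and a valid region identifier (e.g. eu-west-1)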
diff --git a/aws2-s3/aws2-s3-sink-with-aggregation/config/openshift/aws2-s3-sink-with-aggregation.yaml b/aws2-s3/aws2-s3-sink-with-aggregation/config/openshift/aws2-s3-sink-with-aggregation.yaml
new file mode 100644
index 0000000..2b1aa0e
--- /dev/null
+++ b/aws2-s3/aws2-s3-sink-with-aggregation/config/openshift/aws2-s3-sink-with-aggregation.yaml
@@ -0,0 +1,22 @@
+apiVersion: kafka.strimzi.io/v1alpha1
+kind: KafkaConnector
+metadata:
+  name: s3-sink-connector
+  namespace: myproject
+  labels:
+    strimzi.io/cluster: my-connect-cluster
+spec:
+  class: org.apache.camel.kafkaconnector.aws2s3.CamelAws2s3SinkConnector
+  tasksMax: 1
+  config:
+    key.converter: org.apache.kafka.connect.storage.StringConverter
+    value.converter: org.apache.kafka.connect.storage.StringConverter
+    topics: s3-topic
+    camel.sink.path.bucketNameOrArn: camel-kafka-connector
+    camel.sink.endpoint.keyName: ${date:now:yyyyMMdd-HHmmssSSS}-${exchangeId}
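+    # aggregation setup (StringAggregator bean): exchanges are batched in
+    # groups of 10 and flushed after a 5000 ms timeout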
+    camel.beans.aggregate: "#class:org.apache.camel.kafkaconnector.aggregator.StringAggregator"
+    camel.beans.aggregation.size: 10
+    camel.beans.aggregation.timeout: 5000
+    camel.component.aws2-s3.accessKey: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-s3-cred.properties:accessKey}
+    camel.component.aws2-s3.secretKey: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-s3-cred.properties:secretKey}
+    camel.component.aws2-s3.region: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-s3-cred.properties:region}