Posted to commits@camel.apache.org by ac...@apache.org on 2020/10/13 06:41:17 UTC

[camel-kafka-connector-examples] branch master updated (daf6244 -> 17e2ff2)

This is an automated email from the ASF dual-hosted git repository.

acosentino pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/camel-kafka-connector-examples.git.


    from daf6244  AWS2-S3 Zip aggregation example: Fixed connector name
     new 435c076  AWS2-S3-zip-dataformat example: Added instruction for Openshift
     new 9df061f  AWS2-S3-Zip-dataformat example: Added openshift configuration files
     new 17e2ff2  AWS2-S3-Zip-dataformat example: Correct file naming

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../aws2-s3-sink-with-zip-dataformat/README.adoc   | 297 +++++++++++++++++++++
 .../config/openshift/aws2-s3-cred.properties       |   0
 .../aws2-s3-sink-with-zip-dataformat.yaml}         |   4 +-
 3 files changed, 298 insertions(+), 3 deletions(-)
 copy aws2-s3/{aws2-s3-source => aws2-s3-sink-with-zip-dataformat}/config/openshift/aws2-s3-cred.properties (100%)
 copy aws2-s3/{aws2-s3-sink-with-zip-aggregation/config/openshift/aws2-s3-sink-with-zip-aggregation.yaml => aws2-s3-sink-with-zip-dataformat/config/openshift/aws2-s3-sink-with-zip-dataformat.yaml} (84%)


[camel-kafka-connector-examples] 03/03: AWS2-S3-Zip-dataformat example: Correct file naming

Posted by ac...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

acosentino pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/camel-kafka-connector-examples.git

commit 17e2ff25d7a31b0a55f1e380ea5cde0c151b52e3
Author: Andrea Cosentino <an...@gmail.com>
AuthorDate: Tue Oct 13 08:40:19 2020 +0200

    AWS2-S3-Zip-dataformat example: Correct file naming
---
 aws2-s3/aws2-s3-sink-with-zip-dataformat/README.adoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/aws2-s3/aws2-s3-sink-with-zip-dataformat/README.adoc b/aws2-s3/aws2-s3-sink-with-zip-dataformat/README.adoc
index 763bd19..c98e590 100644
--- a/aws2-s3/aws2-s3-sink-with-zip-dataformat/README.adoc
+++ b/aws2-s3/aws2-s3-sink-with-zip-dataformat/README.adoc
@@ -444,7 +444,7 @@ If you followed the optional step for secret credentials you can run the followi
 
 [source,bash,options="nowrap"]
 ----
-oc apply -f config/openshift/aws2-s3-sink-with-zip-aggregation.yaml
+oc apply -f config/openshift/aws2-s3-sink-with-zip-dataformat.yaml
 ----
 
 You can check the status of the connector using


[camel-kafka-connector-examples] 01/03: AWS2-S3-zip-dataformat example: Added instruction for Openshift

Posted by ac...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

acosentino pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/camel-kafka-connector-examples.git

commit 435c07628d0f51533f42157a51975ec020d8f21a
Author: Andrea Cosentino <an...@gmail.com>
AuthorDate: Tue Oct 13 08:37:24 2020 +0200

    AWS2-S3-zip-dataformat example: Added instruction for Openshift
---
 .../aws2-s3-sink-with-zip-dataformat/README.adoc   | 297 +++++++++++++++++++++
 1 file changed, 297 insertions(+)

diff --git a/aws2-s3/aws2-s3-sink-with-zip-dataformat/README.adoc b/aws2-s3/aws2-s3-sink-with-zip-dataformat/README.adoc
index ac552dd..763bd19 100644
--- a/aws2-s3/aws2-s3-sink-with-zip-dataformat/README.adoc
+++ b/aws2-s3/aws2-s3-sink-with-zip-dataformat/README.adoc
@@ -171,3 +171,300 @@ You should see (after the timeout has been reached) a file with date-exchangeId.
 Kafka to S3 message 1
 ```
 
+## OpenShift
+
+### What is needed
+
+- An AWS S3 bucket
+- An OpenShift instance
+
+### Running Kafka using Strimzi Operator
+
+First we install the Strimzi operator and use it to deploy the Kafka broker and Kafka Connect into our OpenShift project.
+We need to create security objects as part of the installation, so we have to switch to an admin user.
+If you use Minishift, you can do that with the following command:
+
+[source,bash,options="nowrap"]
+----
+oc login -u system:admin
+----
+
+We will use OpenShift project `myproject`.
+If it doesn't exist yet, you can create it using the following command:
+
+[source,bash,options="nowrap"]
+----
+oc new-project myproject
+----
+
+If the project already exists, you can switch to it with:
+
+[source,bash,options="nowrap"]
+----
+oc project myproject
+----
+
+We can now install the Strimzi operator into this project:
+
+[source,bash,options="nowrap",subs="attributes"]
+----
+oc apply -f https://github.com/strimzi/strimzi-kafka-operator/releases/download/0.19.0/strimzi-cluster-operator-0.19.0.yaml
+----
+
+Next we will deploy a Kafka broker cluster and a Kafka Connect cluster, and then create a Kafka Connect image with the Camel Kafka connector installed:
+
+[source,bash,options="nowrap",subs="attributes"]
+----
+# Deploy a single node Kafka broker
+oc apply -f https://github.com/strimzi/strimzi-kafka-operator/raw/0.19.0/examples/kafka/kafka-persistent-single.yaml
+
+# Deploy a single instance of Kafka Connect with no plug-in installed
+oc apply -f https://github.com/strimzi/strimzi-kafka-operator/raw/0.19.0/examples/connect/kafka-connect-s2i-single-node-kafka.yaml
+----
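+
+If you want to be sure the broker is fully up before continuing, one possible check (assuming the example manifest created a Kafka cluster named `my-cluster`, which is its default) is:
+
+[source,bash,options="nowrap"]
+----
+# Block until the Strimzi operator reports the Kafka cluster as Ready
+oc wait kafka/my-cluster --for=condition=Ready --timeout=300s
+----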
+
+Optionally, enable the ability to instantiate Kafka connectors through a specific custom resource:
+
+[source,bash,options="nowrap"]
+----
+oc annotate kafkaconnects2is my-connect-cluster strimzi.io/use-connector-resources=true
+----
+
+### Add Camel Kafka connector binaries
+
+Strimzi uses `Source2Image` builds to allow users to add their own connectors to the existing Strimzi Docker images.
+We now need to build the connectors and add them to the image.
+If you have built the whole project (`mvn clean package`), decompress the connectors you need into a folder (e.g. `my-connectors/`)
+so that each one is in its own subfolder.
+(Alternatively, you can download the latest officially released and packaged connectors from Maven.)
+
+In this case we need to extend an existing connector with the Zip file dataformat, so we leverage the archetype:
+
+```
+> mvn archetype:generate  -DarchetypeGroupId=org.apache.camel.kafkaconnector.archetypes  -DarchetypeArtifactId=camel-kafka-connector-extensible-archetype  -DarchetypeVersion=0.5.0
+[INFO] Using property: camel-kafka-connector-version = 0.5.0
+Confirm properties configuration:
+groupId: org.apache.camel.ckc
+artifactId: aws2s3
+version: 1.0-SNAPSHOT
+package: org.apache.camel.ckc
+camel-kafka-connector-version: 0.5.0
+ Y: : Y
+[INFO] ----------------------------------------------------------------------------
+[INFO] Using following parameters for creating project from Archetype: camel-kafka-connector-extensible-archetype:0.5.0
+[INFO] ----------------------------------------------------------------------------
+[INFO] Parameter: groupId, Value: org.apache.camel.ckc
+[INFO] Parameter: artifactId, Value: aws2s3
+[INFO] Parameter: version, Value: 1.0-SNAPSHOT
+[INFO] Parameter: package, Value: org.apache.camel.ckc
+[INFO] Parameter: packageInPathFormat, Value: org/apache/camel/ckc
+[INFO] Parameter: package, Value: org.apache.camel.ckc
+[INFO] Parameter: version, Value: 1.0-SNAPSHOT
+[INFO] Parameter: groupId, Value: org.apache.camel.ckc
+[INFO] Parameter: camel-kafka-connector-version, Value: 0.5.0
+[INFO] Parameter: artifactId, Value: aws2s3
+[INFO] Project created from Archetype in dir: /home/workspace/miscellanea/aws2s3
+[INFO] ------------------------------------------------------------------------
+[INFO] BUILD SUCCESS
+[INFO] ------------------------------------------------------------------------
+[INFO] Total time:  30.084 s
+[INFO] Finished at: 2020-08-26T11:08:21+02:00
+[INFO] ------------------------------------------------------------------------
+> cd /home/workspace/miscellanea/aws2s3
+```
+
+Now we need to edit the POM:
+
+```
+  .
+  .
+  .
+  <version>1.0-SNAPSHOT</version>
+
+  <name>A Camel Kafka Connector extended</name>
+
+  <properties>
+    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
+    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
+    <kafka-version>2.5.0</kafka-version>
+    <camel-kafka-connector-version>${project.version}</camel-kafka-connector-version>
+  </properties>
+
+  <dependencies>
+    <dependency>
+      <groupId>org.apache.kafka</groupId>
+      <artifactId>connect-api</artifactId>
+      <scope>provided</scope>
+      <version>${kafka-version}</version>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.kafka</groupId>
+      <artifactId>connect-transforms</artifactId>
+      <scope>provided</scope>
+      <version>${kafka-version}</version>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.camel.kafkaconnector</groupId>
+      <artifactId>camel-kafka-connector</artifactId>
+      <version>0.5.0</version>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.camel.kafkaconnector</groupId>
+      <artifactId>camel-aws2-s3-kafka-connector</artifactId>
+      <version>0.5.0</version>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.camel</groupId>
+      <artifactId>camel-zipfile</artifactId>
+    </dependency>
+  </dependencies>
+  .
+  .
+  .
+```
+
+Now we need to build the connector:
+
+```
+> mvn clean package
+```
+
+Then move the zip package from `target/` to the `my-connectors/` folder and unzip it.
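+
+A minimal sketch of this step, assuming the build produced `target/aws2s3-1.0-SNAPSHOT-package.zip` (the actual file name depends on your artifactId and version):
+
+[source,bash,options="nowrap"]
+----
+# Create the folder used for the Source2Image build and extract the connector into it
+mkdir -p my-connectors
+unzip target/aws2s3-1.0-SNAPSHOT-package.zip -d my-connectors/
+----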
+
+Now we can start the build:
+
+[source,bash,options="nowrap"]
+----
+oc start-build my-connect-cluster-connect --from-dir=./my-connectors/ --follow
+----
+
+We should now wait for the rollout of the new image to finish and the replica set with the new connector to become ready.
+Once it is done, we can check that the connectors are available in our Kafka Connect cluster.
+Strimzi runs Kafka Connect in distributed mode.
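+
+One way to watch the rollout is to follow the pods with the `strimzi.io/name=my-connect-cluster-connect` label used throughout this guide:
+
+[source,bash,options="nowrap"]
+----
+# Watch the Kafka Connect pods until the new replica is Running and Ready
+oc get pods -w -l strimzi.io/name=my-connect-cluster-connect
+----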
+
+To check the available connector plugins, you can run the following command:
+
+[source,bash,options="nowrap"]
+----
+oc exec -i `oc get pods --field-selector status.phase=Running -l strimzi.io/name=my-connect-cluster-connect -o=jsonpath='{.items[0].metadata.name}'` -- curl -s http://my-connect-cluster-connect-api:8083/connector-plugins
+----
+
+You should see something like this:
+
+[source,json,options="nowrap"]
+----
+[{"class":"org.apache.camel.kafkaconnector.CamelSinkConnector","type":"sink","version":"0.5.0"},{"class":"org.apache.camel.kafkaconnector.CamelSourceConnector","type":"source","version":"0.5.0"},{"class":"org.apache.camel.kafkaconnector.aws2s3.CamelAws2s3SinkConnector","type":"sink","version":"0.5.0"},{"class":"org.apache.camel.kafkaconnector.aws2s3.CamelAws2s3SourceConnector","type":"source","version":"0.5.0"},{"class":"org.apache.kafka.connect.file.FileStreamSinkConnector","type":"sink [...]
+----
+
+### Set the AWS credential as secret (optional)
+
+You can also set the AWS credential options as a secret. To do so, edit the file `config/openshift/aws2-s3-cred.properties` with the correct credentials and then run the following command:
+
+[source,bash,options="nowrap"]
+----
+oc create secret generic aws2-s3 --from-file=config/openshift/aws2-s3-cred.properties
+----
+
+Now we need to edit the `KafkaConnectS2I` custom resource to reference the secret. For example:
+
+[source,yaml,options="nowrap"]
+----
+spec:
+  # ...
+  config:
+    config.providers: file
+    config.providers.file.class: org.apache.kafka.common.config.provider.FileConfigProvider
+  #...
+  externalConfiguration:
+    volumes:
+      - name: aws-credentials
+        secret:
+          secretName: aws2-s3
+----
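+
+One way to make this change is to edit the resource in place (the Kafka Connect cluster in this guide is named `my-connect-cluster`):
+
+[source,bash,options="nowrap"]
+----
+# Opens the KafkaConnectS2I resource in your editor; add the config and externalConfiguration sections shown above
+oc edit kafkaconnects2i my-connect-cluster
+----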
+
+In this way the secret `aws2-s3` will be mounted as a volume at the path `/opt/kafka/external-configuration/aws-credentials/`.
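+
+Connector options can then reference values from that file through the `FileConfigProvider` placeholder syntax, for example:
+
+[source,yaml,options="nowrap"]
+----
+camel.component.aws2-s3.accessKey: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-s3-cred.properties:accessKey}
+----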
+
+### Create connector instance
+
+Now we can create an instance of the AWS2 S3 sink connector:
+
+[source,bash,options="nowrap"]
+----
+oc exec -i `oc get pods --field-selector status.phase=Running -l strimzi.io/name=my-connect-cluster-connect -o=jsonpath='{.items[0].metadata.name}'` -- curl -X POST \
+    -H "Accept:application/json" \
+    -H "Content-Type:application/json" \
+    http://my-connect-cluster-connect-api:8083/connectors -d @- <<'EOF'
+{
+  "name": "s3-sink-connector",
+  "config": {
+    "connector.class": "org.apache.camel.kafkaconnector.aws2s3.CamelAws2s3SinkConnector",
+    "tasks.max": "1",
+    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
+    "value.converter": "org.apache.kafka.connect.storage.StringConverter",
+    "topics": "sqs-topic",
+    "camel.sink.path.bucketNameOrArn": "camel-kafka-connector",
+    "camel.sink.endpoint.keyName": "${date:now:yyyyMMdd-HHmmssSSS}-${exchangeId}.zip",
+    "camel.sink.marshal": "zipfile",
+    "camel.component.aws2-s3.accessKey": "xxx",
+    "camel.component.aws2-s3.secretKey": "xxx",
+    "camel.component.aws2-s3.region": "xxx"
+  }
+}
+EOF
+----
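+
+You can verify that the connector was registered by listing the connectors through the same REST API:
+
+[source,bash,options="nowrap"]
+----
+oc exec -i `oc get pods --field-selector status.phase=Running -l strimzi.io/name=my-connect-cluster-connect -o=jsonpath='{.items[0].metadata.name}'` -- curl -s http://my-connect-cluster-connect-api:8083/connectors
+----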
+
+Alternatively, if you have enabled `use-connector-resources`, you can create the connector instance by creating a specific custom resource:
+
+[source,bash,options="nowrap"]
+----
+oc apply -f - << EOF
+apiVersion: kafka.strimzi.io/v1alpha1
+kind: KafkaConnector
+metadata:
+  name: s3-sink-connector
+  namespace: myproject
+  labels:
+    strimzi.io/cluster: my-connect-cluster
+spec:
+  class: org.apache.camel.kafkaconnector.aws2s3.CamelAws2s3SinkConnector
+  tasksMax: 1
+  config:
+    key.converter: org.apache.kafka.connect.storage.StringConverter
+    value.converter: org.apache.kafka.connect.storage.StringConverter
+    topics: s3-topic
+    camel.sink.path.bucketNameOrArn: camel-kafka-connector
+    camel.sink.endpoint.keyName: ${date:now:yyyyMMdd-HHmmssSSS}-${exchangeId}.zip
+    camel.sink.marshal: zipfile
+    camel.component.aws2-s3.accessKey: xxxx
+    camel.component.aws2-s3.secretKey: yyyy
+    camel.component.aws2-s3.region: region
+EOF
+----
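+
+With the custom resource approach you can also inspect the connector through the operator, for example:
+
+[source,bash,options="nowrap"]
+----
+# List the KafkaConnector custom resources and their readiness
+oc get kafkaconnectors -n myproject
+----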
+
+If you followed the optional step for secret credentials you can run the following command:
+
+[source,bash,options="nowrap"]
+----
+oc apply -f config/openshift/aws2-s3-sink-with-zip-aggregation.yaml
+----
+
+You can check the status of the connector using
+
+[source,bash,options="nowrap"]
+----
+oc exec -i `oc get pods --field-selector status.phase=Running -l strimzi.io/name=my-connect-cluster-connect -o=jsonpath='{.items[0].metadata.name}'` -- curl -s http://my-connect-cluster-connect-api:8083/connectors/s3-sink-connector/status
+----
+
+Just connect to your AWS Console and check the content of the camel-kafka-connector bucket.
+
+In a different terminal, run the Kafka console producer and send messages to your Kafka broker:
+
+```
+oc exec -i -c kafka my-cluster-kafka-0 -- bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic s3-topic
+Kafka to S3 message 1
+```
+
+You should see a file named date-exchangeId.zip containing one file named date-exchangeId, which in turn contains the message:
+
+```
+Kafka to S3 message 1
+```
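+
+If you prefer the command line to the AWS Console, you can also list the bucket with the AWS CLI (assuming it is installed and configured for your account):
+
+[source,bash,options="nowrap"]
+----
+# List the uploaded zip files in the target bucket
+aws s3 ls s3://camel-kafka-connector/
+----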


[camel-kafka-connector-examples] 02/03: AWS2-S3-Zip-dataformat example: Added openshift configuration files

Posted by ac...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

acosentino pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/camel-kafka-connector-examples.git

commit 9df061f579ef038d9a69faaeb0110096f74e0ff6
Author: Andrea Cosentino <an...@gmail.com>
AuthorDate: Tue Oct 13 08:39:38 2020 +0200

    AWS2-S3-Zip-dataformat example: Added openshift configuration files
---
 .../config/openshift/aws2-s3-cred.properties         |  3 +++
 .../openshift/aws2-s3-sink-with-zip-dataformat.yaml  | 20 ++++++++++++++++++++
 2 files changed, 23 insertions(+)

diff --git a/aws2-s3/aws2-s3-sink-with-zip-dataformat/config/openshift/aws2-s3-cred.properties b/aws2-s3/aws2-s3-sink-with-zip-dataformat/config/openshift/aws2-s3-cred.properties
new file mode 100644
index 0000000..d1596a1
--- /dev/null
+++ b/aws2-s3/aws2-s3-sink-with-zip-dataformat/config/openshift/aws2-s3-cred.properties
@@ -0,0 +1,3 @@
+accessKey=xxxx
+secretKey=yyyy
+region=region
diff --git a/aws2-s3/aws2-s3-sink-with-zip-dataformat/config/openshift/aws2-s3-sink-with-zip-dataformat.yaml b/aws2-s3/aws2-s3-sink-with-zip-dataformat/config/openshift/aws2-s3-sink-with-zip-dataformat.yaml
new file mode 100644
index 0000000..3c185f5
--- /dev/null
+++ b/aws2-s3/aws2-s3-sink-with-zip-dataformat/config/openshift/aws2-s3-sink-with-zip-dataformat.yaml
@@ -0,0 +1,20 @@
+apiVersion: kafka.strimzi.io/v1alpha1
+kind: KafkaConnector
+metadata:
+  name: s3-sink-connector
+  namespace: myproject
+  labels:
+    strimzi.io/cluster: my-connect-cluster
+spec:
+  class: org.apache.camel.kafkaconnector.aws2s3.CamelAws2s3SinkConnector
+  tasksMax: 1
+  config:
+    key.converter: org.apache.kafka.connect.storage.StringConverter
+    value.converter: org.apache.kafka.connect.storage.StringConverter
+    topics: s3-topic
+    camel.sink.path.bucketNameOrArn: camel-kafka-connector
+    camel.sink.endpoint.keyName: ${date:now:yyyyMMdd-HHmmssSSS}-${exchangeId}.zip
+    camel.sink.marshal: zipfile
+    camel.component.aws2-s3.accessKey: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-s3-cred.properties:accessKey}
+    camel.component.aws2-s3.secretKey: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-s3-cred.properties:secretKey}
+    camel.component.aws2-s3.region: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-s3-cred.properties:region}