Posted to commits@camel.apache.org by ac...@apache.org on 2020/09/17 12:41:39 UTC

[camel-kafka-connector-examples] branch aws2-s3-source created (now 5a1b62f)

This is an automated email from the ASF dual-hosted git repository.

acosentino pushed a change to branch aws2-s3-source
in repository https://gitbox.apache.org/repos/asf/camel-kafka-connector-examples.git.


      at 5a1b62f  AWS2-S3 Source connector example: Add Openshift docs

This branch includes the following new commits:

     new 5a1b62f  AWS2-S3 Source connector example: Add Openshift docs

The 1 revision listed above as "new" is entirely new to this
repository and will be described in a separate email.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.



[camel-kafka-connector-examples] 01/01: AWS2-S3 Source connector example: Add Openshift docs

Posted by ac...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

acosentino pushed a commit to branch aws2-s3-source
in repository https://gitbox.apache.org/repos/asf/camel-kafka-connector-examples.git

commit 5a1b62fd618419054c5e324db17d94c5e0a0e755
Author: Andrea Cosentino <an...@gmail.com>
AuthorDate: Thu Sep 17 14:41:07 2020 +0200

    AWS2-S3 Source connector example: Add Openshift docs
---
 aws2-s3/aws2-s3-source/README.adoc | 181 ++++++++++++++++++++++++++++++++++++-
 1 file changed, 177 insertions(+), 4 deletions(-)

diff --git a/aws2-s3/aws2-s3-source/README.adoc b/aws2-s3/aws2-s3-source/README.adoc
index b2e9a15..1ea4a3d 100644
--- a/aws2-s3/aws2-s3-source/README.adoc
+++ b/aws2-s3/aws2-s3-source/README.adoc
@@ -1,14 +1,14 @@
 # Camel-Kafka-connector AWS2 S3 Source
 
-## Introduction
-
 This is an example of the Camel-Kafka-connector AWS2-S3 source connector.
 
-## What is needed
+## Standalone
+
+### What is needed
 
 - An AWS S3 Bucket
 
-## Running Kafka
+### Running Kafka
 
 ```
 $KAFKA_HOME/bin/zookeeper-server-start.sh config/zookeeper.properties
@@ -70,3 +70,176 @@ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test1 --
 S3 to Kafka through Camel
 ```
 
+## Openshift
+
+### What is needed
+
+- An AWS S3 bucket
+- An Openshift instance
+
+### Running Kafka using Strimzi Operator
+
+First we install the Strimzi operator and use it to deploy the Kafka broker and Kafka Connect into our OpenShift project.
+We need to create security objects as part of the installation, so it is necessary to switch to the admin user.
+If you use Minishift, you can do it with the following command:
+
+[source,bash,options="nowrap"]
+----
+oc login -u system:admin
+----
+
+We will use OpenShift project `myproject`.
+If it doesn't exist yet, you can create it using the following command:
+
+[source,bash,options="nowrap"]
+----
+oc new-project myproject
+----
+
+If the project already exists, you can switch to it with:
+
+[source,bash,options="nowrap"]
+----
+oc project myproject
+----
+
+We can now install the Strimzi operator into this project:
+
+[source,bash,options="nowrap",subs="attributes"]
+----
+oc apply -f https://github.com/strimzi/strimzi-kafka-operator/releases/download/0.19.0/strimzi-cluster-operator-0.19.0.yaml
+----
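+
+Before moving on, you can verify that the operator pod is running (a minimal check, assuming the default operator labels):
+
+[source,bash,options="nowrap"]
+----
+oc get pods -l name=strimzi-cluster-operator
+----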
+
+Next we will deploy a Kafka broker cluster and a Kafka Connect cluster, and then create a Kafka Connect image with the Camel Kafka connectors installed:
+
+[source,bash,options="nowrap",subs="attributes"]
+----
+# Deploy a single node Kafka broker
+oc apply -f https://github.com/strimzi/strimzi-kafka-operator/raw/0.19.0/examples/kafka/kafka-persistent-single.yaml
+
+# Deploy a single instance of Kafka Connect with no plug-in installed
+oc apply -f https://github.com/strimzi/strimzi-kafka-operator/raw/0.19.0/examples/connect/kafka-connect-s2i-single-node-kafka.yaml
+----
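+
+It can take a few minutes for the broker to come up. One way to wait for it (the cluster name `my-cluster` comes from the example file above):
+
+[source,bash,options="nowrap"]
+----
+oc wait kafka/my-cluster --for=condition=Ready --timeout=300s
+----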
+
+Optionally, enable the ability to instantiate Kafka connectors through a specific custom resource:
+
+[source,bash,options="nowrap"]
+----
+oc annotate kafkaconnects2is my-connect-cluster strimzi.io/use-connector-resources=true
+----
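+
+You can confirm that the annotation was applied (a simple grep over the resource):
+
+[source,bash,options="nowrap"]
+----
+oc get kafkaconnects2i my-connect-cluster -o yaml | grep use-connector-resources
+----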
+
+### Add Camel Kafka connector binaries
+
+Strimzi uses `Source2Image` builds to allow users to add their own connectors to the existing Strimzi Docker images.
+We now need to build the connectors and add them to the image.
+If you have built the whole project (`mvn clean package`), decompress the connectors you need into a folder (e.g. `my-connectors/`)
+so that each one sits in its own subfolder.
+Alternatively, you can download the latest officially released and packaged connectors from Maven, as shown below:
+
+[source,bash,options="nowrap"]
+----
+cd my-connectors/
+wget https://repo1.maven.org/maven2/org/apache/camel/kafkaconnector/camel-aws2-s3-kafka-connector/0.5.0/camel-aws2-s3-kafka-connector-0.5.0-package.zip
+unzip camel-aws2-s3-kafka-connector-0.5.0-package.zip
+----
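+
+After unpacking, each connector must sit in its own subfolder. A quick sanity check (the jar names in the comment are illustrative):
+
+[source,bash,options="nowrap"]
+----
+ls my-connectors/*/
+# expect the connector jar and its dependencies, e.g.
+# camel-aws2-s3-kafka-connector-0.5.0.jar ...
+----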
+
+Now we can start the build:
+
+[source,bash,options="nowrap"]
+----
+oc start-build my-connect-cluster-connect --from-dir=./my-connectors/ --follow
+----
+
+We should now wait for the rollout of the new image to finish and the replica set with the new connector to become ready.
+Once it is done, we can check that the connectors are available in our Kafka Connect cluster.
+Strimzi runs Kafka Connect in distributed mode.
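+
+One way to wait for the rollout (a sketch, assuming the S2I build produced the default DeploymentConfig `my-connect-cluster-connect`):
+
+[source,bash,options="nowrap"]
+----
+oc rollout status dc/my-connect-cluster-connect
+----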
+
+To check the available connector plugins, you can run the following command:
+
+[source,bash,options="nowrap"]
+----
+oc exec -i `oc get pods --field-selector status.phase=Running -l strimzi.io/name=my-connect-cluster-connect -o=jsonpath='{.items[0].metadata.name}'` -- curl -s http://my-connect-cluster-connect-api:8083/connector-plugins
+----
+
+You should see something like this:
+
+[source,json,options="nowrap"]
+----
+[{"class":"org.apache.camel.kafkaconnector.CamelSinkConnector","type":"sink","version":"0.5.0"},{"class":"org.apache.camel.kafkaconnector.CamelSourceConnector","type":"source","version":"0.5.0"},{"class":"org.apache.camel.kafkaconnector.aws2s3.CamelAws2s3SinkConnector","type":"sink","version":"0.5.0"},{"class":"org.apache.camel.kafkaconnector.aws2s3.CamelAws2s3SourceConnector","type":"source","version":"0.5.0"},{"class":"org.apache.kafka.connect.file.FileStreamSinkConnector","type":"sink [...]
+----
+
+### Create connector instance
+
+Now we can create an instance of the AWS2 S3 source connector:
+
+[source,bash,options="nowrap"]
+----
+oc exec -i `oc get pods --field-selector status.phase=Running -l strimzi.io/name=my-connect-cluster-connect -o=jsonpath='{.items[0].metadata.name}'` -- curl -X POST \
+    -H "Accept:application/json" \
+    -H "Content-Type:application/json" \
+    http://my-connect-cluster-connect-api:8083/connectors -d @- <<'EOF'
+{
+  "name": "s3-source-connector",
+  "config": {
+    "connector.class": "org.apache.camel.kafkaconnector.aws2s3.CamelAws2s3SourceConnector",
+    "tasks.max": "1",
+    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
+    "value.converter": "org.apache.kafka.connect.storage.StringConverter",
+    "topics": "s3-topic",
+    "camel.source.path.bucketNameOrArn": "camel-kafka-connector",
+    "camel.source.maxPollDuration": 10000,
+    "camel.component.aws2-s3.accessKey": "xxx",
+    "camel.component.aws2-s3.secretKey": "xxx",
+    "camel.component.aws2-s3.region": "xxx"
+  }
+}
+EOF
+----
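+
+You can verify that the connector was registered by listing the connectors through the same REST API:
+
+[source,bash,options="nowrap"]
+----
+oc exec -i `oc get pods --field-selector status.phase=Running -l strimzi.io/name=my-connect-cluster-connect -o=jsonpath='{.items[0].metadata.name}'` -- curl -s http://my-connect-cluster-connect-api:8083/connectors
+----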
+
+Alternatively, if you have enabled `use-connector-resources`, you can create the connector instance by creating a specific custom resource:
+
+[source,bash,options="nowrap"]
+----
+oc apply -f - << EOF
+apiVersion: kafka.strimzi.io/v1alpha1
+kind: KafkaConnector
+metadata:
+  name: s3-source-connector
+  namespace: myproject
+  labels:
+    strimzi.io/cluster: my-connect-cluster
+spec:
+  class: org.apache.camel.kafkaconnector.aws2s3.CamelAws2s3SourceConnector
+  tasksMax: 1
+  config:
+    key.converter: org.apache.kafka.connect.storage.StringConverter
+    value.converter: org.apache.kafka.connect.storage.StringConverter
+    topics: s3-topic
+    camel.source.path.bucketNameOrArn: camel-kafka-connector
+    camel.source.maxPollDuration: 10000
+    camel.component.aws2-s3.accessKey: xxxx
+    camel.component.aws2-s3.secretKey: yyyy
+    camel.component.aws2-s3.region: region
+EOF
+----
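+
+When using the custom resource, you can also inspect the connector through the resource itself; the operator reports readiness in its status:
+
+[source,bash,options="nowrap"]
+----
+oc get kafkaconnector s3-source-connector -n myproject -o yaml
+----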
+
+You can check the status of the connector using:
+
+[source,bash,options="nowrap"]
+----
+oc exec -i `oc get pods --field-selector status.phase=Running -l strimzi.io/name=my-connect-cluster-connect -o=jsonpath='{.items[0].metadata.name}'` -- curl -s http://my-connect-cluster-connect-api:8083/connectors/s3-source-connector/status
+----
+
+Now connect to your AWS Console and upload a file to the `camel-kafka-connector` bucket.
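+
+Alternatively, you can upload a file with the AWS CLI (assuming it is configured with credentials that can write to the bucket; the file name is just an example):
+
+[source,bash,options="nowrap"]
+----
+aws s3 cp test.txt s3://camel-kafka-connector/
+----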
+
+### Check received messages
+
+You can also run the Kafka console consumer to see the messages received from the topic:
+
+[source,bash,options="nowrap"]
+----
+oc exec -i -c kafka my-cluster-kafka-0 -- bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic s3-topic --from-beginning
+<content of file>
+<content of file>
+----
+