Posted to commits@camel.apache.org by ac...@apache.org on 2020/09/18 06:14:22 UTC

[camel-kafka-connector-examples] 01/01: AWS2-S3 Move after read source example: Added Openshift docs

This is an automated email from the ASF dual-hosted git repository.

acosentino pushed a commit to branch docs-aws2-s3-move-after-read
in repository https://gitbox.apache.org/repos/asf/camel-kafka-connector-examples.git

commit 553c93da6088288fce015901cf6a6f78dd7b99f3
Author: Andrea Cosentino <an...@gmail.com>
AuthorDate: Fri Sep 18 08:13:53 2020 +0200

    AWS2-S3 Move after read source example: Added Openshift docs
---
 aws2-s3/aws2-s3-move-after-read/README.adoc | 185 +++++++++++++++++++++++++++-
 1 file changed, 180 insertions(+), 5 deletions(-)

diff --git a/aws2-s3/aws2-s3-move-after-read/README.adoc b/aws2-s3/aws2-s3-move-after-read/README.adoc
index f201178..a862c6d 100644
--- a/aws2-s3/aws2-s3-move-after-read/README.adoc
+++ b/aws2-s3/aws2-s3-move-after-read/README.adoc
@@ -1,14 +1,14 @@
 # Camel-Kafka-connector AWS2 S3 Source with move after read
 
-## Introduction
-
 This is an example for Camel-Kafka-connector AWS2-S3 with the move after read option.
 
-## What is needed
+## Standalone
+
+### What is needed
 
 - Your AWS credentials
 
-## Running Kafka
+### Running Kafka
 
 ```
 $KAFKA_HOME/bin/zookeeper-server-start.sh config/zookeeper.properties
@@ -16,7 +16,7 @@ $KAFKA_HOME/bin/kafka-server-start.sh config/server.properties
 $KAFKA_HOME/bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic mytopic
 ```
 
-## Setting up the needed bits and running the example
+### Setting up the needed bits and running the example
 
 You'll need to set up the plugin.path property in your Kafka Connect configuration.
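 
 A minimal sketch of what that looks like, assuming the standalone worker config and an illustrative path:
 
 ```
 # append the plugin path to the Connect worker config (the path is an assumption;
 # point it at the folder containing the unpacked connector jars)
 echo "plugin.path=/home/kafka/connectors" >> $KAFKA_HOME/config/connect-standalone.properties
 ```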
 
@@ -74,3 +74,178 @@ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test1 --
 Hello from Camel Kafka connector!
 ```
 
+## OpenShift
+
+### What is needed
+
+- An AWS S3 bucket
+- An OpenShift instance
+
+### Running Kafka using Strimzi Operator
+
+First we install the Strimzi operator and use it to deploy the Kafka broker and Kafka Connect into our OpenShift project.
+We need to create security objects as part of the installation, so it is necessary to switch to an admin user.
+If you use Minishift, you can do it with the following command:
+
+[source,bash,options="nowrap"]
+----
+oc login -u system:admin
+----
+
+We will use OpenShift project `myproject`.
+If it doesn't exist yet, you can create it using the following command:
+
+[source,bash,options="nowrap"]
+----
+oc new-project myproject
+----
+
+If the project already exists, you can switch to it with:
+
+[source,bash,options="nowrap"]
+----
+oc project myproject
+----
+
+We can now install the Strimzi operator into this project:
+
+[source,bash,options="nowrap",subs="attributes"]
+----
+oc apply -f https://github.com/strimzi/strimzi-kafka-operator/releases/download/0.19.0/strimzi-cluster-operator-0.19.0.yaml
+----
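+
+You can verify that the operator pod is up before proceeding (the label below is the one the Strimzi deployment uses; adjust it if your installation differs):
+
+[source,bash,options="nowrap"]
+----
+oc get pods -l name=strimzi-cluster-operator
+----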
+
+Next we will deploy a Kafka broker cluster and a Kafka Connect cluster, and then create a Kafka Connect image with the Camel Kafka connector installed:
+
+[source,bash,options="nowrap",subs="attributes"]
+----
+# Deploy a single node Kafka broker
+oc apply -f https://github.com/strimzi/strimzi-kafka-operator/raw/0.19.0/examples/kafka/kafka-persistent-single.yaml
+
+# Deploy a single instance of Kafka Connect with no plug-in installed
+oc apply -f https://github.com/strimzi/strimzi-kafka-operator/raw/0.19.0/examples/connect/kafka-connect-s2i-single-node-kafka.yaml
+----
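+
+Before moving on, it can help to wait for the broker to become ready; a minimal check, assuming the cluster from the example YAML is named `my-cluster`:
+
+[source,bash,options="nowrap"]
+----
+oc wait kafka/my-cluster --for=condition=Ready --timeout=300s
+----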
+
+Optionally, enable the ability to instantiate Kafka connectors through a specific custom resource:
+
+[source,bash,options="nowrap"]
+----
+oc annotate kafkaconnects2is my-connect-cluster strimzi.io/use-connector-resources=true
+----
+
+### Add Camel Kafka connector binaries
+
+Strimzi uses `Source2Image` builds to allow users to add their own connectors to the existing Strimzi Docker images.
+We now need to build the connectors and add them to the image.
+If you have built the whole project (`mvn clean package`), decompress the connectors you need into a folder (e.g. `my-connectors/`)
+so that each one is in its own subfolder.
+Alternatively, you can download the latest officially released and packaged connector from Maven and unpack it there:
+
+[source,bash,options="nowrap"]
+----
+> cd my-connectors/
+> wget https://repo1.maven.org/maven2/org/apache/camel/kafkaconnector/camel-aws2-s3-kafka-connector/0.5.0/camel-aws2-s3-kafka-connector-0.5.0-package.zip
+> unzip camel-aws2-s3-kafka-connector-0.5.0-package.zip
+----
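+
+What matters for the build is that each connector ends up in its own subfolder under `my-connectors/`; a quick sanity check (the folder name here is what the archive is assumed to unpack to):
+
+[source,bash,options="nowrap"]
+----
+> ls my-connectors/
+camel-aws2-s3-kafka-connector
+----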
+
+Now we can start the build:
+
+[source,bash,options="nowrap"]
+----
+oc start-build my-connect-cluster-connect --from-dir=./my-connectors/ --follow
+----
+
+We should now wait for the rollout of the new image to finish and the replica set with the new connector to become ready.
+Once it is done, we can check that the connectors are available in our Kafka Connect cluster.
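+
+One way to wait for the rollout, assuming the `DeploymentConfig` generated for this example is named `my-connect-cluster-connect`:
+
+[source,bash,options="nowrap"]
+----
+oc rollout status dc/my-connect-cluster-connect
+----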
+Strimzi runs Kafka Connect in distributed mode.
+
+To check the available connector plugins, you can run the following command:
+
+[source,bash,options="nowrap"]
+----
+oc exec -i `oc get pods --field-selector status.phase=Running -l strimzi.io/name=my-connect-cluster-connect -o=jsonpath='{.items[0].metadata.name}'` -- curl -s http://my-connect-cluster-connect-api:8083/connector-plugins
+----
+
+You should see something like this:
+
+[source,json,options="nowrap"]
+----
+[{"class":"org.apache.camel.kafkaconnector.CamelSinkConnector","type":"sink","version":"0.5.0"},{"class":"org.apache.camel.kafkaconnector.CamelSourceConnector","type":"source","version":"0.5.0"},{"class":"org.apache.camel.kafkaconnector.aws2s3.CamelAws2s3SinkConnector","type":"sink","version":"0.5.0"},{"class":"org.apache.camel.kafkaconnector.aws2s3.CamelAws2s3SourceConnector","type":"source","version":"0.5.0"},{"class":"org.apache.kafka.connect.file.FileStreamSinkConnector","type":"sink [...]
+----
+
+### Create connector instance
+
+Now we can create an instance of the AWS2 S3 source connector:
+
+[source,bash,options="nowrap"]
+----
+oc exec -i `oc get pods --field-selector status.phase=Running -l strimzi.io/name=my-connect-cluster-connect -o=jsonpath='{.items[0].metadata.name}'` -- curl -X POST \
+    -H "Accept:application/json" \
+    -H "Content-Type:application/json" \
+    http://my-connect-cluster-connect-api:8083/connectors -d @- <<'EOF'
+{
+  "name": "s3-source-connector",
+  "config": {
+    "connector.class": "org.apache.camel.kafkaconnector.aws2s3.CamelAws2s3SourceConnector",
+    "tasks.max": "1",
+    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
+    "value.converter": "org.apache.kafka.connect.storage.StringConverter",
+    "topics": "sqs-topic",
+    "camel.source.path.bucketNameOrArn": "camel-kafka-connector",
+    "camel.source.maxPollDuration": 10000,
+    "camel.source.endpoint.moveAfterRead": "true",
+    "camel.source.endpoint.destinationBucket: "camel-1",
+    "camel.component.aws2-s3.accessKey": "xxx",
+    "camel.component.aws2-s3.secretKey": "xxx",
+    "camel.component.aws2-s3.region": "xxx"
+  }
+}
+EOF
+----
+
+Alternatively, if you have enabled `use-connector-resources`, you can create the connector instance by creating a specific custom resource:
+
+[source,bash,options="nowrap"]
+----
+oc apply -f - << EOF
+apiVersion: kafka.strimzi.io/v1alpha1
+kind: KafkaConnector
+metadata:
+  name: s3-source-connector
+  namespace: myproject
+  labels:
+    strimzi.io/cluster: my-connect-cluster
+spec:
+  class: org.apache.camel.kafkaconnector.aws2s3.CamelAws2s3SourceConnector
+  tasksMax: 1
+  config:
+    key.converter: org.apache.kafka.connect.storage.StringConverter
+    value.converter: org.apache.kafka.connect.storage.StringConverter
+    topics: s3-topic
+    camel.source.path.bucketNameOrArn: camel-kafka-connector
+    camel.source.maxPollDuration: 10000
+    camel.source.endpoint.moveAfterRead: true
+    camel.source.endpoint.destinationBucket: camel-1
+    camel.component.aws2-s3.accessKey: xxxx
+    camel.component.aws2-s3.secretKey: yyyy
+    camel.component.aws2-s3.region: region
+EOF
+----
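+
+When using the custom resource approach, you can also inspect the resource Strimzi created for the connector (a convenience check, not a required step):
+
+[source,bash,options="nowrap"]
+----
+oc get kafkaconnector s3-source-connector -o yaml
+----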
+
+You can check the status of the connector using:
+
+[source,bash,options="nowrap"]
+----
+oc exec -i `oc get pods --field-selector status.phase=Running -l strimzi.io/name=my-connect-cluster-connect -o=jsonpath='{.items[0].metadata.name}'` -- curl -s http://my-connect-cluster-connect-api:8083/connectors/s3-source-connector/status
+----
+
+Now connect to your AWS Console and upload a file to the `camel-kafka-connector` bucket.
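+
+If you prefer the command line, a quick equivalent, assuming the AWS CLI is installed and configured and `mytestfile.txt` is a local file of your choosing:
+
+[source,bash,options="nowrap"]
+----
+aws s3 cp mytestfile.txt s3://camel-kafka-connector/
+----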
+
+### Check received messages
+
+You can run the Kafka console consumer to see the messages received on the topic:
+
+[source,bash,options="nowrap"]
+----
+oc exec -i -c kafka my-cluster-kafka-0 -- bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic s3-topic --from-beginning
+<content of file>
+<content of file>
+----
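+
+Since `moveAfterRead` is enabled, each consumed file should end up in the `camel-1` destination bucket; one way to confirm this, again assuming a configured AWS CLI:
+
+[source,bash,options="nowrap"]
+----
+aws s3 ls s3://camel-1/
+----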
+