Posted to commits@camel.apache.org by ac...@apache.org on 2020/09/24 07:00:03 UTC

[camel-kafka-connector-examples] 01/01: AWS2 Kinesis Firehose Sink Example: Added steps for Openshift

This is an automated email from the ASF dual-hosted git repository.

acosentino pushed a commit to branch kinesis-firehose-openshift
in repository https://gitbox.apache.org/repos/asf/camel-kafka-connector-examples.git

commit 66934b975ed0fb92ef7d81c9a8de4e8634164639
Author: Andrea Cosentino <an...@gmail.com>
AuthorDate: Thu Sep 24 08:59:15 2020 +0200

    AWS2 Kinesis Firehose Sink Example: Added steps for Openshift
---
 .../aws2-kinesis-firehose-sink/README.adoc         | 219 ++++++++++++++++++++-
 .../aws2-kinesis-firehose-cred.properties          |   3 +
 .../openshift/aws2-kinesis-firehose-sink.yaml      |  18 ++
 3 files changed, 235 insertions(+), 5 deletions(-)

diff --git a/aws2-kinesis-firehose/aws2-kinesis-firehose-sink/README.adoc b/aws2-kinesis-firehose/aws2-kinesis-firehose-sink/README.adoc
index e740378..ede7509 100644
--- a/aws2-kinesis-firehose/aws2-kinesis-firehose-sink/README.adoc
+++ b/aws2-kinesis-firehose/aws2-kinesis-firehose-sink/README.adoc
@@ -1,16 +1,16 @@
 # Camel-Kafka-connector AWS2 Kinesis Firehose Sink
 
-## Introduction
-
 This is an example of the Camel-Kafka-connector AWS2-Kinesis Firehose Sink.
 
-## What is needed
+## Standalone
+
+### What is needed
 
 - An AWS Kinesis Firehose delivery stream
 - An S3 bucket
 - Some work on the AWS console
 
-## Running Kafka
+### Running Kafka
 
 ```
 $KAFKA_HOME/bin/zookeeper-server-start.sh config/zookeeper.properties
@@ -18,7 +18,7 @@ $KAFKA_HOME/bin/kafka-server-start.sh config/server.properties
 $KAFKA_HOME/bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic mytopic
 ```
 
-## Setting up the needed bits and running the example
+### Setting up the needed bits and running the example
 
 You'll need to set up the plugin.path property in your Kafka installation.
 
@@ -76,3 +76,212 @@ Kafka to Kinesis Firehose message 2
 
 You should see an S3 object created every 60 seconds, containing the concatenated messages.
 
+## Openshift
+
+### What is needed
+
+- An AWS Kinesis Firehose delivery stream
+- An AWS S3 bucket
+- An Openshift instance
+
+### Running Kafka using Strimzi Operator
+
+First we install the Strimzi operator and use it to deploy the Kafka broker and Kafka Connect into our OpenShift project.
+We need to create security objects as part of the installation, so it is necessary to switch to an admin user.
+If you use Minishift, you can do it with the following command:
+
+[source,bash,options="nowrap"]
+----
+oc login -u system:admin
+----
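+
+You can verify which user you are currently logged in as with:
+
+[source,bash,options="nowrap"]
+----
+oc whoami
+----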
+
+We will use OpenShift project `myproject`.
+If it doesn't exist yet, you can create it using the following command:
+
+[source,bash,options="nowrap"]
+----
+oc new-project myproject
+----
+
+If the project already exists, you can switch to it with:
+
+[source,bash,options="nowrap"]
+----
+oc project myproject
+----
+
+We can now install the Strimzi operator into this project:
+
+[source,bash,options="nowrap",subs="attributes"]
+----
+oc apply -f https://github.com/strimzi/strimzi-kafka-operator/releases/download/0.19.0/strimzi-cluster-operator-0.19.0.yaml
+----
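+
+Before continuing, you can wait for the operator deployment to become available; a quick check, assuming the default deployment name:
+
+[source,bash,options="nowrap"]
+----
+# deployment name assumed to be the Strimzi default; adjust if yours differs
+oc rollout status deployment/strimzi-cluster-operator
+----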
+
+Next we will deploy a Kafka broker cluster and a Kafka Connect cluster, and then create a Kafka Connect image with the Camel Kafka connector installed:
+
+[source,bash,options="nowrap",subs="attributes"]
+----
+# Deploy a single node Kafka broker
+oc apply -f https://github.com/strimzi/strimzi-kafka-operator/raw/0.19.0/examples/kafka/kafka-persistent-single.yaml
+
+# Deploy a single instance of Kafka Connect with no plug-in installed
+oc apply -f https://github.com/strimzi/strimzi-kafka-operator/raw/0.19.0/examples/connect/kafka-connect-s2i-single-node-kafka.yaml
+----
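+
+The broker can take a while to start. One way to wait for the `Kafka` custom resource to become ready, assuming the cluster is named `my-cluster` as in the Strimzi example file:
+
+[source,bash,options="nowrap"]
+----
+# cluster name taken from the Strimzi example; adjust if you changed it
+oc wait kafka/my-cluster --for=condition=Ready --timeout=300s
+----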
+
+Optionally, enable the ability to instantiate Kafka connectors through a specific custom resource:
+
+[source,bash,options="nowrap"]
+----
+oc annotate kafkaconnects2is my-connect-cluster strimzi.io/use-connector-resources=true
+----
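+
+To confirm the annotation was applied, you can inspect the resource:
+
+[source,bash,options="nowrap"]
+----
+oc get kafkaconnects2i my-connect-cluster -o yaml | grep use-connector-resources
+----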
+
+### Add Camel Kafka connector binaries
+
+Strimzi uses `Source2Image` builds to allow users to add their own connectors to the existing Strimzi Docker images.
+We now need to build the connectors and add them to the image.
+If you have built the whole project (`mvn clean package`), decompress the connectors you need into a folder (e.g. `my-connectors/`) so that each one is in its own subfolder.
+Alternatively, you can download the latest officially released and packaged connector from Maven, which is what we do here:
+
+```
+> cd my-connectors/
+> wget https://repo1.maven.org/maven2/org/apache/camel/kafkaconnector/camel-aws2-kinesis-firehose-kafka-connector/0.5.0/camel-aws2-kinesis-firehose-kafka-connector-0.5.0-package.zip
+> unzip camel-aws2-kinesis-firehose-kafka-connector-0.5.0-package.zip
+```
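+
+After extracting, you can remove the archive so that only the connector directory is sent to the build; a sketch of the expected layout (the exact directory name may differ by version):
+
+```
+> rm camel-aws2-kinesis-firehose-kafka-connector-0.5.0-package.zip
+> ls
+camel-aws2-kinesis-firehose-kafka-connector
+```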
+
+Now we can start the build:
+
+[source,bash,options="nowrap"]
+----
+oc start-build my-connect-cluster-connect --from-dir=./my-connectors/ --follow
+----
+
+We should now wait for the rollout of the new image to finish and for the replica set with the new connector to become ready.
+Once it is done, we can check that the connectors are available in our Kafka Connect cluster.
+Strimzi runs Kafka Connect in distributed mode.
+
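+One way to wait for the rollout, assuming the default `DeploymentConfig` name created for the Kafka Connect S2I cluster:
+
+[source,bash,options="nowrap"]
+----
+# DeploymentConfig name assumed; adjust if your cluster uses a different name
+oc rollout status dc/my-connect-cluster-connect
+----
+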
+To check the available connector plugins, you can run the following command:
+
+[source,bash,options="nowrap"]
+----
+oc exec -i `oc get pods --field-selector status.phase=Running -l strimzi.io/name=my-connect-cluster-connect -o=jsonpath='{.items[0].metadata.name}'` -- curl -s http://my-connect-cluster-connect-api:8083/connector-plugins
+----
+
+You should see something like this:
+
+[source,json,options="nowrap"]
+----
+[{"class":"org.apache.camel.kafkaconnector.CamelSinkConnector","type":"sink","version":"0.5.0"},{"class":"org.apache.camel.kafkaconnector.CamelSourceConnector","type":"source","version":"0.5.0"},{"class":"org.apache.camel.kafkaconnector.aws2kinesisfirehose.CamelAws2kinesisfirehoseSinkConnector","type":"sink","version":"0.5.0"},{"class":"org.apache.kafka.connect.file.FileStreamSinkConnector","type":"sink","version":"2.5.0"},{"class":"org.apache.kafka.connect.file.FileStreamSourceConnector [...]
+----
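+
+The raw output is hard to read; if you have `jq` installed locally, you can pretty-print it by piping the same command through it:
+
+[source,bash,options="nowrap"]
+----
+oc exec -i `oc get pods --field-selector status.phase=Running -l strimzi.io/name=my-connect-cluster-connect -o=jsonpath='{.items[0].metadata.name}'` -- curl -s http://my-connect-cluster-connect-api:8083/connector-plugins | jq .
+----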
+
+### Set the AWS credential as secret (optional)
+
+You can also provide the AWS credentials as a secret. To do so, edit the file config/openshift/aws2-kinesis-firehose-cred.properties with the correct credentials and then execute the following command:
+
+[source,bash,options="nowrap"]
+----
+oc create secret generic aws2-kinesis-firehose --from-file=config/openshift/aws2-kinesis-firehose-cred.properties
+----
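+
+You can verify that the secret was created (the values themselves stay hidden):
+
+[source,bash,options="nowrap"]
+----
+oc describe secret aws2-kinesis-firehose
+----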
+
+Now we need to edit the `KafkaConnectS2I` custom resource to reference the secret. For example:
+
+[source,yaml,options="nowrap"]
+----
+spec:
+  # ...
+  config:
+    config.providers: file
+    config.providers.file.class: org.apache.kafka.common.config.provider.FileConfigProvider
+  #...
+  externalConfiguration:
+    volumes:
+      - name: aws-credentials
+        secret:
+          secretName: aws2-kinesis-firehose
+----
+
+In this way the secret `aws2-kinesis-firehose` will be mounted as a volume with the path /opt/kafka/external-configuration/aws-credentials/.
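+
+One way to make this change is to edit the resource in place:
+
+[source,bash,options="nowrap"]
+----
+oc edit kafkaconnects2i my-connect-cluster
+----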
+
+### Create connector instance
+
+Now we can create an instance of the AWS2 Kinesis Firehose sink connector:
+
+[source,bash,options="nowrap"]
+----
+oc exec -i `oc get pods --field-selector status.phase=Running -l strimzi.io/name=my-connect-cluster-connect -o=jsonpath='{.items[0].metadata.name}'` -- curl -X POST \
+    -H "Accept:application/json" \
+    -H "Content-Type:application/json" \
+    http://my-connect-cluster-connect-api:8083/connectors -d @- <<'EOF'
+{
+  "name": "kinesis-firehose-sink-connector",
+  "config": {
+    "connector.class": "org.apache.camel.kafkaconnector.aws2kinesisfirehose.CamelAws2kinesisfirehoseSinkConnector",
+    "tasks.max": "1",
+    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
+    "value.converter": "org.apache.kafka.connect.storage.StringConverter",
+    "topics": "kinesis-firehose-topic",
+    "camel.sink.path.streamName": "firehose-stream",
+    "camel.component.aws2-kinesis-firehose.accessKey": "xxx",
+    "camel.component.aws2-kinesis-firehose.secretKey": "xxx",
+    "camel.component.aws2-kinesis-firehose.region": "xxx"
+  }
+}
+EOF
+----
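+
+You can verify that the connector was registered by listing the connectors on the REST API:
+
+[source,bash,options="nowrap"]
+----
+oc exec -i `oc get pods --field-selector status.phase=Running -l strimzi.io/name=my-connect-cluster-connect -o=jsonpath='{.items[0].metadata.name}'` -- curl -s http://my-connect-cluster-connect-api:8083/connectors
+----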
+
+Alternatively, if you have enabled `use-connector-resources`, you can create the connector instance by creating a specific custom resource:
+
+[source,bash,options="nowrap"]
+----
+oc apply -f - << EOF
+apiVersion: kafka.strimzi.io/v1alpha1
+kind: KafkaConnector
+metadata:
+  name: kinesis-firehose-sink-connector
+  namespace: myproject
+  labels:
+    strimzi.io/cluster: my-connect-cluster
+spec:
+  class: org.apache.camel.kafkaconnector.aws2kinesisfirehose.CamelAws2kinesisfirehoseSinkConnector
+  tasksMax: 1
+  config:
+    key.converter: org.apache.kafka.connect.storage.StringConverter
+    value.converter: org.apache.kafka.connect.storage.StringConverter
+    topics: kinesis-firehose-topic
+    camel.sink.path.streamName: firehose-stream
+    camel.component.aws2-kinesis-firehose.accessKey: xxxx
+    camel.component.aws2-kinesis-firehose.secretKey: yyyy
+    camel.component.aws2-kinesis-firehose.region: region
+EOF
+----
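+
+You can check that the custom resource was created and picked up by the operator:
+
+[source,bash,options="nowrap"]
+----
+oc get kafkaconnectors -n myproject
+----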
+
+If you followed the optional step for secret credentials, you can run the following command instead:
+
+[source,bash,options="nowrap"]
+----
+oc apply -f config/openshift/aws2-kinesis-firehose-sink.yaml
+----
+
+You can check the status of the connector with:
+
+[source,bash,options="nowrap"]
+----
+oc exec -i `oc get pods --field-selector status.phase=Running -l strimzi.io/name=my-connect-cluster-connect -o=jsonpath='{.items[0].metadata.name}'` -- curl -s http://my-connect-cluster-connect-api:8083/connectors/kinesis-firehose-sink-connector/status
+----
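+
+If you have `jq` installed locally, you can extract just the connector state, which should be `RUNNING`:
+
+[source,bash,options="nowrap"]
+----
+oc exec -i `oc get pods --field-selector status.phase=Running -l strimzi.io/name=my-connect-cluster-connect -o=jsonpath='{.items[0].metadata.name}'` -- curl -s http://my-connect-cluster-connect-api:8083/connectors/kinesis-firehose-sink-connector/status | jq '.connector.state'
+----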
+
+Just connect to your AWS Console and check the content of the camel-kafka-connector bucket.
+
+In a different terminal, run the Kafka producer and send messages to your Kafka broker.
+
+```
+oc exec -i -c kafka my-cluster-kafka-0 -- bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic kinesis-firehose-topic
+Kafka to S3 message 1
+Kafka to S3 message 2
+Kafka to S3 message 3
+Kafka to S3 message 4
+Kafka to S3 message 5
+```
+
+You should see an S3 object created every 60 seconds, containing the concatenated messages.
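+
+If you have the AWS CLI configured locally, one way to inspect the delivered objects, assuming the delivery stream targets the camel-kafka-connector bucket mentioned above:
+
+[source,bash,options="nowrap"]
+----
+# bucket name assumed; use the bucket configured for your delivery stream
+aws s3 ls s3://camel-kafka-connector/ --recursive
+----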
+
diff --git a/aws2-kinesis-firehose/aws2-kinesis-firehose-sink/config/openshift/aws2-kinesis-firehose-cred.properties b/aws2-kinesis-firehose/aws2-kinesis-firehose-sink/config/openshift/aws2-kinesis-firehose-cred.properties
new file mode 100644
index 0000000..d1596a1
--- /dev/null
+++ b/aws2-kinesis-firehose/aws2-kinesis-firehose-sink/config/openshift/aws2-kinesis-firehose-cred.properties
@@ -0,0 +1,3 @@
+accessKey=xxxx
+secretKey=yyyy
+region=region
diff --git a/aws2-kinesis-firehose/aws2-kinesis-firehose-sink/config/openshift/aws2-kinesis-firehose-sink.yaml b/aws2-kinesis-firehose/aws2-kinesis-firehose-sink/config/openshift/aws2-kinesis-firehose-sink.yaml
new file mode 100644
index 0000000..e45c682
--- /dev/null
+++ b/aws2-kinesis-firehose/aws2-kinesis-firehose-sink/config/openshift/aws2-kinesis-firehose-sink.yaml
@@ -0,0 +1,18 @@
+apiVersion: kafka.strimzi.io/v1alpha1
+kind: KafkaConnector
+metadata:
+  name: kinesis-firehose-sink-connector
+  namespace: myproject
+  labels:
+    strimzi.io/cluster: my-connect-cluster
+spec:
+  class: org.apache.camel.kafkaconnector.aws2kinesisfirehose.CamelAws2kinesisfirehoseSinkConnector
+  tasksMax: 1
+  config:
+    key.converter: org.apache.kafka.connect.storage.StringConverter
+    value.converter: org.apache.kafka.connect.storage.StringConverter
+    topics: kinesis-firehose-topic
+    camel.sink.path.streamName: firehose-stream
+    camel.component.aws2-kinesis-firehose.accessKey: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-kinesis-firehose-cred.properties:accessKey}
+    camel.component.aws2-kinesis-firehose.secretKey: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-kinesis-firehose-cred.properties:secretKey}
+    camel.component.aws2-kinesis-firehose.region: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-kinesis-firehose-cred.properties:region}