Posted to commits@camel.apache.org by ac...@apache.org on 2020/07/26 10:48:31 UTC

[camel-kafka-connector] branch master updated: Document how to run camel-kafka-connector on Kubernetes.

This is an automated email from the ASF dual-hosted git repository.

acosentino pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/camel-kafka-connector.git


The following commit(s) were added to refs/heads/master by this push:
     new 26c9c33  Document how to run camel-kafka-connector on Kubernetes.
     new 6e653b7  Merge pull request #335 from fvaleri/try-on-k8s
26c9c33 is described below

commit 26c9c339105654b48df01268642efad4bc1675e9
Author: Federico Valeri <fv...@localhost>
AuthorDate: Sun Jul 26 00:45:26 2020 +0200

    Document how to run camel-kafka-connector on Kubernetes.
    
    This also addresses #96.
---
 docs/modules/ROOT/pages/index.adoc                 |   3 +-
 .../ROOT/pages/try-it-out-on-kubernetes.adoc       | 147 +++++++++++++++++++++
 2 files changed, 149 insertions(+), 1 deletion(-)

diff --git a/docs/modules/ROOT/pages/index.adoc b/docs/modules/ROOT/pages/index.adoc
index 6fb91d9..3b84a7e 100644
--- a/docs/modules/ROOT/pages/index.adoc
+++ b/docs/modules/ROOT/pages/index.adoc
@@ -7,7 +7,8 @@
 ** xref:archetypes.adoc[Archetype]
 * xref:getting-started.adoc[Getting started]
 ** xref:try-it-out-locally.adoc[Try it locally]
-** xref:try-it-out-on-openshift-with-strimzi.adoc[Try it on OpenShift cluster]
+** xref:try-it-out-on-kubernetes.adoc[Try it on Kubernetes]
+** xref:try-it-out-on-openshift-with-strimzi.adoc[Try it on OpenShift]
 ** xref:getting-started-with-packages.adoc[Packages documentation]
 * xref:connectors.adoc[Connectors list]
 * xref:contributing.adoc[Contributing]
diff --git a/docs/modules/ROOT/pages/try-it-out-on-kubernetes.adoc b/docs/modules/ROOT/pages/try-it-out-on-kubernetes.adoc
new file mode 100644
index 0000000..f03f86c
--- /dev/null
+++ b/docs/modules/ROOT/pages/try-it-out-on-kubernetes.adoc
@@ -0,0 +1,147 @@
+[[Tryitoutk8s-Tryitoutk8s]]
+= Try it out on Kubernetes
+
+You can use CamelKafkaConnector on plain Kubernetes with the https://strimzi.io[Strimzi project],
+which provides a set of operators and container images to easily run Kafka on Kubernetes. This
+procedure assumes that you have `cluster-admin` access to a Kubernetes cluster (e.g. Minikube)
+with Internet access and an external registry for pushing images (e.g. quay.io).
+
+[[Tryitoutk8s-DeployKafka]]
+== Deploy Kafka and Kafka Connect
+
+First, we create a new namespace and deploy a single-node Kafka cluster:
+
+[source,bash,options="nowrap"]
+----
+NAMESPACE="demo"
+STRIMZI_VER="0.18.0"
+
+# create a new namespace
+kubectl create namespace $NAMESPACE && kubectl config set-context --current --namespace=$NAMESPACE
+
+# deploy Strimzi operator
+curl -L https://github.com/strimzi/strimzi-kafka-operator/releases/download/$STRIMZI_VER/strimzi-cluster-operator-$STRIMZI_VER.yaml \
+    | sed "s/namespace: .*/namespace: $NAMESPACE/g" | kubectl apply -f -
+
+# deploy Kafka cluster
+kubectl apply -f https://raw.githubusercontent.com/strimzi/strimzi-kafka-operator/release-${STRIMZI_VER/%.0/.x}/examples/kafka/kafka-persistent-single.yaml
+----
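The release URL above relies on a bash suffix substitution, `${STRIMZI_VER/%.0/.x}`, to turn the operator version into the name of its release branch. A minimal sketch of that expansion, runnable without a cluster:

```shell
# Bash ${var/%pattern/replacement} replaces a match anchored at the
# end of the value: the trailing ".0" of the version becomes ".x",
# which yields the branch name used in the strimzi-kafka-operator repo.
STRIMZI_VER="0.18.0"
BRANCH="release-${STRIMZI_VER/%.0/.x}"
echo "$BRANCH"   # prints: release-0.18.x
```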
+
+Next, we build a custom KafkaConnect image to include all needed connectors (use your own registry here):
+
+[source,bash,options="nowrap"]
+----
+CKC_VERSION="0.3.0"
+STRIMZI_IMG="strimzi/kafka:latest-kafka-2.5.0"
+REGISTRY_URL="quay.io"
+REGISTRY_USR="fvaleri"
+
+TMP="/tmp/my-connect"
+BASEURL="https://repo1.maven.org/maven2/org/apache/camel/kafkaconnector"
+PLUGINS=(
+    "$BASEURL/camel-file-kafka-connector/$CKC_VERSION/camel-file-kafka-connector-$CKC_VERSION-package.zip"
+    "$BASEURL/camel-sjms2-kafka-connector/$CKC_VERSION/camel-sjms2-kafka-connector-$CKC_VERSION-package.zip"
+)
+
+# download connect plugins
+rm -rf $TMP && mkdir -p $TMP/plugins
+for url in "${PLUGINS[@]}"; do
+    curl -sL $url -o $TMP/plugins/file.zip && unzip -qq $TMP/plugins/file.zip -d $TMP/plugins
+    rm -f $TMP/plugins/file.zip
+done
+
+# build and push the custom image
+echo -e "FROM $STRIMZI_IMG\nCOPY ./plugins/ /opt/kafka/plugins/\nUSER 1001" > $TMP/Dockerfile
+sudo podman build --layers=false -t $REGISTRY_USR/my-connect:1.0.0 -f $TMP/Dockerfile
+sudo podman login -u $REGISTRY_USR $REGISTRY_URL
+sudo podman push localhost/$REGISTRY_USR/my-connect:1.0.0 $REGISTRY_URL/$REGISTRY_USR/my-connect:1.0.0
+sudo podman push localhost/$REGISTRY_USR/my-connect:1.0.0 $REGISTRY_URL/$REGISTRY_USR/my-connect:latest
+----
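The `echo -e` one-liner above generates a three-line Dockerfile: base image, plugin copy, and a non-root user. As a sanity check before building, you can regenerate it locally and inspect the result (a sketch reusing the `$TMP` and `$STRIMZI_IMG` values from above; assumes bash, where `echo -e` expands `\n`):

```shell
# Assumed values from the build step above.
STRIMZI_IMG="strimzi/kafka:latest-kafka-2.5.0"
TMP="/tmp/my-connect"
mkdir -p $TMP

# Same generation step as the build: base image, plugin copy, non-root user.
echo -e "FROM $STRIMZI_IMG\nCOPY ./plugins/ /opt/kafka/plugins/\nUSER 1001" > $TMP/Dockerfile

# Prints the three generated lines.
cat $TMP/Dockerfile
```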
+
+Finally, we deploy the KafkaConnect cluster using our custom image:
+
+[source,bash,options="nowrap"]
+----
+cat <<'EOF' > $TMP/my-connect.yaml
+apiVersion: kafka.strimzi.io/v1beta1
+kind: KafkaConnect
+metadata:
+  name: my-connect
+  annotations:
+    # enable connect operator
+    strimzi.io/use-connector-resources: "true"
+spec:
+  replicas: 1
+  version: 2.5.0
+  image: my/custom/image
+  bootstrapServers: my-cluster-kafka-bootstrap:9092
+  resources:
+    requests:
+      memory: "1Gi"
+    limits:
+      memory: "1Gi"
+  jvmOptions:
+    gcLoggingEnabled: false
+  config:
+    group.id: my-connect
+    key.converter: org.apache.kafka.connect.storage.StringConverter
+    value.converter: org.apache.kafka.connect.storage.StringConverter
+    offset.storage.topic: my-connect-offsets
+    config.storage.topic: my-connect-configs
+    status.storage.topic: my-connect-status
+    # single node setup
+    config.storage.replication.factor: 1
+    offset.storage.replication.factor: 1
+    status.storage.replication.factor: 1
+EOF
+
+sed "s/image: .*/image: $REGISTRY_URL\/$REGISTRY_USR\/my-connect/g" $TMP/my-connect.yaml \
+    | kubectl apply -f -
+----
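The `sed` pipeline above swaps the `image: my/custom/image` placeholder for the image you pushed before the manifest reaches `kubectl apply`. The substitution can be verified offline against a trimmed manifest (a sketch; the registry values are the same example values used above):

```shell
REGISTRY_URL="quay.io"
REGISTRY_USR="fvaleri"

# Trimmed-down manifest carrying the same placeholder as the full CR.
cat <<'EOF' > /tmp/my-connect-snippet.yaml
spec:
  replicas: 1
  image: my/custom/image
EOF

# Same substitution as the deploy step; the escaped slashes keep the
# replacement from terminating sed's s/// expression early.
sed "s/image: .*/image: $REGISTRY_URL\/$REGISTRY_USR\/my-connect/g" /tmp/my-connect-snippet.yaml
```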
+
+[[Tryitoutk8s-CreateConnectorInstances]]
+== Create connector instance
+
+As soon as the infrastructure is running, we can create an instance of a connector plugin:
+
+[source,bash,options="nowrap"]
+----
+kubectl apply -f - <<'EOF'
+kind: KafkaConnector
+apiVersion: kafka.strimzi.io/v1alpha1
+metadata:
+  name: file-sink
+  labels:
+    # must match connect cluster name
+    strimzi.io/cluster: my-connect
+spec:
+  tasksMax: 1
+  class: org.apache.camel.kafkaconnector.file.CamelFileSinkConnector
+  config:
+    topics: my-topic
+    camel.sink.url: file:/tmp/?fileName=test.txt&fileExist=Append
+EOF
+----
+
+You can check the status of the connector instance using:
+
+[source,bash,options="nowrap"]
+----
+kubectl describe kafkaconnector file-sink
+----
+
+[[Tryitoutk8s-CheckMessages]]
+== Check received messages
+
+To test the connector instance, we can send a message to the topic and check that it is written to the file:
+
+[source,bash,options="nowrap"]
+----
+# send a message to Kafka
+echo "Hello CamelKafkaConnector" | kubectl exec -i my-cluster-kafka-0 -c kafka -- \
+    bin/kafka-console-producer.sh --broker-list localhost:9092 --topic my-topic
+
+# read the message from file
+POD_NAME=$(kubectl get pods | grep my-connect | grep Running | cut -d " " -f1) && \
+    kubectl exec -i $POD_NAME -- cat /tmp/test.txt
+----