Posted to commits@camel.apache.org by ac...@apache.org on 2020/08/03 09:59:02 UTC

[camel-kafka-connector-examples] branch master updated: Added an AWS2-S3 sink connector with aggregation example

This is an automated email from the ASF dual-hosted git repository.

acosentino pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/camel-kafka-connector-examples.git


The following commit(s) were added to refs/heads/master by this push:
     new 650fc1d  Added an AWS2-S3 sink connector with aggregation example
650fc1d is described below

commit 650fc1d3cbe882f09777d75898984d9347c5d236
Author: Andrea Cosentino <an...@gmail.com>
AuthorDate: Mon Aug 3 11:58:21 2020 +0200

    Added an AWS2-S3 sink connector with aggregation example
---
 aws2-s3/aws2-s3-sink-with-aggregation/README.adoc  | 90 ++++++++++++++++++++++
 .../config/CamelAWS2S3SinkConnector.properties     | 35 +++++++++
 2 files changed, 125 insertions(+)

diff --git a/aws2-s3/aws2-s3-sink-with-aggregation/README.adoc b/aws2-s3/aws2-s3-sink-with-aggregation/README.adoc
new file mode 100644
index 0000000..ef9ea8e
--- /dev/null
+++ b/aws2-s3/aws2-s3-sink-with-aggregation/README.adoc
@@ -0,0 +1,90 @@
+# Camel-Kafka-connector AWS2 S3 Sink with Aggregation
+
+## Introduction
+
+This is an example of the Camel-Kafka-connector AWS2-S3 sink connector with aggregation: multiple Kafka records are combined into a single S3 object before being uploaded.
+
+## What is needed
+
+- An AWS S3 bucket (a possible way to create one from the AWS CLI is sketched below)
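+
+If you don't have a bucket yet, one possible way to create it is with the AWS CLI (assuming it is installed and configured; the bucket name and region below are simply the ones used later in this example):
+
+```
+aws s3 mb s3://camel-kafka-connector --region eu-west-1
+```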
+
+## Running Kafka
+
+Start Zookeeper and the Kafka broker (each in its own terminal), then create the topic:
+
+```
+$KAFKA_HOME/bin/zookeeper-server-start.sh $KAFKA_HOME/config/zookeeper.properties
+$KAFKA_HOME/bin/kafka-server-start.sh $KAFKA_HOME/config/server.properties
+$KAFKA_HOME/bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic mytopic
+```
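+
+Optionally, you can verify that the topic exists (this is just a sanity check, not required by the example):
+
+```
+$KAFKA_HOME/bin/kafka-topics.sh --describe --bootstrap-server localhost:9092 --topic mytopic
+```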
+
+## Setting up the needed bits and running the example
+
+You'll need to set up the `plugin.path` property in your Kafka installation.
+
+Open `$KAFKA_HOME/config/connect-standalone.properties`
+
+and set the `plugin.path` property to your chosen location.
+
+In this example we'll use `/home/oscerd/connectors/`.
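+
+With that directory, the relevant line in `connect-standalone.properties` would look like this (the path is simply the one chosen for this example):
+
+```
+plugin.path=/home/oscerd/connectors/
+```
+
+Then download and unzip the connector package into that directory: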
+
+```
+> cd /home/oscerd/connectors/
+> wget https://repo1.maven.org/maven2/org/apache/camel/kafkaconnector/camel-aws2-s3-kafka-connector/0.4.0/camel-aws2-s3-kafka-connector-0.4.0-package.zip
+> unzip camel-aws2-s3-kafka-connector-0.4.0-package.zip
+```
+
+Now it's time to set up the connector.
+
+Open the AWS2 S3 configuration file, `config/CamelAWS2S3SinkConnector.properties`
+
+```
+name=CamelAWS2S3SinkConnector
+connector.class=org.apache.camel.kafkaconnector.aws2s3.CamelAws2s3SinkConnector
+key.converter=org.apache.kafka.connect.storage.StringConverter
+value.converter=org.apache.kafka.connect.storage.StringConverter
+
+topics=mytopic
+
+camel.sink.path.bucketNameOrArn=camel-kafka-connector
+
+camel.component.aws2-s3.access-key=xxxx
+camel.component.aws2-s3.secret-key=yyyy
+camel.component.aws2-s3.region=eu-west-1
+
+camel.sink.endpoint.keyName=${date:now:yyyyMMdd-HHmmssSSS}-${exchangeId}
+
+camel.beans.aggregate=#class:org.apache.camel.kafkaconnector.aggregator.StringAggregator
+camel.beans.aggregation.size=10
+camel.beans.aggregation.timeout=5000
+```
+
+and add the correct credentials for AWS.
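+
+The interesting part of this configuration is the last block: the `StringAggregator` bean referenced by `camel.beans.aggregate` combines the bodies of the incoming Kafka records into one S3 object, while `camel.beans.aggregation.size` and `camel.beans.aggregation.timeout` control how many records are batched together and how long to wait before completing the batch. As a rough, hypothetical sketch (not the shipped implementation) of what a body-concatenating Camel aggregation strategy looks like:
+
+```
+// Hypothetical sketch only: the real StringAggregator ships with the
+// connector package; this just illustrates the idea of concatenating bodies.
+import org.apache.camel.AggregationStrategy;
+import org.apache.camel.Exchange;
+
+public class BodyConcatAggregator implements AggregationStrategy {
+
+    @Override
+    public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
+        // First record of the batch: nothing to merge yet.
+        if (oldExchange == null) {
+            return newExchange;
+        }
+        String oldBody = oldExchange.getMessage().getBody(String.class);
+        String newBody = newExchange.getMessage().getBody(String.class);
+        // Append the new record body on its own line, matching the sample output below.
+        oldExchange.getMessage().setBody(oldBody + System.lineSeparator() + newBody);
+        return oldExchange;
+    }
+}
+```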
+
+Now you can run the example
+
+```
+$KAFKA_HOME/bin/connect-standalone.sh $KAFKA_HOME/config/connect-standalone.properties config/CamelAWS2S3SinkConnector.properties
+```
+
+Just connect to your AWS Console and check the content of the `camel-kafka-connector` bucket.
+
+In a different terminal, run the Kafka console producer and send messages to your Kafka broker:
+
+```
+$KAFKA_HOME/bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic mytopic
+Kafka to S3 message 1
+Kafka to S3 message 2
+Kafka to S3 message 3
+Kafka to S3 message 4
+Kafka to S3 message 5
+```
+
+After the aggregation timeout has been reached, you should see in the bucket a file named with the date-exchangeId pattern configured above, containing the following content:
+
+```
+Kafka to S3 message 1
+Kafka to S3 message 2
+Kafka to S3 message 3
+Kafka to S3 message 4
+Kafka to S3 message 5
+```
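+
+If you prefer the command line over the console, one way to list the bucket content (assuming the AWS CLI is installed and configured with the same credentials) is:
+
+```
+aws s3 ls s3://camel-kafka-connector
+```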
+
diff --git a/aws2-s3/aws2-s3-sink-with-aggregation/config/CamelAWS2S3SinkConnector.properties b/aws2-s3/aws2-s3-sink-with-aggregation/config/CamelAWS2S3SinkConnector.properties
new file mode 100644
index 0000000..35ab39c
--- /dev/null
+++ b/aws2-s3/aws2-s3-sink-with-aggregation/config/CamelAWS2S3SinkConnector.properties
@@ -0,0 +1,35 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+name=CamelAWS2S3SinkConnector
+connector.class=org.apache.camel.kafkaconnector.aws2s3.CamelAws2s3SinkConnector
+key.converter=org.apache.kafka.connect.storage.StringConverter
+value.converter=org.apache.kafka.connect.storage.StringConverter
+
+topics=mytopic
+
+camel.sink.path.bucketNameOrArn=camel-kafka-connector
+
+camel.component.aws2-s3.access-key=xxxx
+camel.component.aws2-s3.secret-key=yyyy
+camel.component.aws2-s3.region=eu-west-1
+
+camel.sink.endpoint.keyName=${date:now:yyyyMMdd-HHmmssSSS}-${exchangeId}
+
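+# Aggregation: combine incoming Kafka records into a single S3 object;
+# size and timeout control when the batch is considered complete.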
+camel.beans.aggregate=#class:org.apache.camel.kafkaconnector.aggregator.StringAggregator
+camel.beans.aggregation.size=10
+camel.beans.aggregation.timeout=5000