Posted to github@beam.apache.org by GitBox <gi...@apache.org> on 2020/12/08 17:46:28 UTC

[GitHub] [beam] TheNeuralBit commented on a change in pull request #13112: [BEAM-11065] Apache Beam pipeline example to ingest from Apache Kafka to Google Pub/Sub

TheNeuralBit commented on a change in pull request #13112:
URL: https://github.com/apache/beam/pull/13112#discussion_r538648561



##########
File path: examples/kafka-to-pubsub/README.md
##########
@@ -0,0 +1,163 @@
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+-->
+
+# Apache Beam pipeline example to ingest data from Apache Kafka to Google Cloud Pub/Sub
+
+This directory contains an [Apache Beam](https://beam.apache.org/) pipeline example
+that reads data from one or multiple topics in
+[Apache Kafka](https://kafka.apache.org/) and writes it to a single topic
+in [Google Cloud Pub/Sub](https://cloud.google.com/pubsub).
+
+Supported data formats:
+- Serializable plaintext formats, such as JSON
+- [PubsubMessage](https://cloud.google.com/pubsub/docs/reference/rest/v1/PubsubMessage).
+
+Supported input source configurations:
+- Single or multiple Apache Kafka bootstrap servers
+- Apache Kafka SASL/SCRAM authentication over a plaintext or SSL connection
+- Secrets vault service [HashiCorp Vault](https://www.vaultproject.io/).
+
+Supported destination configuration:
+- Single Google Cloud Pub/Sub topic.
+
+In a simple scenario, the example creates an Apache Beam pipeline that reads messages from a source Kafka server and topic, and streams the text messages into the specified Pub/Sub destination topic. Other scenarios may need Kafka SASL/SCRAM authentication, which can be performed over a plaintext or SSL-encrypted connection. The example supports using a single Kafka user account to authenticate against the provided source Kafka servers and topics. To support SASL authentication over SSL, the example needs an SSL certificate location and access to a secrets vault service that stores the Kafka username and password; HashiCorp Vault is currently supported.
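+
+For orientation, below is a minimal sketch of what such a Kafka-to-Pub/Sub pipeline can look like with Beam's `KafkaIO` and `PubsubIO`. The server, topics, and class name are placeholders, and the actual example adds format handling and authentication on top of this:
+
+```java
+import org.apache.beam.sdk.Pipeline;
+import org.apache.beam.sdk.io.gcp.pubsub.PubsubIO;
+import org.apache.beam.sdk.io.kafka.KafkaIO;
+import org.apache.beam.sdk.transforms.Values;
+import org.apache.kafka.common.serialization.StringDeserializer;
+
+public class KafkaToPubsubSketch {
+  public static void main(String[] args) {
+    Pipeline pipeline = Pipeline.create();
+    pipeline
+        // Read key/value records from the source Kafka topic.
+        .apply(KafkaIO.<String, String>read()
+            .withBootstrapServers("host:port")              // placeholder server
+            .withTopic("your-input-topic")                  // placeholder topic
+            .withKeyDeserializer(StringDeserializer.class)
+            .withValueDeserializer(StringDeserializer.class)
+            .withoutMetadata())                             // emit KV<String, String> pairs
+        // Keep only the record values (the message payloads).
+        .apply(Values.create())
+        // Stream the payloads into the destination Pub/Sub topic.
+        .apply(PubsubIO.writeStrings()
+            .to("projects/your-project-id/topics/your-topic-name"));
+    pipeline.run();
+  }
+}
+```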
+
+## Requirements
+
+- Java 8
+- Kafka Bootstrap Server(s) up and running
+- Existing source Kafka topic(s)
+- An existing Pub/Sub destination topic
+- (Optional) An existing HashiCorp Vault
+- (Optional) A configured secure SSL connection for Kafka
+
+## Getting Started
+
+This section describes what is needed to get the example up and running:
+- Assembling the Uber-JAR
+- Local execution
+- Google Dataflow Template
+  - Set up the environment
+  - Creating the Dataflow Flex Template
+  - Create a Dataflow job to ingest data using the template
+- Transferring data in Avro format
+- E2E tests (TBD)
+
+## Assembling the Uber-JAR
+
+To run this example, the Java project must first be built into
+an Uber JAR file.
+
+Navigate to the Beam folder:
+
+```
+cd /path/to/beam
+```
+
+The [Shadow plugin](https://github.com/johnrengelman/shadow) is used to create
+the Uber JAR with Gradle. It provides the `shadowJar` task that builds the Uber JAR:
+
+```
+./gradlew -p examples/kafka-to-pubsub clean shadowJar
+```
+
+ℹ️ An **Uber JAR** - also known as a **fat JAR** - is a single JAR file that contains
+both the target package *and* all of its dependencies.
+
+The result of the `shadowJar` task is a `.jar` file generated
+under the `build/libs/` folder in the `kafka-to-pubsub` directory.
+
+## Local execution
+To execute this pipeline locally, specify the parameters:
+- Kafka Bootstrap servers
+- Kafka input topics
+- Pub/Sub output topic
+- Output format
+
+in the following format:
+```bash
+--bootstrapServers=host:port \
+--inputTopics=your-input-topic \
+--outputTopic=projects/your-project-id/topics/your-topic-name \
+--outputFormat=AVRO|PUBSUB
+```
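+These flags are turned into a typed options object by Beam's `PipelineOptionsFactory`. A hypothetical options interface matching the flags above could look like this (the example's actual interface may differ):
+```java
+import org.apache.beam.sdk.options.Description;
+import org.apache.beam.sdk.options.PipelineOptions;
+
+// Hypothetical options interface; each getter/setter pair backs one flag.
+public interface KafkaToPubsubOptions extends PipelineOptions {
+  @Description("Comma-separated list of Kafka bootstrap servers, e.g. host:port")
+  String getBootstrapServers();
+  void setBootstrapServers(String value);
+
+  @Description("Comma-separated list of Kafka topics to read from")
+  String getInputTopics();
+  void setInputTopics(String value);
+
+  @Description("Pub/Sub topic to write to, e.g. projects/<project>/topics/<topic>")
+  String getOutputTopic();
+  void setOutputTopic(String value);
+}
+```
+In `main()`, such an interface is typically bound with
+`PipelineOptionsFactory.fromArgs(args).withValidation().as(KafkaToPubsubOptions.class)`.
+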
+Optionally, to retrieve Kafka credentials for SASL/SCRAM,
+specify a URL to the credentials in HashiCorp Vault and the vault access token:
+```bash
+--secretStoreUrl=http(s)://host:port/path/to/credentials
+--vaultToken=your-token
+```
+Optionally, to configure a secure SSL connection between the Beam pipeline and Kafka,
+specify these parameters:
+- A path to a truststore file (it can be a local path or a GCS path, which should start with `gs://`)
+- A path to a keystore file (it can be a local path or a GCS path, which should start with `gs://`)
+- Truststore password
+- Keystore password
+- Key password
+```bash
+--truststorePath=path/to/kafka.truststore.jks
+--keystorePath=path/to/kafka.keystore.jks
+--truststorePassword=your-truststore-password
+--keystorePassword=your-keystore-password
+--keyPassword=your-key-password
+```
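+These flags correspond to standard Kafka consumer security properties. One way a pipeline can apply them is via `KafkaIO`'s `withConsumerConfigUpdates`; the sketch below shows the standard Kafka client settings and is not necessarily how this example wires them:
+```java
+import java.util.HashMap;
+import java.util.Map;
+import org.apache.beam.sdk.io.kafka.KafkaIO;
+import org.apache.kafka.common.serialization.StringDeserializer;
+
+public class SecureKafkaReadSketch {
+  static KafkaIO.Read<String, String> secureRead() {
+    // Standard Kafka SASL_SSL client properties (all values are placeholders).
+    Map<String, Object> securityConfig = new HashMap<>();
+    securityConfig.put("security.protocol", "SASL_SSL");
+    securityConfig.put("sasl.mechanism", "SCRAM-SHA-512");
+    securityConfig.put(
+        "sasl.jaas.config",
+        "org.apache.kafka.common.security.scram.ScramLoginModule required "
+            + "username=\"user-from-vault\" password=\"password-from-vault\";");
+    securityConfig.put("ssl.truststore.location", "/local/path/to/kafka.truststore.jks");
+    securityConfig.put("ssl.truststore.password", "your-truststore-password");
+    securityConfig.put("ssl.keystore.location", "/local/path/to/kafka.keystore.jks");
+    securityConfig.put("ssl.keystore.password", "your-keystore-password");
+    securityConfig.put("ssl.key.password", "your-key-password");
+
+    return KafkaIO.<String, String>read()
+        .withBootstrapServers("host:port")                 // placeholder
+        .withTopic("your-input-topic")                     // placeholder
+        .withKeyDeserializer(StringDeserializer.class)
+        .withValueDeserializer(StringDeserializer.class)
+        .withConsumerConfigUpdates(securityConfig);        // apply the security settings
+  }
+}
+```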
+To change the runner, specify:

Review comment:
   ```suggestion
   By default this will run the pipeline locally with the DirectRunner. To change the runner, specify:
   ```

##########
File path: examples/kafka-to-pubsub/README.md
##########
@@ -0,0 +1,163 @@
+To change the runner, specify:
+```bash
+--runner=YOUR_SELECTED_RUNNER
+```
+See `examples/java/README.md` for steps and examples to configure different runners.
+
+## Google Dataflow Execution
+
+This example also exists as a Google Dataflow Template; see its [README.md](https://github.com/GoogleCloudPlatform/DataflowTemplates/blob/master/v2/kafka-to-pubsub/README.md) for more information.

Review comment:
       :+1: I think this is a great way to make the connection to the related Dataflow template while still making this example useful for Beam users using other runners. Thank you!




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org