Posted to github@beam.apache.org by GitBox <gi...@apache.org> on 2020/11/18 00:54:47 UTC

[GitHub] [beam] TheNeuralBit commented on a change in pull request #13112: [BEAM-11065] Apache Beam Template to ingest from Apache Kafka to Google Pub/Sub

TheNeuralBit commented on a change in pull request #13112:
URL: https://github.com/apache/beam/pull/13112#discussion_r525611995



##########
File path: examples/templates/java/README.md
##########
@@ -0,0 +1,252 @@
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+-->
+
+# Apache Beam Template to ingest data from Apache Kafka to Google Cloud Pub/Sub
+
+This directory contains an [Apache Beam](https://beam.apache.org/) Template that creates a pipeline
+to read data from a single or multiple topics from
+[Apache Kafka](https://kafka.apache.org/) and write data into a single topic
+in [Google Pub/Sub](https://cloud.google.com/pubsub).
+
+Supported data formats:
+- Serializable plaintext formats, such as JSON
+- [PubSubMessage](https://cloud.google.com/pubsub/docs/reference/rest/v1/PubsubMessage).
+
+Supported input source configurations:
+- Single or multiple Apache Kafka bootstrap servers
+- Apache Kafka SASL/SCRAM authentication over plaintext or SSL connection
+- Secrets vault service [HashiCorp Vault](https://www.vaultproject.io/).
+
+Supported destination configuration:
+- Single Google Pub/Sub topic.
+
+In a simple scenario, the template creates an Apache Beam pipeline that reads messages from a source Kafka server and source topic, and streams the text messages into the specified Pub/Sub destination topic. Other scenarios may need Kafka SASL/SCRAM authentication, which can be performed over a plaintext or SSL-encrypted connection. The template supports using a single Kafka user account to authenticate against the provided source Kafka servers and topics. To support SASL authentication over SSL, the template needs an SSL certificate location and access to a secrets vault service holding the Kafka username and password; HashiCorp Vault is currently supported.
+
+## Requirements
+
+- Java 11
+- Kafka Bootstrap Server(s) up and running
+- Existing source Kafka topic(s)
+- An existing Pub/Sub destination output topic
+- (Optional) An existing HashiCorp Vault
+- (Optional) A configured secure SSL connection for Kafka
+
+## Getting Started
+
+This section describes what is needed to get the template up and running.
+- Assembling the Uber-JAR
+- Local execution
+- Google Dataflow Template
+  - Set up the environment
+  - Creating the Dataflow Flex Template
+  - Create a Dataflow job to ingest data using the template.
+- Avro format transferring.
+
+## Assembling the Uber-JAR
+
+To run this template, the template's Java project should be built into
+an Uber JAR file.
+
+Navigate to the Beam folder:
+
+```
+cd /path/to/beam
+```
+
+To create the Uber JAR with Gradle, the [Shadow plugin](https://github.com/johnrengelman/shadow)
+is used. It creates the `shadowJar` task that builds the Uber JAR:
+
+```
+./gradlew -p examples/templates/java/kafka-to-pubsub clean shadowJar
+```
+
+ℹ️ An **Uber JAR** - also known as a **fat JAR** - is a single JAR file that contains
+both the target package *and* all of its dependencies.
+
+The result of the `shadowJar` task execution is a `.jar` file that is generated
+under the `build/libs/` folder in the kafka-to-pubsub directory.
+
+## Local execution
+To execute this pipeline locally, specify the parameters:
+- Kafka Bootstrap servers
+- Kafka input topics
+- Pub/Sub output topic
+in the following format:
+```bash
+--bootstrapServers=host:port \
+--inputTopics=your-input-topic \
+--outputTopic=projects/your-project-id/topics/your-topic-name
+```
+Optionally, to retrieve Kafka credentials for SASL/SCRAM,
+specify a URL to the credentials in HashiCorp Vault and the vault access token:
+```bash
+--secretStoreUrl=http(s)://host:port/path/to/credentials
+--vaultToken=your-token
+```
+Optionally, to configure a secure SSL connection between the Beam pipeline and Kafka,
+specify the parameters:
+- A local path to a truststore file
+- A local path to a keystore file

Review comment:
       Does this work with a local path? Wouldn't the keystore need to get staged for use on a distributed runner?
   
   FWIW I think that configuring KafkaIO to use SSL is much more difficult than it should be. Here's an SO question that describes how you can create a custom ConsumerFactoryFn that downloads a keystore from GCS at execution time: https://stackoverflow.com/questions/42726011/truststore-and-google-cloud-dataflow
   I think it could be worthwhile to make KafkaIO do this by default
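    For reference, a rough sketch of that approach (not part of this PR, and the GCS/local paths are placeholders): a custom consumer factory stages the truststore from GCS onto the worker at execution time, using Beam's `FileSystems` API, before the `KafkaConsumer` is created.
    ```java
    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.channels.Channels;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.nio.file.StandardCopyOption;
    import java.util.HashMap;
    import java.util.Map;
    import org.apache.beam.sdk.io.FileSystems;
    import org.apache.beam.sdk.transforms.SerializableFunction;
    import org.apache.kafka.clients.consumer.Consumer;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.config.SslConfigs;

    /** Stages the truststore from GCS onto the worker, then builds the KafkaConsumer. */
    class GcsTruststoreConsumerFactoryFn
        implements SerializableFunction<Map<String, Object>, Consumer<byte[], byte[]>> {

      // Placeholder locations; a real template would take these as pipeline options.
      private static final String GCS_TRUSTSTORE = "gs://your-bucket/kafka.truststore.jks";
      private static final String LOCAL_TRUSTSTORE = "/tmp/kafka.truststore.jks";

      @Override
      public Consumer<byte[], byte[]> apply(Map<String, Object> config) {
        try (InputStream in =
            Channels.newInputStream(
                FileSystems.open(FileSystems.matchNewResource(GCS_TRUSTSTORE, false)))) {
          // Copy the truststore to the worker's local disk at execution time.
          Files.copy(in, Paths.get(LOCAL_TRUSTSTORE), StandardCopyOption.REPLACE_EXISTING);
        } catch (IOException e) {
          throw new RuntimeException("Failed to stage the Kafka truststore from GCS", e);
        }
        // Point the Kafka client at the local copy instead of a launcher-only path.
        Map<String, Object> patched = new HashMap<>(config);
        patched.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, LOCAL_TRUSTSTORE);
        return new KafkaConsumer<>(patched);
      }
    }
    ```
    It could then be wired in with something like `KafkaIO.<byte[], byte[]>read().withConsumerFactoryFn(new GcsTruststoreConsumerFactoryFn())`.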

##########
File path: examples/templates/java/kafka-to-pubsub/build.gradle
##########
@@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * License); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an AS IS BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+plugins {
+    id 'java'
+    id 'org.apache.beam.module'
+    id 'com.github.johnrengelman.shadow'
+}
+applyJavaNature(
+        exportJavadoc: false,
+        enableChecker: false,

Review comment:
       Please enable the checker and try to fix any nullness issues it detects. If there are confusing/tricky issues, you can suppress these warnings at the class or function level with `@SuppressWarnings("nullness")`
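    For illustration (hypothetical class and method, not from this PR), the suppression can be scoped to the whole class or to a single member:
    ```java
    // Hypothetical illustration: suppress checker framework nullness warnings
    // at the narrowest scope that works.
    @SuppressWarnings("nullness") // class-level suppression
    public class KafkaConfigHelper {

      // ...or only on the member that triggers the warning:
      @SuppressWarnings("nullness")
      static String topicOrDefault(String maybeTopic) {
        return maybeTopic != null ? maybeTopic : "default-topic";
      }
    }
    ```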

##########
File path: examples/templates/java/README.md
##########
@@ -0,0 +1,252 @@
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+-->
+
+# Apache Beam Template to ingest data from Apache Kafka to Google Cloud Pub/Sub
+
+This directory contains an [Apache Beam](https://beam.apache.org/) Template that creates a pipeline
+to read data from a single or multiple topics from
+[Apache Kafka](https://kafka.apache.org/) and write data into a single topic
+in [Google Pub/Sub](https://cloud.google.com/pubsub).

Review comment:
       nit: Google Pub/Sub -> Google Cloud Pub/Sub
   
    There are a couple of places where cloud products are referenced as "Google X"; they should be "Google Cloud X", or just "X"

##########
File path: examples/templates/java/README.md
##########
@@ -0,0 +1,252 @@
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+-->
+
+# Apache Beam Template to ingest data from Apache Kafka to Google Cloud Pub/Sub
+
+This directory contains an [Apache Beam](https://beam.apache.org/) Template that creates a pipeline
+to read data from a single or multiple topics from
+[Apache Kafka](https://kafka.apache.org/) and write data into a single topic
+in [Google Pub/Sub](https://cloud.google.com/pubsub).
+
+Supported data formats:
+- Serializable plaintext formats, such as JSON
+- [PubSubMessage](https://cloud.google.com/pubsub/docs/reference/rest/v1/PubsubMessage).
+
+Supported input source configurations:
+- Single or multiple Apache Kafka bootstrap servers
+- Apache Kafka SASL/SCRAM authentication over plaintext or SSL connection
+- Secrets vault service [HashiCorp Vault](https://www.vaultproject.io/).
+
+Supported destination configuration:
+- Single Google Pub/Sub topic.
+
+In a simple scenario, the template creates an Apache Beam pipeline that reads messages from a source Kafka server and source topic, and streams the text messages into the specified Pub/Sub destination topic. Other scenarios may need Kafka SASL/SCRAM authentication, which can be performed over a plaintext or SSL-encrypted connection. The template supports using a single Kafka user account to authenticate against the provided source Kafka servers and topics. To support SASL authentication over SSL, the template needs an SSL certificate location and access to a secrets vault service holding the Kafka username and password; HashiCorp Vault is currently supported.
+
+## Requirements
+
+- Java 11
+- Kafka Bootstrap Server(s) up and running
+- Existing source Kafka topic(s)
+- An existing Pub/Sub destination output topic
+- (Optional) An existing HashiCorp Vault
+- (Optional) A configured secure SSL connection for Kafka
+
+## Getting Started
+
+This section describes what is needed to get the template up and running.
+- Assembling the Uber-JAR
+- Local execution
+- Google Dataflow Template
+  - Set up the environment
+  - Creating the Dataflow Flex Template
+  - Create a Dataflow job to ingest data using the template.
+- Avro format transferring.
+
+## Assembling the Uber-JAR
+
+To run this template, the template's Java project should be built into
+an Uber JAR file.
+
+Navigate to the Beam folder:
+
+```
+cd /path/to/beam
+```
+
+To create the Uber JAR with Gradle, the [Shadow plugin](https://github.com/johnrengelman/shadow)
+is used. It creates the `shadowJar` task that builds the Uber JAR:
+
+```
+./gradlew -p examples/templates/java/kafka-to-pubsub clean shadowJar
+```
+
+ℹ️ An **Uber JAR** - also known as a **fat JAR** - is a single JAR file that contains
+both the target package *and* all of its dependencies.
+
+The result of the `shadowJar` task execution is a `.jar` file that is generated
+under the `build/libs/` folder in the kafka-to-pubsub directory.
+
+## Local execution
+To execute this pipeline locally, specify the parameters:
+- Kafka Bootstrap servers
+- Kafka input topics
+- Pub/Sub output topic
+in the following format:
+```bash
+--bootstrapServers=host:port \
+--inputTopics=your-input-topic \
+--outputTopic=projects/your-project-id/topics/your-topic-name
+```
+Optionally, to retrieve Kafka credentials for SASL/SCRAM,
+specify a URL to the credentials in HashiCorp Vault and the vault access token:
+```bash
+--secretStoreUrl=http(s)://host:port/path/to/credentials
+--vaultToken=your-token
+```
+Optionally, to configure a secure SSL connection between the Beam pipeline and Kafka,
+specify the parameters:
+- A local path to a truststore file
+- A local path to a keystore file
+- Truststore password
+- Keystore password
+- Key password
+```bash
+--truststorePath=path/to/kafka.truststore.jks
+--keystorePath=path/to/kafka.keystore.jks
+--truststorePassword=your-truststore-password
+--keystorePassword=your-keystore-password
+--keyPassword=your-key-password
+```
+To change the runner, specify:
+```bash
+--runner=YOUR_SELECTED_RUNNER
+```
+See examples/java/README.md for steps and examples to configure different runners.
+
+## Google Dataflow Template
+
+### Setting Up Project Environment
+
+#### Pipeline variables:
+
+```
+PROJECT=id-of-my-project
+BUCKET_NAME=my-bucket
+REGION=my-region
+```
+
+#### Template Metadata Storage Bucket Creation
+
+The Dataflow Flex template has to store its metadata in a bucket in
+[Google Cloud Storage](https://cloud.google.com/storage), so it can be executed from the Google Cloud Platform.
+Create the bucket in Google Cloud Storage if it doesn't exist yet:
+
+```
+gsutil mb gs://${BUCKET_NAME}
+```
+
+#### Containerization variables:
+
+```
+IMAGE_NAME=my-image-name
+TARGET_GCR_IMAGE=gcr.io/${PROJECT}/${IMAGE_NAME}
+BASE_CONTAINER_IMAGE=my-base-container-image
+TEMPLATE_PATH="gs://${BUCKET_NAME}/templates/kafka-pubsub.json"
+```
+
+### Creating the Dataflow Flex Template
+
+Dataflow Flex Templates package the pipeline as a Docker image and stage it
+on your project's [Container Registry](https://cloud.google.com/container-registry).
+
+To execute the template you need to create the template spec file containing all
+the necessary information to run the job. This template already has the following
+[metadata file](kafka-to-pubsub/src/main/resources/kafka_to_pubsub_metadata.json) in resources.
+
+Navigate to the template folder:
+
+```
+cd /path/to/beam/examples/templates/java/kafka-to-pubsub
+```
+
+Build the Dataflow Flex Template:
+
+```
+gcloud dataflow flex-template build ${TEMPLATE_PATH} \
+       --image-gcr-path ${TARGET_GCR_IMAGE} \
+       --sdk-language "JAVA" \
+       --flex-template-base-image ${BASE_CONTAINER_IMAGE} \
+       --metadata-file "src/main/resources/kafka_to_pubsub_metadata.json" \
+       --jar "build/libs/beam-examples-templates-java-kafka-to-pubsub-2.25.0-SNAPSHOT-all.jar" \
+       --env FLEX_TEMPLATE_JAVA_MAIN_CLASS="org.apache.beam.templates.KafkaToPubsub"
+```
+
+### Create Dataflow Job Using the Apache Kafka to Google Pub/Sub Dataflow Flex Template
+
+To deploy the pipeline, you should refer to the template file and pass the
+[parameters](https://cloud.google.com/dataflow/docs/guides/specifying-exec-params#setting-other-cloud-dataflow-pipeline-options)
+required by the pipeline.
+
+You can do this in 3 different ways:
+1. Using [Dataflow Google Cloud Console](https://console.cloud.google.com/dataflow/jobs)
+
+2. Using `gcloud` CLI tool
+    ```
+    gcloud dataflow flex-template run "kafka-to-pubsub-`date +%Y%m%d-%H%M%S`" \
+        --template-file-gcs-location "${TEMPLATE_PATH}" \
+        --parameters bootstrapServers="broker_1:9092,broker_2:9092" \
+        --parameters inputTopics="topic1,topic2" \
+        --parameters outputTopic="projects/${PROJECT}/topics/your-topic-name" \
+        --parameters outputFormat="PLAINTEXT" \
+        --parameters secretStoreUrl="http(s)://host:port/path/to/credentials" \
+        --parameters vaultToken="your-token" \
+        --region "${REGION}"
+    ```
+3. With a REST API request
+    ```
+    API_ROOT_URL="https://dataflow.googleapis.com"
+    TEMPLATES_LAUNCH_API="${API_ROOT_URL}/v1b3/projects/${PROJECT}/locations/${REGION}/flexTemplates:launch"
+    JOB_NAME="kafka-to-pubsub-`date +%Y%m%d-%H%M%S-%N`"
+
+    time curl -X POST -H "Content-Type: application/json" \
+        -H "Authorization: Bearer $(gcloud auth print-access-token)" \
+        -d '
+         {
+             "launch_parameter": {
+                 "jobName": "'$JOB_NAME'",
+                 "containerSpecGcsPath": "'$TEMPLATE_PATH'",
+                 "parameters": {
+                     "bootstrapServers": "broker_1:9091, broker_2:9092",
+                     "inputTopics": "topic1, topic2",
+                     "outputTopic": "projects/'$PROJECT'/topics/your-topic-name",
+                     "outputFormat": "PLAINTEXT",
+                     "secretStoreUrl": "http(s)://host:port/path/to/credentials",
+                     "vaultToken": "your-token"
+                 }
+             }
+         }
+        '
+        "${TEMPLATES_LAUNCH_API}"
+    ```
+
+## AVRO format transferring.
+This template contains an example class to deserialize AVRO from Kafka and serialize it to AVRO in Pub/Sub.

Review comment:
       I'm not sure I understand the purpose of this, won't this just end up re-serializing to the same byte array? And in that case couldn't we just forward the value byte array directly instead?
   
   Maybe I'm missing something, could you clarify?
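    In case pass-through really is all that's needed, here is a minimal sketch of that alternative (broker, topic, and project names are placeholders; KafkaIO and PubsubIO as used elsewhere in the template): read the record value as raw bytes and publish it to Pub/Sub unchanged.
    ```java
    import java.util.Collections;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.pubsub.PubsubIO;
    import org.apache.beam.sdk.io.gcp.pubsub.PubsubMessage;
    import org.apache.beam.sdk.io.kafka.KafkaIO;
    import org.apache.beam.sdk.transforms.MapElements;
    import org.apache.beam.sdk.transforms.Values;
    import org.apache.beam.sdk.values.TypeDescriptor;
    import org.apache.kafka.common.serialization.ByteArrayDeserializer;

    public class KafkaBytesToPubsub {
      public static void main(String[] args) {
        Pipeline pipeline = Pipeline.create();

        pipeline
            // Read the Kafka record value as raw bytes; no Avro handling on the Beam side.
            .apply(
                "readFromKafka",
                KafkaIO.<byte[], byte[]>read()
                    .withBootstrapServers("broker_1:9092")
                    .withTopic("topic1")
                    .withKeyDeserializer(ByteArrayDeserializer.class)
                    .withValueDeserializer(ByteArrayDeserializer.class)
                    .withoutMetadata())
            .apply("extractValues", Values.<byte[]>create())
            // Wrap the untouched payload in a PubsubMessage and publish it as-is.
            .apply(
                "wrapInPubsubMessage",
                MapElements.into(TypeDescriptor.of(PubsubMessage.class))
                    .via((byte[] bytes) -> new PubsubMessage(bytes, Collections.emptyMap())))
            .apply(
                "writeToPubsub",
                PubsubIO.writeMessages().to("projects/your-project-id/topics/your-topic-name"));

        pipeline.run();
      }
    }
    ```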

##########
File path: examples/templates/java/build.gradle
##########
@@ -0,0 +1,31 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * License); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an AS IS BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+plugins {
+    id 'java'
+}
+
+version '2.25.0-SNAPSHOT'
+
+repositories {
+    mavenCentral()
+}
+
+dependencies {
+    testCompile group: 'junit', name: 'junit', version: '4.12'
+}

Review comment:
       Is this file necessary?

##########
File path: examples/templates/java/kafka-to-pubsub/src/main/java/org/apache/beam/templates/KafkaToPubsub.java
##########
@@ -0,0 +1,319 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.beam.templates;
+
+import static org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkArgument;
+
+import com.google.gson.JsonObject;
+import com.google.gson.JsonParser;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import org.apache.beam.sdk.Pipeline;
+import org.apache.beam.sdk.PipelineResult;
+import org.apache.beam.sdk.io.gcp.pubsub.PubsubIO;
+import org.apache.beam.sdk.options.PipelineOptionsFactory;
+import org.apache.beam.sdk.transforms.Values;
+import org.apache.beam.templates.avro.TaxiRide;
+import org.apache.beam.templates.options.KafkaToPubsubOptions;
+import org.apache.beam.templates.transforms.FormatTransform;
+import org.apache.http.HttpResponse;
+import org.apache.http.client.HttpClient;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.impl.client.HttpClientBuilder;
+import org.apache.http.util.EntityUtils;
+import org.apache.kafka.common.config.SaslConfigs;
+import org.apache.kafka.common.config.SslConfigs;
+import org.apache.kafka.common.security.auth.SecurityProtocol;
+import org.apache.kafka.common.security.scram.ScramMechanism;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * The {@link KafkaToPubsub} pipeline is a streaming pipeline that ingests data in JSON format from
+ * Kafka and outputs the resulting records to PubSub. The input topics, output topic, and bootstrap
+ * servers are specified by the user as template parameters. <br>
+ * Kafka may be configured with the SASL/SCRAM security mechanism; in this case, a Vault secret
+ * storage with credentials should be provided. The URL to the credentials and the Vault token are
+ * specified by the user as template parameters.
+ *
+ * <p><b>Pipeline Requirements</b>
+ *
+ * <ul>
+ *   <li>Kafka Bootstrap Server(s).
+ *   <li>Kafka Topic(s) exists.
+ *   <li>The PubSub output topic exists.
+ *   <li>(Optional) An existing HashiCorp Vault secret storage
+ * </ul>
+ *
+ * <p><b>Example Usage</b>
+ *
+ * <pre>
+ * # Set the pipeline vars
+ * PROJECT=id-of-my-project
+ * BUCKET_NAME=my-bucket
+ *
+ * # Set containerization vars
+ * IMAGE_NAME=my-image-name
+ * TARGET_GCR_IMAGE=gcr.io/${PROJECT}/${IMAGE_NAME}
+ * BASE_CONTAINER_IMAGE=my-base-container-image
+ * TEMPLATE_PATH="gs://${BUCKET_NAME}/templates/kafka-pubsub.json"
+ *
+ * # Create bucket in the cloud storage
+ * gsutil mb gs://${BUCKET_NAME}
+ *
+ * # Go to the beam folder
+ * cd /path/to/beam
+ *
+ * <b>FLEX TEMPLATE</b>
+ * # Assemble uber-jar
+ * ./gradlew -p templates/kafka-to-pubsub clean shadowJar
+ *
+ * # Go to the template folder
+ * cd /path/to/beam/templates/kafka-to-pubsub
+ *
+ * # Build the flex template
+ * gcloud dataflow flex-template build ${TEMPLATE_PATH} \
+ *       --image-gcr-path "${TARGET_GCR_IMAGE}" \
+ *       --sdk-language "JAVA" \
+ *       --flex-template-base-image ${BASE_CONTAINER_IMAGE} \
+ *       --metadata-file "src/main/resources/kafka_to_pubsub_metadata.json" \
+ *       --jar "build/libs/beam-templates-kafka-to-pubsub-2.25.0-SNAPSHOT-all.jar" \
+ *       --env FLEX_TEMPLATE_JAVA_MAIN_CLASS="org.apache.beam.templates.KafkaToPubsub"
+ *
+ * # Execute template:
+ *    API_ROOT_URL="https://dataflow.googleapis.com"
+ *    TEMPLATES_LAUNCH_API="${API_ROOT_URL}/v1b3/projects/${PROJECT}/locations/${REGION}/flexTemplates:launch"
+ *    JOB_NAME="kafka-to-pubsub-`date +%Y%m%d-%H%M%S-%N`"
+ *
+ *    time curl -X POST -H "Content-Type: application/json" \
+ *            -H "Authorization: Bearer $(gcloud auth print-access-token)" \
+ *            -d '
+ *             {
+ *                 "launch_parameter": {
+ *                     "jobName": "'$JOB_NAME'",
+ *                     "containerSpecGcsPath": "'$TEMPLATE_PATH'",
+ *                     "parameters": {
+ *                         "bootstrapServers": "broker_1:9091, broker_2:9092",
+ *                         "inputTopics": "topic1, topic2",
+ *                         "outputTopic": "projects/'$PROJECT'/topics/your-topic-name",
+ *                         "secretStoreUrl": "http(s)://host:port/path/to/credentials",
+ *                         "vaultToken": "your-token"
+ *                     }
+ *                 }
+ *             }
+ *            '
+ *            "${TEMPLATES_LAUNCH_API}"
+ * </pre>
+ *
+ * <p><b>Example Avro usage</b>
+ *
+ * <pre>
+ * This template contains an example class to deserialize AVRO from Kafka and serialize it to AVRO in Pub/Sub.
+ *
+ * To use this example for your specific case, follow these steps:
+ * <ul>
+ * <li> Create your own class to describe the AVRO schema. Use {@link TaxiRide} as an example; just define the necessary fields.
+ * <li> Create your own Avro deserializer class. Use {@link org.apache.beam.templates.avro.TaxiRidesKafkaAvroDeserializer} as an example; just rename it and use your own schema class for the necessary types.
+ * <li> Modify {@link FormatTransform}: pass your schema class and deserializer to the related parameters.
+ * <li> Modify the write step in {@link KafkaToPubsub} by putting your schema class into the "writeAvrosToPubSub" step.
+ * </ul>

Review comment:
       This looks like a dupe of the README, could the javadoc just refer to that instead?

##########
File path: examples/templates/java/README.md
##########
@@ -0,0 +1,252 @@
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+-->
+
+# Apache Beam Template to ingest data from Apache Kafka to Google Cloud Pub/Sub
+
+This directory contains an [Apache Beam](https://beam.apache.org/) Template that creates a pipeline
+to read data from a single or multiple topics from
+[Apache Kafka](https://kafka.apache.org/) and write data into a single topic
+in [Google Pub/Sub](https://cloud.google.com/pubsub).
+
+Supported data formats:
+- Serializable plaintext formats, such as JSON
+- [PubSubMessage](https://cloud.google.com/pubsub/docs/reference/rest/v1/PubsubMessage).
+
+Supported input source configurations:
+- Single or multiple Apache Kafka bootstrap servers
+- Apache Kafka SASL/SCRAM authentication over plaintext or SSL connection
+- Secrets vault service [HashiCorp Vault](https://www.vaultproject.io/).
+
+Supported destination configuration:
+- Single Google Pub/Sub topic.
+
+In a simple scenario, the template creates an Apache Beam pipeline that reads messages from a source Kafka server and source topic, and streams the text messages into the specified Pub/Sub destination topic. Other scenarios may need Kafka SASL/SCRAM authentication, which can be performed over a plaintext or SSL-encrypted connection. The template supports using a single Kafka user account to authenticate against the provided source Kafka servers and topics. To support SASL authentication over SSL, the template needs an SSL certificate location and access to a secrets vault service holding the Kafka username and password; HashiCorp Vault is currently supported.
+
+## Requirements
+
+- Java 11
+- Kafka Bootstrap Server(s) up and running
+- Existing source Kafka topic(s)
+- An existing Pub/Sub destination output topic
+- (Optional) An existing HashiCorp Vault
+- (Optional) A configured secure SSL connection for Kafka
+
+## Getting Started
+
+This section describes what is needed to get the template up and running.
+- Assembling the Uber-JAR
+- Local execution
+- Google Dataflow Template
+  - Set up the environment
+  - Creating the Dataflow Flex Template
+  - Create a Dataflow job to ingest data using the template.
+- Avro format transferring.
+
+## Assembling the Uber-JAR
+
+To run this template, the template's Java project should be built into
+an Uber JAR file.
+
+Navigate to the Beam folder:
+
+```
+cd /path/to/beam
+```
+
+To create the Uber JAR with Gradle, the [Shadow plugin](https://github.com/johnrengelman/shadow)
+is used. It creates the `shadowJar` task that builds the Uber JAR:
+
+```
+./gradlew -p examples/templates/java/kafka-to-pubsub clean shadowJar
+```
+
+ℹ️ An **Uber JAR** - also known as a **fat JAR** - is a single JAR file that contains
+both the target package *and* all of its dependencies.
+
+The result of the `shadowJar` task execution is a `.jar` file that is generated
+under the `build/libs/` folder in the kafka-to-pubsub directory.
+
+## Local execution
+To execute this pipeline locally, specify the parameters:
+- Kafka Bootstrap servers
+- Kafka input topics
+- Pub/Sub output topic
+in the following format:
+```bash
+--bootstrapServers=host:port \
+--inputTopics=your-input-topic \
+--outputTopic=projects/your-project-id/topics/your-topic-name
+```
+Optionally, to retrieve Kafka credentials for SASL/SCRAM,
+specify a URL to the credentials in HashiCorp Vault and the vault access token:
+```bash
+--secretStoreUrl=http(s)://host:port/path/to/credentials
+--vaultToken=your-token
+```
+Optionally, to configure a secure SSL connection between the Beam pipeline and Kafka,
+specify the parameters:
+- A local path to a truststore file
+- A local path to a keystore file
+- Truststore password
+- Keystore password
+- Key password
+```bash
+--truststorePath=path/to/kafka.truststore.jks
+--keystorePath=path/to/kafka.keystore.jks
+--truststorePassword=your-truststore-password
+--keystorePassword=your-keystore-password
+--keyPassword=your-key-password
+```
+To change the runner, specify:
+```bash
+--runner=YOUR_SELECTED_RUNNER
+```
+See examples/java/README.md for steps and examples to configure different runners.
+
+## Google Dataflow Template
+
+### Setting Up Project Environment
+
+#### Pipeline variables:
+
+```
+PROJECT=id-of-my-project
+BUCKET_NAME=my-bucket
+REGION=my-region
+```
+
+#### Template Metadata Storage Bucket Creation
+
+The Dataflow Flex template has to store its metadata in a bucket in
+[Google Cloud Storage](https://cloud.google.com/storage), so it can be executed from the Google Cloud Platform.
+Create the bucket in Google Cloud Storage if it doesn't exist yet:
+
+```
+gsutil mb gs://${BUCKET_NAME}
+```
+
+#### Containerization variables:
+
+```
+IMAGE_NAME=my-image-name
+TARGET_GCR_IMAGE=gcr.io/${PROJECT}/${IMAGE_NAME}
+BASE_CONTAINER_IMAGE=my-base-container-image
+TEMPLATE_PATH="gs://${BUCKET_NAME}/templates/kafka-pubsub.json"
+```
+
+### Creating the Dataflow Flex Template
+
+Dataflow Flex Templates package the pipeline as a Docker image and stage it
+on your project's [Container Registry](https://cloud.google.com/container-registry).
+
+To execute the template you need to create the template spec file containing all
+the necessary information to run the job. This template already has the following
+[metadata file](kafka-to-pubsub/src/main/resources/kafka_to_pubsub_metadata.json) in resources.
+
+Navigate to the template folder:
+
+```
+cd /path/to/beam/examples/templates/java/kafka-to-pubsub
+```
+
+Build the Dataflow Flex Template:
+
+```
+gcloud dataflow flex-template build ${TEMPLATE_PATH} \
+       --image-gcr-path ${TARGET_GCR_IMAGE} \
+       --sdk-language "JAVA" \
+       --flex-template-base-image ${BASE_CONTAINER_IMAGE} \
+       --metadata-file "src/main/resources/kafka_to_pubsub_metadata.json" \
+       --jar "build/libs/beam-examples-templates-java-kafka-to-pubsub-2.25.0-SNAPSHOT-all.jar" \

Review comment:
       nit: this shouldn't specify a specific version
   ```suggestion
          --jar "build/libs/beam-examples-templates-java-kafka-to-pubsub-<version>-all.jar" \
   ```

##########
File path: examples/templates/java/README.md
##########
@@ -0,0 +1,252 @@
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+-->
+
+# Apache Beam Template to ingest data from Apache Kafka to Google Cloud Pub/Sub
+
+This directory contains an [Apache Beam](https://beam.apache.org/) Template that creates a pipeline
+to read data from a single or multiple topics from
+[Apache Kafka](https://kafka.apache.org/) and write data into a single topic
+in [Google Pub/Sub](https://cloud.google.com/pubsub).
+
+Supported data formats:
+- Serializable plaintext formats, such as JSON
+- [PubSubMessage](https://cloud.google.com/pubsub/docs/reference/rest/v1/PubsubMessage).
+
+Supported input source configurations:
+- Single or multiple Apache Kafka bootstrap servers
+- Apache Kafka SASL/SCRAM authentication over plaintext or SSL connection
+- Secrets vault service [HashiCorp Vault](https://www.vaultproject.io/).
+
+Supported destination configuration:
+- Single Google Pub/Sub topic.
+
+In a simple scenario, the template creates an Apache Beam pipeline that reads messages from a source Kafka server and source topic, and streams the text messages into the specified Pub/Sub destination topic. Other scenarios may need Kafka SASL/SCRAM authentication, which can be performed over a plaintext or SSL-encrypted connection. The template supports using a single Kafka user account to authenticate against the provided source Kafka servers and topics. To support SASL authentication over SSL, the template needs an SSL certificate location and access to a secrets vault service holding the Kafka username and password; HashiCorp Vault is currently supported.
+
+## Requirements
+
+- Java 11
+- Kafka Bootstrap Server(s) up and running
+- Existing source Kafka topic(s)
+- An existing Pub/Sub destination output topic
+- (Optional) An existing HashiCorp Vault
+- (Optional) A configured secure SSL connection for Kafka
+
+## Getting Started
+
+This section describes what is needed to get the template up and running.
+- Assembling the Uber-JAR
+- Local execution
+- Google Dataflow Template
+  - Set up the environment
+  - Creating the Dataflow Flex Template
+  - Create a Dataflow job to ingest data using the template.
+- Avro format transferring.
+
+## Assembling the Uber-JAR
+
+To run this template, the template's Java project should be built into
+an Uber JAR file.
+
+Navigate to the Beam folder:
+
+```
+cd /path/to/beam
+```
+
+To create the Uber JAR with Gradle, the [Shadow plugin](https://github.com/johnrengelman/shadow)
+is used. It creates the `shadowJar` task that builds the Uber JAR:
+
+```
+./gradlew -p examples/templates/java/kafka-to-pubsub clean shadowJar
+```
+
+ℹ️ An **Uber JAR** - also known as a **fat JAR** - is a single JAR file that contains
+both the target package *and* all of its dependencies.
+
+The result of the `shadowJar` task execution is a `.jar` file that is generated
+under the `build/libs/` folder in the kafka-to-pubsub directory.
+
+## Local execution
+To execute this pipeline locally, specify the parameters:
+- Kafka Bootstrap servers
+- Kafka input topics
+- Pub/Sub output topic
+in the following format:
+```bash
+--bootstrapServers=host:port \
+--inputTopics=your-input-topic \
+--outputTopic=projects/your-project-id/topics/your-topic-name
+```
+Optionally, to retrieve Kafka credentials for SASL/SCRAM,
+specify a URL to the credentials in HashiCorp Vault and the vault access token:
+```bash
+--secretStoreUrl=http(s)://host:port/path/to/credentials
+--vaultToken=your-token
+```
+Optionally, to configure a secure SSL connection between the Beam pipeline and Kafka,
+specify the parameters:
+- A local path to a truststore file
+- A local path to a keystore file
+- Truststore password
+- Keystore password
+- Key password
+```bash
+--truststorePath=path/to/kafka.truststore.jks
+--keystorePath=path/to/kafka.keystore.jks
+--truststorePassword=your-truststore-password
+--keystorePassword=your-keystore-password
+--keyPassword=your-key-password
+```
+To change the runner, specify:
+```bash
+--runner=YOUR_SELECTED_RUNNER
+```
+See examples/java/README.md for steps and examples to configure different runners.
+
+## Google Dataflow Template
+
+### Setting Up Project Environment
+
+#### Pipeline variables:
+
+```
+PROJECT=id-of-my-project
+BUCKET_NAME=my-bucket
+REGION=my-region
+```
+
+#### Template Metadata Storage Bucket Creation
+
+The Dataflow Flex template has to store its metadata in a bucket in
+[Google Cloud Storage](https://cloud.google.com/storage), so it can be executed from the Google Cloud Platform.
+Create the bucket in Google Cloud Storage if it doesn't exist yet:
+
+```
+gsutil mb gs://${BUCKET_NAME}
+```
+
+#### Containerization variables:
+
+```
+IMAGE_NAME=my-image-name
+TARGET_GCR_IMAGE=gcr.io/${PROJECT}/${IMAGE_NAME}
+BASE_CONTAINER_IMAGE=my-base-container-image
+TEMPLATE_PATH="gs://${BUCKET_NAME}/templates/kafka-pubsub.json"
+```
+
+### Creating the Dataflow Flex Template
+
+Dataflow Flex Templates package the pipeline as a Docker image and stage it
+on your project's [Container Registry](https://cloud.google.com/container-registry).
+
+To execute the template you need to create the template spec file containing all
+the necessary information to run the job. This template already has the following
+[metadata file](kafka-to-pubsub/src/main/resources/kafka_to_pubsub_metadata.json) in resources.
+
+Navigate to the template folder:
+
+```
+cd /path/to/beam/examples/templates/java/kafka-to-pubsub
+```
+
+Build the Dataflow Flex Template:
+
+```
+gcloud dataflow flex-template build ${TEMPLATE_PATH} \
+       --image-gcr-path ${TARGET_GCR_IMAGE} \
+       --sdk-language "JAVA" \
+       --flex-template-base-image ${BASE_CONTAINER_IMAGE} \
+       --metadata-file "src/main/resources/kafka_to_pubsub_metadata.json" \
+       --jar "build/libs/beam-examples-templates-java-kafka-to-pubsub-2.25.0-SNAPSHOT-all.jar" \
+       --env FLEX_TEMPLATE_JAVA_MAIN_CLASS="org.apache.beam.templates.KafkaToPubsub"
+```
+
+### Create Dataflow Job Using the Apache Kafka to Google Pub/Sub Dataflow Flex Template
+
+To deploy the pipeline, you should refer to the template file and pass the
+[parameters](https://cloud.google.com/dataflow/docs/guides/specifying-exec-params#setting-other-cloud-dataflow-pipeline-options)
+required by the pipeline.
+
+You can do this in 3 different ways:
+1. Using [Dataflow Google Cloud Console](https://console.cloud.google.com/dataflow/jobs)

Review comment:
       Maybe this could point to the GCP docs with more details on launching a flex template from the Cloud Console (if such a page exists)




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org