Posted to commits@camel.apache.org by ap...@apache.org on 2024/03/05 13:38:49 UTC

(camel-kamelets-examples) branch main updated: Fix various typos and case in readmes

This is an automated email from the ASF dual-hosted git repository.

apupier pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/camel-kamelets-examples.git


The following commit(s) were added to refs/heads/main by this push:
     new b807035  Fix various typos and case in readmes
b807035 is described below

commit b807035bf6ee2fefa581e0379d253680c68934a8
Author: Aurélien Pupier <ap...@redhat.com>
AuthorDate: Tue Mar 5 14:30:16 2024 +0100

    Fix various typos and case in readmes
    
    Signed-off-by: Aurélien Pupier <ap...@redhat.com>
---
 camel-k/kafka-s3/README.adoc                         |  4 ++--
 jbang/artemis/README.adoc                            |  2 +-
 jbang/aws-database-admin-secrets-refresh/README.adoc |  2 +-
 jbang/aws-s3-cdc/README.adoc                         | 20 ++++++++++----------
 jbang/aws-s3-large-object/README.adoc                |  2 +-
 .../README.md                                        | 20 ++++++++++----------
 jbang/azure-eventhubs-kafka-ibm-mq/README.adoc       |  2 +-
 jbang/azure-key-vault-secrets-reloading/README.adoc  |  2 +-
 jbang/azure-servicebus/README.adoc                   |  4 ++--
 jbang/azure-storage-blob-cdc/README.adoc             | 12 ++++++------
 jbang/bean-builder/README.adoc                       |  2 +-
 jbang/bean-inlined-code/README.adoc                  |  2 +-
 jbang/chaos-monkey/README.adoc                       |  6 +++---
 jbang/circuit-breaker/README.adoc                    |  2 +-
 jbang/custom-kamelet/README.adoc                     |  4 ++--
 jbang/kafka-health/README.adoc                       |  2 +-
 jbang/mqtt/README.adoc                               |  4 ++--
 jbang/opensearch-search-log/README.adoc              |  8 ++++----
 jbang/route-controller/README.adoc                   |  2 +-
 jbang/snowflake-migration/README.adoc                |  2 +-
 jbang/timer-opensearch-index/README.adoc             |  8 ++++----
 kamelet-main/slack-source/README.adoc                |  2 +-
 22 files changed, 57 insertions(+), 57 deletions(-)

diff --git a/camel-k/kafka-s3/README.adoc b/camel-k/kafka-s3/README.adoc
index f443e49..d692c45 100644
--- a/camel-k/kafka-s3/README.adoc
+++ b/camel-k/kafka-s3/README.adoc
@@ -1,8 +1,8 @@
 == Kafka to S3 KameletBinding example
 
-In this sample you'll use Strimzi Operator and Camel K Operator on Openshift Cloud.
+In this sample you'll use Strimzi Operator and Camel K Operator on OpenShift Cloud.
 
-So this example expects you have an Openshift instance running on Cloud.
+So this example expects you have an OpenShift instance running on Cloud.
 
 === Setup the Strimzi Operator
 
diff --git a/jbang/artemis/README.adoc b/jbang/artemis/README.adoc
index 2d86347..d2e5e6b 100644
--- a/jbang/artemis/README.adoc
+++ b/jbang/artemis/README.adoc
@@ -5,7 +5,7 @@ messaging broker.
 
 === Apache ActiveMQ Artemis
 
-You first need to have an ActiveMQ Artimis broker up and running.
+You first need to have an ActiveMQ Artemis broker up and running.
 See more at: https://activemq.apache.org/components/artemis/
 
 
diff --git a/jbang/aws-database-admin-secrets-refresh/README.adoc b/jbang/aws-database-admin-secrets-refresh/README.adoc
index 98e6a84..9463d28 100644
--- a/jbang/aws-database-admin-secrets-refresh/README.adoc
+++ b/jbang/aws-database-admin-secrets-refresh/README.adoc
@@ -48,7 +48,7 @@ aws_access_key_id = accessKey
 aws_secret_access_key = secretKey
 ----
 
-=== Setup and populate the Postgresql Database
+=== Setup and populate the PostgreSQL Database
 
 We create a PostgreSQL instance in a docker container
 
diff --git a/jbang/aws-s3-cdc/README.adoc b/jbang/aws-s3-cdc/README.adoc
index 450a85f..fb81418 100644
--- a/jbang/aws-s3-cdc/README.adoc
+++ b/jbang/aws-s3-cdc/README.adoc
@@ -2,7 +2,7 @@
 
 In this sample you'll use the AWS S3 CDC Source Kamelet.
 
-Through the usage of Eventbridge and SQS Services you'll be able to consume events from specific bucket.
+Through the usage of EventBridge and SQS Services you'll be able to consume events from specific bucket.
 
 === Install JBang
 
@@ -26,7 +26,7 @@ $ jbang app install camel@apache/camel
 
 Which allows to run CamelJBang with `camel` as shown below.
 
-=== Setup the AWS S3 bucket, SQS Queue and Eventbrige Rule
+=== Setup the AWS S3 bucket, SQS Queue and EventBridge Rule
 
 You'll need a fully working AWS CLI locally.
 
@@ -37,14 +37,14 @@ Create a bucket on AWS on a particular region
 aws s3api create-bucket --bucket cdc-s3-bucket --region eu-west-1 --create-bucket-configuration LocationConstraint=eu-west-1
 ----
 
-Enable Eventbridge notification on the bucket
+Enable EventBridge notification on the bucket
 
 [source,sh]
 ----
 aws s3api put-bucket-notification-configuration --bucket cdc-s3-bucket --region eu-west-1 --notification-configuration '{ "EventBridgeConfiguration": {} }'
 ----
 
-Add an Eventbridge rule on the bucket
+Add an EventBridge rule on the bucket
 
 [source,sh]
 ----
@@ -65,16 +65,16 @@ Modify Access Policy for the queue just created. Don't forget to edit the policy
 aws sqs set-queue-attributes --queue-url <just_created_queue_arn> --attributes file://policy-queue.json
 ----
 
-Add a target for Eventbridge Rule which will be the SQS Queue just created
+Add a target for EventBridge Rule which will be the SQS Queue just created
 
 [source,sh]
 ----
 aws events put-targets --rule s3-events-cdc --targets "Id"="sqs-sub","Arn"="<just_created_queue_arn>" --region eu-west-1
 ----
 
-=== Setup the AWS S3 bucket, SQS Queue and Eventbrige Rule through Terraform
+=== Setup the AWS S3 bucket, SQS Queue and EventBridge Rule through Terraform
 
-If you are in a hurry you can also try this example by running the terraform configuration provided in terraform folder.
+If you are in a hurry you can also try this example by running the Terraform configuration provided in Terraform folder.
 
 [source,sh]
 ----
@@ -97,7 +97,7 @@ terraform apply -var="s3_bucket_name=s3-eventbridge-test-123" -var="sqs_queue_na
 
 You can specify whatever bucket name or SQS name you want.
 
-At the end the AWS enviroment on your account will be completed, and you could go ahead with the example.
+At the end the AWS environment on your account will be completed, and you could go ahead with the example.
 
 Don't forget to specify the correct sqs queue name in the yaml file and adding correct credentials for AWS.
 
@@ -163,7 +163,7 @@ aws s3api delete-object --bucket cdc-s3-bucket --key example-file.txt
 2022-11-02 15:13:41.250  INFO 120300 --- [://test-queue-3] info                                     : Exchange[ExchangePattern: InOnly, BodyType: com.fasterxml.jackson.databind.node.ObjectNode, Body: {  "version" : "0",  "id" : "d54290df-2ad9-31ff-308b-8331fee7344a",  "detail-type" : "Object Deleted",  "source" : "aws.s3",  "account" : "xxxx",  "time" : "2022-11-02T14:13:37Z",  "region" : "eu-west-1",  "resources" : [ "arn:aws:s3:::cdc-s3-bucket" ],  "detail" : {    "version" : "0",    " [...]
 ----
 
-=== Cleanup AWS S3 bucket, SQS Queue and Eventbrige Rule through Terraform
+=== Cleanup AWS S3 bucket, SQS Queue and EventBridge Rule through Terraform
 
 You'll need to cleanup everything from AWS console or CLI.
 
@@ -183,7 +183,7 @@ terraform destroy -var="s3_bucket_name=s3-eventbridge-test" -var="sqs_queue_name
 
 You'll need to specify the same var used for terraform apply.
 
-At the end the AWS enviroment on your account will be clean.
+At the end the AWS environment on your account will be clean.
 
 === Help and contributions
 
diff --git a/jbang/aws-s3-large-object/README.adoc b/jbang/aws-s3-large-object/README.adoc
index 49b48a9..0b819d4 100644
--- a/jbang/aws-s3-large-object/README.adoc
+++ b/jbang/aws-s3-large-object/README.adoc
@@ -124,7 +124,7 @@ Confirm the uploaded objects
 image::./images/noobaa-web-console.png[]
 
 === Tips & Tricks
-- To work with Nooboo (aka MCG) S3 compatible storage, you will need Camel S3 component with a version supporting, parameter 'forcePathStyle: true'. (version 3.21.x or 4.x and up)
+- To work with Noobaa (aka MCG) S3 compatible storage, you will need Camel S3 component with a version supporting, parameter 'forcePathStyle: true'. (version 3.21.x or 4.x and up)
 - To avoid OOTM error when dealing with object size larger than 2GB, must have this property, "camel.main.streamCachingEnabled=false"
 - For downloading large (i.e. >2GB) object, need these two parameters: "includeBody: false, autocloseBody: true"
 
diff --git a/jbang/azure-eventhubs-kafka-azure-schema-registry/README.md b/jbang/azure-eventhubs-kafka-azure-schema-registry/README.md
index 999ddaf..e1cc280 100644
--- a/jbang/azure-eventhubs-kafka-azure-schema-registry/README.md
+++ b/jbang/azure-eventhubs-kafka-azure-schema-registry/README.md
@@ -1,12 +1,12 @@
-# Example for consuming from Azure EventHubs in Avro format, using Azure Schema Registry
+# Example for consuming from Azure Event Hubs in Avro format, using Azure Schema Registry
 
-This example shows a YAML DSL route for consuming Avro messages from Eventhubs using Azure Schema Registry.
-The exmaple also includes a producer for convenience, as well as a wrapper around [DefaultAzureCredentials](https://learn.microsoft.com/en-us/java/api/com.azure.identity.defaultazurecredential?view=azure-java-stable)
+This example shows a YAML DSL route for consuming Avro messages from Event Hubs using Azure Schema Registry.
+The example also includes a producer for convenience, as well as a wrapper around [DefaultAzureCredentials](https://learn.microsoft.com/en-us/java/api/com.azure.identity.defaultazurecredential?view=azure-java-stable)
 to solve the instantiation problem, as the class uses a builder for instantiating.
 
 ## Build the infrastructure
 
-Choose a globally unique name for the eventhubs namespace and edit it in the terraform [script](main.tf).
+Choose a globally unique name for the Event Hubs namespace and edit it in the terraform [script](main.tf).
 Then, create the services using the script.
 
 For having a working example you will need to add a role assignment of type "Schema Registry Contributor (Preview)"
@@ -33,19 +33,19 @@ This step is important to fully run the example.
 
 ## Configure the applications
 
-Use [application.properties.template](application.properties.template) to create `application.properties` and define your eventhubs namespace in there.
-After the services have been created, the connection string for the eventhub can be found on the Azure Console,
+Use [application.properties.template](application.properties.template) to create `application.properties` and define your Event Hubs namespace in there.
+After the services have been created, the connection string for the Event Hub can be found on the Azure Console,
 or by running:
 ```bash
 az eventhubs eventhub authorization-rule keys list --resource-group "example-rg" --namespace-name "example-namespace" --eventhub-name "my-topic" --name "rw_policy"
 ```
 Set the `primaryConnectionString` as the `connectionstring` in `application.properties`.
 
-## Produce to Eventhubs.
+## Produce to Event Hubs.
 
 Run [`Produce.java`](./azure-identity/src/main/java/com/acme/example/eventhubs/Produce.java) to produce a message to the Eventhub.
 
-## Consume from Eventhubs.
+## Consume from Event Hubs.
 
 To consume messages using a Camel route, first install the azure identity maven project:
 ```bash
@@ -57,13 +57,13 @@ then run:
 camel run kafka-log.camel.yaml 
 ```
 
-You can also use the Kamelet for working with Azure Schema Registry and Azure Eventhubs Kafka
+You can also use the Kamelet for working with Azure Schema Registry and Azure Event Hubs Kafka
 
 ```bash
 jbang --fresh run camel@apache/camel run kafka-kamelet-log.camel.yaml
 ```
 
-You can also use the Kamelet for producing to Azure Schema Registry and Azure Eventhubs Kafka 
+You can also use the Kamelet for producing to Azure Schema Registry and Azure Event Hubs Kafka 
 
 ```bash
 jbang --fresh run camel@apache/camel run --local-kamelet-dir=<path_to_your_local_kamelets> azure-kafka-schema-registry-producer.camel.yaml
diff --git a/jbang/azure-eventhubs-kafka-ibm-mq/README.adoc b/jbang/azure-eventhubs-kafka-ibm-mq/README.adoc
index d85fbac..8ce315e 100644
--- a/jbang/azure-eventhubs-kafka-ibm-mq/README.adoc
+++ b/jbang/azure-eventhubs-kafka-ibm-mq/README.adoc
@@ -101,7 +101,7 @@ Then you can run this example using:
 $ camel run mq-log.yaml
 ----
 
-=== Send data to Kafka on Eventhubs
+=== Send data to Kafka on Event Hubs
 
 You can run the timer to kafka route.
 
diff --git a/jbang/azure-key-vault-secrets-reloading/README.adoc b/jbang/azure-key-vault-secrets-reloading/README.adoc
index 2d5eef2..3835d2e 100644
--- a/jbang/azure-key-vault-secrets-reloading/README.adoc
+++ b/jbang/azure-key-vault-secrets-reloading/README.adoc
@@ -28,7 +28,7 @@ Which allows to run CamelJBang with `camel` as shown below.
 
 You'll need to have a key vault as first step.
 
-Then you'll need to create an event grid subscription to Eventhubs with a Blob Account and container for storing the checkpoint.
+Then you'll need to create an event grid subscription to Event Hubs with a Blob Account and container for storing the checkpoint.
 
 It's not totally easy to do through the az cli, but everything could be done through the Azure UI. We're planning to improve this example by having all the instructions exposed as Azure CLI commands.
 
diff --git a/jbang/azure-servicebus/README.adoc b/jbang/azure-servicebus/README.adoc
index bb48d19..3c1251c 100644
--- a/jbang/azure-servicebus/README.adoc
+++ b/jbang/azure-servicebus/README.adoc
@@ -1,6 +1,6 @@
-== Sending message to Servicebus and consume them
+== Sending message to Azure Service Bus and consume them
 
-In this sample you'll use the Azure Servicebus source and sink Kamelet based on camel-azure-servicebus component.
+In this sample you'll use the Azure Service Bus source and sink Kamelet based on camel-azure-servicebus component.
 
 === Install JBang
 
diff --git a/jbang/azure-storage-blob-cdc/README.adoc b/jbang/azure-storage-blob-cdc/README.adoc
index ff60c86..31e1420 100644
--- a/jbang/azure-storage-blob-cdc/README.adoc
+++ b/jbang/azure-storage-blob-cdc/README.adoc
@@ -2,7 +2,7 @@
 
 In this sample you'll use the Azure Storage Blob CDC Source Kamelet.
 
-Through the usage of Event Grid and Servicebus Services you'll be able to consume events from specific containers.
+Through the usage of Event Grid and Service Bus Services you'll be able to consume events from specific containers.
 
 === Install JBang
 
@@ -26,9 +26,9 @@ $ jbang app install camel@apache/camel
 
 Which allows to run CamelJBang with `camel` as shown below.
 
-=== Setup the Servicebus, Azure Storage Blob and Eventgrid
+=== Setup the Service Bus, Azure Storage Blob and Event Grid
 
-Running the terraform configuration provided in terraform folder.
+Running the Terraform configuration provided in Terraform folder.
 
 [source,sh]
 ----
@@ -55,7 +55,7 @@ Enter yes and wait for the terraform configuration to end.
 
 In the `application.properties` add the correct accessKey for the Azure Storage Blob account.
 
-In the `azure-storage-blob-cdc.yaml` add the correct connection String for the Servicebus Queue.
+In the `azure-storage-blob-cdc.yaml` add the correct connection String for the Service Bus Queue.
 
 === How to run
 
@@ -158,7 +158,7 @@ You should see at first the content of the uploaded file and in the second messa
 
 ----
 
-=== Cleanup Servicebus, Azure Storage Blob and Eventgrid through Terraform
+=== Cleanup Service Bus, Azure Storage Blob and Event Grid through Terraform
 
 You'll need to cleanup everything from Azure console or CLI.
 
@@ -178,7 +178,7 @@ terraform destroy
 
 You'll need to specify the same var used for terraform apply.
 
-At the end the Azure enviroment on your account will be clean.
+At the end the Azure environment on your account will be clean.
 
 === Help and contributions
 
diff --git a/jbang/bean-builder/README.adoc b/jbang/bean-builder/README.adoc
index b3b532a..068b516 100644
--- a/jbang/bean-builder/README.adoc
+++ b/jbang/bean-builder/README.adoc
@@ -5,7 +5,7 @@ to be used in the Camel route.
 
 This shows the flexibility of bean configuration in YAML DSL to make it possible
 for non Java developers to configure beans that requires using fluent builder class,
-with the power of low-code prgramming.
+with the power of low-code programming.
 
 
 === Install JBang
diff --git a/jbang/bean-inlined-code/README.adoc b/jbang/bean-inlined-code/README.adoc
index cfd628a..c768ed9 100644
--- a/jbang/bean-inlined-code/README.adoc
+++ b/jbang/bean-inlined-code/README.adoc
@@ -5,7 +5,7 @@ to be used in the Camel route.
 
 This shows the full flexibility by allowing to write a little bit of Java code
 directly inlined in YAML DSL to make it possible to create and configure the bean
-anyay you need, with the power of low-code prgramming.
+anyway you need, with the power of low-code programming.
 
 
 === Install JBang
diff --git a/jbang/chaos-monkey/README.adoc b/jbang/chaos-monkey/README.adoc
index efeab01..ac57d77 100644
--- a/jbang/chaos-monkey/README.adoc
+++ b/jbang/chaos-monkey/README.adoc
@@ -1,8 +1,8 @@
 == Chaos Monkey
 
-This example shows a chaos moneky with Camel JBang.
+This example shows a chaos monkey with Camel JBang.
 
-When everything is okay then Camel reports UP in the helath check.
+When everything is okay then Camel reports UP in the health check.
 But the chaos monkey can from time to time cause problems and the health check is then DOWN.
 
 
@@ -56,7 +56,7 @@ $ camel run * --health
 ----
 
 Then you can browse: http://localhost:8080/q/health to introspect the health check
-of the running application. When the chaos moneky is causing problems then the check is DOWN otherwise its UP.
+of the running application. When the chaos monkey is causing problems then the check is DOWN otherwise its UP.
 
 You can also inspect the health-check from CLI via:
 
diff --git a/jbang/circuit-breaker/README.adoc b/jbang/circuit-breaker/README.adoc
index 57314de..c9604e7 100644
--- a/jbang/circuit-breaker/README.adoc
+++ b/jbang/circuit-breaker/README.adoc
@@ -1,6 +1,6 @@
 == Circuit Breaker
 
-This example shows how Camel JBang can use ciruit breaker EIP.
+This example shows how Camel JBang can use circuit breaker EIP.
 
 === Install JBang
 
diff --git a/jbang/custom-kamelet/README.adoc b/jbang/custom-kamelet/README.adoc
index d8c7bbf..4327b55 100644
--- a/jbang/custom-kamelet/README.adoc
+++ b/jbang/custom-kamelet/README.adoc
@@ -2,7 +2,7 @@
 
 This example is using a custom Kamelet that fetches random user data.
 
-The custom kamelets can be located locally or shared on github.
+The custom kamelets can be located locally or shared on GitHub.
 This example demonstrates how to run this in both situations.
 
 
@@ -37,7 +37,7 @@ Then you can run this example using:
 $ camel run user.java --local-kamelet-dir=../../custom-kamelets
 ----
 
-And to refer to the custom kamelets with link to github:
+And to refer to the custom kamelets with link to GitHub:
 
 [source,sh]
 ----
diff --git a/jbang/kafka-health/README.adoc b/jbang/kafka-health/README.adoc
index cc812d3..6ab475d 100644
--- a/jbang/kafka-health/README.adoc
+++ b/jbang/kafka-health/README.adoc
@@ -74,7 +74,7 @@ And you can also run the example by using wildcards, instead of typing every fil
 $ camel run * --health
 ----
 
-You can also run consumer and producer in two seperate integrations:
+You can also run consumer and producer in two separate integrations:
 
 [source,sh]
 ----
diff --git a/jbang/mqtt/README.adoc b/jbang/mqtt/README.adoc
index 2bc3f2b..41e14c7 100644
--- a/jbang/mqtt/README.adoc
+++ b/jbang/mqtt/README.adoc
@@ -52,7 +52,7 @@ Then you can run the Camel integration using:
 $ camel run mqtt.camel.yaml
 ----
 
-And then from another terminal (or run the integraiton with `--background` option),
+And then from another terminal (or run the integration with `--background` option),
 then send a message to the MQTT broker. This can be done with the help from camel-jbang
 where you can send a message as follows:
 
@@ -63,7 +63,7 @@ $ camel cmd send --body=file:payload.json mqtt
 
 This will send a message where the payload (body) is read from the local file named payload.json.
 The message is sent to an existing running Camel integration (named mqtt). Then Camel will
-then send the message to the MQTT broker. So in other words we use Camel as a proxy to send the
+send the message to the MQTT broker. So in other words we use Camel as a proxy to send the
 message to the actual MQTT broker.
 
 The Camel integration will then consume the payload and output in the console.
diff --git a/jbang/opensearch-search-log/README.adoc b/jbang/opensearch-search-log/README.adoc
index d1b75ba..2ca7959 100644
--- a/jbang/opensearch-search-log/README.adoc
+++ b/jbang/opensearch-search-log/README.adoc
@@ -1,6 +1,6 @@
-== Timer to Opensearch Search Source
+== Timer to OpenSearch Search Source
 
-In this sample you'll use the Opensearch Index Sink Kamelet and Opensearch Search Source Kamelet based on camel-opensearch component.
+In this sample you'll use the OpenSearch Index Sink Kamelet and OpenSearch Search Source Kamelet based on camel-opensearch component.
 
 === Install JBang
 
@@ -24,9 +24,9 @@ $ jbang app install camel@apache/camel
 
 Which allows to run CamelJBang with `camel` as shown below.
 
-=== Setup Opensearch
+=== Setup OpenSearch
 
-We are going to use the official Docker image for Opensearch.
+We are going to use the official Docker image for OpenSearch.
 
 We can run the following:
 
diff --git a/jbang/route-controller/README.adoc b/jbang/route-controller/README.adoc
index 2d74c1f..955a646 100644
--- a/jbang/route-controller/README.adoc
+++ b/jbang/route-controller/README.adoc
@@ -44,7 +44,7 @@ $ camel run *
 When the application is starting up, then the bar route will keep failing on startup,
 and the route controller will attempt to restart the route for up till 10 times.
 
-You can see activtiy in the console log, and as well from web console and CLI:
+You can see activity in the console log, and as well from web console and CLI:
 
 - http://localhost:8080/q/health
 - http://localhost:8080/q/dev/route-controller
diff --git a/jbang/snowflake-migration/README.adoc b/jbang/snowflake-migration/README.adoc
index 0db7c8c..2392f62 100644
--- a/jbang/snowflake-migration/README.adoc
+++ b/jbang/snowflake-migration/README.adoc
@@ -2,7 +2,7 @@
 
 In this sample you'll use a Snowflake instance from trial account.
 
-We are moving data between two tables, by using snowflake source and snowflake sink Kamelets.
+We are moving data between two tables, by using Snowflake source and Snowflake sink Kamelets.
 
 === Install JBang
 
diff --git a/jbang/timer-opensearch-index/README.adoc b/jbang/timer-opensearch-index/README.adoc
index aecf2db..106dbd4 100644
--- a/jbang/timer-opensearch-index/README.adoc
+++ b/jbang/timer-opensearch-index/README.adoc
@@ -1,6 +1,6 @@
-== Timer to Opensearch Index Sink
+== Timer to OpenSearch Index Sink
 
-In this sample you'll use the Opensearch Index Sink Kamelet based on camel-opensearch component.
+In this sample you'll use the OpenSearch Index Sink Kamelet based on camel-opensearch component.
 
 === Install JBang
 
@@ -24,9 +24,9 @@ $ jbang app install camel@apache/camel
 
 Which allows to run CamelJBang with `camel` as shown below.
 
-=== Setup Opensearch
+=== Setup OpenSearch
 
-We are going to use the official Docker image for Opensearch.
+We are going to use the official Docker image for OpenSearch.
 
 We can run the following:
 
diff --git a/kamelet-main/slack-source/README.adoc b/kamelet-main/slack-source/README.adoc
index 40b7adb..9bcf17b 100644
--- a/kamelet-main/slack-source/README.adoc
+++ b/kamelet-main/slack-source/README.adoc
@@ -1,6 +1,6 @@
 == Slack Source Example
 
-In this sample you'll use the Slack Source Kamelet throught camel-kamelet-main
+In this sample you'll use the Slack Source Kamelet through camel-kamelet-main
 
 === Setup the Slack Bot App