Posted to commits@camel.apache.org by ac...@apache.org on 2021/04/08 12:24:10 UTC

[camel] branch master updated (e8bbb70 -> 55c58ce)

This is an automated email from the ASF dual-hosted git repository.

acosentino pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/camel.git.


    from e8bbb70  CAMEL-16455: Optimize CircuitBreaker EIP with task pooling
     new a8c3bf7  CAMEL-16469 - Camel-AWS2-S3 - Streaming upload: restart from the last index when using the progressive naming strategy - docs
     new 55c58ce  Regen

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../camel/catalog/docs/aws2-s3-component.adoc       | 21 +++++++++++++++++++++
 .../src/main/docs/aws2-s3-component.adoc            | 21 +++++++++++++++++++++
 .../modules/ROOT/pages/aws2-s3-component.adoc       | 21 +++++++++++++++++++++
 3 files changed, 63 insertions(+)

[camel] 01/02: CAMEL-16469 - Camel-AWS2-S3 - Streaming upload: restart from the last index when using the progressive naming strategy - docs

Posted by ac...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

acosentino pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/camel.git

commit a8c3bf769f82a0643ff0012de1bd5bdd00a5cf95
Author: Andrea Cosentino <an...@gmail.com>
AuthorDate: Thu Apr 8 14:04:47 2021 +0200

    CAMEL-16469 - Camel-AWS2-S3 - Streaming upload: restart from the last index when using the progressive naming strategy - docs
---
 .../src/main/docs/aws2-s3-component.adoc            | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/components/camel-aws/camel-aws2-s3/src/main/docs/aws2-s3-component.adoc b/components/camel-aws/camel-aws2-s3/src/main/docs/aws2-s3-component.adoc
index 188dab6..843f297 100644
--- a/components/camel-aws/camel-aws2-s3/src/main/docs/aws2-s3-component.adoc
+++ b/components/camel-aws/camel-aws2-s3/src/main/docs/aws2-s3-component.adoc
@@ -578,6 +578,27 @@ from(kafka("topic2").brokers("localhost:9092"))
 
 The default size for a batch is 1 Mb, but you can adjust it according to your requirements.
 
+When you stop the producer route, the producer will flush the remaining buffered messages and complete the upload.
+
+In streaming upload mode you can restart the producer from the point where it stopped. Note that this matters only when using the progressive naming strategy.
+
+By setting restartingPolicy to lastPart, the producer will resume uploading files and content from the last part number it wrote.
+
+For example:
+
+- Start the route with the progressive naming strategy, keyName set to camel.txt, batchMessageNumber set to 20, and restartingPolicy set to lastPart
+- Send 70 messages
+- Stop the route
+- On your S3 bucket you should now see 4 files: camel.txt, camel-1.txt, camel-2.txt and camel-3.txt; the first three contain 20 messages each, the last one only 10
+- Restart the route
+- Send 25 messages
+- Stop the route
+- You'll now have 2 more files in your bucket: camel-5.txt and camel-6.txt, the first with 20 messages and the second with 5
+- And so on
+
+This is not needed when using the random naming strategy.
+
+Alternatively, you can set restartingPolicy to override. In that case the producer will overwrite whatever was written before (for that particular keyName) on your bucket.
+
 [NOTE]
 ====
 In Streaming upload mode the only keyName option that will be taken into account is the endpoint option. Using the header will throw an NPE and this is done by design.

[camel] 02/02: Regen

Posted by ac...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

acosentino pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/camel.git

commit 55c58cec8cc424ca9472a677d2680dbb959cb935
Author: Andrea Cosentino <an...@gmail.com>
AuthorDate: Thu Apr 8 14:20:28 2021 +0200

    Regen
---
 .../camel/catalog/docs/aws2-s3-component.adoc       | 21 +++++++++++++++++++++
 .../modules/ROOT/pages/aws2-s3-component.adoc       | 21 +++++++++++++++++++++
 2 files changed, 42 insertions(+)

diff --git a/catalog/camel-catalog/src/generated/resources/org/apache/camel/catalog/docs/aws2-s3-component.adoc b/catalog/camel-catalog/src/generated/resources/org/apache/camel/catalog/docs/aws2-s3-component.adoc
index 188dab6..843f297 100644
--- a/catalog/camel-catalog/src/generated/resources/org/apache/camel/catalog/docs/aws2-s3-component.adoc
+++ b/catalog/camel-catalog/src/generated/resources/org/apache/camel/catalog/docs/aws2-s3-component.adoc
@@ -578,6 +578,27 @@ from(kafka("topic2").brokers("localhost:9092"))
 
 The default size for a batch is 1 Mb, but you can adjust it according to your requirements.
 
+When you stop the producer route, the producer will flush the remaining buffered messages and complete the upload.
+
+In streaming upload mode you can restart the producer from the point where it stopped. Note that this matters only when using the progressive naming strategy.
+
+By setting restartingPolicy to lastPart, the producer will resume uploading files and content from the last part number it wrote.
+
+For example:
+
+- Start the route with the progressive naming strategy, keyName set to camel.txt, batchMessageNumber set to 20, and restartingPolicy set to lastPart
+- Send 70 messages
+- Stop the route
+- On your S3 bucket you should now see 4 files: camel.txt, camel-1.txt, camel-2.txt and camel-3.txt; the first three contain 20 messages each, the last one only 10
+- Restart the route
+- Send 25 messages
+- Stop the route
+- You'll now have 2 more files in your bucket: camel-5.txt and camel-6.txt, the first with 20 messages and the second with 5
+- And so on
+
+This is not needed when using the random naming strategy.
+
+Alternatively, you can set restartingPolicy to override. In that case the producer will overwrite whatever was written before (for that particular keyName) on your bucket.
+
 [NOTE]
 ====
 In Streaming upload mode the only keyName option that will be taken into account is the endpoint option. Using the header will throw an NPE and this is done by design.
diff --git a/docs/components/modules/ROOT/pages/aws2-s3-component.adoc b/docs/components/modules/ROOT/pages/aws2-s3-component.adoc
index 0bfe40c..8224896 100644
--- a/docs/components/modules/ROOT/pages/aws2-s3-component.adoc
+++ b/docs/components/modules/ROOT/pages/aws2-s3-component.adoc
@@ -580,6 +580,27 @@ from(kafka("topic2").brokers("localhost:9092"))
 
 The default size for a batch is 1 Mb, but you can adjust it according to your requirements.
 
+When you stop the producer route, the producer will flush the remaining buffered messages and complete the upload.
+
+In streaming upload mode you can restart the producer from the point where it stopped. Note that this matters only when using the progressive naming strategy.
+
+By setting restartingPolicy to lastPart, the producer will resume uploading files and content from the last part number it wrote.
+
+For example:
+
+- Start the route with the progressive naming strategy, keyName set to camel.txt, batchMessageNumber set to 20, and restartingPolicy set to lastPart
+- Send 70 messages
+- Stop the route
+- On your S3 bucket you should now see 4 files: camel.txt, camel-1.txt, camel-2.txt and camel-3.txt; the first three contain 20 messages each, the last one only 10
+- Restart the route
+- Send 25 messages
+- Stop the route
+- You'll now have 2 more files in your bucket: camel-5.txt and camel-6.txt, the first with 20 messages and the second with 5
+- And so on
+
+This is not needed when using the random naming strategy.
+
+Alternatively, you can set restartingPolicy to override. In that case the producer will overwrite whatever was written before (for that particular keyName) on your bucket.
+
 [NOTE]
 ====
 In Streaming upload mode the only keyName option that will be taken into account is the endpoint option. Using the header will throw an NPE and this is done by design.