Posted to commits@camel.apache.org by ac...@apache.org on 2021/04/08 12:24:11 UTC

[camel] 01/02: CAMEL-16469 - Camel-AWS2-S3 - Streaming upload: restart from the last index when using the progressive naming strategy - docs

This is an automated email from the ASF dual-hosted git repository.

acosentino pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/camel.git

commit a8c3bf769f82a0643ff0012de1bd5bdd00a5cf95
Author: Andrea Cosentino <an...@gmail.com>
AuthorDate: Thu Apr 8 14:04:47 2021 +0200

    CAMEL-16469 - Camel-AWS2-S3 - Streaming upload: restart from the last index when using the progressive naming strategy - docs
---
 .../src/main/docs/aws2-s3-component.adoc            | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/components/camel-aws/camel-aws2-s3/src/main/docs/aws2-s3-component.adoc b/components/camel-aws/camel-aws2-s3/src/main/docs/aws2-s3-component.adoc
index 188dab6..843f297 100644
--- a/components/camel-aws/camel-aws2-s3/src/main/docs/aws2-s3-component.adoc
+++ b/components/camel-aws/camel-aws2-s3/src/main/docs/aws2-s3-component.adoc
@@ -578,6 +578,27 @@ from(kafka("topic2").brokers("localhost:9092"))
 
 The default size for a batch is 1 Mb, but you can adjust it according to your requirements.
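+
+For instance, a route along these lines would raise the batch size to roughly 2 MB (a minimal sketch using a plain endpoint URI; the bucket name mycamelbucket and the direct:stream consumer are illustrative placeholders, and batchSize is assumed to be expressed in bytes):
+
+[source,java]
+----
+// illustrative sketch: batchSize (assumed to be in bytes) raised to ~2 MB
+from("direct:stream")
+    .to("aws2-s3://mycamelbucket?streamingUploadMode=true&keyName=camel.txt&batchSize=2000000");
+----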
 
+When you stop your producer route, the producer will take care of flushing the remaining buffered messages and completing the upload.
+
+In Streaming upload mode you'll be able to restart the producer from the point where it left off. It's important to note that this feature is relevant only when using the progressive naming strategy.
+
+By setting the restartingPolicy option to lastPart, the producer will restart uploading files and content from the last part number it left off at.
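+
+A minimal sketch of such a route, using a plain endpoint URI (the bucket name mycamelbucket and the direct:stream consumer are illustrative placeholders; the options are the ones described in this section). The walkthrough below assumes this configuration:
+
+[source,java]
+----
+// illustrative sketch: progressive naming plus restartingPolicy=lastPart
+from("direct:stream")
+    .to("aws2-s3://mycamelbucket?streamingUploadMode=true&namingStrategy=progressive"
+        + "&keyName=camel.txt&batchMessageNumber=20&restartingPolicy=lastPart");
+----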
+
+As an example:
+
+- Start the route with the progressive naming strategy, keyName set to camel.txt, batchMessageNumber set to 20, and restartingPolicy set to lastPart
+- Send 70 messages
+- Stop the route
+- In your S3 bucket you should now see 4 files: camel.txt, camel-1.txt, camel-2.txt and camel-3.txt; the first three will contain 20 messages each, while the last one only 10
+- Restart the route
+- Send 25 messages
+- Stop the route
+- You'll now have 2 more files in your bucket: camel-5.txt and camel-6.txt, the first with 20 messages and the second with 5
+- And so on
+
+This won't be needed when using the random naming strategy.
+
+Alternatively, you can specify the override restartingPolicy. In that case you'll be able to override whatever was written before (for that particular keyName) in your bucket.
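+
+A sketch of the override case, again with illustrative placeholder names (mycamelbucket, direct:stream); per the behavior described above, on restart the content previously written for camel.txt is overwritten:
+
+[source,java]
+----
+// illustrative sketch: restart overwrites what was previously written for this keyName
+from("direct:stream")
+    .to("aws2-s3://mycamelbucket?streamingUploadMode=true&namingStrategy=progressive"
+        + "&keyName=camel.txt&batchMessageNumber=20&restartingPolicy=override");
+----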
+
 [NOTE]
 ====
 In Streaming upload mode the only keyName option that will be taken into account is the endpoint option. Using the header will throw an NPE and this is done by design.