Posted to users@camel.apache.org by developpef <sr...@orange.fr> on 2012/01/20 17:18:11 UTC

Aggregator consumes messages?

Hello,

Here is my question:

I have a route that polls a directory and sends the files found to a
ZipService four by four (each created zip has to contain all the files needed
by another program, e.g. data1.shx, data1.shp, data1.dbf, data1.prj, then
data2.shx, data2.shp, ...).

So here is my route:
from("file://shp/?noop=true")
    // Group all files for zipping (expecting 4 files: .shp, .shx, .prj, .dbf)
    .aggregate(simple("${file:onlyname.noext}"), new ZipfileAggregationStrategy())
    .completionSize(4)
    .log("Zipping")
    .setHeader("zipDestinationFolder", constant("/destination"))
    .to("bean:my.ZipService?method=zipFile")
    .log("Zipped: ${file:onlyname.noext}");

But because the files involved are very large (several MB each), I cannot
keep them in memory as the aggregated message body. So my aggregation
strategy only returns a list of file paths:

public Exchange aggregate(Exchange oldEx, Exchange newEx) {
    // first file of a group: just keep its exchange as-is
    if (oldEx == null) {
        return newEx;
    }
    Object oldIn = oldEx.getIn().getBody();
    List<String> list;
    if (oldIn instanceof GenericFile) {
        // second file: replace the first file's body by a list of paths
        list = new ArrayList<String>();
        list.add(((GenericFile<File>) oldIn).getAbsoluteFilePath());
    } else {
        // third and fourth file: the body is already the list of paths
        list = (List<String>) oldIn;
    }
    // add the new file's path and carry the list forward
    list.add(newEx.getIn().getBody(GenericFile.class).getAbsoluteFilePath());
    newEx.getIn().setBody(list);
    return newEx;
}

These paths are then passed to my ZipService, which does the final work. But
when it tries to open a file to read and zip it, an IOException is thrown
because the file does not exist anymore: it has been consumed by Camel. To
work around this I have to set the "noop=true" parameter and then delete the
files manually.
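
For illustration, a zipFile method along these lines reproduces what I am
doing (a simplified sketch only, not the exact service code; the base-name
logic, the buffer size and the class layout are just there to make it
self-contained; the bean name, method name and "zipDestinationFolder" header
are the ones from the route above):

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.List;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

import org.apache.camel.Body;
import org.apache.camel.Header;

public class ZipService {

    public void zipFile(@Body List<String> paths,
                        @Header("zipDestinationFolder") String destFolder) throws IOException {
        // name the zip after the common base name of the group, e.g. "data1.zip"
        String baseName = new File(paths.get(0)).getName().replaceFirst("\\.[^.]+$", "");
        ZipOutputStream out = new ZipOutputStream(
                new FileOutputStream(new File(destFolder, baseName + ".zip")));
        try {
            byte[] buffer = new byte[8192];
            for (String path : paths) {
                File file = new File(path);
                // this is where the IOException occurs when Camel has already
                // moved the consumed file out of the polled directory
                InputStream in = new FileInputStream(file);
                try {
                    out.putNextEntry(new ZipEntry(file.getName()));
                    for (int len; (len = in.read(buffer)) != -1; ) {
                        out.write(buffer, 0, len);
                    }
                    out.closeEntry();
                } finally {
                    in.close();
                }
            }
        } finally {
            out.close();
        }
        // with noop=true the source files stay in the polled directory,
        // so they have to be removed manually once the zip is written
        for (String path : paths) {
            new File(path).delete();
        }
    }
}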

Is this the expected behavior (the aggregation consuming the files), or am I
doing something wrong?

Thank you in advance.

-----
http://developpef.blogspot.com

Re: Aggregator consumes messages?

Posted by Claus Ibsen <cl...@gmail.com>.
Hi

Yes this is the expected behavior.
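
The reason, as far as I can tell: the file consumer post-processes a file
once its exchange is done being routed, and by default that means moving it
into a .camel sub-directory of the polled folder (delete=true would remove it
instead). With the aggregator in the route, the incoming exchange counts as
done as soon as it has been aggregated, so the file is moved away before the
aggregated exchange with your list of paths ever reaches the ZipService.
noop=true leaves the files where they are (and implies idempotent
consumption, so they are not picked up again), which is why you then have to
delete them yourself after zipping. A sketch of that setup, with the relevant
parts commented (this is just your own route again, nothing new added):

from("file://shp/?noop=true")        // leave files in place; idempotent=true is implied
    // correlate the 4 related files by their base name, e.g. "data1"
    .aggregate(simple("${file:onlyname.noext}"), new ZipfileAggregationStrategy())
    .completionSize(4)
    .setHeader("zipDestinationFolder", constant("/destination"))
    // the bean zips the 4 paths and must also delete the source files,
    // because with noop=true Camel will not touch them
    .to("bean:my.ZipService?method=zipFile");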


-- 
Claus Ibsen
-----------------
FuseSource
Email: cibsen@fusesource.com
Web: http://fusesource.com
Twitter: davsclaus, fusenews
Blog: http://davsclaus.blogspot.com/
Author of Camel in Action: http://www.manning.com/ibsen/