Posted to users@camel.apache.org by Joseph Kampf <jo...@gmail.com> on 2016/08/11 18:22:26 UTC

Re: Suggestions on how to cluster Camel?

We are running multiple Karaf nodes.  Each is independent of the others, apart from a shared database, a load balancer, and an ActiveMQ broker cluster.

We wrote a web service that uses JMX to verify that all the Camel contexts in our Karaf container are up.  Our load balancer calls that web service to tell whether a node is available for traffic.
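Roughly, the check boils down to querying the JVM's MBean server for Camel context MBeans and confirming each reports a started state.  A minimal sketch (the object-name pattern and the "State" attribute follow Camel's default JMX naming, but verify them against your Camel version; the class and method names here are made up for illustration):

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import java.util.Set;

public class CamelHealthCheck {

    // Returns true only if at least one MBean matches the pattern and every
    // match reports the "Started" state.  Any JMX error counts as unhealthy,
    // which is the safe answer for a load-balancer probe.
    public static boolean allStarted(MBeanServerConnection conn, String pattern) {
        try {
            Set<ObjectName> names = conn.queryNames(new ObjectName(pattern), null);
            if (names.isEmpty()) {
                return false;  // no Camel contexts found => do not route traffic here
            }
            for (ObjectName name : names) {
                Object state = conn.getAttribute(name, "State");
                if (!"Started".equals(state)) {
                    return false;
                }
            }
            return true;
        } catch (Exception e) {
            return false;  // malformed pattern, connection loss, missing attribute, ...
        }
    }
}
```

Behind an HTTP endpoint, the load balancer's probe can then map true to 200 and false to 503 (that is how our probe works; your balancer's semantics may differ).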

We use ActiveMQ with active/passive failover.  For topics we use virtual topics, so a message is processed only once per subscriber.
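The "once per subscriber" behavior comes from ActiveMQ's default virtual-topic naming convention: producers publish to a topic named "VirtualTopic.<name>", and each subscriber group consumes from its own queue "Consumer.<group>.VirtualTopic.<name>", so competing consumers within a group share the work while every group still gets its own copy.  A tiny helper to build the per-group queue name (illustrative only):

```java
public class VirtualTopics {

    // Maps a subscriber group plus a virtual topic to the per-group queue
    // that ActiveMQ materializes under its default virtual-topic convention.
    public static String consumerQueue(String group, String virtualTopic) {
        if (!virtualTopic.startsWith("VirtualTopic.")) {
            throw new IllegalArgumentException("not a virtual topic: " + virtualTopic);
        }
        return "Consumer." + group + "." + virtualTopic;
    }
}
```

A Camel consumer endpoint would then look like activemq:queue:Consumer.billing.VirtualTopic.Orders ("billing" and "Orders" are made-up names for the example).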

For things like FTP and SQL/JPA consumers we use the Idempotent Consumer.  This prevents files or SQL rows from being processed by multiple nodes (or even by multiple threads in the same node).  All of our nodes point at the same database, where the Idempotent Consumer repository writes its rows.
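The mechanism reduces to a single atomic "claim" on a message id (for files, keying on the file name works well).  A sketch with an in-memory set standing in for the shared database table, so only the first claimer wins; the class name and keying here are illustrative, and in Camel proper this is the Idempotent Consumer EIP backed by a JDBC message-id repository shared by all nodes:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class IdempotentFileConsumer {

    // Stand-in for the shared JDBC repository: in production every node
    // points at the same database table, so the insert succeeds on exactly
    // one node and the others skip the message.
    private final Set<String> processed = ConcurrentHashMap.newKeySet();

    // Returns true only for the first node/thread to claim this message id;
    // later claims return false and the message is skipped.
    public boolean tryProcess(String messageId) {
        return processed.add(messageId);
    }
}
```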

We use Chef to provision our nodes with a custom Karaf feature that deploys everything we need to run our app.

Joe




On 7/22/16, 2:38 PM, "David Hoffer" <dh...@gmail.com> wrote:

>We have a standalone Camel app (runs as daemon with no container) that we
>need to cluster and I'm looking for options on how to do this.
>
>Our Camel app handles file routing.  All inputs are files so exchanges deal
>with byte arrays and the file name.  Destinations are either file folders
>or web-services where we attach the file and call the service to publish.
>Also currently we use JMX to remote manage and monitor.
>
>So how best to cluster this?  Technically what is most important in the
>cluster feature set is fail-over so we can guarantee high availability but
>it would be nice to get load balancing too.
>
>Our app gets its input via local disk folders (which we can convert to
>network shares (e.g. VNX)) or via external SFTP endpoints.  The app has
>about 100 of these folders/sftp endpoints.  So when clustered all the
>routes would be using the network shares instead of local folders.
>
>I'm assuming that file and sftp endpoints should handle this well as they
>already use a file lock to prevent contention.  However, we would need a
>solution for stale file locks left by clustered nodes that have failed.  How
>would the other nodes know they can delete the locks for failed nodes (but
>only for failed nodes)?
>
>Also, since we would now be processing routes concurrently, we would have to
>determine whether the receiving web service endpoints can handle concurrent
>connections.  Ideally I'd like to be able to control/tune the concurrent
>nature of each route (across the cluster) so that if needed we could
>limit/stop concurrent processing of a route but still always have fail-over
>cluster node support.
>
>Then there is the JMX issue: right now we have apps to manage and monitor
>route traffic, but somehow this would have to be aggregated across all nodes
>in the cluster.
>
>Are there any techniques or frameworks that could help us implement this?
>Any suggestions on approaches that work and what doesn't work?
>
>Thanks,
>-Dave