Posted to dev@nifi.apache.org by David Marrow <dm...@cloudgrasp.com> on 2017/11/09 03:47:02 UTC

Flow file failover using a cluster.

Dev,

I know you are working on a solution to this, but in the meantime I wanted to ask how best to implement HA for flow files across a cluster.   One option I saw was to use an NFS mount: if a node fails, start up another node and point it at the failed node's repositories.   I also saw some comments about designing a workflow to auto-recover.   My guess is to place a marker for the flow file in a directory and, as the last step of the flow, delete it.  If the marker ages past some threshold, you know the flow failed to complete, so you reprocess it.    I just wanted to get your input.
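The age-off idea could be sketched roughly like this (a minimal illustration only; the marker directory, threshold, and function name are my own assumptions, not NiFi APIs):

```python
import os
import time

def stale_markers(marker_dir, max_age_seconds, now=None):
    """Return marker filenames older than max_age_seconds.

    Each in-flight flow file drops a marker into marker_dir when it
    enters the flow; the last processor in the flow deletes it.  Any
    marker that outlives the threshold is presumed failed and should
    be requeued for reprocessing.
    """
    now = time.time() if now is None else now
    stale = []
    for name in os.listdir(marker_dir):
        path = os.path.join(marker_dir, name)
        # mtime of the marker approximates when processing started
        if now - os.path.getmtime(path) > max_age_seconds:
            stale.append(name)
    return sorted(stale)
```

A sweep like this could run on a timer and feed the stale names back into the front of the flow; the threshold would have to exceed your worst-case end-to-end processing time to avoid duplicate work.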

Also, do you have any updates on a solution or a timeframe?

Dave