Posted to users@tomcat.apache.org by Peter Crowther <Pe...@melandra.com> on 2007/10/15 11:38:57 UTC

[OT] Replication (was RE: Copying large files around)

[Marked off-topic as this has, indeed, come a long long way from the
original question]

> From: Johnny Kewl [mailto:john@kewlstuff.co.za] 
> but yes if the user could consider replication 
> and the required
> dB design, much better than moving GB files around.

Not always.  For example, replication has the disadvantage that an
application logic error can get replicated and break both your live and
your backup database (although I accept that point-in-time log-based
recovery can mitigate this).  Point-in-time backups that are copied
between machines have a long and honourable history; they're also often
much easier for the ops team to get their heads round, where a
replicated system can set heads spinning.  You also have to be very
careful with your replication setup - and the guarantees you give your
client - in case of partial failures.  For example, if communication to
the second system fails, do you stop accepting transactions on the
primary (potential loss of new business on an Internet site), or do you
continue, but in the knowledge that if the primary fails you're now
losing transactions and hence violating ACID principles from the
client's point of view (potential loss of old business on an Internet
site)?
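
(In code terms, the choice looks something like the minimal Java sketch
below.  The Replica interface, Policy flag, and method names are all
invented for illustration - this is not any real replication API.)

    import java.io.IOException;

    /** Toy model of the partial-failure choice: refuse writes while the
     *  replica is unreachable, or accept them and risk losing history. */
    class PartialFailureDemo {
        enum Policy { REFUSE_WRITES, ACCEPT_AND_RISK_LOSS }

        interface Replica { void apply(String tx) throws IOException; }

        /** Returns true if the write was accepted; assumes the transaction
         *  has already been applied on the primary. */
        static boolean write(String tx, Replica secondary, Policy policy) {
            try {
                secondary.apply(tx);       // block until the replica acks
                return true;               // both copies now agree
            } catch (IOException replicaDown) {
                if (policy == Policy.REFUSE_WRITES) {
                    // roll back on the primary: no new business while the
                    // link is down, but the ACID guarantee survives
                    return false;
                }
                // carry on: if the primary dies before the link heals, this
                // transaction exists nowhere else and the client's ACID
                // view is violated
                return true;
            }
        }
    }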

(As you can probably guess, I've designed and built my share of
replicated databases - and supported them!)

		- Peter

---------------------------------------------------------------------
To start a new topic, e-mail: users@tomcat.apache.org
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org


Re: [OT] Replication (was RE: Copying large files around)

Posted by Johnny Kewl <jo...@kewlstuff.co.za>.
---------------------------------------------------------------------------
HARBOR: http://coolharbor.100free.com/index.htm
Now Tomcat is also a cool application server
---------------------------------------------------------------------------
----- Original Message ----- 
From: "Peter Crowther" <Pe...@melandra.com>
To: "Tomcat Users List" <us...@tomcat.apache.org>
Sent: Monday, October 15, 2007 11:38 AM
Subject: [OT] Replication (was RE: Copying large files around)


[snip - Peter's message quoted in full above]

=================================================
Yes, exactly - all true.  Replication requires very careful design
consideration; someone deletes one dB and it ripples through the other
10.  Everything you say is true, and it's the reason I made a
master-master PostgreSQL replication system.  It tries to address all
the weaknesses while enjoying the benefits.
It's a long, long time ago so I can't remember all the design details,
but that's what I tried to avoid... things like multiple masters to
begin with, so if, say, the 10th floor goes down, only the direct
users - i.e. the ones talking directly to that dB - are affected;
when it comes back up, all the other masters bring it back into
alignment.
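
(A toy Java sketch of that re-alignment idea - the names are invented
for illustration, not taken from the actual design: each surviving
master keeps a per-peer backlog of changes it couldn't deliver and
replays it, in order, when the peer returns.)

    import java.util.ArrayDeque;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.Queue;
    import java.util.function.Consumer;

    /** Each master queues changes for a downed peer and replays them,
     *  oldest first, when the peer comes back up. */
    class PeerBacklog {
        private final Map<String, Queue<String>> pending = new HashMap<>();

        void record(String peer, String change) {
            pending.computeIfAbsent(peer, p -> new ArrayDeque<>()).add(change);
        }

        /** Drain the backlog into the returning peer. */
        void realign(String peer, Consumer<String> applyOnPeer) {
            Queue<String> backlog = pending.getOrDefault(peer, new ArrayDeque<>());
            while (!backlog.isEmpty()) {
                applyOnPeer.accept(backlog.poll());
            }
        }
    }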
Also things like allowing one to make one dB non-deletable... it's a
pure backup, and even if someone kills the main dB, this dB replicates
updates and inserts but never deletes... it becomes a complete history.
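
(Roughly, that never-delete rule amounts to a relay that forwards
inserts and updates but drops anything destructive.  This Java sketch
uses invented names and crude statement-level filtering - not the
actual implementation:)

    import java.sql.Connection;
    import java.sql.SQLException;
    import java.sql.Statement;

    /** Forwards replicated statements to the history dB, silently
     *  dropping anything that would remove rows. */
    class HistoryRelay {
        private final Connection historyDb;

        HistoryRelay(Connection historyDb) { this.historyDb = historyDb; }

        void relay(String sql) throws SQLException {
            String head = sql.trim().toLowerCase();
            if (head.startsWith("delete") || head.startsWith("truncate")
                    || head.startsWith("drop")) {
                return;  // the history dB never loses anything
            }
            try (Statement st = historyDb.createStatement()) {
                st.execute(sql);  // inserts and updates flow through as-is
            }
        }
    }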

However, even with all this... to get it right, one needs to consider
replication when designing the dB... it still has to be a total system
design, and sadly it's not a plug-and-play technology.  I use TC as a
web-enabled controller, and originally I wanted to extend it to other
dBs as well, but I was exhausted after just designing it for Postgres.
I also tried to design it for multiple dispersed web clusters... i.e.
updates in New York and London would eventually sync up and vice versa,
but I never got around to testing it over the web.

But yes, all you say is true, and although it's fascinating technology,
the brunt of it is that if replication wasn't considered up front,
you're probably resigned to good old-fashioned backups.

Everything I build runs on good old Tomcat... so I think I'm still legal if 
I talk about this ;)

... the general knowledge in this group is fantastic... thanks

