Posted to user@cayenne.apache.org by Tore Halset <ha...@pvv.ntnu.no> on 2006/10/02 12:19:49 UTC
[OT] sync large db over slow network
Hello.
We have a large database for a Cayenne server application. The
database engine is currently MS SQL Server, but we may switch
over to PostgreSQL at some point.

For disaster recovery we want a mirrored system somewhere
else in the world. It is OK if the mirrored system is updated every
15 minutes or so. The data volume is too large to do a
full dump/restore of the database that often, but the content of the
database does not change much in any 15-minute window.
I have looked at Sequoia. It injects at the JDBC level and issues all
delete/update/insert statements against the backup database as well as
the main database. I am afraid that the extra layer of complexity will
degrade the main service. Does anyone here have experience with Sequoia
over network links that may be down from time to time? I wanted to
have one controller on each site, but have read on the mailing list that
the controllers need a reliable network between them.
I could do some of this myself at the JDBC level to ship changes
over to the backup database. Small tables could be mirrored completely,
with some smarter logic for the larger tables. Perhaps use DataPort
and/or ROP.

Any ideas?
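One way to sketch the "smart logic for larger tables" idea is timestamp-based incremental sync: give each replicated table a last-modified column, and every 15 minutes copy only the rows changed since the previous run. This is just an illustration, not the poster's actual schema — the table and column names are made up, and sqlite3 stands in for the real JDBC connections:

```python
import sqlite3

# Hypothetical schema: each replicated table carries a last_modified
# value so the sync job can fetch only rows changed since the last run.
main = sqlite3.connect(":memory:")
mirror = sqlite3.connect(":memory:")
for db in (main, mirror):
    db.execute("CREATE TABLE item (id INTEGER PRIMARY KEY,"
               " name TEXT, last_modified INTEGER)")

main.executemany("INSERT INTO item VALUES (?, ?, ?)",
                 [(1, "a", 100), (2, "b", 200), (3, "c", 300)])

def sync(src, dst, since):
    """Copy rows modified after `since` from src to dst; return new watermark."""
    rows = src.execute(
        "SELECT id, name, last_modified FROM item"
        " WHERE last_modified > ?", (since,)).fetchall()
    # Upsert into the mirror so both new and updated rows are handled.
    dst.executemany(
        "INSERT INTO item (id, name, last_modified) VALUES (?, ?, ?)"
        " ON CONFLICT(id) DO UPDATE SET name=excluded.name,"
        " last_modified=excluded.last_modified", rows)
    return max((r[2] for r in rows), default=since)

watermark = sync(main, mirror, 0)          # first run copies everything
main.execute("UPDATE item SET name='a2', last_modified=400 WHERE id=1")
watermark = sync(main, mirror, watermark)  # later runs ship only the changes
```

Note the main limitation of this approach: deleted rows leave no timestamp behind, so deletes need separate handling (a tombstone table or a trigger-maintained change log, as below).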
Regards,
- Tore.
Re: [OT] sync large db over slow network
Posted by Tore Halset <ha...@pvv.ntnu.no>.
Thanks, I will have another look at Slony. It does not handle oid
blobs, but I guess we can use bytea blobs, as our blobs are not that
big and we do not need streaming at the JDBC level.
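For what it's worth, the core idea behind Slony's replication is trigger-based change capture: triggers on each replicated table append every change to a log table, and a daemon ships and replays the log on the replica asynchronously, which also catches deletes. Slony does this with PostgreSQL triggers and its own log tables; the sqlite3 sketch below only illustrates the mechanism, with invented table names:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE item (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE changelog (seq INTEGER PRIMARY KEY AUTOINCREMENT,
                        op TEXT, id INTEGER, name TEXT);
-- Triggers record every change in order; a sync job would ship the
-- log rows to the mirror, replay them by seq, then purge them.
CREATE TRIGGER item_ins AFTER INSERT ON item BEGIN
  INSERT INTO changelog (op, id, name) VALUES ('I', new.id, new.name);
END;
CREATE TRIGGER item_upd AFTER UPDATE ON item BEGIN
  INSERT INTO changelog (op, id, name) VALUES ('U', new.id, new.name);
END;
CREATE TRIGGER item_del AFTER DELETE ON item BEGIN
  INSERT INTO changelog (op, id, name) VALUES ('D', old.id, NULL);
END;
""")
con.execute("INSERT INTO item VALUES (1, 'a')")
con.execute("UPDATE item SET name = 'b' WHERE id = 1")
con.execute("DELETE FROM item WHERE id = 1")
ops = [row[0] for row in con.execute("SELECT op FROM changelog ORDER BY seq")]
```

Because only the compact log rows cross the wire, this fits a slow or intermittently down link: the log simply accumulates until the connection comes back.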
- Tore.
On Oct 2, 2006, at 18:30, Michael Gentry wrote:
> If you are switching to PostgreSQL, perhaps this would work for you?
>
> http://gborg.postgresql.org/project/slony1/projdisplay.php
>
> /dev/mrg
>
> PS. I have no hands-on experience with it. I have just heard of it
> many times.
>
Re: [OT] sync large db over slow network
Posted by Michael Gentry <bl...@gmail.com>.
If you are switching to PostgreSQL, perhaps this would work for you?
http://gborg.postgresql.org/project/slony1/projdisplay.php
/dev/mrg
PS: I have no hands-on experience with it; I have just heard it mentioned many times.