Posted to derby-user@db.apache.org by Paul Henry <pa...@raytion.com> on 2008/07/18 14:52:51 UTC

Insert/Update Performance

Hi All,

We have been using an embedded Derby database within our application for
some time now. Recently, however, we added a new customer with a
significantly larger amount of data in the application than our existing
customers, and that customer has been complaining about performance. We
have tracked the issue down to our persistence layer.

We are using Hibernate layered over Derby. The database is relatively
simple, just six or seven tables, and we have added indexes wherever
required, so query speed doesn't seem to be a problem. However, when
saving a new item into the database (an item creates one or two rows in
most of the tables and around 30 rows in two of the tables) we can
measure a clear degradation in insert time as the database grows.
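
For context, the save path looks roughly like this (the entity and
field names below are only illustrative, not our actual classes):

    // Illustrative sketch only -- the real entities/mappings differ.
    Session session = sessionFactory.openSession();
    Transaction tx = session.beginTransaction();
    try {
        Item item = new Item();                        // main table row
        for (int i = 0; i < 30; i++) {
            item.addAttribute(new Attribute("key" + i, "value" + i));
        }
        session.save(item);      // cascades to the associated tables
        tx.commit();
    } catch (RuntimeException e) {
        tx.rollback();
        throw e;
    } finally {
        session.close();
    }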

1) With an empty database, writing a new item took 31-140 ms.

2) With 1 million items in the database (so a couple of the tables have
~30 million rows), storing a new item took 1000-1500 ms.

3) With 2 million items in the database, storing a new item took
1600-2000 ms.

Similar delays were recorded when performing an update of existing data.

Has anyone had experience tuning the combination of Hibernate over Derby
at similar table sizes, or does anyone have advice on how to improve
performance while retaining both Hibernate and Derby?
-- 
View this message in context: http://www.nabble.com/Insert-Update-Performance-tp18528545p18528545.html
Sent from the Apache Derby Users mailing list archive at Nabble.com.


Re: Insert/Update Performance

Posted by Kristian Waagan <Kr...@Sun.COM>.
Paul Henry wrote:
> Hi All,
> 
> We have been using an embedded Derby database within our application for
> some time now. Recently, however, we added a new customer with a
> significantly larger amount of data in the application than our existing
> customers, and that customer has been complaining about performance. We
> have tracked the issue down to our persistence layer.
> 
> We are using Hibernate layered over Derby. The database is relatively
> simple, just six or seven tables, and we have added indexes wherever
> required, so query speed doesn't seem to be a problem. However, when
> saving a new item into the database (an item creates one or two rows in
> most of the tables and around 30 rows in two of the tables) we can
> measure a clear degradation in insert time as the database grows.
> 
> 1) With an empty database, writing a new item took 31-140 ms.
> 
> 2) With 1 million items in the database (so a couple of the tables have
> ~30 million rows), storing a new item took 1000-1500 ms.
> 
> 3) With 2 million items in the database, storing a new item took
> 1600-2000 ms.
> 
> Similar delays were recorded when performing an update of existing data.
> 
> Has anyone had experience tuning the combination of Hibernate over Derby
> at similar table sizes, or does anyone have advice on how to improve
> performance while retaining both Hibernate and Derby?

Hello Paul,

A few questions first to better understand your environment.

1) What's your Derby version? JVM? Platform?
2) Is this using a single connection to the database?
3) Is auto commit enabled (the default) or disabled? (See the sketch 
after this list.)
4) Are you sure indexes are being used when you insert/update data?
5) Can you try compressing the tables, to make sure the statistics are 
up to date, and see whether that helps on a database where the 
insert/update performance is poor? (Also covered in the sketch below.)
6) Since you are describing this as a performance problem that grows 
over time with the database size, I don't think disk I/O itself is the 
root cause, but can you describe what kind of I/O activity you see when 
inserting/updating data?
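
For 3) and 5), something along these lines can be used to check/disable
auto commit and to compress a table from JDBC. This is only a rough
sketch; the database URL, schema and table names are placeholders:

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;

    public class DerbyMaintenance {
        public static void main(String[] args) throws Exception {
            Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
            Connection conn =
                    DriverManager.getConnection("jdbc:derby:myDB");
            System.out.println("auto-commit: " + conn.getAutoCommit());
            conn.setAutoCommit(false);  // commit explicitly instead

            // Compress a table and rebuild its indexes, which also
            // refreshes the statistics the optimizer relies on.
            CallableStatement cs = conn.prepareCall(
                    "CALL SYSCS_UTIL.SYSCS_COMPRESS_TABLE(?, ?, ?)");
            cs.setString(1, "APP");     // schema name (placeholder)
            cs.setString(2, "ITEMS");   // table name (placeholder)
            cs.setShort(3, (short) 1);  // non-zero = sequential mode
            cs.execute();
            conn.commit();
            cs.close();
            conn.close();
        }
    }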

You might also want to post a bit of derby.log with statement logging 
enabled, if the table structure and/or data can be made public.
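
Statement logging can be turned on with the
derby.language.logStatementText property, for instance (a minimal sketch):

    // Set before the embedded driver/engine is loaded ...
    System.setProperty("derby.language.logStatementText", "true");
    // ... or put the equivalent line into derby.properties in the
    // Derby system directory:
    //     derby.language.logStatementText=true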


regards,
-- 
Kristian