Posted to derby-dev@db.apache.org by "Nathan Boy (JIRA)" <ji...@apache.org> on 2009/05/13 15:44:45 UTC

[jira] Issue Comment Edited: (DERBY-3009) Out of memory error when creating a very large table

    [ https://issues.apache.org/jira/browse/DERBY-3009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12708918#action_12708918 ] 

Nathan Boy edited comment on DERBY-3009 at 5/13/09 6:43 AM:
------------------------------------------------------------

I have this problem as well, using both Derby 10.5.1.1 and 10.4.2.0 in an embedded client.  I have a schema of about 16 tables, a few of which generally have 200-300k rows.  All of the data is loaded in, and then the foreign key constraints are added one by one.  I tried committing between each ADD CONSTRAINT statement, but this did not seem to have any effect.  I still run out of memory even when the heap size is set to 2-3 GB.  I have not yet tried shutting down and restarting the database between each ADD CONSTRAINT statement; I will try this next.
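The workaround sequence described above (commit after each constraint, then shut down and restart the database) can be sketched as an ij script. The database path, table names, and constraint names below are hypothetical; note that Derby reports SQLSTATE 08006 on a successful shutdown, which ij displays as an error.

```sql
connect 'jdbc:derby:mydb';
autocommit off;

-- Add one constraint, then commit it.
ALTER TABLE orders ADD CONSTRAINT fk_orders_customer
    FOREIGN KEY (customer_id) REFERENCES customers (id);
commit;

-- Shut the database down between constraints to release memory
-- held by the engine (08006 here is the normal shutdown signal).
connect 'jdbc:derby:mydb;shutdown=true';

-- Reconnect and add the next constraint.
connect 'jdbc:derby:mydb';
autocommit off;
ALTER TABLE order_items ADD CONSTRAINT fk_items_order
    FOREIGN KEY (order_id) REFERENCES orders (id);
commit;
```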

      was (Author: nathanboy):
    I have this problem as well, in Derby 10.5.1.1 and 10.4.2.0.  I have a schema of about 16 tables, a few of which generally have 200-300k rows.  All of the data is loaded in, and then foreign key constraints are added one by one.  I tried committing between each ADD CONSTRAINT statement, but this did not seem to have any effect.  I still run out of memory even when the heap size is set to 2-3 GB.  I have not tried shutting down and starting up the database between each ADD CONSTRAINT statement.  I will try this next.
  
> Out of memory error when creating a very large table
> ----------------------------------------------------
>
>                 Key: DERBY-3009
>                 URL: https://issues.apache.org/jira/browse/DERBY-3009
>             Project: Derby
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 10.2.2.0
>         Environment: Win XP Pro
>            Reporter: Nick Williamson
>         Attachments: DERBY-3009.zip
>
>
> When creating an extremely large table (c.50 indexes, c.50 FK constraints), IJ crashes with an out of memory error. The table can be created successfully if it is done in stages, each one in a different IJ session.
> From Kristian Waagan:
> "With default settings on my machine, I also get the OOME.
> A brief investigation revealed a few things:
>   1) The OOME occurs during constraint additions (with ALTER TABLE ... 
> ADD CONSTRAINT). I could observe this by monitoring the heap usage.
>   2) The complete script can be run by increasing the heap size. I tried with 256 MB, but the monitoring showed usage peaked at around 150 MB.
>   3) The stack traces produced when the OOME occurs vary (as could be expected).
>   4) It is the Derby engine that "produce" the OOME, not ij (i.e. when I ran with the network server, the server failed).
> I have not had time to examine the heap content, but I do believe there is a bug in Derby. It seems some resource is not freed after use."
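For reference, the heap-size workaround Kristian describes amounts to raising the JVM maximum heap when launching ij. The jar locations and script name below are illustrative:

```shell
java -Xmx256m -cp derby.jar:derbytools.jar org.apache.derby.tools.ij create_table.sql
```

This only masks the symptom; as noted above, usage peaked around 150 MB for this script, so larger schemas may need a correspondingly larger -Xmx until the underlying leak is fixed.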

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.