Posted to torque-user@db.apache.org by Brian McCallister <mc...@forthillcompany.com> on 2003/04/15 22:29:08 UTC
datadump on big tables
Is there a way to increase the memory available to the datadump task?
I am running it on a pretty big table (really big table) and am hitting
out of memory errors:
...
at org.apache.velocity.texen.ant.TexenTask.execute(TexenTask.java:564)
at org.apache.tools.ant.Task.perform(Task.java:319)
at org.apache.tools.ant.Target.execute(Target.java:309)
at org.apache.tools.ant.Target.performTasks(Target.java:336)
at org.apache.tools.ant.Project.executeTarget(Project.java:1306)
at org.apache.tools.ant.Project.executeTargets(Project.java:1250)
at org.apache.tools.ant.Main.runBuild(Main.java:610)
at org.apache.tools.ant.Main.start(Main.java:196)
at org.apache.tools.ant.Main.main(Main.java:235)
Caused by: java.lang.OutOfMemoryError
--- Nested Exception ---
java.lang.OutOfMemoryError
on the ant task. I configured ant to run with 512 megs, but profiling
it shows only 212 or so megs in use when the OOM error occurs - which
leads me to believe that the datadump is secretly forking or some such.
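(For anyone hitting the same thing: ANT_OPTS is the usual place to pass -Xmx to the JVM that ant runs in. A quick, generic way to check whether the flag actually reached that JVM is to print what the runtime itself reports - this is just a diagnostic sketch, nothing Torque-specific:)

```java
public class HeapCheck {
    // Print the heap ceiling the running JVM actually accepted.
    // Run with e.g.:  java -Xmx512m HeapCheck
    public static void main(String[] args) {
        long maxMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        long curMb = Runtime.getRuntime().totalMemory() / (1024 * 1024);
        System.out.println("max heap: " + maxMb + " MB");
        System.out.println("cur heap: " + curMb + " MB");
    }
}
```

If "max heap" comes back around 212 MB despite -Xmx512m, the setting is not reaching the forked/hosting JVM at all.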
Anyone played with this much?
I am doing this with Torque 3.0 on OS X jdk 1.4.1 release against
postgres 7.3.2 with most recent org.postgresql.Driver driver.
Thanks,
Brian
Re: datadump on big tables
Posted by Brian McCallister <mc...@forthillcompany.com>.
Followup to this problem...
Apparently it does run directly in the ant process, no fork - OS X has
issues allocating more than 212 megs to the JVM, it seems.
Moved it to Sun JDK 1.4.1_02 on Linux and allocated 1 gig per process
to the JVM, and it still ran out of memory. Profiling it, it really
does use the memory - 4 processes each consuming as much as I throw at
them (1 gig per process in this case, some 4 gigabytes total) - which
is off the charts of absurdity; the database isn't *that* big. A
postgres dump (type c, so fairly well compressed - bzipping the dump
only shaves off a few percent) is only 70 megs.
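(A guess at the cause, not something verified against the Torque source: the PostgreSQL JDBC driver buffers an entire ResultSet in memory unless autocommit is off and a fetch size is set, so any tool that does SELECT * over a big table without those settings holds every row at once. A minimal sketch of the streaming configuration - streamingStatement is just an illustrative name, not a Torque API:)

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class StreamingQuery {
    // Configure a statement so the PostgreSQL JDBC driver streams rows
    // via a cursor instead of buffering the whole result set in memory.
    static Statement streamingStatement(Connection conn, int fetchSize)
            throws SQLException {
        conn.setAutoCommit(false);   // pg JDBC requires this for cursor fetch
        Statement st = conn.createStatement();
        st.setFetchSize(fetchSize);  // e.g. 100 rows per round trip
        return st;
    }
}
```

With that in place, iterating the ResultSet keeps only fetchSize rows resident at a time, so memory use stays flat no matter how big the table is.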
Is the data dumper significantly different in 3.1?
-Brian
On Tuesday, April 15, 2003, at 04:29 PM, Brian McCallister wrote:
> Is there a way to increase the memory available to the datadump task?
>
> I am running it on a pretty big table (really big table) and am
> hitting out of memory errors:
>
> ...
> at org.apache.velocity.texen.ant.TexenTask.execute(TexenTask.java:564)
> at org.apache.tools.ant.Task.perform(Task.java:319)
> at org.apache.tools.ant.Target.execute(Target.java:309)
> at org.apache.tools.ant.Target.performTasks(Target.java:336)
> at org.apache.tools.ant.Project.executeTarget(Project.java:1306)
> at org.apache.tools.ant.Project.executeTargets(Project.java:1250)
> at org.apache.tools.ant.Main.runBuild(Main.java:610)
> at org.apache.tools.ant.Main.start(Main.java:196)
> at org.apache.tools.ant.Main.main(Main.java:235)
> Caused by: java.lang.OutOfMemoryError
> --- Nested Exception ---
> java.lang.OutOfMemoryError
>
> on the ant task. I configured ant to run with 512 megs, but profiling
> it shows only 212 or so megs in use when the OOM error occurs - which
> leads me to believe that the datadump is secretly forking or some such.
>
> Anyone played with this much?
>
> I am doing this with Torque 3.0 on OS X jdk 1.4.1 release against
> postgres 7.3.2 with most recent org.postgresql.Driver driver.
>
> Thanks,
>
> Brian
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: torque-user-unsubscribe@db.apache.org
> For additional commands, e-mail: torque-user-help@db.apache.org
>
>