Posted to j-users@xalan.apache.org by Lilantha Darshana <Li...@virtusa.com> on 2003/07/16 06:08:36 UTC

RE: FW: I'm getting java.lang.OutOfMemoryError when my XML (data) file is very large (260MB)

Even with a SAX or Stream source I could not achieve much performance gain,
but it was better than the other sources: I managed to transform my data file
within 31 min on a dual-processor machine with 1 GB of physical memory and
2 GB of virtual memory.
I'll take a look at "pruning" and "filtering" as an alternate approach if
applicable to my task.

Thanks
Lilantha

-----Original Message-----
From: Joseph Kesselman [mailto:keshlam@us.ibm.com]
Sent: Tuesday, July 15, 2003 8:16 PM
To: Lilantha Darshana
Cc: 'Simon Kitching'; 'xalan-j-users@xml.apache.org'
Subject: RE: FW: I'm getting java.lang.OutOfMemoryError when my XML
(data) file is very large (260MB)

Actually, if you pass Xalan-J a SAX or Stream source, we will use a DTM
model internally, which is more space-efficient than a typical DOM
implementation (and especially more efficient than our DOM-to-DTM bridge
code).
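Joe's point can be sketched with the standard JAXP API: handing the transformer a StreamSource (rather than a DOMSource) lets Xalan-J parse straight into its internal DTM. The stylesheet and input below are hypothetical placeholders, not from the thread; for a 260MB file you would of course pass file paths rather than in-memory strings.

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamSource;
import javax.xml.transform.stream.StreamResult;

public class StreamTransform {
    // Hypothetical minimal stylesheet: wraps the root element's name in <out>.
    static final String XSL =
        "<xsl:stylesheet version='1.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
      + "<xsl:output method='xml' omit-xml-declaration='yes'/>"
      + "<xsl:template match='/'><out><xsl:value-of select='name(/*)'/></out></xsl:template>"
      + "</xsl:stylesheet>";

    static String transform(String xml) throws Exception {
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(XSL)));
        // A StreamSource (not a DOMSource) lets Xalan-J build its internal
        // DTM directly, skipping the less efficient DOM-to-DTM bridge.
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(xml)), new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(transform("<doc>hello</doc>"));
    }
}
```

For a large file on disk, replace the StringReader sources with `new StreamSource(new java.io.File("input.xml"))`; the key is simply not building a DOM first.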

However, it is true that XSLT in general requires that the whole document
be accessible at once, which generally means an in-memory data model.
Search the archives of this list for discussion of "pruning" and "filtering";
there are definitely opportunities for deep analysis and optimization here,
but they're a moderately difficult problem, and I don't know of any XSLT
implementation that takes serious advantage of them yet.

______________________________________
Joe Kesselman, IBM Next-Generation Web Technologies: XML, XSL and more.
"The world changed profoundly and unpredictably the day Tim Berners Lee
got bitten by a radioactive spider." -- Rafe Culpin, in r.m.filk