Posted to j-users@xalan.apache.org by Marco Büchler <ma...@studserv.uni-leipzig.de> on 2005/03/26 10:38:41 UTC

[problem with transforming large documents]

hi,

I'm evaluating ways to reduce the number of connections to MySQL, so for 
some special DB requests I'm trying to use pre-generated ("precompiled") 
XML instead, which avoids complex join operations.

The original file exported from MySQL is about 4 GB, and I think I can 
reduce it to about 1 GB. For this I want to write some XSL filters that 
work on the original file.

My code is below (somewhere in the Xalan examples there should be a similar one):

/*
 * StreamTransformer.java
 *
 * Created on 26 March 2005, 10:10
 */

package de.uni_leipzig.asv.marvin.test;

import javax.xml.transform.*;
import javax.xml.transform.stream.*;

/**
 * Applies an XSL stylesheet to an input file and writes the result to
 * an output file, using streaming sources and results throughout.
 *
 * @author  buechler
 */
public class StreamTransformer {

    /** Creates a new instance of StreamTransformer */
    public StreamTransformer() {
    }

    public void transform( String strInFile, String strXSLFile, String strOutFile )
            throws TransformerException, TransformerConfigurationException {
        // compile the stylesheet
        StreamSource objXSLSource = new StreamSource( strXSLFile );
        TransformerFactory objTransFactory = TransformerFactory.newInstance();
        Transformer objTransformer = objTransFactory.newTransformer( objXSLSource );
        // run the transformation from stream to stream
        objTransformer.transform( new StreamSource( strInFile ),
                                  new StreamResult( strOutFile ) );
    }

}

A small document (about 5 MB) is transformed correctly, but the 4 GB 
document produces a "java.lang.OutOfMemoryError". This suggests to me 
that a DOM-like tree is built internally, even though I used StreamSource 
and StreamResult. The target file is, of course, empty in this case.
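
To check where the memory goes, a plain SAX pass over the same file 
should run in constant memory, since nothing is kept in the heap. Here 
is a minimal, untested sketch of such a probe (the element name "row" is 
only a placeholder for the actual record element):

package de.uni_leipzig.asv.marvin.test;

import javax.xml.parsers.SAXParserFactory;

import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

/**
 * Untested sketch: parses a huge file with plain SAX and counts the
 * record elements, to confirm that pure streaming needs no large heap.
 */
public class SaxProbe extends DefaultHandler {

    private long lngRows = 0;

    public void startElement( String uri, String localName, String qName,
            Attributes attributes ) {
        // "row" is a placeholder for the actual record element name
        if ( "row".equals( qName ) ) {
            lngRows++;
        }
    }

    public static void main( String[] args ) throws Exception {
        SaxProbe objProbe = new SaxProbe();
        SAXParserFactory.newInstance().newSAXParser()
                .parse( new java.io.File( args[0] ), objProbe );
        System.out.println( objProbe.lngRows + " rows" );
    }
}

If this runs through without error, the limit is apparently not in 
parsing as such but in the internal tree the transformer builds for the 
whole document.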

Does anybody know what's wrong? And does anybody have experience with 
processing large files, especially regarding performance?
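
The only workaround I can think of so far is to split the input at 
record boundaries into smaller files and transform each one separately, 
roughly like the following untested sketch (the chunk file names are 
hypothetical and assume an external splitter has already cut the file at 
record boundaries):

package de.uni_leipzig.asv.marvin.test;

import javax.xml.transform.TransformerException;

/**
 * Untested sketch: runs the stylesheet over pre-split chunk files one
 * at a time, so only one chunk has to fit into memory at once.
 */
public class ChunkedTransform {

    public static void main( String[] args ) throws TransformerException {
        int intChunks = Integer.parseInt( args[0] ); // number of chunk files
        String strXSLFile = args[1];
        StreamTransformer objTransformer = new StreamTransformer();
        for ( int i = 0; i < intChunks; i++ ) {
            // chunk-i.xml / out-i.xml are hypothetical names produced
            // by an external splitter cutting at record boundaries
            objTransformer.transform( "chunk-" + i + ".xml", strXSLFile,
                    "out-" + i + ".xml" );
        }
    }
}

The per-chunk results would then have to be concatenated again, and of 
course this only works if the stylesheet does not need to see the whole 
document at once.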


regards
mb