Posted to derby-dev@db.apache.org by "Sunitha Kambhampati (JIRA)" <de...@db.apache.org> on 2006/01/09 18:34:52 UTC

[jira] Created: (DERBY-802) OutofMemory Error when reading large blob when statement type is ResultSet.TYPE_SCROLL_INSENSITIVE

OutofMemory Error when reading large blob when statement type is ResultSet.TYPE_SCROLL_INSENSITIVE
--------------------------------------------------------------------------------------------------

         Key: DERBY-802
         URL: http://issues.apache.org/jira/browse/DERBY-802
     Project: Derby
        Type: Bug
  Components: JDBC  
    Versions: 10.0.2.0, 10.0.2.1, 10.1.1.0, 10.1.1.1, 10.1.1.2, 10.1.2.0, 10.1.2.1, 10.2.0.0, 10.1.3.0, 10.1.2.2, 10.0.2.2    
 Environment: all
    Reporter: Sunitha Kambhampati


Grégoire Dubois reported this problem on the list. From his mail: the reproduction is attached below.
When the statement type is set to ResultSet.TYPE_SCROLL_INSENSITIVE, an OutOfMemoryError is thrown when reading large BLOBs.

import java.sql.*;
import java.io.*;

/**
*
* @author greg
*/
public class derby_filewrite_fileread {
   
    private static File file = new File("/mnt/BigDisk/Clips/BabyMamaDrama-JShin.wmv");
    private static File destinationFile = new File("/home/greg/DerbyDatabase/"+file.getName());
   
    /** Creates a new instance of derby_filewrite_fileread */
    public derby_filewrite_fileread() {       
       
    }
   
    public static void main(String args[]) {
        try {
            Class.forName("org.apache.derby.jdbc.EmbeddedDriver").newInstance();
            Connection connection = DriverManager.getConnection ("jdbc:derby:/home/greg/DerbyDatabase/BigFileTestDB;create=true", "APP", "");
            connection.setAutoCommit(false);
           
            Statement statement = connection.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
            ResultSet result = statement.executeQuery("SELECT TABLENAME FROM SYS.SYSTABLES");
           
            // Create the table if it doesn't already exist.
            boolean exist=false;
            while ( result.next() ) {
                if ("db_file".equalsIgnoreCase(result.getString(1)))
                    exist=true;
            }
            if ( !exist ) {
                System.out.println("Create table db_file.");
                statement.execute("CREATE TABLE db_file ("+
                                           "     name          VARCHAR(40),"+
                                           "     file          BLOB(2G) NOT NULL)");
                connection.commit();
            }
           
            // Read file from disk, write on DB.
            System.out.println("1 - Read file from disk, write on DB.");
            PreparedStatement preparedStatement=connection.prepareStatement("INSERT INTO db_file(name,file) VALUES (?,?)");
            FileInputStream fileInputStream = new FileInputStream(file);
            preparedStatement.setString(1, file.getName());
            preparedStatement.setBinaryStream(2, fileInputStream, (int)file.length());           
            preparedStatement.execute();
            connection.commit();
            System.out.println("2 - END OF Read file from disk, write on DB.");
           
           
            // Read file from DB, and write on disk.
            System.out.println("3 - Read file from DB, and write on disk.");
            result = statement.executeQuery("SELECT file FROM db_file WHERE name='"+file.getName()+"'");
            byte[] buffer = new byte[1024];
            result.next();
            BufferedInputStream inputStream = new BufferedInputStream(result.getBinaryStream(1), 1024);
            FileOutputStream outputStream = new FileOutputStream(destinationFile);
            int readBytes;
            while ((readBytes = inputStream.read(buffer, 0, buffer.length)) != -1) {
                outputStream.write(buffer, 0, readBytes);
            }
            inputStream.close();
            outputStream.close();
            System.out.println("4 - END OF Read file from DB, and write on disk.");
        }
        catch (Exception e) {
            e.printStackTrace(System.err);
        }
    }
}


It returns
1 - Read file from disk, write on DB.
2 - END OF Read file from disk, write on DB.
3 - Read file from DB, and write on disk.
java.lang.OutOfMemoryError
if the file is ~10MB or more

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira


[jira] Commented: (DERBY-802) OutofMemory Error when reading large blob when statement type is ResultSet.TYPE_SCROLL_INSENSITIVE

Posted by "Sunitha Kambhampati (JIRA)" <de...@db.apache.org>.
    [ http://issues.apache.org/jira/browse/DERBY-802?page=comments#action_12362225 ] 

Sunitha Kambhampati commented on DERBY-802:
-------------------------------------------

From the list, Dan suggested the following workaround: change the statement type to the default ResultSet.TYPE_FORWARD_ONLY, and this works fine.
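
A minimal sketch of the workaround (the helper class and method name below are illustrative, not from the report): create the statement with the default forward-only type so the engine can stream the BLOB instead of caching the row for scrolling.

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

class ForwardOnlyWorkaround {
    // TYPE_FORWARD_ONLY is the default statement type, so a plain
    // connection.createStatement() call behaves the same way.
    static Statement create(Connection connection) throws SQLException {
        return connection.createStatement(ResultSet.TYPE_FORWARD_ONLY,
                                          ResultSet.CONCUR_READ_ONLY);
    }
}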

===================

I modified the repro to read a 100MB blob, did a heap profiler run, and found the following trace.
         percent         live       alloc'ed  stack class
 rank   self  accum    bytes objs   bytes objs trace name
    1 59.59% 59.59% 30653040  935 31682912 1046  2050 [B
    2 32.61% 92.20% 16777232    1 33489040    9  7268 [B
    3  3.12% 95.33%  1606416   49 1737552   53  6935 [B
    4  0.35% 95.68%   181632  946  194688 1014  2014 org.apache.derby.impl.store.raw.data.StoredPage
......
 I haven't looked very closely here, but looking at this trace and the code, it seems to me that some materialization is happening.

TRACE 7268:
	org.apache.derby.iapi.types.SQLBinary.readFromStream(SQLBinary.java:384)
	org.apache.derby.iapi.types.SQLBinary.readExternal(SQLBinary.java:301)
	org.apache.derby.iapi.types.SQLBinary.getValue(SQLBinary.java:194)
	org.apache.derby.iapi.types.SQLBinary.getClone(SQLBinary.java:498)
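
For reference, a trace like the one above can be collected with the JDK's hprof agent (a sketch; the report does not say exactly which options were used):

java -Xrunhprof:heap=sites,depth=8 derby_filewrite_fileread

with derby.jar on the classpath; the "SITES" ranking and the numbered stack traces (such as TRACE 7268 above) are written to java.hprof.txt when the VM exits.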


> OutofMemory Error when reading large blob when statement type is ResultSet.TYPE_SCROLL_INSENSITIVE
> --------------------------------------------------------------------------------------------------
>
>          Key: DERBY-802
>          URL: http://issues.apache.org/jira/browse/DERBY-802
>      Project: Derby
>         Type: Bug
>   Components: JDBC
>     Versions: 10.0.2.0, 10.0.2.1, 10.1.1.0, 10.1.1.1, 10.1.1.2, 10.1.2.0, 10.1.2.1, 10.2.0.0, 10.1.3.0, 10.1.2.2, 10.0.2.2
>  Environment: all
>     Reporter: Sunitha Kambhampati

>
> [...]



[jira] Commented: (DERBY-802) OutofMemory Error when reading large blob when statement type is ResultSet.TYPE_SCROLL_INSENSITIVE

Posted by "Andreas Korneliussen (JIRA)" <de...@db.apache.org>.
    [ http://issues.apache.org/jira/browse/DERBY-802?page=comments#action_12421136 ] 
            
Andreas Korneliussen commented on DERBY-802:
--------------------------------------------

Thank you very much for the review comments. I agree that the undo-projection code can be improved (something I tried to do in the v2 diff). Your suggested approach sounds promising; I will try it out and add more tests to BLOBTest.

> OutofMemory Error when reading large blob when statement type is ResultSet.TYPE_SCROLL_INSENSITIVE
> --------------------------------------------------------------------------------------------------
>
>                 Key: DERBY-802
>                 URL: http://issues.apache.org/jira/browse/DERBY-802
>             Project: Derby
>          Issue Type: Bug
>          Components: JDBC
>    Affects Versions: 10.0.2.0, 10.0.2.1, 10.1.1.0, 10.1.1.1, 10.1.1.2, 10.1.2.0, 10.1.2.1, 10.2.0.0, 10.1.3.0, 10.1.2.2, 10.0.2.2
>         Environment: all
>            Reporter: Sunitha Kambhampati
>         Assigned To: Andreas Korneliussen
>            Priority: Minor
>         Attachments: derby-802.diff, derby-802.stat, derby-802v2.diff
>
>
> [...]


[jira] Commented: (DERBY-802) OutofMemory Error when reading large blob when statement type is ResultSet.TYPE_SCROLL_INSENSITIVE

Posted by "Andreas Korneliussen (JIRA)" <de...@db.apache.org>.
    [ http://issues.apache.org/jira/browse/DERBY-802?page=comments#action_12420819 ] 

Andreas Korneliussen commented on DERBY-802:
--------------------------------------------

Withdrawing derby-802v2.diff, since it makes SURQueryMixTest.junit fail.
The first patch has been run with no failures in derbyall.
I will commit tomorrow, unless I receive any review comments.

> OutofMemory Error when reading large blob when statement type is ResultSet.TYPE_SCROLL_INSENSITIVE
> --------------------------------------------------------------------------------------------------
>
>          Key: DERBY-802
>          URL: http://issues.apache.org/jira/browse/DERBY-802
>      Project: Derby
>         Type: Bug

>   Components: JDBC
>     Versions: 10.0.2.0, 10.0.2.1, 10.0.2.2, 10.1.1.0, 10.2.0.0, 10.1.2.0, 10.1.1.1, 10.1.1.2, 10.1.2.1, 10.1.3.0, 10.1.2.2
>  Environment: all
>     Reporter: Sunitha Kambhampati
>     Assignee: Andreas Korneliussen
>     Priority: Minor
>  Attachments: derby-802.diff, derby-802.stat, derby-802v2.diff
>
> [...]



[jira] Updated: (DERBY-802) OutofMemory Error when reading large blob when statement type is ResultSet.TYPE_SCROLL_INSENSITIVE

Posted by "Andreas Korneliussen (JIRA)" <de...@db.apache.org>.
     [ http://issues.apache.org/jira/browse/DERBY-802?page=all ]

Andreas Korneliussen updated DERBY-802:
---------------------------------------

    Attachment: derby-802.diff
                derby-802.stat

Attached is a patch which should fix the OutOfMemory problem.

In ScrollInsensitiveResultSet.java and ProjectRestrictResultSet, the cloning has been removed.
When the ScrollInsensitiveResultSet inserts rows into the BackingStoreHashTable, it leaves the cloning up to the BackingStoreHashTable. If the row is too big to go into memory, BackingStoreHashTable will put it on disk.

BackingStoreHashTable also had to be fixed to avoid unnecessary in-memory cloning.

The changes in SQLBinary and its subclasses make the method estimateMemoryUsage() return a number at least as large as the memory the column will actually use. Before this fix, the estimated memory usage for a 64MB blob was only a few bytes (~50 bytes).

Finally, I found that when storing a SQLBinary (SQLBlob) to a conglomerate, the ExecRow containing the column cannot be used again when backing the row to another conglomerate (if the row spans multiple pages, I think). This is exactly what happens in ScrollInsensitiveResultset.updateRow(..). To get around this problem, I had to reassign some of the values in the updateRow(..) method to refer to the data backed to the BackingStoreHashTable.
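
To illustrate the intent (a simplified sketch with hypothetical names, not Derby's actual BackingStoreHashTable code): rows whose estimated size fits in memory are cloned into the in-memory table, while larger rows are spilled to disk, where no clone is needed because the on-disk copy is already stable.

import java.util.ArrayList;
import java.util.List;

class SpillSketch {
    // Hypothetical threshold standing in for the real memory-estimate check.
    static final int MAX_IN_MEMORY_BYTES = 4096;

    private final List<byte[]> inMemory = new ArrayList<byte[]>();

    void insertRow(byte[] row) {
        if (row.length <= MAX_IN_MEMORY_BYTES) {
            // Small row: clone so later scrolling sees a stable copy
            // even if the caller reuses its buffer.
            inMemory.add(row.clone());
        } else {
            // Large row: spill to disk; the disk copy is stable, so no
            // in-memory clone (and no materialized blob) is needed.
            spillToDisk(row);
        }
    }

    private void spillToDisk(byte[] row) {
        // Placeholder for writing to an overflow conglomerate on disk.
    }
}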


> OutofMemory Error when reading large blob when statement type is ResultSet.TYPE_SCROLL_INSENSITIVE
> --------------------------------------------------------------------------------------------------
>
>          Key: DERBY-802
>          URL: http://issues.apache.org/jira/browse/DERBY-802
>      Project: Derby
>         Type: Bug

>   Components: JDBC
>     Versions: 10.0.2.0, 10.0.2.1, 10.0.2.2, 10.1.1.0, 10.2.0.0, 10.1.2.0, 10.1.1.1, 10.1.1.2, 10.1.2.1, 10.1.3.0, 10.1.2.2
>  Environment: all
>     Reporter: Sunitha Kambhampati
>     Assignee: Andreas Korneliussen
>     Priority: Minor
>  Attachments: derby-802.diff, derby-802.stat
>
> [...]



[jira] Commented: (DERBY-802) OutofMemory Error when reading large blob when statement type is ResultSet.TYPE_SCROLL_INSENSITIVE

Posted by "Fernanda Pizzorno (JIRA)" <de...@db.apache.org>.
    [ http://issues.apache.org/jira/browse/DERBY-802?page=comments#action_12421829 ] 
            
Fernanda Pizzorno commented on DERBY-802:
-----------------------------------------

All my comments have been addressed in the latest patch (derby-802v3.diff).

> OutofMemory Error when reading large blob when statement type is ResultSet.TYPE_SCROLL_INSENSITIVE
> --------------------------------------------------------------------------------------------------
>
>                 Key: DERBY-802
>                 URL: http://issues.apache.org/jira/browse/DERBY-802
>             Project: Derby
>          Issue Type: Bug
>          Components: JDBC
>    Affects Versions: 10.0.2.0, 10.0.2.1, 10.1.1.0, 10.1.1.1, 10.1.1.2, 10.1.2.0, 10.1.2.1, 10.2.0.0, 10.1.3.0, 10.1.2.2, 10.0.2.2
>         Environment: all
>            Reporter: Sunitha Kambhampati
>         Assigned To: Andreas Korneliussen
>            Priority: Minor
>         Attachments: derby-802.diff, derby-802.stat, derby-802v2.diff, derby-802v3.diff, derby-802v3.stat
>
>
> [...]


[jira] Resolved: (DERBY-802) OutofMemory Error when reading large blob when statement type is ResultSet.TYPE_SCROLL_INSENSITIVE

Posted by "Andreas Korneliussen (JIRA)" <de...@db.apache.org>.
     [ http://issues.apache.org/jira/browse/DERBY-802?page=all ]

Andreas Korneliussen resolved DERBY-802.
----------------------------------------

    Fix Version/s: 10.2.0.0
       Resolution: Fixed
       Derby Info:   (was: [Patch Available])

Thanks for reviewing.

Committed revision 423034.


> OutofMemory Error when reading large blob when statement type is ResultSet.TYPE_SCROLL_INSENSITIVE
> --------------------------------------------------------------------------------------------------
>
>                 Key: DERBY-802
>                 URL: http://issues.apache.org/jira/browse/DERBY-802
>             Project: Derby
>          Issue Type: Bug
>          Components: JDBC
>    Affects Versions: 10.0.2.0, 10.0.2.1, 10.1.1.0, 10.1.1.1, 10.1.1.2, 10.1.2.0, 10.1.2.1, 10.2.0.0, 10.1.3.0, 10.1.2.2, 10.0.2.2
>         Environment: all
>            Reporter: Sunitha Kambhampati
>         Assigned To: Andreas Korneliussen
>            Priority: Minor
>             Fix For: 10.2.0.0
>
>         Attachments: derby-802.diff, derby-802.stat, derby-802v2.diff, derby-802v3.diff, derby-802v3.stat
>
>
> [...]


[jira] Commented: (DERBY-802) OutofMemory Error when reading large blob when statement type is ResultSet.TYPE_SCROLL_INSENSITIVE

Posted by "Andreas Korneliussen (JIRA)" <de...@db.apache.org>.
    [ http://issues.apache.org/jira/browse/DERBY-802?page=comments#action_12417500 ] 

Andreas Korneliussen commented on DERBY-802:
--------------------------------------------

The problem here is caused by ScrollInsensitiveResultSet cloning the rows before inserting them into the BackingStoreHashtable. Cloning a blob causes a new column to be created, which reads all of the blob's data into a byte array in memory. So if there is a 1GB blob, all the data will be read into memory.

To fix this, I intend to avoid the cloning in ScrollInsensitiveResultSet. If the row, based on its estimated memory usage, can go into memory, it should be cloned in BackingStoreHashTable; if it cannot, it should be spilled to disk (in which case no clone is needed).
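
A sketch of why the clone materializes the value (illustrative only, with hypothetical names; in Derby the real work happens in the SQLBinary.getClone()/readExternal() path shown in TRACE 7268 earlier in this issue):

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

class StreamBackedValue {
    private final InputStream source;  // e.g. a 1GB blob, held only as a stream
    private byte[] materialized;       // filled when the value must be cloned

    StreamBackedValue(InputStream source) { this.source = source; }

    // A one-shot stream cannot be shared between two values, so cloning
    // forces the whole blob onto the heap as a byte array.
    byte[] materializeForClone() throws IOException {
        if (materialized == null) {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[8192];
            int n;
            while ((n = source.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            materialized = out.toByteArray();  // entire blob now in memory
        }
        return materialized;
    }
}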


> OutofMemory Error when reading large blob when statement type is ResultSet.TYPE_SCROLL_INSENSITIVE
> --------------------------------------------------------------------------------------------------
>
>          Key: DERBY-802
>          URL: http://issues.apache.org/jira/browse/DERBY-802
>      Project: Derby
>         Type: Bug

>   Components: JDBC
>     Versions: 10.0.2.0, 10.0.2.1, 10.0.2.2, 10.1.1.0, 10.2.0.0, 10.1.2.0, 10.1.1.1, 10.1.1.2, 10.1.2.1, 10.1.3.0, 10.1.2.2
>  Environment: all
>     Reporter: Sunitha Kambhampati
>     Assignee: Andreas Korneliussen
>     Priority: Minor

>
> [...]



[jira] Updated: (DERBY-802) OutofMemory Error when reading large blob when statement type is ResultSet.TYPE_SCROLL_INSENSITIVE

Posted by "Sunitha Kambhampati (JIRA)" <de...@db.apache.org>.
     [ http://issues.apache.org/jira/browse/DERBY-802?page=all ]

Sunitha Kambhampati updated DERBY-802:
--------------------------------------

    Priority: Minor  (was: Major)

Changing priority to minor, as a workaround exists.

> OutofMemory Error when reading large blob when statement type is ResultSet.TYPE_SCROLL_INSENSITIVE
> --------------------------------------------------------------------------------------------------
>
>          Key: DERBY-802
>          URL: http://issues.apache.org/jira/browse/DERBY-802
>      Project: Derby
>         Type: Bug
>   Components: JDBC
>     Versions: 10.0.2.0, 10.0.2.1, 10.1.1.0, 10.1.1.1, 10.1.1.2, 10.1.2.0, 10.1.2.1, 10.2.0.0, 10.1.3.0, 10.1.2.2, 10.0.2.2
>  Environment: all
>     Reporter: Sunitha Kambhampati
>     Priority: Minor

>
> [...]



[jira] Updated: (DERBY-802) OutofMemory Error when reading large blob when statement type is ResultSet.TYPE_SCROLL_INSENSITIVE

Posted by "Andreas Korneliussen (JIRA)" <de...@db.apache.org>.
     [ http://issues.apache.org/jira/browse/DERBY-802?page=all ]

Andreas Korneliussen updated DERBY-802:
---------------------------------------

    Derby Info: [Patch Available]

> OutofMemory Error when reading large blob when statement type is ResultSet.TYPE_SCROLL_INSENSITIVE
> --------------------------------------------------------------------------------------------------
>
>          Key: DERBY-802
>          URL: http://issues.apache.org/jira/browse/DERBY-802
>      Project: Derby
>         Type: Bug

>   Components: JDBC
>     Versions: 10.0.2.0, 10.0.2.1, 10.1.1.0, 10.1.1.1, 10.1.1.2, 10.1.2.0, 10.1.2.1, 10.2.0.0, 10.1.3.0, 10.1.2.2, 10.0.2.2
>  Environment: all
>     Reporter: Sunitha Kambhampati
>     Assignee: Andreas Korneliussen
>     Priority: Minor
>  Attachments: derby-802.diff, derby-802.stat
>
> [...]



[jira] Assigned: (DERBY-802) OutofMemory Error when reading large blob when statement type is ResultSet.TYPE_SCROLL_INSENSITIVE

Posted by "Andreas Korneliussen (JIRA)" <de...@db.apache.org>.
     [ http://issues.apache.org/jira/browse/DERBY-802?page=all ]

Andreas Korneliussen reassigned DERBY-802:
------------------------------------------

    Assign To: Andreas Korneliussen

> OutofMemory Error when reading large blob when statement type is ResultSet.TYPE_SCROLL_INSENSITIVE
> --------------------------------------------------------------------------------------------------
>
>          Key: DERBY-802
>          URL: http://issues.apache.org/jira/browse/DERBY-802
>      Project: Derby
>         Type: Bug

>   Components: JDBC
>     Versions: 10.0.2.0, 10.0.2.1, 10.1.1.0, 10.1.1.1, 10.1.1.2, 10.1.2.0, 10.1.2.1, 10.2.0.0, 10.1.3.0, 10.1.2.2, 10.0.2.2
>  Environment: all
>     Reporter: Sunitha Kambhampati
>     Assignee: Andreas Korneliussen
>     Priority: Minor

>
> [...]


Re: [jira] Commented: (DERBY-802) OutofMemory Error when reading large blob when statement type is ResultSet.TYPE_SCROLL_INSENSITIVE

Posted by Andreas Korneliussen <An...@Sun.COM>.
Fernanda Pizzorno (JIRA) wrote:
>     [ http://issues.apache.org/jira/browse/DERBY-802?page=comments#action_12420883 ] 
> 
> Fernanda Pizzorno commented on DERBY-802:
> -----------------------------------------
> 
> I am reviewing your patch (derby-802.diff), but I won't have time to finish today. Can you wait a bit longer before you commit?
> 
Thanks.
Yes, I will wait for the review.

Andreas

[jira] Commented: (DERBY-802) OutofMemory Error when reading large blob when statement type is ResultSet.TYPE_SCROLL_INSENSITIVE

Posted by "Fernanda Pizzorno (JIRA)" <de...@db.apache.org>.
    [ http://issues.apache.org/jira/browse/DERBY-802?page=comments#action_12420883 ] 

Fernanda Pizzorno commented on DERBY-802:
-----------------------------------------

I am reviewing your patch (derby-802.diff), but I won't have time to finish today. Can you wait a bit longer before you commit?

> OutofMemory Error when reading large blob when statement type is ResultSet.TYPE_SCROLL_INSENSITIVE
> --------------------------------------------------------------------------------------------------
>
>          Key: DERBY-802
>          URL: http://issues.apache.org/jira/browse/DERBY-802
>      Project: Derby
>         Type: Bug
>   Components: JDBC
>     Versions: 10.0.2.0, 10.0.2.1, 10.1.1.0, 10.1.1.1, 10.1.1.2, 10.1.2.0, 10.1.2.1, 10.2.0.0, 10.1.3.0, 10.1.2.2, 10.0.2.2
>  Environment: all
>     Reporter: Sunitha Kambhampati
>     Assignee: Andreas Korneliussen
>     Priority: Minor
>  Attachments: derby-802.diff, derby-802.stat, derby-802v2.diff
>
> [...]


[jira] Updated: (DERBY-802) OutofMemory Error when reading large blob when statement type is ResultSet.TYPE_SCROLL_INSENSITIVE

Posted by "Andreas Korneliussen (JIRA)" <de...@db.apache.org>.
     [ http://issues.apache.org/jira/browse/DERBY-802?page=all ]

Andreas Korneliussen updated DERBY-802:
---------------------------------------

    Attachment: derby-802v3.diff
                derby-802v3.stat

Attached is a patch (derby-802v3.diff and derby-802v3.stat) which uses the project mappings calculated by ProjectRestrictResultSet; 4 new test cases with projections have been added to BLOBTest.junit.
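
As a rough sketch of the kind of query such a projection test exercises (illustrative only, reusing the db_file table from the reproduction; this is not the actual BLOBTest code):

import java.io.InputStream;
import java.sql.*;

public class ProjectionScrollSketch {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.derby.jdbc.EmbeddedDriver").newInstance();
        Connection conn = DriverManager.getConnection(
                "jdbc:derby:/home/greg/DerbyDatabase/BigFileTestDB");
        Statement stmt = conn.createStatement(
                ResultSet.TYPE_SCROLL_INSENSITIVE,
                ResultSet.CONCUR_READ_ONLY);
        // Selecting a subset of the table's columns puts a projection
        // (a ProjectRestrictResultSet) underneath the scrollable
        // result set.
        ResultSet rs = stmt.executeQuery("SELECT file FROM db_file");
        while (rs.next()) {
            // Stream the BLOB instead of materializing it in memory.
            InputStream in = rs.getBinaryStream(1);
            in.close();
        }
        // Scroll backwards over the rows the result set has backed.
        while (rs.previous()) {
            rs.getBinaryStream(1).close();
        }
        rs.close();
        stmt.close();
        conn.close();
    }
}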

> OutofMemory Error when reading large blob when statement type is ResultSet.TYPE_SCROLL_INSENSITIVE
> --------------------------------------------------------------------------------------------------
>
>                 Key: DERBY-802
>                 URL: http://issues.apache.org/jira/browse/DERBY-802
>             Project: Derby
>          Issue Type: Bug
>          Components: JDBC
>    Affects Versions: 10.0.2.0, 10.0.2.1, 10.1.1.0, 10.1.1.1, 10.1.1.2, 10.1.2.0, 10.1.2.1, 10.2.0.0, 10.1.3.0, 10.1.2.2, 10.0.2.2
>         Environment: all
>            Reporter: Sunitha Kambhampati
>         Assigned To: Andreas Korneliussen
>            Priority: Minor
>         Attachments: derby-802.diff, derby-802.stat, derby-802v2.diff, derby-802v3.diff, derby-802v3.stat
>
>
> [...]


[jira] Commented: (DERBY-802) OutofMemory Error when reading large blob when statement type is ResultSet.TYPE_SCROLL_INSENSITIVE

Posted by "Fernanda Pizzorno (JIRA)" <de...@db.apache.org>.
    [ http://issues.apache.org/jira/browse/DERBY-802?page=comments#action_12421121 ] 
            
Fernanda Pizzorno commented on DERBY-802:
-----------------------------------------

I have looked at the patch (derby-802.diff), and I have a few comments/questions.

In ScrollInsensitiveResultSet you are building an array (origPos) that maps each position in the projected row back to its position in the original row.

+			// Array of original position in row
+			final int[] origPos = new int[newRowData.length];
+			
+			for (int i=0; i<origPos.length; i++) {
+				origPos[i] = -1 ; 
+			}
+
+			// Make the origPos, by brute-force comparison of identity:
+			for (int i=0; i<newRowData.length; i++) {
+				for (int j=0; j<rowData.length; j++) {
+					if (rowData[j]==newRowData[i]) {
+						origPos[i] = j;
+						break;
+					}
+				}
+			}

[...]

+			for (int i=0; i<origPos.length; i++) {
+				if (origPos[i]>=0) {
+					rowData[origPos[i]] = backedData[i];
+				}
+			}

ProjectRestrictResultSet already contains an array (projectMapping) similar to the one you are building here (origPos). Wouldn't it be better to build this array in ProjectRestrictResultSet using "projectMapping" instead of using brute-force comparison of identity?

Suggestion:

ScrollInsensitiveResultSet:
+			// Array of original position in row
+			int[] origPos = ((ProjectRestrictResultSet)source).getBaseProjectMapping();

[...]

+			for (int i=0; i<origPos.length; i++) {
+				if (origPos[i]>=0) {
+					rowData[origPos[i] - 1] = backedData[i];
+				}
+			}

ProjectRestrictResultSet:
+    public int[] getBaseProjectMapping() {
+        int[] result;
+        if (source instanceof ProjectRestrictResultSet) {
+            result = new int[projectMapping.length];
+            ProjectRestrictResultSet prs = (ProjectRestrictResultSet) source;
+            int[] sourceMap = prs.getBaseProjectMapping();
+            for (int i=0; i<projectMapping.length; i++) {
+                if (projectMapping[i] > 0) {
+                    result[i] = sourceMap[projectMapping[i] - 1];
+                }
+            }            
+        } else {
+            result = projectMapping;
+        }
+        return result;
+    }
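
To make the recursion concrete, here is a small self-contained sketch of how two 1-based project mappings compose (the class name and array values are made up for illustration and are not part of the patch):

import java.util.Arrays;

public class MappingCompositionDemo {

    // Composes an outer 1-based projection mapping with an inner one,
    // mirroring the recursion in the suggested getBaseProjectMapping().
    // A value <= 0 means "not mapped to a base column".
    static int[] compose(int[] outer, int[] inner) {
        int[] result = new int[outer.length];
        for (int i = 0; i < outer.length; i++) {
            if (outer[i] > 0) {
                result[i] = inner[outer[i] - 1];
            }
        }
        return result;
    }

    public static void main(String[] args) {
        int[] inner = { 1, 3, 5 }; // inner projection keeps base columns 1, 3, 5
        int[] outer = { 2, 3 };    // outer projection keeps columns 2 and 3 of that
        // Prints [3, 5]: the outer row maps to base columns 3 and 5.
        System.out.println(Arrays.toString(compose(outer, inner)));
    }
}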

I have also looked into the tests added in DERBY-1477 and found that neither of them uses projection. Since part of the changes made in DERBY-802 are for cases where you have a projection, it would be nice if you could add some tests where projections are being used.

> OutofMemory Error when reading large blob when statement type is ResultSet.TYPE_SCROLL_INSENSITIVE
> --------------------------------------------------------------------------------------------------
>
>                 Key: DERBY-802
>                 URL: http://issues.apache.org/jira/browse/DERBY-802
>             Project: Derby
>          Issue Type: Bug
>          Components: JDBC
>    Affects Versions: 10.0.2.0, 10.0.2.1, 10.1.1.0, 10.1.1.1, 10.1.1.2, 10.1.2.0, 10.1.2.1, 10.2.0.0, 10.1.3.0, 10.1.2.2, 10.0.2.2
>         Environment: all
>            Reporter: Sunitha Kambhampati
>         Assigned To: Andreas Korneliussen
>            Priority: Minor
>         Attachments: derby-802.diff, derby-802.stat, derby-802v2.diff
>
>
> [...]


[jira] Updated: (DERBY-802) OutofMemory Error when reading large blob when statement type is ResultSet.TYPE_SCROLL_INSENSITIVE

Posted by "Andreas Korneliussen (JIRA)" <de...@db.apache.org>.
     [ http://issues.apache.org/jira/browse/DERBY-802?page=all ]

Andreas Korneliussen updated DERBY-802:
---------------------------------------

    Attachment: derby-802v2.diff

The attached diff (derby-802v2.diff) has one change compared to the first diff:
* The logic for undoing the projection is moved to ProjectRestrictResultSet and takes advantage of the projectMapping array already built there.
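
In outline, undoing the projection amounts to scattering the values of the projected row back into their positions in the base row via the 1-based projectMapping array (a schematic sketch; Object[] stands in for Derby's internal row type, and this is not the literal patch code):

    // projectMapping[i] is the 1-based base-row position of projected
    // column i, or <= 0 if the column is not taken from the base row.
    static void undoProjection(Object[] baseRow, Object[] projectedRow,
                               int[] projectMapping) {
        for (int i = 0; i < projectMapping.length; i++) {
            if (projectMapping[i] > 0) {
                baseRow[projectMapping[i] - 1] = projectedRow[i];
            }
        }
    }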

> OutofMemory Error when reading large blob when statement type is ResultSet.TYPE_SCROLL_INSENSITIVE
> --------------------------------------------------------------------------------------------------
>
>          Key: DERBY-802
>          URL: http://issues.apache.org/jira/browse/DERBY-802
>      Project: Derby
>         Type: Bug
>   Components: JDBC
>     Versions: 10.0.2.0, 10.0.2.1, 10.0.2.2, 10.1.1.0, 10.2.0.0, 10.1.2.0, 10.1.1.1, 10.1.1.2, 10.1.2.1, 10.1.3.0, 10.1.2.2
>  Environment: all
>     Reporter: Sunitha Kambhampati
>     Assignee: Andreas Korneliussen
>     Priority: Minor
>  Attachments: derby-802.diff, derby-802.stat, derby-802v2.diff
>
> [...]
