Posted to derby-dev@db.apache.org by "Sunitha Kambhampati (JIRA)" <de...@db.apache.org> on 2006/01/26 01:25:15 UTC

[jira] Updated: (DERBY-599) Using setBlob interface, should not materialize the entire blob value into memory.

     [ http://issues.apache.org/jira/browse/DERBY-599?page=all ]

Sunitha Kambhampati updated DERBY-599:
--------------------------------------

    Attachment: Derby599.diff.txt
                Derby599.stat.txt

Problem:
setBlob(i,blob) does not set the length of the stream in the blob but instead passes -1 for the stream length. 
During the normalization process, setBlob.normalize(DTD,DVD) calls SQLBlob.setWidth. 
setWidth is called in order to compare the length of the blob value to the maximum width of the column and to throw a truncation error in case the value won't fit into the column.  setWidth() calls SQLBinary.getLength().  If the value is a stream, the getLength() method checks the streamLength value, and if streamLength is set to -1 (i.e. unknown), it calls getBytes().length, which calls getValue(), and this is where the entire stream gets materialized.  
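The materializing fallback described above can be illustrated with a minimal sketch. The class and field names below are hypothetical stand-ins, not Derby's actual SQLBinary code: the point is that when the stream length is unknown (-1), the only way to learn it is to drain the whole stream into memory.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

// Hypothetical sketch of the pitfall; not Derby's actual implementation.
public class StreamLengthSketch {
    private final InputStream stream;
    private final long streamLength; // -1 means "unknown"

    StreamLengthSketch(InputStream stream, long streamLength) {
        this.stream = stream;
        this.streamLength = streamLength;
    }

    // Mirrors the getLength() behavior: if the length is known, return it;
    // otherwise the entire stream must be read into memory just to count bytes.
    long getLength() throws IOException {
        if (streamLength >= 0) {
            return streamLength;          // cheap: no materialization
        }
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        byte[] chunk = new byte[8192];
        int n;
        while ((n = stream.read(chunk)) != -1) {
            buf.write(chunk, 0, n);       // the whole value now lives in memory
        }
        return buf.size();
    }

    public static void main(String[] args) throws IOException {
        byte[] data = new byte[100_000];
        // Length passed up front: the stream is never read.
        System.out.println(new StreamLengthSketch(
                new ByteArrayInputStream(data), data.length).getLength());
        // Length unknown (-1): the stream is drained into memory to count it.
        System.out.println(new StreamLengthSketch(
                new ByteArrayInputStream(data), -1).getLength());
    }
}
```

Both calls report the same length, but the second one holds a full in-memory copy of the value, which is what overflows the heap for multi-gigabyte blobs.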

This patch fixes DERBY-599 so that the setBlob call does not materialize the entire blob into memory. 
- change setBlob to pass the length of the blob value instead of -1.  The length of the blob value passed into setBlob can be obtained by calling Blob.length(), which returns a long.
- move the negative-length check from setBinaryStream to setBinaryStreamInternal, since setBlob will no longer pass -1 for the length.
- change setBinaryStreamInternal to take the length parameter as a long instead of an int.  
- Derby currently allows a maximum value of 2G-1 (the maximum value of an int) for blobs. Add a check to ensure that if a stream with a length greater than the maximum int value is passed, an error is thrown, using the existing error message 
'The resulting value is outside the range for the data type {0}'
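A sketch of the length handling the list above describes. The method names here are illustrative, not the actual EmbedPreparedStatement signatures from the patch: take the length from Blob.length() as a long, reject negative lengths, and reject anything beyond 2G-1, since that is Derby's blob limit.

```java
import java.sql.Blob;
import java.sql.SQLException;
import javax.sql.rowset.serial.SerialBlob;

// Hedged sketch of the patch's length handling; hypothetical method names.
public class SetBlobLengthSketch {

    // setBlob now passes blob.length() (a long) instead of -1.
    static long lengthForSetBlob(Blob blob) throws SQLException {
        return blob.length();
    }

    // setBinaryStreamInternal takes the length as a long and validates it:
    // negative lengths and lengths beyond 2G-1 (Integer.MAX_VALUE) are errors.
    static int validateLobLength(long length) throws SQLException {
        if (length < 0) {
            throw new SQLException("Stream length must not be negative");
        }
        if (length > Integer.MAX_VALUE) {
            // Corresponds to the reused message:
            // 'The resulting value is outside the range for the data type {0}'
            throw new SQLException(
                "The resulting value is outside the range for the data type BLOB");
        }
        return (int) length;
    }

    public static void main(String[] args) throws SQLException {
        Blob blob = new SerialBlob(new byte[]{1, 2, 3});
        long len = lengthForSetBlob(blob);
        System.out.println(validateLobLength(len));      // prints 3
        try {
            validateLobLength(4L * 1024 * 1024 * 1024);  // 4GB, as in test case 1
        } catch (SQLException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

SerialBlob is just a convenient in-memory java.sql.Blob for the demo; the key point is that Blob.length() returns a long, so the int cast only happens after the range check.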

svn stat:
M      java\engine\org\apache\derby\impl\jdbc\EmbedPreparedStatement.java
M      java\testing\org\apache\derbyTesting\functionTests\tests\largedata\LobLimits.java
M      java\testing\org\apache\derbyTesting\functionTests\master\LobLimits.out

Tests
- Adds the following test cases to largedata/LobLimits.java: 
1) test for insert of a 4GB blob using the setBlob api. This throws the newly added error message. 
2) test for select of a 2G blob and insert of the 2G blob using the setBlob api.
3) test for select of a 2G blob and update of the 2G blob using the setBlob api.
4) test for update of a 2G blob with a 100MB blob.

Please note that largedata/LobLimits.java does not run as part of derbyall, as it requires a large amount of disk space and takes a long time to run. This test needs to be run explicitly.

- The largedata/LobLimits test was run on a Linux box (IBM 1.4.2 JVM/RHEL 4.0/insane jars) and it ran successfully with no errors. Without this patch, there would be an OutOfMemoryError for the test cases mentioned above (except for #1).

Ran derbyall on Win2k using the classes directory with Sun JVM 1.4.2.  I ran the network tests, derbynetmats and derbynetclientmats, separately. One test failed: derbynetclientmats/derbynetmats/derbynetmats.fail:derbynet/NSinSameJVM.java. This test seems to fail intermittently; I don't believe the failure is related to this patch. 

----------

I had earlier used a new error message for the case when the blob length is greater than 2G, something like 'Blob/Clob length is greater than the supported length', as I didn't find anything appropriate in EmbedPreparedStatement. On looking further, it seemed this case is covered by the existing message 'The resulting value is outside the range for the data type {0}', which I have used in this patch.  If someone feels otherwise, please let me know. 

Can someone please review this patch?  Thanks. 


> Using setBlob interface, should not materialize the entire blob value into memory.
> ----------------------------------------------------------------------------------
>
>          Key: DERBY-599
>          URL: http://issues.apache.org/jira/browse/DERBY-599
>      Project: Derby
>         Type: Bug
>   Components: JDBC
>     Versions: 10.0.2.0, 10.0.2.1, 10.1.1.0
>  Environment: all
>     Reporter: Sunitha Kambhampati
>     Assignee: Sunitha Kambhampati
>      Fix For: 10.2.0.0
>  Attachments: Derby599.diff.txt, Derby599.stat.txt
>
> setBlob and blob.length() calls should not materialize blob into memory.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira


Re: [jira] Updated: (DERBY-599) Using setBlob interface, should not materialize the entire blob value into memory.

Posted by Mike Matrigali <mi...@sbcglobal.net>.
I am looking at reviewing/committing this patch.  If anyone else is
reviewing, let me know.
