Posted to derby-dev@db.apache.org by "Stefan Guggisberg (JIRA)" <de...@db.apache.org> on 2006/01/04 17:30:00 UTC

[jira] Created: (DERBY-797) ResultSet.getBinaryStream() fails to read chunks of size > 32k

ResultSet.getBinaryStream() fails to read chunks of size > 32k
--------------------------------------------------------------

         Key: DERBY-797
         URL: http://issues.apache.org/jira/browse/DERBY-797
     Project: Derby
        Type: Bug
  Components: JDBC  
    Versions: 10.1.2.1    
 Environment: Derby embedded engine
Windows 2k
JRE 1.4.2_03
    Reporter: Stefan Guggisberg


Assume the following table:

create table TEST (TEST_ID integer not null, TEST_DATA blob not null);

Insert a record with a BLOB value larger than 32k, e.g. of size 100000.

Read that record using a statement like "select TEST_DATA from TEST where TEST_ID = ?".

The following code fragment demonstrates the issue:

InputStream in = resultSet.getBinaryStream(1);
byte[] buf = new byte[33000];
int n = in.read(buf);

==> n == 32668, i.e. n < buf.length!
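
For completeness, a self-contained reproduction might look roughly like this (a sketch: the class name Derby797Repro, the database URL, and the driver setup are assumptions beyond the fragment above):

import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class Derby797Repro {
    public static void main(String[] args) throws Exception {
        // Embedded Derby driver; database name is an assumption for this sketch.
        Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
        Connection con = DriverManager.getConnection("jdbc:derby:testdb;create=true");

        Statement stmt = con.createStatement();
        stmt.executeUpdate(
            "create table TEST (TEST_ID integer not null, TEST_DATA blob not null)");

        // insert a record with a blob value larger than 32k, e.g. of size 100000
        byte[] data = new byte[100000];
        PreparedStatement ins = con.prepareStatement(
            "insert into TEST (TEST_ID, TEST_DATA) values (?, ?)");
        ins.setInt(1, 1);
        ins.setBinaryStream(2, new ByteArrayInputStream(data), data.length);
        ins.executeUpdate();

        PreparedStatement sel = con.prepareStatement(
            "select TEST_DATA from TEST where TEST_ID = ?");
        sel.setInt(1, 1);
        ResultSet rs = sel.executeQuery();
        rs.next();

        InputStream in = rs.getBinaryStream(1);
        byte[] buf = new byte[33000];
        int n = in.read(buf);
        System.out.println("read " + n + " bytes"); // reported: 32668, not 33000
    }
}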

The problem occurs with any chunked read that crosses the boundary at offset 32668, e.g.:

InputStream in = resultSet.getBinaryStream(1);
byte[] buf1 = new byte[32660];
int n = in.read(buf1);
// ok, n == buf1.length
byte[] buf2 = new byte[20];
n = in.read(buf2);
// n == 8, i.e. n < buf2.length!


Workarounds for this bug (see also the read-loop sketch below):

- read byte by byte, i.e. using in.read()
- use resultSet.getBytes()
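
An aside, not part of the original report: the usual way to guarantee a full buffer is to loop over read() until the requested number of bytes has arrived, since a single call may legally return less. A minimal sketch, with a hypothetical helper class:

import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

final class StreamUtil {
    /**
     * Fill buf completely, looping because InputStream.read() may
     * return fewer bytes than requested on any single call.
     */
    static void readFully(InputStream in, byte[] buf) throws IOException {
        int off = 0;
        while (off < buf.length) {
            int n = in.read(buf, off, buf.length - off);
            if (n < 0) {
                throw new EOFException("stream ended after " + off + " bytes");
            }
            off += n;
        }
    }
}

java.io.DataInputStream.readFully(byte[]) provides the same behavior out of the box.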


The faulty code seems to be in org.apache.derby.impl.store.raw.data.MemByteHolder.





-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira


[jira] Commented: (DERBY-797) ResultSet.getBinaryStream() fails to read chunks of size > 32k

Posted by "Stefan Guggisberg (JIRA)" <de...@db.apache.org>.
    [ http://issues.apache.org/jira/browse/DERBY-797?page=comments#action_12361762 ] 

Stefan Guggisberg commented on DERBY-797:
-----------------------------------------

Please resolve this issue as INVALID. The code in question is absolutely correct; I was wrong about the semantics of InputStream.read(byte[]).

Mea culpa, please excuse the noise.




[jira] Resolved: (DERBY-797) ResultSet.getBinaryStream() fails to read chunks of size > 32k

Posted by "Daniel John Debrunner (JIRA)" <de...@db.apache.org>.
     [ http://issues.apache.org/jira/browse/DERBY-797?page=all ]
     
Daniel John Debrunner resolved DERBY-797:
-----------------------------------------

    Resolution: Invalid

Invalid as Stefan says, since InputStream.read() is not guaranteed to fill the buffer.
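
To illustrate the contract: read(byte[]) may return fewer bytes than requested on any single call (here 32668 at Derby's internal chunk boundary), but looping to end-of-stream still yields the complete value. A minimal sketch, assuming the 100000-byte blob from the report:

import java.io.IOException;
import java.io.InputStream;

final class DrainExample {
    /** Sum the bytes delivered across read() calls until end-of-stream. */
    static long drain(InputStream in) throws IOException {
        byte[] buf = new byte[33000];
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            total += n;
        }
        return total; // 100000 for the blob in this report, despite short reads
    }
}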

