Posted to derby-dev@db.apache.org by Kathey Marsden <km...@sbcglobal.net> on 2005/05/25 20:22:05 UTC

DERBY-255 Closing a resultset after retrieving a large > 32K value with Network Server does not release locks

Currently, even though Network Server materializes the LOB to the
client, it uses getBlob or getClob to retrieve the large object. This
holds locks until the end of the transaction.

I would like to change Network Server to:
    - Use getCharacterStream and getBinaryStream instead of getClob
      and getBlob to avoid holding the locks after the result set is
      closed (see the sketch below).
    - Always use 8 bytes for the FD:OCA placeholder so we don't have
      to calculate the length.
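
To make the first change concrete, here is a rough sketch of the
retrieval pattern I have in mind (the query and the way bytes are
forwarded to the client are made up for illustration; the real write
path is DDMWriter):

    import java.io.InputStream;
    import java.io.OutputStream;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Sketch only: stream the value instead of materializing a
    // java.sql.Blob, so that closing the result set releases the
    // locks instead of holding them to the end of the transaction.
    static void sendBlobColumn(Statement s, OutputStream toClient)
            throws Exception {
        ResultSet rs = s.executeQuery("SELECT data FROM t"); // hypothetical
        while (rs.next()) {
            InputStream in = rs.getBinaryStream(1); // not rs.getBlob(1)
            byte[] buf = new byte[8192];
            for (int n = in.read(buf); n != -1; n = in.read(buf)) {
                toClient.write(buf, 0, n);
            }
        }
        rs.close();
    }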

Does anyone see any issues with this, especially for other clients such
as ODBC?

Some background for this:
When a LOB is sent, an FD:OCA placeholder is sent instead of a length
of the value. The placeholder specifies how many bytes are needed to
specify the length and can be 2, 4, or 8 bytes.
From the protocol specification at
    http://www.opengroup.org/publications/catalog/c043.htm :
----
5.5.3  Late Group Descriptors
.....
The placeholder size must be large enough for holding the maximum
possible length of a value belonging to the corresponding LOB column.
However, the sender is allowed to use a placeholder size larger than the
minimum necessary for the LOB column.
---
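
In other words, always writing the 8-byte form is legal and avoids
computing which length class fits. A sketch of the difference (names
and thresholds are illustrative; the real encoding is done in
DDMWriter):

    // Sketch only: choosing the minimal FD:OCA placeholder size
    // requires knowing the value's length up front ...
    static int minimalPlaceholderSize(long valueLength) {
        if (valueLength <= 0x7FFFL) return 2;
        if (valueLength <= 0x7FFFFFFFL) return 4;
        return 8;
    }

    // ... whereas always declaring the 8-byte form, as the spec above
    // permits, needs no length calculation at all:
    static final int PLACEHOLDER_SIZE = 8;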

Re: DERBY-255 Closing a resultset after retrieving a large > 32K value with Network Server does not release locks

Posted by Kathey Marsden <km...@sbcglobal.net>.
Kathey Marsden wrote:

On how to get the stream length for ResultSet.getBinaryStream(), I
talked about two options.

>    1) I have a fix in a maintenance branch of an old Cloudscape
>release which I could port. It does getString() or getBytes() to get
>the value and then has the associated length.
>    2) I'd have to call getBinaryStream twice, once to get the length
>(with available(), skip()) and again to stream the data to the
>client. Maybe this is not so bad, since Blob.length() does something
>similar for large values and actually reads the data, so for large
>values this might even be faster than what we do now.

Option #2 just doesn't seem to work because I cannot figure out how to
reset the input stream.  A second call to
rs.getBinaryStream() gives me:
SQLState:   XJ001
Severity: 0
Message:  Java exception: ': java.io.EOFException'.
java.io.EOFException
        at
org.apache.derby.impl.jdbc.BinaryToRawStream.<init>(BinaryToRawStream.java:53)
        at
org.apache.derby.impl.jdbc.EmbedResultSet.getBinaryStream(EmbedResultSet.java:1160)
        at derby255.LargeDataLocks.testBinaryData(LargeDataLocks.java:84)
        at derby255.LargeDataLocks.testLocks(LargeDataLocks.java:58)
        at derby255.LargeDataLocks.main(LargeDataLocks.java:39)

mark/reset is not supported. I am leaning toward option 1, but wanted
to give one last appeal before I go that route.
Any ideas on how to reset the input stream after getting the length?
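
For reference, what I tried amounts to this (a sketch; the helper and
buffer size are illustrative):

    import java.io.IOException;
    import java.io.InputStream;

    // Sketch: a first pass counts the bytes, but it consumes the
    // stream in the process.
    static long lengthOf(InputStream in) throws IOException {
        byte[] buf = new byte[8192];
        long length = 0;
        for (int n = in.read(buf); n != -1; n = in.read(buf)) {
            length += n;
        }
        return length;
    }

    // long len = lengthOf(rs.getBinaryStream(1));
    // InputStream in = rs.getBinaryStream(1); // fails with the
    //                                         // EOFException above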

Kathey



Re: DERBY-255 Closing a resultset after retrieving a large > 32K value with Network Server does not release locks

Posted by Kathey Marsden <km...@sbcglobal.net>.
Kathey Marsden wrote:

>Currently, even though Network Server materializes the LOB to the
>client, it uses getBlob or getClob to retrieve the large object. This
>holds locks until the end of the transaction.
>
>I would like to change Network Server to:
>    - Use getCharacterStream and getBinaryStream instead of getClob
>      and getBlob to avoid holding the locks after the result set is
>      closed.
>    - Always use 8 bytes for the FD:OCA placeholder so we don't have
>      to calculate the length.
>
>Does anyone see any issues with this, especially for other clients
>such as ODBC?
>
Focusing on Blobs first ....

Well, it looks like the DDMWriter.writeScalarStream() logic is heavily
dependent on the length of the LOB. I changed the extended length
number of bytes to always be 8, but it looks like I still need the
length of the InputStream before I send it. Actually, from a
specification point of view I don't think that is required, and the
length is not written out to the stream, but I am having trouble
figuring out how to rework writeScalarStream and company to eliminate
the need for it. Of particular concern is padScalarStreamForError(),
which pads out the full stream length in the event of an error.
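
For context, the padding amounts to something like this (a sketch
only; the real method also has to handle DRDA segmentation, and the
signature here is made up):

    import java.io.IOException;
    import java.io.OutputStream;

    // Sketch: if the stream fails partway through, the bytes already
    // promised by the declared length still have to be written so the
    // protocol stream stays in sync. That is why the length must be
    // known before the value is sent.
    static void padScalarStreamForError(OutputStream out,
            long declaredLength, long bytesSent) throws IOException {
        for (long i = bytesSent; i < declaredLength; i++) {
            out.write(0); // pad byte; only the count matters
        }
    }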

Does anyone
    a) Have any ideas on how to rework writeScalarStream and company
to eliminate the need for the length, or ...
    b) Have time today to walk through this code with me on IRC, to
better understand what needs to be done, or ...
    c) Have a better idea altogether?

In the punt category I have two possible solutions:

    1) I have a fix in a maintenance branch of an old Cloudscape
release which I could port. It does getString() or getBytes() to get
the value and then has the associated length (see the sketch below).
    2) I'd have to call getBinaryStream twice, once to get the length
(with available(), skip()) and again to stream the data to the
client. Maybe this is not so bad, since Blob.length() does something
similar for large values and actually reads the data, so for large
values this might even be faster than what we do now.
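
For option 1, the shape would be roughly this (sketch only; the
column index, helper name, and downstream write are placeholders):

    import java.io.ByteArrayInputStream;
    import java.io.InputStream;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    // Sketch: materialize the value once via getBytes(), which yields
    // both the exact length and a stream that can be re-read.
    static void sendWithKnownLength(ResultSet rs, int col)
            throws SQLException {
        byte[] value = rs.getBytes(col); // getString() for clobs
        long length = value.length;      // known before anything is sent
        InputStream in = new ByteArrayInputStream(value);
        // write the FD:OCA placeholder using 'length', then stream 'in'
    }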

Thanks for any ideas you have.

Kathey