Posted to dev@ofbiz.apache.org by Scott Gray <sc...@hotwaxmedia.com> on 2010/09/11 09:23:09 UTC

Blobs, byte[] and ByteBuffer

Currently JdbcValueHandler.BlobJdbcValueHandler will accept byte arrays and Blobs (reluctantly) but not ByteBuffers.  The problem is that the service engine uses ByteBuffer as the attribute type for attributes created from blob entity fields, and CRUD services fail because they don't attempt to convert the value before using something like <create-value/>.

So what is the correct behavior?  Should the service engine be using a different type or should the JdbcValueHandler be more flexible?

Thanks
Scott


Re: Blobs, byte[] and ByteBuffer

Posted by Adrian Crum <ad...@yahoo.com>.
--- On Sat, 9/18/10, Scott Gray <sc...@hotwaxmedia.com> wrote:
> On 13/09/2010, at 12:09 PM, Adrian Crum wrote:
> 
> > --- On Sun, 9/12/10, Scott Gray <sc...@hotwaxmedia.com> wrote:
> >> On 12/09/2010, at 2:57 AM, Adrian Crum wrote:
> >> 
> >>> --- On Sat, 9/11/10, Scott Gray <sc...@hotwaxmedia.com> wrote:
> >>>> Currently JdbcValueHandler.BlobJdbcValueHandler will accept byte
> >>>> arrays and Blobs (reluctantly) but not ByteBuffers.
> >>>> The problem is that the service engine uses ByteBuffer as
> >>>> the attribute type for attributes created based on blob
> >>>> entity fields and crud services fail due to this because
> >>>> they don't attempt to convert them before using something
> >>>> like <create-value/>.
> >>>> 
> >>>> So what is the correct behavior?  Should the service
> >>>> engine be using a different type or should the
> >>>> JdbcValueHandler be more flexible?
> >>> 
> >>> I'm pretty sure the current JdbcValueHandler code
> >>> contains the same logic as the original switch statement,
> >>> but I might have missed something.
> >> 
> >> Thanks for the info, it looks like the code used to check
> >> for byte[] then ByteBuffer and then finally Blob, snippet
> >> below from about a year ago:
> >> 
> >>             case 12:
> >>                 if (fieldValue instanceof byte[]) {
> >>                     sqlP.setBytes((byte[]) fieldValue);
> >>                 } else if (fieldValue instanceof ByteBuffer) {
> >>                     sqlP.setBytes(((ByteBuffer) fieldValue).array());
> >>                 } else {
> >>                     sqlP.setValue((java.sql.Blob) fieldValue);
> >>                 }
> >>                 break;
> > 
> > That's interesting from a historical perspective. But, where do we go from there? What is the ByteBuffer being used for? Can we make the service definition or whatever more specific?
> 
> Sorry for the delay, bad form to start a conversation and not continue it.  ByteBuffer is the type used by the service engine for entity auto-attributes when the entity field type is Blob.  Don't ask me why, I'm just reporting what I see.
> 
> > In other words, instead of adding ByteBuffer support to JdbcValueHandler, can we be more specific about what we're storing in the BLOB - and use another data type?
> 
> Only if the entity definitions use something other than Blob, the case I'm looking at is the DataResource entities, I think it was ImageDataResource.
> 
> > Maybe ByteBuffer support SHOULD be added to JdbcValueHandler - since there is a theoretical performance advantage.
> > 
> > Like I said before - it's worth discussing. Personally, I have no preference. All I know is that the BLOB handling is a bit muddled and it would be nice to get things more clearly defined. That's why I introduced the new data types - in the hope of bringing clarity.
> 
> But what are our real options here?  Either we choose more specific objects than generic byte array wrappers such as Blob and ByteBuffer or we should just use byte[] for the java type and have the entity and service engines perform the conversions from the wrappers to the array?

The options are to change the ImageDataResource.imageData field type to byte[] or object, or to add ByteBuffer support to JdbcValueHandler.

If you add ByteBuffer support to JdbcValueHandler, do not use the previous code. According to the API, ByteBuffer.array() is an optional operation, so some buffer implementations (direct or read-only buffers, for example) might not support it. Use get(byte[]) instead.

-Adrian
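[Editorial note: the get(byte[]) advice above can be illustrated with a small helper. This is a sketch, not actual OFBiz code; the class and method names are made up for illustration.]

```java
import java.nio.ByteBuffer;

public class ByteBufferUtil {
    /**
     * Copies the remaining bytes of a ByteBuffer into a new array.
     * Unlike array() -- an optional operation that throws
     * UnsupportedOperationException on buffers without an accessible
     * backing array, such as direct buffers -- get(byte[]) works on
     * any buffer. Reading from a duplicate leaves the caller's
     * position untouched.
     */
    public static byte[] toBytes(ByteBuffer buffer) {
        ByteBuffer dup = buffer.duplicate();
        byte[] bytes = new byte[dup.remaining()];
        dup.get(bytes);
        return bytes;
    }
}
```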


Re: Blobs, byte[] and ByteBuffer

Posted by Scott Gray <sc...@hotwaxmedia.com>.
On 13/09/2010, at 12:09 PM, Adrian Crum wrote:

> --- On Sun, 9/12/10, Scott Gray <sc...@hotwaxmedia.com> wrote:
>> On 12/09/2010, at 2:57 AM, Adrian Crum wrote:
>> 
>>> --- On Sat, 9/11/10, Scott Gray <sc...@hotwaxmedia.com> wrote:
>>>> Currently JdbcValueHandler.BlobJdbcValueHandler will accept byte
>>>> arrays and Blobs (reluctantly) but not ByteBuffers.
>>>> The problem is that the service engine uses ByteBuffer as
>>>> the attribute type for attributes created based on blob
>>>> entity fields and crud services fail due to this because
>>>> they don't attempt to convert them before using something
>>>> like <create-value/>.
>>>> 
>>>> So what is the correct behavior?  Should the service
>>>> engine be using a different type or should the
>>>> JdbcValueHandler be more flexible?
>>> 
>>> I'm pretty sure the current JdbcValueHandler code
>>> contains the same logic as the original switch statement,
>>> but I might have missed something.
>> 
>> Thanks for the info, it looks like the code used to check
>> for byte[] then ByteBuffer and then finally Blob, snippet
>> below from about a year ago:
>> 
>>             case 12:
>>                 if (fieldValue instanceof byte[]) {
>>                     sqlP.setBytes((byte[]) fieldValue);
>>                 } else if (fieldValue instanceof ByteBuffer) {
>>                     sqlP.setBytes(((ByteBuffer) fieldValue).array());
>>                 } else {
>>                     sqlP.setValue((java.sql.Blob) fieldValue);
>>                 }
>>                 break;
> 
> That's interesting from a historical perspective. But, where do we go from there? What is the ByteBuffer being used for? Can we make the service definition or whatever more specific?

Sorry for the delay, bad form to start a conversation and not continue it.  ByteBuffer is the type used by the service engine for entity auto-attributes when the entity field type is Blob.  Don't ask me why, I'm just reporting what I see.

> In other words, instead of adding ByteBuffer support to JdbcValueHandler, can we be more specific about what we're storing in the BLOB - and use another data type?

Only if the entity definitions use something other than Blob. The case I'm looking at is the DataResource entities; I think it was ImageDataResource.

> Maybe ByteBuffer support SHOULD be added to JdbcValueHandler - since there is a theoretical performance advantage.
> 
> Like I said before - it's worth discussing. Personally, I have no preference. All I know is that the BLOB handling is a bit muddled and it would be nice to get things more clearly defined. That's why I introduced the new data types - in the hope of bringing clarity.

But what are our real options here?  Either we choose objects more specific than generic byte array wrappers such as Blob and ByteBuffer, or we just use byte[] for the Java type and have the entity and service engines convert the wrappers to arrays?

Regards
Scott
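
[Editorial note: the second option mentioned above - byte[] as the Java type, with the engines converting the wrappers - could look something like this sketch. The class and method names are hypothetical, not actual OFBiz API.]

```java
import java.nio.ByteBuffer;
import java.sql.Blob;
import java.sql.SQLException;

public class BlobFieldNormalizer {
    /**
     * Hypothetical helper: normalizes the three wrapper types discussed
     * in this thread (byte[], ByteBuffer, java.sql.Blob) to a plain
     * byte[] before the value reaches the JDBC layer.
     */
    public static byte[] toByteArray(Object fieldValue) throws SQLException {
        if (fieldValue instanceof byte[]) {
            // Already the target type; returned as-is (no defensive copy)
            return (byte[]) fieldValue;
        } else if (fieldValue instanceof ByteBuffer) {
            // Copy via get() rather than array(), which is an optional
            // operation and unsupported on direct/read-only buffers
            ByteBuffer dup = ((ByteBuffer) fieldValue).duplicate();
            byte[] bytes = new byte[dup.remaining()];
            dup.get(bytes);
            return bytes;
        } else if (fieldValue instanceof Blob) {
            Blob blob = (Blob) fieldValue;
            // Blob offsets are 1-based per the JDBC API
            return blob.getBytes(1, (int) blob.length());
        }
        throw new IllegalArgumentException(
                "Unsupported BLOB field type: " + fieldValue.getClass().getName());
    }
}
```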

Re: Blobs, byte[] and ByteBuffer

Posted by Adrian Crum <ad...@yahoo.com>.
--- On Sun, 9/12/10, Scott Gray <sc...@hotwaxmedia.com> wrote:
> On 12/09/2010, at 2:57 AM, Adrian Crum wrote:
> 
> > --- On Sat, 9/11/10, Scott Gray <sc...@hotwaxmedia.com> wrote:
> >> Currently JdbcValueHandler.BlobJdbcValueHandler will accept byte
> >> arrays and Blobs (reluctantly) but not ByteBuffers.
> >> The problem is that the service engine uses ByteBuffer as
> >> the attribute type for attributes created based on blob
> >> entity fields and crud services fail due to this because
> >> they don't attempt to convert them before using something
> >> like <create-value/>.
> >> 
> >> So what is the correct behavior?  Should the service
> >> engine be using a different type or should the
> >> JdbcValueHandler be more flexible?
> > 
> > I'm pretty sure the current JdbcValueHandler code
> > contains the same logic as the original switch statement,
> > but I might have missed something.
> 
> Thanks for the info, it looks like the code used to check
> for byte[] then ByteBuffer and then finally Blob, snippet
> below from about a year ago:
> 
>             case 12:
>                 if (fieldValue instanceof byte[]) {
>                     sqlP.setBytes((byte[]) fieldValue);
>                 } else if (fieldValue instanceof ByteBuffer) {
>                     sqlP.setBytes(((ByteBuffer) fieldValue).array());
>                 } else {
>                     sqlP.setValue((java.sql.Blob) fieldValue);
>                 }
>                 break;

That's interesting from a historical perspective. But, where do we go from there? What is the ByteBuffer being used for? Can we make the service definition or whatever more specific?

In other words, instead of adding ByteBuffer support to JdbcValueHandler, can we be more specific about what we're storing in the BLOB - and use another data type?

Maybe ByteBuffer support SHOULD be added to JdbcValueHandler - since there is a theoretical performance advantage.

Like I said before - it's worth discussing. Personally, I have no preference. All I know is that the BLOB handling is a bit muddled and it would be nice to get things more clearly defined. That's why I introduced the new data types - in the hope of bringing clarity.

-Adrian


Re: Blobs, byte[] and ByteBuffer

Posted by Scott Gray <sc...@hotwaxmedia.com>.
On 12/09/2010, at 2:57 AM, Adrian Crum wrote:

> --- On Sat, 9/11/10, Scott Gray <sc...@hotwaxmedia.com> wrote:
>> Currently
>> JdbcValueHandler.BlobJdbcValueHandler will accept byte
>> arrays and Blobs (reluctantly) but not ByteBuffers.
>> The problem is that the service engine uses ByteBuffer as
>> the attribute type for attributes created based on blob
>> entity fields and crud services fail due to this because
>> they don't attempt to convert them before using something
>> like <create-value/>.
>> 
>> So what is the correct behavior?  Should the service
>> engine be using a different type or should the
>> JdbcValueHandler be more flexible?
> 
> I'm pretty sure the current JdbcValueHandler code contains the same logic as the original switch statement, but I might have missed something.

Thanks for the info, it looks like the code used to check for byte[] then ByteBuffer and then finally Blob, snippet below from about a year ago:

            case 12:
                if (fieldValue instanceof byte[]) {
                    sqlP.setBytes((byte[]) fieldValue);
                } else if (fieldValue instanceof ByteBuffer) {
                    sqlP.setBytes(((ByteBuffer) fieldValue).array());
                } else {
                    sqlP.setValue((java.sql.Blob) fieldValue);
                }
                break;

> That code is a bit messy because it's trying to convert multiple types to a BLOB - in an effort to maintain backward compatibility. You can see remarks in there about that.
> 
> I introduced new data types - Object and byte[], to help make things less ambiguous and hopefully provide a path toward cleaning some of that up.
> 
> So, a discussion about which data type to use or support would be worthwhile. I think the answer lies in analyzing what is being stored as a BLOB and use the correct Java data type for it. If it's a serialized Object, then use an Object data type. If it's a byte array that holds the contents of a file, then use a byte array, etc. I imagine such an analysis would break down in the Content component - where the Java data type might be unknown.
> 
> I know this doesn't answer your question, I'm just sharing things I've learned and thoughts I've had on the subject.
> 
> -Adrian


Re: Blobs, byte[] and ByteBuffer

Posted by Adrian Crum <ad...@yahoo.com>.
--- On Sat, 9/11/10, Scott Gray <sc...@hotwaxmedia.com> wrote:
> Currently
> JdbcValueHandler.BlobJdbcValueHandler will accept byte
> arrays and Blobs (reluctantly) but not ByteBuffers.
> The problem is that the service engine uses ByteBuffer as
> the attribute type for attributes created based on blob
> entity fields and crud services fail due to this because
> they don't attempt to convert them before using something
> like <create-value/>.
> 
> So what is the correct behavior?  Should the service
> engine be using a different type or should the
> JdbcValueHandler be more flexible?

I'm pretty sure the current JdbcValueHandler code contains the same logic as the original switch statement, but I might have missed something.

That code is a bit messy because it's trying to convert multiple types to a BLOB - in an effort to maintain backward compatibility. You can see remarks in there about that.

I introduced new data types - Object and byte[], to help make things less ambiguous and hopefully provide a path toward cleaning some of that up.

So, a discussion about which data type to use or support would be worthwhile. I think the answer lies in analyzing what is being stored as a BLOB and use the correct Java data type for it. If it's a serialized Object, then use an Object data type. If it's a byte array that holds the contents of a file, then use a byte array, etc. I imagine such an analysis would break down in the Content component - where the Java data type might be unknown.

I know this doesn't answer your question, I'm just sharing things I've learned and thoughts I've had on the subject.

-Adrian
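
[Editorial note: the distinction drawn above between a serialized Object and raw file bytes can be made concrete. A sketch (not OFBiz code; the class name is made up) of the serialized-Object case, where the natural Java type is Object and the BLOB bytes come from standard Java serialization:]

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class BlobPayloads {
    /** Serializes an object to the byte[] that would be stored in the BLOB. */
    public static byte[] serialize(Serializable obj) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(obj);
        }
        return bos.toByteArray();
    }

    /** Restores the object from the bytes read back out of the BLOB. */
    public static Object deserialize(byte[] bytes)
            throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois =
                new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        }
    }
}
```

[By contrast, file contents such as the ImageDataResource case are already a byte[] and need no such wrapping, which is one argument for the byte[] option discussed earlier in the thread.]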