Posted to dev@mina.apache.org by Sven Panko <Sv...@proximity.de> on 2006/12/06 16:17:03 UTC

[1.0] Three questions concerning object serialization and MINA

Hello all,

first of all, I would like to thank all of you guys - MINA is a really
excellent piece of software! After using it for more than a year, I am
fascinated by how easy it is to implement complex client-server
applications without much effort! But now to my questions concerning the
object serialization features.

I was wondering whether the ObjectSerializationCodecFactory may be used
safely in production environments when the only data transferred between
clients and the server is objects (i.e. a custom codec doesn't seem
necessary to me). The reason I am asking is that the JavaDoc description
of the class states that the "codec is very useful when you have to
prototype your application rapidly without any specific codec". This
sounds as if it should only be used for testing purposes, not in a
production environment.

Furthermore, I was wondering why the decoder and ByteBuffer.getObject()
use a classloader to check whether the transferred class is available on
the receiver's platform (deserialization will fail without that check in
every case where the class to be deserialized isn't available to the
receiver, won't it?). I am developing a client application using Eclipse
RCP, and class loading is a bit difficult there because the OSGi platform
controls class loading in Eclipse. I wasn't able to use the
ObjectSerializationCodecFactory out of the box because of
NoClassDefFoundErrors produced by the Eclipse class loaders (which do not
occur if one simply uses an ObjectInputStream without specifying a
separate classloader). Object serialization works fine without the
additional class checks, so I created a mere copy of the decoder and the
ObjectSerializationCodecFactory that avoids explicit classloaders - but
are there any risks involved in doing so that I don't see and you are
aware of?

My last question concerns the different default max object sizes in the
encoder and decoder implementations - is there a reason why the encoder
may encode objects up to Integer.MAX_VALUE, but the decoder refuses
anything above 1 MB? Are you aware of any known issues concerning memory
consumption if I set the max object size of the decoder to
Integer.MAX_VALUE as well?

Thanks in advance,

Sven





Re: [1.0] Three questions concerning object serialization and MINA

Posted by Trustin Lee <tr...@gmail.com>.
Hi Sven,

On 12/7/06, Sven Panko <Sv...@proximity.de> wrote:
>
> > > My last question concerns the different default max object sizes in
> > > the encoder and decoder implementations - is there a reason why the
> > > encoder may encode objects up to Integer.MAX_VALUE, but the decoder
> > > refuses anything above 1 MB? Are you aware of any known issues
> > > concerning memory consumption if I set the max object size of the
> > > decoder to Integer.MAX_VALUE as well?
> >
> > I thought the decoder should be more restrictive in accepting a big
> > object because of the risk of a DoS attack.  That's all.  If there's
> > consensus on changing the default value, we can change it, too.  :)
>
> Ok, just what I thought. The default value is fine - I think a short note
> in the JavaDoc stating that the max object size in the decoder is set to
> a lower value because of possible DoS attacks would be nice. The reason
> this doesn't affect me directly at the object serialization level is
> that I use SSL with client certs, and the SSL filter rejects connections
> with invalid certs before a DoS attack can occur (or am I mistaken?).


You are right.  We need to update the documentation.

Trustin
-- 
what we call human nature is actually human habit
--
http://gleamynode.net/
--
PGP key fingerprints:
* E167 E6AF E73A CBCE EE41  4A29 544D DE48 FE95 4E7E
* B693 628E 6047 4F8F CFA4  455E 1C62 A7DC 0255 ECA6

Re: [1.0] Three questions concerning object serialization and MINA

Posted by Sven Panko <Sv...@proximity.de>.
Hi Trustin,

> [...]
>
> We overrode read/writeClassDescriptor() of ObjectInput/OutputStream to
> save bandwidth.  When a Java object is serialized, the descriptor of the
> object's class is serialized together with it.  The descriptor contains
> a lot of meta-information about the class, and it is huge compared to
> the actual data we want to exchange because it contains long strings
> such as the type name and field names.  It's sometimes ten times bigger,
> and then we are wasting 90% of the bandwidth.  That's why we chose to
> override the read/writeClassDescriptor() methods.
>
> Calling getObject() with an explicit class loader specified might help
> you:
>
> MyMessageToReceive m = buffer.getObject(
>         MyMessageToReceive.class.getClassLoader());
>
> Please let me know if this works for you.  Otherwise, we need to find a
> better solution.

I'll try this solution on the client side by providing a special class 
loader - maybe it works. If it does, I'll post my findings so that others 
may use object serialization with Eclipse RCP as well.

> 
> > My last question concerns the different default max object sizes in
> > the encoder and decoder implementations - is there a reason why the
> > encoder may encode objects up to Integer.MAX_VALUE, but the decoder
> > refuses anything above 1 MB? Are you aware of any known issues
> > concerning memory consumption if I set the max object size of the
> > decoder to Integer.MAX_VALUE as well?
>
> I thought the decoder should be more restrictive in accepting a big
> object because of the risk of a DoS attack.  That's all.  If there's
> consensus on changing the default value, we can change it, too.  :)

Ok, just what I thought. The default value is fine - I think a short note
in the JavaDoc stating that the max object size in the decoder is set to
a lower value because of possible DoS attacks would be nice. The reason
this doesn't affect me directly at the object serialization level is that
I use SSL with client certs, and the SSL filter rejects connections with
invalid certs before a DoS attack can occur (or am I mistaken?).

Thanks for all your help!

Greetz,

Sven



Re: [1.0] Three questions concerning object serialization and MINA

Posted by Trustin Lee <tr...@gmail.com>.
Hello Sven,

On 12/7/06, Sven Panko <Sv...@proximity.de> wrote:
>
> first of all, I would like to thank all of you guys - MINA is a really
> excellent piece of software! After using it for more than a year, I am
> fascinated by how easy it is to implement complex client-server
> applications without much effort! But now to my questions concerning the
> object serialization features.


Wow, I didn't know that you've been using MINA for such a long time.  I
hope you have enjoyed your time with it.

> I was wondering whether the ObjectSerializationCodecFactory may be used
> safely in production environments when the only data transferred between
> clients and the server is objects (i.e. a custom codec doesn't seem
> necessary to me). The reason I am asking is that the JavaDoc description
> of the class states that the "codec is very useful when you have to
> prototype your application rapidly without any specific codec". This
> sounds as if it should only be used for testing purposes, not in a
> production environment.


It's saying that the codec is very useful in the prototyping phase, but
it's not saying that it isn't useful in other phases.  So it can be useful
in any phase, even in production.  In general, object serialization
consumes more bandwidth and performs worse than a customized codec.  If
you have enough bandwidth and your application performs well enough with
the object serialization filter, it's absolutely fine to use it.

> Furthermore, I was wondering why the decoder and ByteBuffer.getObject()
> use a classloader to check whether the transferred class is available on
> the receiver's platform (deserialization will fail without that check in
> every case where the class to be deserialized isn't available to the
> receiver, won't it?). I am developing a client application using Eclipse
> RCP, and class loading is a bit difficult there because the OSGi
> platform controls class loading in Eclipse. I wasn't able to use the
> ObjectSerializationCodecFactory out of the box because of
> NoClassDefFoundErrors produced by the Eclipse class loaders (which do
> not occur if one simply uses an ObjectInputStream without specifying a
> separate classloader). Object serialization works fine without the
> additional class checks, so I created a mere copy of the decoder and the
> ObjectSerializationCodecFactory that avoids explicit classloaders - but
> are there any risks involved in doing so that I don't see and you are
> aware of?


We overrode read/writeClassDescriptor() of ObjectInput/OutputStream to
save bandwidth.  When a Java object is serialized, the descriptor of the
object's class is serialized together with it.  The descriptor contains a
lot of meta-information about the class, and it is huge compared to the
actual data we want to exchange because it contains long strings such as
the type name and field names.  It's sometimes ten times bigger, and then
we are wasting 90% of the bandwidth.  That's why we chose to override the
read/writeClassDescriptor() methods.
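
In a strongly simplified form the trick looks like this (a sketch of the
technique only, not our exact code; "message" stands for any Serializable,
and MyMessageToReceive is the placeholder used in the snippet below):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.ObjectStreamClass;

// Sender: transfer only the class name instead of the full descriptor.
ByteArrayOutputStream bytes = new ByteArrayOutputStream();
ObjectOutputStream out = new ObjectOutputStream(bytes) {
    protected void writeClassDescriptor(ObjectStreamClass desc)
            throws IOException {
        writeUTF(desc.getName()); // name only, no field/type meta-data
    }
};
out.writeObject(message);
out.flush();

// Receiver: rebuild the descriptor locally - this is exactly where the
// explicit class loader comes into play.
final ClassLoader loader = MyMessageToReceive.class.getClassLoader();
ObjectInputStream in = new ObjectInputStream(
        new ByteArrayInputStream(bytes.toByteArray())) {
    protected ObjectStreamClass readClassDescriptor()
            throws IOException, ClassNotFoundException {
        return ObjectStreamClass.lookup(
                Class.forName(readUTF(), false, loader));
    }
};
Object received = in.readObject();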

Calling getObject() with an explicit class loader specified might help you:

MyMessageToReceive m = buffer.getObject(
        MyMessageToReceive.class.getClassLoader());

Please let me know if this works for you.  Otherwise, we need to find a
better solution.

> My last question concerns the different default max object sizes in the
> encoder and decoder implementations - is there a reason why the encoder
> may encode objects up to Integer.MAX_VALUE, but the decoder refuses
> anything above 1 MB? Are you aware of any known issues concerning memory
> consumption if I set the max object size of the decoder to
> Integer.MAX_VALUE as well?


I thought the decoder should be more restrictive in accepting a big
object because of the risk of a DoS attack.  That's all.  If there's
consensus on changing the default value, we can change it, too.  :)
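
In the meantime you can always raise the limit yourself per application
(a sketch; the setter name is from my memory of the 1.0 factory, please
double-check it against the javadoc).  Keep in mind that the decoder has
to buffer a whole object before decoding it, so the worst case memory use
grows with maxObjectSize times the number of connections:

ObjectSerializationCodecFactory codec = new ObjectSerializationCodecFactory();
// The decoder default is 1 MB to limit DoS exposure; raise it with care.
codec.setDecoderMaxObjectSize(16 * 1024 * 1024); // e.g. 16 MB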

HTH,
Trustin
-- 
what we call human nature is actually human habit
--
http://gleamynode.net/
--
PGP key fingerprints:
* E167 E6AF E73A CBCE EE41  4A29 544D DE48 FE95 4E7E
* B693 628E 6047 4F8F CFA4  455E 1C62 A7DC 0255 ECA6