Posted to dev@river.apache.org by Mark Brouwer <ma...@cheiron.org> on 2007/06/15 11:57:07 UTC

Controlling the object identification layer implementation

Customization of the object identification layer has been on my list of
things to tackle for a long time.

The case I have is that, in general, all services deployed in one JVM
reuse the same ServerEndpoints so that all services can utilize the same
port number. This saves having to configure each service and avoids
random ports that are not stable over time.

The result of this is that when one service is down but others are still
exported in the same JVM, a client performing an inbound request will
get a java.rmi.NoSuchObjectException. I won't restart the discussion
about how a client should deal with that; the consensus seems to be that
it is a definite exception, so we assume the client will give up. That
is fine, but it means that when I bring a 'persistent' service down for
a bug fix, so that it is only gone for a limited amount of time, the
object identification layer should send another type of RemoteException,
e.g. a java.rmi.ConnectException, as that shouldn't be considered a
definite exception by the client.

One could even come up with custom standardized RemoteExceptions for
these cases that mention the expected time the service will be on-line
again, but for now I'm more interested in getting rid of
NoSuchObjectException.

From reading the specs in the net.jini.jeri package documentation it
seems that there is not much pluggability for the object identification
layer: "In order to use a different implementation of the object
identification layer, a deployer needs to use a custom Jini ERI exporter
class, which should support specifying an InvocationLayerFactory and
ServerEndpoint for controlling the invocation and transport layer
implementations."

I don't want to clone BasicJeriExporter when all I need is some
interception mechanism for when an object endpoint identifier is missing
from the internal object table. When it is missing, the interceptor
should be called, and the implementation of that interceptor should
decide whether it can provide a customized exception.

Looking through the spec and source code I couldn't find any code that
seemed to be responsible for throwing the NoSuchObjectException
directly, until I stumbled into BasicObjectEndpoint, which seems to
create the NoSuchObjectException at the client side based on the first
byte (0x00) sent by the server.

There are various possibilities to achieve what I want, but as I don't
want to modify the specs for BasicObjectEndpoint I'm thinking of a
custom dispatcher (which I already have) that will write a proper
RemoteException to the output stream based on certain conditions. The
point is how to make BasicJeriExporter aware of the interception
mechanism I require:

a) clone BasicJeriExporter and sort it out myself; allows for a very
quick and dirty solution but no reuse by others;

b) a service provider mechanism that searches for the first service
provider that implements a com.sun (for now) specific interface. That
service provider can implement its own initialization mechanism to hook
in the required logic that decides whether an object identifier unknown
to the internal export tables needs custom dispatching.

c) something I haven't thought of, given that I'm completely new to this
part of the code/spec.
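For what it's worth, option (b) could take roughly the following shape: resolve the interceptor implementation class from a configured name and instantiate it reflectively. The property name below is made up for illustration (a real provider mechanism might also use META-INF/services lookup), and a JDK class stands in for an actual interceptor implementation:

```java
// Sketch of option (b): load a provider class named by a system property.
// The property name is hypothetical; only the reflection calls are real.
public class InterceptorLoader {

    /** Name a deployer could set, e.g. with -D on the command line. */
    static final String PROPERTY = "com.sun.jini.jeri.objectIdInterceptor";

    /** Load and instantiate the provider class with the given name. */
    public static Object load(String className) throws Exception {
        return Class.forName(className).getDeclaredConstructor().newInstance();
    }

    public static void main(String[] args) throws Exception {
        // Normally: String name = System.getProperty(PROPERTY);
        Object provider = load("java.util.ArrayList"); // stand-in class
        System.out.println(provider.getClass().getName());
    }
}
```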
-- 
Mark


Re: Controlling the object identification layer implementation

Posted by Mark Brouwer <ma...@cheiron.org>.
Hi all,

Together with the "hook in PreferredClassProvider to determine class
boomerang" I think I've come a long way towards being able to evolve
persistent services over time, as well as towards providing more context
information when a client can't communicate with the server, e.g. due to
maintenance/upgrades.

However, so far what I've been working on doesn't provide custom
exceptions that a client could react upon, i.e. to distinguish between
some failure cases for which the current set of RemoteExceptions is not
adequate IMHO.

Therefore I have now come up with 2 custom RemoteExceptions that I need
to be able to develop client-side failure handling logic, either
internally for use by specialized proxies or directly for client-side
utilities. As they are not specific to my Platform and could be seen as
general, I believe they belong in the net.jini namespace
(net.jini.jeri?). Attached you will find the code for them, although
they are still in the org.cheiron.seven.proxy namespace.

I decided to extend current subclasses of RemoteException rather than
introduce a new branch in the RemoteException hierarchy, to be able to
play nice with common failure handling logic already out in the field.
I'm not sure whether this is the best thing to do, or whether I took the
right RemoteExceptions as base classes; any input is welcome.

First we have the OfflineException, to be thrown by the RMI runtime when
a remote method invocation is performed for a service that is currently
off-line but denoted as a persistent service, and as such is expected to
be on-line again at some point in the future. The exception allows you
to specify the expected on-line date so client retry logic can take this
into account. Clients can utilize this exception to decide whether they
want to retry and, if so, when to perform the first retry. Take for
example a lookup server implementation: if there is a remote event
listener registered and the lookup server encounters this exception, it
can check whether the on-line time is beyond the time the associated
lease expires. If that is the case it can drop the event registration
directly, saving itself from performing retries and consuming resources
to keep the events in memory, persist them, etc. A transaction manager
service can utilize this exception to schedule any invocation on
TransactionParticipant, and so on.
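Since the attachment isn't preserved in this archive, here is a rough sketch of what OfflineException might look like. The base class (ConnectException) and the method names are my assumptions, not necessarily what the attachment contains; the lease-expiry check mirrors the lookup server case described above:

```java
import java.rmi.ConnectException;
import java.util.Date;

// Rough sketch of the proposed OfflineException; the authoritative version
// is in the attachment under org.cheiron.seven.proxy.
public class OfflineException extends ConnectException {
    private final Date expectedOnline; // may be null if unknown

    public OfflineException(String message, Date expectedOnline) {
        super(message);
        this.expectedOnline = expectedOnline;
    }

    /** Expected time the service is back on-line, or null if unknown. */
    public Date getExpectedOnlineDate() {
        return expectedOnline;
    }

    // The lookup server case from above: retrying only makes sense when the
    // service is expected back before the associated lease expires.
    public static boolean worthRetrying(OfflineException e, Date leaseExpiration) {
        Date online = e.getExpectedOnlineDate();
        return online == null || online.before(leaseExpiration);
    }
}
```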

The other exception is the ObjectIncompatibleException, which is thrown
when the specialized proxy has a particular version of mobile code that
is not compatible with the version of the remote object, such that the
invocation is not allowed to take place (e.g. due to an evolving
codebase). As part of the exception it is possible to pass in an object
that might allow the client to restore proper communication with the
remote object.

As an example, if a transaction manager service performs a call on a
TransactionParticipant and ObjectIncompatibleException is thrown, it
knows that the transaction participant proxy is no longer able to
communicate properly with its backend even though the backend is
on-line; therefore this is a definite failure. However, if
ObjectIncompatibleException.getProxy() returns an object that implements
TransactionParticipant, it can utilize that proxy (likely after proxy
preparation) to continue its operations against the transaction participant.
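A corresponding sketch of ObjectIncompatibleException, again guessing at what the attachment contains: extending NoSuchObjectException is my assumption (the failure is described as definite), and the getProxy() accessor is taken from the description above:

```java
import java.rmi.NoSuchObjectException;

// Rough sketch of the proposed ObjectIncompatibleException; the base class
// is an assumption, the attachment has the authoritative version.
public class ObjectIncompatibleException extends NoSuchObjectException {
    private final Object proxy; // replacement proxy, or null

    public ObjectIncompatibleException(String message, Object proxy) {
        super(message);
        this.proxy = proxy;
    }

    /**
     * A replacement object that may restore communication with the remote
     * object (to be proxy-prepared by the client), or null if none.
     */
    public Object getProxy() {
        return proxy;
    }
}
```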

I'm curious to find out how others think about these 2 exceptions.
-- 
Mark






Re: Controlling the object identification layer implementation

Posted by Mark Brouwer <ma...@cheiron.org>.
Mark Brouwer wrote:

> There are various possibilities to achieve what I want but as I don't
> want to modify the specs for BasicEndPoint I'm thinking of a custom
> dispatcher (which I already have) that will write a proper
> RemoteException in the output stream based on certain conditions. The
> point is how to make BasicJeriExporter aware of the interception
> mechanism I require:
> 
> a) clone BasicJeriExporter and sort it out myself, allows for a very
> quick and dirty solution but no reuse by others;
> 
> b) a service provider mechanism that search for the first service
> provider that implements a com.sun (for now) specific interface. That
> service provider can implements its own initialization mechanism to hook
> in the required logic that verifies whether the object identifier
> unknown to the internal export tables need custom dispatching.
> 
> c) something I haven't thought of given me being a virgin in this part
> of the code/spec.

For those interested, attached you will find the spec for
ObjectIdentifierInterceptor (feel free to come up with a better name).
The implementation can currently be specified through a system property
(good enough for me) and is instantiated by
com.sun.jini.jeri.internal.runtime.ObjectTable; the changes to
ObjectTable are in the diff.

Custom exception dispatching has been implemented, and this made me very
happy. Codebase evolution will require some substantial work in Seven,
so no experience with the usability of the API (postObjectIdCheck) has
been obtained yet.
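The attached spec itself isn't preserved in the archive. Based on the description, a minimal sketch might look like this: the method name postObjectIdCheck comes from the post, but the signature, the use of String ids in place of net.jini.id.Uuid, and the example implementation are all my guesses:

```java
import java.rmi.ConnectException;
import java.rmi.NoSuchObjectException;
import java.rmi.RemoteException;
import java.util.HashSet;
import java.util.Set;

public class InterceptorSketch {

    // Hypothetical interceptor interface, consulted by the object table for
    // every inbound request. Returning a non-null exception would make the
    // runtime marshal that exception back instead of the default behaviour.
    interface ObjectIdentifierInterceptor {
        RemoteException postObjectIdCheck(String objectId, boolean inTable);
    }

    // Example: ids of services down for planned maintenance get a retryable
    // ConnectException instead of the definite NoSuchObjectException.
    static class MaintenanceAware implements ObjectIdentifierInterceptor {
        private final Set<String> inMaintenance = new HashSet<>();

        void markDown(String objectId) {
            inMaintenance.add(objectId);
        }

        public RemoteException postObjectIdCheck(String objectId, boolean inTable) {
            if (inTable) {
                return null; // object exported normally: default dispatch
            }
            if (inMaintenance.contains(objectId)) {
                return new ConnectException("service temporarily off-line");
            }
            return new NoSuchObjectException("no such object: " + objectId);
        }
    }
}
```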

Another thing I realized when writing the spec is that because the
interceptor is always called, even when the object identifier is in the
table, the interceptor can work as a router. With this it is possible to
redirect requests against one service (dispatcher) to another service
(dispatcher) running in the same JVM. This allows for live upgrading of
services of a certain type. Imagine you install a new version of a
service, deploy [1] it, and instruct the interceptor to divert any
incoming requests from the 'old' service to the new one: no interruption
from the perspective of the client, and after a while you bring down the
old one. I realize this is extremely hard for stateful services,
although not impossible, but in the past I've written a few services for
which this feature would have made sense.
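The routing idea above can be sketched as a small redirect table. String ids stand in for net.jini.id.Uuid, and the surrounding interceptor machinery is assumed rather than shown:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the "interceptor as router" idea: rewrite an old object id to
// the id of a newly deployed replacement service before dispatch.
public class RedirectingInterceptor {
    private final Map<String, String> redirects = new HashMap<>();

    /** Divert all requests for oldId to newId (live upgrade). */
    public void divert(String oldId, String newId) {
        redirects.put(oldId, newId);
    }

    /** Resolve an incoming id, following redirects if upgrades stack up. */
    public String resolve(String objectId) {
        String id = objectId;
        while (redirects.containsKey(id)) {
            id = redirects.get(id);
        }
        return id;
    }
}
```

A real implementation would also have to cope with the constraints mentioned in [1], and would need to guard against redirect cycles.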

[1] this assumes the possibility to reuse the server endpoint and
similar constraints on the invocation dispatcher, but that ain't a
problem in my case.
-- 
Mark





Re: Controlling the object identification layer implementation

Posted by Mark Brouwer <ma...@cheiron.org>.
Mark Brouwer wrote:
> Already a long time on my list of things to tackle is customization of
> the object identification layer.

While sitting in the bath tub I had a Eureka moment, as I believe
customization of the object identification layer also brings me
something else that I've been fighting with for a few years.

First the problem. Although the use case refers to Seven, the problem
itself is not specific to Seven; it is just mentioned to provide some
context for the problem I'm trying to solve.

Assume you develop a service for which the downloadable code evolves
over time. The hosting environment (in this case Seven) evolves as well.
The service is deployed as a persistent service, that is, it will
maintain the same ServiceID and object identifier for the endpoint even
though it sometimes crashes (unplanned downtime) or is upgraded, either
due to fixes or new features in the service or due to upgrading of the
hosting platform (planned downtime).

Assume that as part of the planned maintenance the downloadable code has
changed, and as a result the codebase annotation changes over time as
well (with each upgrade Seven makes a full analysis of the downloadable
JAR files to see whether the codebase needs a change). Below, evolution
is displayed as t0-t3 and the codebase annotation changes as A-D. It is
good to understand that the codebase served for a service will be
available to clients.

   annotation:  A       B       C       D
                |-------|-------|-------|
   time:        t0      t1      t2      t3

So at t0 we deploy our persistent service for the first time and the
codebase annotation is A, meaning that when the client receives the
smart proxy to the service it will create a class loader for which the
implementation classes are obtained from A.

Then we find out that we want to enhance or bug-fix our service and
bring it down for planned maintenance (this also relates to my previous
posting). After the upgrade Seven finds out a new codebase annotation is
needed, so clients that find the service after t1 will see codebase
annotation B and will create a class loader for the proxy implementation
classes from B. No problem here; this client is happy.

The problem occurs with the clients that saw codebase A, because even
when they can communicate with the upgraded server, the marshalled
stream will contain codebase annotation B. The net effect is that these
classes will be created in a new class loader and are type incompatible
with the classes the client has already defined in the class loader
related to A.

Assume that in the above case the server could make sure that the
classes defined in the service class loader are not annotated with B but
with A. The client could then (likely) continue working with the
service, with the 'disadvantage' that it won't see any changes in the
newer classes (B).

So what is required from the perspective of the framework is a way to
find out where on the evolution time scale a smart proxy was handed to a
client. That identifier could be used by the class annotation mechanism
to provide the right codebase annotation.

In the past I've been thinking of a new version of the protocol that
describes how marshalling and unmarshalling of all the request/response
data takes place, to allow for adding an additional identifier over
which you have control and that is persisted as part of the
BasicInvocationHandler.

But this is where the bath tub came into play. I realized that I can
achieve the same thing through the object identification layer. Assume
that with the time evolution of the service I use a different Uuid each
time the service is exported (t0-t3), and that I also keep track of all
the Uuids used for exporting the service. Depending on when on the time
scale they were obtained, the smart proxies will then have different
object identifiers, and you end up with these relations:

   annotation:  A       B       C       D
                |       |       |       |
   object id:   ef...   3c...   1a...   ba...
                |       |       |       |
                |-------|-------|-------|
   time:        t0      t1      t2      t3


When a request for a Uuid arrives at the object identification layer, I
can check whether it is in the export table and, if not, whether it
represents history of the exported service. If the latter is the case it
should redirect to the current Uuid under which the service is exported,
and it can populate the Collection object representing the context as
passed in to InvocationDispatcher.dispatch(Remote, InboundRequest,
Collection). The invocation dispatcher can then use the context
information (if any) to set a thread-local variable that can be utilized
by the (in my case) context-aware class annotator to arrange for
providing the correct codebase annotation.
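The bookkeeping behind this can be sketched as a small history table mapping each historical export id to the annotation that was current when the proxy was handed out (t0-t3 in the diagram above). String ids again stand in for net.jini.id.Uuid, and all names are illustrative:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: track the object id and codebase annotation of each (re-)export
// so old proxies can be redirected and annotated with "their" codebase.
public class CodebaseHistory {
    private final Map<String, String> annotationByExportId = new HashMap<>();
    private String currentId;

    /** Record a (re-)export: new object id plus the annotation at that time. */
    public void exported(String objectId, String annotation) {
        annotationByExportId.put(objectId, annotation);
        currentId = objectId;
    }

    /** Is this id a historical (no longer exported) identity of the service? */
    public boolean isHistorical(String objectId) {
        return annotationByExportId.containsKey(objectId)
                && !objectId.equals(currentId);
    }

    /** The id requests should be redirected to. */
    public String currentId() {
        return currentId;
    }

    /** Annotation the class annotator should use for this client, or null. */
    public String annotationFor(String objectId) {
        return annotationByExportId.get(objectId);
    }
}
```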

I realize this doesn't solve all problems related to codebase versioning
(e.g. not the cases where intermediate services pass around objects),
but it does solve the most common case for which I ran into troubles
with upgrading of services.

In case there are really incompatible upgrades, and these are defined as
such for the service, this mechanism will also allow throwing an
exception that indicates the client should give up talking to the server
(NoSuchObjectException? or maybe one that indicates the service is
unusable due to an incompatible evolution).
-- 
Mark