Posted to dev@river.apache.org by Peter Firmstone <ji...@zeus.net.au> on 2010/04/23 01:30:50 UTC

Jini Spec API changes - Advice needed

I've created several new classes and interfaces. Currently these reside in 
the net.jini namespace; I need community advice on which of these 
belong in Jini's API, and good homes for those that don't.

The changes will be committed shortly, after my QA test results are in.

It'd be neat if we could set up some sort of javadoc diff to monitor 
changes; does anyone have experience with that?

Regards,

Peter.

Re: Jini Spec API changes - Advice needed

Posted by Peter Firmstone <ji...@zeus.net.au>.
Christopher Dolan wrote:
> In general (I have not looked at Peter's recent changes yet) I vote for
> simple binary compatibility by adding @Deprecated to old methods and
> adding new methods with the altered signatures.  Instead of rearranging
> method arguments, I favor changing the method name, or creating a whole
> new class/interface and deprecating the old one if the changes are
> extensive.  
I'm tending to agree with you.

When two methods or constructors have similar signatures, it's the call 
site that becomes ambiguous when null parameters are passed, which means 
code that currently compiles may stop compiling when new, similar methods 
or constructors are added, unless the null parameters are cast to their 
intended types. I guess if someone is going to the effort of recompiling, 
adding a few casts isn't much extra work, and it makes the replacement 
very obvious.
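
To make that concrete, here's a minimal sketch of the problem and the 
work-around; the names are made up for illustration and aren't actual 
River API:

// Hypothetical API, for illustration only.
interface Marshaller {}
interface StreamMarshaller {}

class Registrar {
    // Original method.
    void register(Object service, Marshaller m) {}

    // New, similar method added later.
    void register(Object service, StreamMarshaller m) {}
}

class Caller {
    void existingCode(Registrar r, Object service) {
        // This compiled against the old API, but is now ambiguous because
        // null matches both overloads equally well:
        // r.register(service, null);            // no longer compiles

        // Casting the null to its intended type restores compilation and
        // makes the intended overload obvious to the reader:
        r.register(service, (Marshaller) null);
    }
}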


> But where that's impossible (like the refactoring to reduce
> java.rmi dependencies) I'm not sure what to propose.  The most important
> thing to me is that any River 2.2.x code I write will be able to talk to
> my Jini 2.1 code.
>   
I figured there would be significant pressure to maintain backward 
compatibility; so far I think I'm succeeding. I suspect this is the reason 
why Gregg's improvements to Reggie and ServiceRegistrar never made it 
into River. However, I think I can give Gregg what he needs without 
breaking backward compatibility.

Commit to follow.

Cheers,

Peter.


> I don't like the sound of the ASM post-processing technique you propose.
> It sounds fragile and it will make debugging harder since the source
> won't match the bytecode.  But I'll keep an open mind if others have had
> positive experience with such an approach.
>
> Sorry to be negative.
>
> Chris
>
> -----Original Message-----
> From: Peter Firmstone [mailto:jini@zeus.net.au] 
> Sent: Friday, April 23, 2010 10:13 PM
> To: river-dev@incubator.apache.org
> Subject: Re: Jini Spec API changes - Advice needed
>
> Thanks Chris,
>
> I'll look into what's needed to make it an ant build option.
>
> On the subject of API changes, there is one particular bugbear I have 
> when it comes to maintaining Binary compatibility:
>
>     * You can't change a method signature's parameters, not even to a
>       superclass - any type change breaks binary compatibility (
>       exceptions aren't part of method signatures), which is annoying,
>       since changing a method to a superclass doesn't break compile time
>       compatibility and only requires a simple recompile for application
>       upgrades.
>
>
> However, maintaining binary compatibility requires keeping the original 
> method and adding a new one, often with the parameters moved around to 
> avoid compile-time method signature ambiguity. Existing applications then 
> require not only a full recompile, but editing of all occurrences of the 
> old method signature in source code, which is far less likely to happen.
>
> *So I pose these questions:
> *
>
>     * What sort of Compatibility do you want to maintain? 
>     * Is compile time enough or do you want binary as well? 
>     * Or do you want to have your cake and eat it too?
>
> *Possible Solutions:*
>
>     * We could create a tool that utilises ASM to rewrite method
>       signatures of existing binaries to be compatible.
>     * Or is there some kind of annotation that we could use to have ASM
>       add the old method signature to Apache River after compilation? 
>       Then we don't have to change existing application binaries, a
>       simple recompile means new binaries for existing applications now
>       link to the new methods.  If anyone has any ideas for such an
>       annotation, or if someone has done this before, please advise.
>       (This would only work for classes, not interfaces).
>
>
> Breaking Binary compatibility doesn't break Serialization 
> compatibility.  However it does bring with it issues for distributed 
> computing, such as ensuring the local JVM has the right binary version, 
> that is compatible locally, in the correct ClassLoader, but for now, 
> I'll save that issue for another thread.
>
> In River, we have three compatibility concerns:
>
>    1. JVM local Binary Compatibility.
>    2. Compile time Source Compatibility.
>    3. Distributed Serialization Compatibility.
>
> It would be preferable to maintain binary and source level compatibility 
> with the Jini spec, in order to prevent forklift upgrade requirements 
> for existing installations; however, if someone can show there is a 
> significant reason not to, then I'll consider that too.
>
> Note I'm only referring to the net.jini.* namespace.
>
> Best Regards,
>
> Peter Firmstone.
>
> Christopher Dolan wrote:
>   
>> I recommend http://www.jdiff.org/
>> Here's an example of use:
>>
>> % javadoc -doclet jdiff.JDiff -docletpath 'jdiff.jar;xerces.jar'
>> -apiname testng5.7 -sourcepath '..\testng-5.7\testng-5.7\src\main'
>> org.testng
>> % javadoc -doclet jdiff.JDiff -docletpath 'jdiff.jar;xerces.jar'
>> -apiname testng5.8 -sourcepath '..\testng-5.8\testng-5.8\src\main'
>> org.testng
>> % javadoc -doclet jdiff.JDiff -docletpath 'jdiff.jar;xerces.jar'
>> -oldapi testng5.7 -newapi testng5.8 org.testng
>> % open changes.html
>>
>> Chris
>>
>> -----Original Message-----
>> From: Peter Firmstone [mailto:jini@zeus.net.au] 
>> Sent: Thursday, April 22, 2010 6:31 PM
>> To: river-dev@incubator.apache.org
>> Subject: Jini Spec API changes - Advice needed
>>
>> I've created several new classes and interfaces. Currently these reside in
>> the net.jini namespace; I need community advice on which of these
>> belong in Jini's API, and good homes for those that don't.
>>
>> The changes will be committed shortly, after my QA test results are in.
>>
>> It'd be neat if we could set up some sort of javadoc diff to monitor 
>> changes, does anyone have experience with it?
>>
>> Regards,
>>
>> Peter.
>>
>>   
>>     
>
>
>   


Re: Jini Spec API changes - ServiceRegistrar AND OR <> Entry comparison Filtering

Posted by Peter Firmstone <ji...@zeus.net.au>.
Hi Tim,

Awesome to hear from you, glad you're back, hope everything's going well 
for you now.

I'm after all ideas. What I'm thinking about is something similar to 
Mike Warres' codebase service, with a twist:

   1. During Marshalling of a Service, collect package names and
      versions (Utilising the Java Package Versioning Spec, works for
      OSGi too) and annotate the service proxy (requires a new URL
      implementation).
   2. The client checks if any local packages satisfy the package name
      and version.
   3. If not found locally, the client queries a codebase resolution
      service with the package name, version and a list of public key
      certificates (these might be common to both the client and the
      service).
   4. The codebase resolution service returns a message digest of a jar
      file that satisfies the package name, version and signers (public
      key certificates) if possible over a secure connection.
   5. The client uses this message digest to request the jar file from
      a codebase service.
   6. When the jar file arrives, the client checks the message digest,
      followed by the signers (a sketch of this check follows below).
   7. Trust (permission grants) is based on the identity of the signers'
      public key certificates.
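
Steps 4 to 6 amount to verifying the downloaded jar against a previously 
obtained message digest and then checking its signers.  A minimal sketch of 
that check, using only standard java.security and java.util.jar APIs (the 
digest algorithm, SHA-256, and the method names are my assumptions, not 
committed code):

import java.io.FileInputStream;
import java.io.InputStream;
import java.security.MessageDigest;
import java.security.cert.Certificate;
import java.util.Enumeration;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

public class JarVerifier {

    /** Compare the jar's digest with the one the codebase resolution service supplied. */
    public static boolean digestMatches(String jarPath, byte[] expectedDigest)
            throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        InputStream in = new FileInputStream(jarPath);
        try {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                md.update(buf, 0, n);
            }
        } finally {
            in.close();
        }
        return MessageDigest.isEqual(expectedDigest, md.digest());
    }

    /** Check that every content entry in the jar is signed by the expected certificate. */
    public static boolean signedBy(String jarPath, Certificate expectedSigner)
            throws Exception {
        JarFile jar = new JarFile(jarPath, true); // true = verify signatures while reading
        try {
            Enumeration<JarEntry> entries = jar.entries();
            while (entries.hasMoreElements()) {
                JarEntry entry = entries.nextElement();
                if (entry.isDirectory() || entry.getName().startsWith("META-INF/")) {
                    continue;
                }
                // Certificates are only available after the entry has been read in full.
                InputStream in = jar.getInputStream(entry);
                byte[] buf = new byte[8192];
                while (in.read(buf) != -1) { /* drain to trigger verification */ }
                in.close();
                Certificate[] certs = entry.getCertificates();
                if (certs == null || !contains(certs, expectedSigner)) {
                    return false; // unsigned entry, or signed by someone else
                }
            }
            return true;
        } finally {
            jar.close();
        }
    }

    private static boolean contains(Certificate[] certs, Certificate expected) {
        for (Certificate c : certs) {
            if (c.equals(expected)) {
                return true;
            }
        }
        return false;
    }
}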

Codebase annotation loss can be prevented by generating the annotation from 
the local package version during marshalling, ensuring local code also gets 
an annotation.

What I'm wondering is: it would be neat if the Service could apply a 
constraint on the signers of the jar file; perhaps the codebase 
resolution service could be a mediator?  The client takes the greatest 
risk by using downloaded code, but it would be nice if there were 
some way the Service could verify the code is signed by someone reputable.

I'm thinking about utilising an HMAC digest and the Bouncy Castle 
security provider.

Cheers,

Peter.


Tim Blackman wrote:
> On Apr 29, 2010, at 3:23 PM, Gregg Wonderly wrote:
>
>   
>> Peter Firmstone wrote:
>>     
>>> I don't know how to enable the Service to specify a constraint on the signer of the downloaded codebase if not originating from the service, any ideas?
>>>       
>> The HTTPMD protocol handler (URLStreamHandler) does this by requiring that you know the MD5 sum of the jar that you want to download.  If you try and download the jar, and the sum is different, you can know that the content is not what you originally knew it to be.
>>
>> Not directly signing, but a mechanism that is similar and provides a fairly secured indication of "source" based on what you knew at the moment you acquired the MD5 sum.
>>     
>
> As long as you use a strong enough message digest -- SHA-1 or something still stronger would be better choices these days now that the safety of MD5 is uncertain -- the security of HTTPMD is just as good as that of code signing.
>
> - Tim
>   


Re: Jini Spec API changes - ServiceRegistrar AND OR <> Entry comparison Filtering

Posted by Tim Blackman <ti...@gmail.com>.
On Apr 29, 2010, at 3:23 PM, Gregg Wonderly wrote:

> Peter Firmstone wrote:
>> I don't know how to enable the Service to specify a constraint on the signer of the downloaded codebase if not originating from the service, any ideas?
> 
> The HTTPMD protocol handler (URLStreamHandler) does this by requiring that you know the MD5 sum of the jar that you want to download.  If you try and download the jar, and the sum is different, you can know that the content is not what you originally knew it to be.
> 
> Not directly signing, but a mechanism that is similar and provides a fairly secured indication of "source" based on what you knew at the moment you acquired the MD5 sum.

As long as you use a strong enough message digest -- SHA-1 or something still stronger would be better choices these days now that the safety of MD5 is uncertain -- the security of HTTPMD is just as good as that of code signing.

- Tim

Re: Jini Spec API changes - ServiceRegistrar AND OR <> Entry comparison Filtering

Posted by Gregg Wonderly <gr...@wonderly.org>.
Peter Firmstone wrote:
> I don't know how to 
> enable the Service to specify a constraint on the signer of the 
> downloaded codebase if not originating from the service, any ideas?

The HTTPMD protocol handler (URLStreamHandler) does this by requiring that you 
know the MD5 sum of the jar that you want to download.  If you try and download 
the jar, and the sum is different, you can know that the content is not what you 
originally knew it to be.

Not directly signing, but a mechanism that is similar and provides a fairly 
secure indication of "source" based on what you knew at the moment you acquired 
the MD5 sum.

Gregg Wonderly

Jini Spec API changes - ServiceRegistrar AND OR <> Entry comparison Filtering

Posted by Peter Firmstone <ji...@zeus.net.au>.
Hi,

Have a look at the latest commit. While I haven't enabled anything other 
than exact matching for Entries in the Registrar, I have created some 
interfaces and classes that:

    * Allow unmarshalling to be delayed by the client.
    * Allow selected Entries to be unmarshalled for filtering; see
      MarshalledServiceItem, an abstract class that extends ServiceItem,
      and also StreamServiceRegistrar.
    * Include a new class that utilises multiple combinations or chains
      of ServiceItemFilter filters, so these might be combined in a
      selectable user interface with AND/OR logic.  Developers who have
      created ServiceItemFilters over the years will be able to utilise
      them in new ways.  Filters that only compare Entries can be
      applied prior to unmarshalling (a sketch follows below); those that
      require access to the proxy, to apply MethodConstraints filters or
      proxy verification filters, can be applied after unmarshalling.
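
As a rough sketch of the Entry-only case (using the existing 
net.jini.lookup.ServiceItemFilter and net.jini.core.lookup.ServiceItem APIs; 
the Capacity attribute and the threshold are invented for illustration):

import net.jini.core.entry.Entry;
import net.jini.core.lookup.ServiceItem;
import net.jini.lookup.ServiceItemFilter;

// Hypothetical attribute; Entry fields must be public, serializable objects.
class Capacity implements Entry {
    public Integer freeSlots;
    public Capacity() {}
    public Capacity(Integer freeSlots) { this.freeSlots = freeSlots; }
}

/**
 * Passes only services advertising more than a minimum number of free slots.
 * It never touches item.service, so it can run against results whose proxy
 * has not been unmarshalled yet, as long as the Capacity entry has been.
 */
class MinimumCapacityFilter implements ServiceItemFilter {
    private final int minimum;

    MinimumCapacityFilter(int minimum) { this.minimum = minimum; }

    public boolean check(ServiceItem item) {
        Entry[] attributes = item.attributeSets;
        if (attributes == null) return false;
        for (Entry e : attributes) {
            if (e instanceof Capacity) {
                Integer free = ((Capacity) e).freeSlots;
                if (free != null && free.intValue() > minimum) return true;
            }
        }
        return false;
    }
}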

I intend to enable very large result sets to be incrementally returned, 
while also allowing filters and comparisons based on Entries to be 
executed locally prior to unmarshalling the service, so that 
unmarshalling is reduced to the absolute minimum.

The existing Registrar unmarshalling semantics and proxy verification 
etc can be preserved, without exposing the implementation in River's 
Jini public API.

I would like comments from the Original Authors (if possible & time 
permitting), so that I can fully understand any implications of these 
changes.

These changes are intended to make it possible to implement lookup 
globally.  Result sets can be narrowed down significantly using 
filtering before unmarshalling, and operations can be performed on 
batches of lookup results (services) that can be discarded afterwards, 
allowing garbage collection to clean up de-referenced class 
files and ClassLoaders during ongoing lookup result processing, while 
removing unnecessary codebase downloads.

I'd also like to further reduce codebase downloads by allowing packages 
and codebases to be shared based on package versioning.  The security 
implication is that a codebase would have to be signed by a trusted 
party and treated separately from the service, which might need to 
authenticate itself if required by the client.  I don't know how to 
enable the Service to specify a constraint on the signer of the 
downloaded codebase if it doesn't originate from the service, any ideas?

Note that this implementation is intended to not break backward 
compatibility.

Best Regards,

Peter Firmstone.

Re: Jini Spec API changes - Advice needed

Posted by Peter Firmstone <ji...@zeus.net.au>.
Just a small clarification:

In the presence of Jini 2.1 nodes (which includes River 2.1.2), a River 2.2.0 
Reggie cannot serve the default group; that must be handled by a Reggie 
prior to v2.2.0.  Otherwise ClassNotFoundException and the like will be 
thrown by the earlier nodes.

The River 2.2.0 nodes don't need to utilise the existing 
ServiceRegistrar interface at all; any earlier Reggie proxies will be 
wrapped in a facade automatically by River, mapping the methods of 
StreamServiceRegistrar to ServiceRegistrar.  The exception is Java CDC, 
which won't be able to talk to earlier versions of Reggie at all; that is 
simply solved by having a second Reggie of v2.2.0 or later, with a cdc 
group or something similar.

Cheers,

Peter.

Peter Firmstone wrote:
> Christopher Dolan wrote
>> The most important thing to me is that any River 2.2.x code I write 
>> will be able to talk to
>> my Jini 2.1 code.
>>
>>
>>   
> I've been thinking about the impact of recent changes surrounding 
> ServiceRegistrar and DiscoveryManager, it will be possible to have a 
> binary compatible migration / upgrade path from 2.1 to 2.2.
>
> Reggie's implementation for 2.2.x (currently experimental and subject 
> to change) will be different from earlier versions, existing Jini 
> application code will use the new implementation via a facade if 
> running directly on the Apache River 2.2.x platform.
>
> The impact; while Jini 2.1 nodes exist in a djinn, you will have to 
> use at least one Reggie implementation prior to Apache River 2.2.0, 
> the new nodes can utilise earlier versions of Reggie.  Application 
> code (Services and clients) Running on 2.2.x, can use new 
> StreamServiceRegistrar methods, and Apache River 2.2.x will wrap a 
> facade around any existing Jini 2.1 Reggie's, although results will 
> not be available in marshalled form, so there won't be a performance 
> advantage unless you utilise the new Apache River 2.2.x Reggie.  The 
> good news is that you can write new application code for the later 
> Reggie version, while using the former (from a 2.2.x node) and then 
> get the performance benefits when you upgrade.
>
> Jini 2.1 nodes won't be able to join any groups that utilise a 2.2.x 
> Reggie.
>
> Existing application code, migrated from Jini 2.1 will work on Apache 
> River 2.2.x and doesn't need an earlier Reggie version, as the 
> platform will provide a facade to access the new Reggie via the old 
> interface.
>
> Best bet; have a look at what I've done so far & raise any concerns, 
> or suggest improvements.
>
> I'll post a javadoc diff in my personal apache web area, time permitting.
>
> Cheers,
>
> Peter.
>
>


Re: Jini Spec API changes - Design Decision

Posted by Peter Firmstone <ji...@zeus.net.au>.
net.jini.core.lookup.ResultStream<T> is a simple interface for 
simulating an object stream; it has two methods:

T get(); returns one object.
void close(); allows the implementer to close any resources (threads, 
file handles, remote objects, etc.) before the user nullifies the reference.

There are new utility classes too (pseudo code):

net.jini.lookup.ServiceResultStreamFilter implements 
ResultStream<ServiceItem> {
    // Constructor:
    public ServiceResultStreamFilter(ResultStream<ServiceItem> rs, 
ServiceItemFilter[] sf)
    public ServiceItem get()
}

Unlike ServiceDiscoveryManager's use of ServiceItem[] arrays for return 
results, ServiceResultStreamFilter allows you to chain multiple filter 
implementations to the results from StreamServiceRegistrar.

Some filters may just allow through those ServiceItems with a desired 
set of method constraints, such as requiring secure communication. 
Other filters might only allow services whose Entry attributes satisfy 
a greater-than comparison.

Chained filters represent AND statements, while nested filters (those passed 
into the constructor) represent OR statements.

The next utility class only concerns itself with unmarshalling any 
MarshalledServiceItems in the ResultStream; any already unmarshalled 
ServiceItems pass through untouched.

net.jini.lookup.ServiceResultStreamUnmarshaller implements 
ResultStream<ServiceItem> {
    public ServiceResultStreamUnmarshaller(ResultStream<ServiceItem> rs)
    public ServiceItem get()
}
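
As a usage sketch under those (still pseudo-code) signatures, composing the 
two utilities around a registrar result stream might look like this; the 
template, the filters (locationFilter, capacityFilter, constraintFilter) and 
the assumption that get() returns null when the stream is exhausted are 
placeholders rather than settled API, and net.jini.lookup.entry.Location is 
just used as an example Entry class:

// Entry-only filters run first, against marshalled results (no proxy
// unmarshalling); nesting the two filters in one array is the OR case.
ResultStream<ServiceItem> results =
    registrar.lookup(template, new Class[] { Location.class }, 50);
ResultStream<ServiceItem> entryFiltered =
    new ServiceResultStreamFilter(results,
        new ServiceItemFilter[] { locationFilter, capacityFilter });

// Only the survivors are unmarshalled.
ResultStream<ServiceItem> unmarshalled =
    new ServiceResultStreamUnmarshaller(entryFiltered);

// Chaining another filter afterwards is the AND case; this one may
// inspect the proxy, e.g. to check method constraints.
ResultStream<ServiceItem> usable =
    new ServiceResultStreamFilter(unmarshalled,
        new ServiceItemFilter[] { constraintFilter });

try {
    for (ServiceItem item = usable.get(); item != null; item = usable.get()) {
        // use item.service, then drop the reference so it can be collected
    }
} finally {
    usable.close();
}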

Previously a ServiceItemFilter only had one bite of the cherry, so different 
concerns had to be combined in one filter.  Now your Entry filters need 
not be tied to your constraint filters, constraint filters can be shared 
among many lookup queries, and other operations that query the 
service directly can be performed after all the other filters have been 
applied, so unmarshalling is reduced to the bare minimum.

Oh you can utilise all your existing filters too.

Cheers,

Peter.

Peter Firmstone wrote:
> When I wanted a way of returning marshalled or semi marshalled 
> ServiceItem results from StreamServiceRegistrar, I chose to extend 
> ServiceItem, and add two methods:
>
> Object getService()
> Entry[] getEntries()
>
> I called this class MarshalledServiceItem.
>
> Here's a new method from StreamServiceRegistrar:
>
> ResultStream<ServiceItem> lookup(ServiceTemplate tmpl,
>        Class<? extends Entry>[] unmarshalledEntries, int maxBatchSize) 
> throws RemoteException;
>
> The array of unmarshalledEntries is to request these entry classes be 
> unmarshalled and available in ServiceItem.attributeSets.
>
> ServiceItem.serviceID is always unmarshalled.
> ServiceItem.service is null, if delayed unmarshalling is supported.
>
> I chose to make MarshalledServiceItem abstract.  The reason: 
> ServiceItem implements Serializable, which would cause the Registrar's 
> implementation to be published in Jini's API.  So while 
> MarshalledServiceItem extends ServiceItem, none of its methods are 
> mutator methods; they only return the unmarshalled service and complete 
> Entries.
>
> I have another utility class that constructs an unmarshalled 
> ServiceItem, from MarshalledServiceItem.
>
> Check out the code.
>
> Cheers,
>
> Peter.
>


Re: Jini Activation Framework - A sub project -edited?

Posted by Peter Firmstone <ji...@zeus.net.au>.
Hi Gregg,

Just thought I'd re-edit this; I wasn't very clear:

I'm thinking about how to structure the next distribution release: 
namely, creating a platform release artifact that excludes
Activation and some services, and creating an Activation Framework release.

Reasoning:

    * Activation's original design intent was preserving valuable
      resources for multiple occasionally used services running on one
      server, when server memory was limited to 512MB and the average
      desktop had 32 - 64MB.  Today, with server memory of 32GB and
      clients with 4GB, Activation adds complexity, but the benefits
      are no longer clear.
    * There is an obvious need to preserve backward compatibility for
      existing applications that utilise Activation.
    * Activation is heavily tied to Java SE RMI, which appears no longer
      available on Java CDC platforms.
    * Minimise code differences for Java platform support, to minimise
      maintenance and support, and provide maximum commonality of the
      Jini API between different Java platforms.

I had figured a Mahalo implementation should be included in the Apache
release (without Activation); I want to remove the dependency on
Activation being present in the classpath.  I'll have to do this for a
Java CDC release anyway.

Your use of Norm is interesting; I didn't want to include Norm in the
base release.  It sounds like you're using Norm to control the liveliness
of your exports; it also sounds like these proxies aren't registered with
the lookup service. Tell me more?

My thoughts were:

    * A separate release artifact for the Activation Framework.
    * Platform release artifacts for Java SE and Java CDC.

Your comments are making me think:

    * A separate release artifact for the Activation Framework (Phoenix,
      specific to Java SE).
    * Basic platform release artifacts (with very wide platform support:
      a Java SE 5+ artifact and a separate one for Java CDC 1.11+).
    * Service release artifacts; I want Reggie to take
      advantage of the latest Java language and concurrency features.
      But I want to be able to install other services without
      requiring the Activation Framework release to also be installed,
      e.g. a Transaction Service. I'm still figuring this bit out.
    * Services can be JVM implementation specific; the client doesn't
      need to worry about what happens on the server.  It would be
      desirable though for all proxies to enjoy wide platform support.
    * Any proxies that cannot be supported on earlier platforms won't
      unmarshal and will be returned as null in ServiceItems from lookup
      on those platforms.

Cheers,

Peter.

Gregg Wonderly wrote:
> I use norm and mahalo all the time without activation.  I use a leased smart proxy instead of DGC so that all of the details of proxy management are under my control and I use transactions without activation for mahalos lifecycle.
>
> Gregg wonderly
>
> Sent from my iPad
>
> On May 2, 2010, at 8:37 PM, Peter Firmstone <ji...@zeus.net.au> wrote:
>
>   
>> My reasoning for removal from the platform spec or making it optional: Activation is a Service implementation detail.
>>
>> If there are no objections, I'd like to move it in the near future.
>>
>> Regards,
>>
>> Peter.
>>
>> Peter Firmstone wrote:
>>     
>>> Can we move the Activation Framework to a subproject of Apache River?  So it isn't part of the platform?
>>>
>>> The Activation Framework could be optional and include the following:
>>>
>>>   * Phoenix - Activation Service
>>>   * Norm - Lease Service (This doesn't make much sense outside Activation)
>>>   * Activatable Fiddler - Lookup Discovery Service
>>>   * Activatable Reggie - Service Registrar
>>>   * Activatable Javaspaces - Outrigger FrontEndSpace.
>>>   * Mahalo - Transaction Service (We can create a Non-Activatable
>>>     implementation for the platform)
>>>   * Mercury - Event mailbox (We can create a Non-Activatable
>>>     implementation for the platform)
>>>
>>> These could be bundled together as an Activation Framework Release
>>>
>>> Existing interfaces that are specific to Activation in the net.jini namespace (exclusive of net.jini.activation) could be deprecated and copied to another package namespace, giving existing applications time to transition.
>>>
>>> Then the activation framework becomes something that runs on top of Jini / Apache River, rather than part of it, making Jini / River conceptually simpler to new application developers.
>>>
>>> What are your thoughts?
>>>
>>> Regards,
>>>
>>> Peter.
>>>
>>>
>>>
>>>
>>>       
>
>   



Minimise Codebase downloads Was: Re: StreamServiceRegistrar

Posted by Peter Firmstone <ji...@zeus.net.au>.
Hmm, Gregg, I'm guessing you've got something in mind?  If you do, 
please donate it ;)

My general Ramblings, WARNING: may veer wildly off course, thoughts 
subject to change based on good suggestions too:

Yes, I'm thinking about a URL structure to annotate marshalled data with 
the package name and version number. This should assist people utilising 
OSGi to control ClassLoader visibility using jar manifests and your new 
CodebaseAccessClassLoader, without River requiring it (what a mouthful). 
OSGi doesn't specify how to deal with deserializing objects; I 
suspect that's why R-OSGi (a separate entity from OSGi, which is a spec) 
has its own binary serialization mechanism, which is proprietary, though it 
doesn't preclude the use of another protocol.
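
For the package name and version part, the standard java.lang.Package API 
already exposes what the Java Package Versioning spec records in the jar 
manifest, so an annotation could be derived from it; a small sketch (the 
pkgver: scheme shown is made up purely for illustration):

public class PackageAnnotation {

    /** Builds a version-qualified annotation string for the package owning the given class. */
    public static String annotationFor(Class<?> c) {
        Package p = c.getPackage();
        if (p == null) {
            return null; // default package, or a loader that supplies no package info
        }
        // These values come from the Implementation-* entries in META-INF/MANIFEST.MF.
        String name = p.getName();
        String version = p.getImplementationVersion(); // may be null if the manifest lacks it
        return "pkgver://" + name + "/" + (version == null ? "unversioned" : version);
    }
}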

R-OSGi is clearly heavily influenced by Jini; however, OSGi and its 
lookup semantics are best suited to their original design focus, JVM-local 
modularity.  The OSGi lookup semantics, when applied to 
distributed computing, cause problems during deserialization. R-OSGi, 
despite having its own binary serialization, exposes issues with 
ClassLoaders and class visibility because the registrar doesn't declare 
all associated classes (superclasses, parameters, method returns), only 
one interface by name, whereas Jini lookup semantics don't prevent 
determination of these classes, allowing for better control of class 
visibility (although this hasn't been done yet other than Preferred 
Classes).  For that reason OSGi service lookup semantics don't map well 
to distributed systems.  OSGi does a superb job of providing local 
JVM services, but we're concerned with distributed services, and they're 
very different beasties.

Perhaps it is fair to say that both Jini and OSGi registrars target 
their intended scope appropriately.  Therefore, in an OSGi framework 
where an application can utilise both Jini and OSGi services, it should not 
attempt to map a local OSGi service to a Jini distributed service and 
vice versa, but instead use each for its intended purpose.

Which brings me back to Service Interfaces, and a past river-dev 
discussion about Maven dependencies. I haven't used Maven, so can't 
comment too much, but you rightly pointed out that the dependencies are 
not on the *-dl.jar but instead on the Service Interfaces defined in the 
Jini spec, and hence the jsk-platform.jar. This, I think, underpins most 
of the misunderstanding surrounding Jini technology: it is the Service 
Interface on which everything depends.  I've thought about this and 
believe that for non-platform services, the Service Interface and any 
return or parameter interfaces / classes should be packaged (in a jar 
or jars) separately from service implementations, as indeed Jini 
service implementations are.  The service implementations (service and 
service-dl jars) should then depend on the ServiceInterface.jar (SI.jar).

This is where package versioning comes in: when you vary your service 
implementation, you want a specific version linking your service.jar to 
your service-dl.jar. You could just rename both jars, I suppose, 
but that doesn't fit well with some frameworks.  The service 
implementation (service.jar and service-dl.jar) is entirely a private 
concern; no classes that form any part of any public API belong within 
it, and you can do whatever you like with any interfaces contained within an 
implementation without harming any external software, so long as the 
service.jar (server) and service-dl.jar (client) versions match.

Everything within a Service Interface jar (SI.jar) should be public API 
interfaces (whether from another jar, or Java or Jini platform API 
classes); it must also be stateless and not require any form of 
persistence whatsoever.

Then ClassLoader visibility should be:

SI.jar ClassLoaders should be made visible to everything utilising them, 
as they form contracts of compatibility between different service 
implementations and clients, just like platform API classes.  When we 
want to extend a Service Interface, perhaps by adding a new interface 
and method, we can increment its version; the version scheme publishes 
the expected level of backward compatibility.  That way existing services 
still work with the new version, and the latest and greatest versions 
utilising new interfaces within the SI.jar don't break by loading an 
earlier version, and get looked up by both new and old clients.

As an example if we had a Sales Broker Service we might have:

SalesBrokerService.jar - The service interface API.

BobTheBroker.jar - Bob's service implementation.
BobTheBroker-dl.jar - Bob's proxy.

This doesn't prevent Bill also providing the same service:
BillsBrokerage.jar - Bill's service implementation
BillsBrokerage-dl.jar - Bill's proxy.

All implementers use the same SalesBrokerService.jar, and the clients do 
too.  The proxy's ClassLoader is not directly visible to the client 
application ClassLoader; instead the client holds a reference to the 
proxy via the SalesBrokerService ClassLoader's class types, which are 
visible to both the proxy ClassLoader and the client application ClassLoader.

This brings me to codebase downloads and proxy sharing. Bill and Bob 
don't share proxy implementations; however, Bill might provide a number 
of fail-over services and want all his clients to use the same codebase.

Common codebase schemes could be broken up in a couple of different ways:

One way to share codebase is utilise the same codebase in different 
ClassLoaders, Bill might want to do this if he uses static class 
variables, specific to each service server node in his proxies (in this 
case Bill's the Principal):
CodebaseA->ClassLoader1->proxy1
CodebaseA->ClassLoader2->proxy2
CodebaseA->ClassLoader3->proxy3

However Bob (The Principal), might be happy to have all of his proxy 
object instances share the same ClassLoader and the same permissions.
CodebaseA->ClassLoader1->proxy1
CodebaseA->ClassLoader1->proxy2
CodebaseA->ClassLoader1->proxy3

The above apply only to smart proxies.

For dumb proxies, all proxies must be loaded in the ServiceInterface 
ClassLoader, as they are just Java reflective proxies and don't require 
additional classes.

Dumb proxies can be loaded like this:

CodebaseSI->ClassLoaderSI->proxy1 - Bob's Service proxy
CodebaseSI->ClassLoaderSI->proxy2 - Bill's Service proxy etc.
CodebaseSI->ClassLoaderSI->proxy3
CodebaseSI->ClassLoaderSI->proxy4
CodebaseSI->ClassLoaderSI->proxy5

And can belong to anyone.

I have exactly no idea at this stage how to communicate these models 
into their respective semantics for determining class loading schemes 
during unmarshalling.

Anyone with ideas don't be afraid to post.

Now, something handy that OSGi does is that each bundle contains a list of 
permissions it requires. If we adopt this format for permissions for 
service-dl.jar implementations, and perhaps SI.jar too, it enables us to 
specifically restrict permission grants. It is like a contract of trust: 
the proxy tells you, prior to your loading it, how much trust you must bestow 
upon it for full functionality. You might decide to have a set of grants 
tighter than those requested, but that's up to you, the client.

But one thing is clear, we can't afford to download a particular jar 
more than once.

Any new implementations must also play well within an existing Jini 
cluster, so a Service might register two identical proxies with 
different ServiceRegistrars: one with the old httpmd: URL scheme, and 
one with a new package version URL scheme that requires a codebase to be 
looked up.  The actual service-dl.jar will be the same, just downloaded 
in different ways and loaded in different ClassLoader trees by different 
client nodes.

The interesting part of Jini lookup ServiceTemplates is that it's basically 
looking for instanceof SomeServiceInterface.  The marshalled proxy needs 
to communicate all packages and versions required for unmarshalling at 
the client; this could include any number of jar files to be downloaded.

So it's really all about how we package our services.

Then we can create an upload site with public ServiceInterface source 
and jar files that many people and companies can sign, forming webs of 
trust.  We also need a pool of common Entry classes that people can 
utilise.  That way if we're using delayed proxy unmarshalling, entries 
can be unmarshalled for filtering operations without downloading any 
proxy codebases.

Now we can have an OSGi-compatible versioning scheme and a simplified 
class loader framework without requiring OSGi (no OSGi services, no OSGi 
bundle stop / start / persistence), perhaps even utilising some Felix 
code in River for people that want versioning but not OSGi.  But we 
should also provide the pieces for applications to fully utilise OSGi 
frameworks if they wish to, without requiring other nodes to do so.

Cheers,

Peter.


Gregg Wonderly wrote:
> One of the things that I played around with was a protocol handler which would use a URL structure that specified such versioning information.  It would look up services implementing CodeBaseAccess and ask them if they could provide such a jar file.
>
> This kind of thing makes it easier to deal with some issues about total number of codebase sources, but I am still not sure that it solves the problem you are thinking about.
>
> Gregg Wonderly
>
> Sent from my iPad
>
> On May 5, 2010, at 9:00 PM, Peter Firmstone <ji...@zeus.net.au> wrote:
>
>   
>> The other thing I'm working on is a PackageVersion annotation, using the implementation version and package name from the Java Package Version spec, so developers can version their proxy's allowing sharing of compatible bytecode for reduced codebase downloads.
>>
>> I'm hoping that these things combined will assist to enable lookup over the internet.
>>
>> Peter Firmstone wrote:
>>     
>>> Gregg Wonderly wrote:
>>>       
>>>> Many of my service APIs have streaming sockets needed for I/O based activities.  For example, remote event monitoring happens through an ObjectInputStream that is proxied through the smart proxy on the client to a socket end point that the proxy construction provided the details of on the server.
>>>>         
>>> This too is interesting Gregg,  I've done something similar with the StreamServiceRegistrar; I've created a new interface called ResultStream, to mimic an ObjectInputStream, which is returned from lookup.  The idea is to provide a simple interface and minimise network requests by allowing a smart proxy implementation to request and cache larger chunks.  The main advantage of the Stream like behaviour, is to enable incremental filtering stages and delay unmarshalling of proxy's until after initial Entry filtering, then to control the progress of unmarshalling, so your only dealing with one proxy at at time. Further filtering can be performed after each unmarshalling, such as checking method constraints.  Any unsuitable proxy's can be thrown away before the next is unmarshalled, allowing garbage collection to clean as you go and prevent memory exhaustion.
>>>
>>> The StreamServiceRegistrar lookup method also takes parameters for Entry classes that are to be unmarshalled for initial filtering, allowing delayed unmarshalling of uninteresting entries.
>>>
>>> Unmarshalling will still be performed by the Registrar implementation, the client just gets to chose when it happens.
>>>
>>> Cheers,
>>>
>>> Peter.
>>>
>>>       
>
>   


Re: StreamServiceRegistrar Was: Re: Jini Activation Framework - A sub project?

Posted by Gregg Wonderly <gr...@gmail.com>.
One of the things that I played around with was a protocol handler which would use a URL structure that specified such versioning information.  It would look up services implementing CodeBaseAccess and ask them if they could provide such a jar file.

This kind of thing makes it easier to deal with some issues about total number of codebase sources, but I am still not sure that it solves the problem you are thinking about.

Gregg Wonderly

Sent from my iPad

On May 5, 2010, at 9:00 PM, Peter Firmstone <ji...@zeus.net.au> wrote:

> The other thing I'm working on is a PackageVersion annotation, using the implementation version and package name from the Java Package Version spec, so developers can version their proxy's allowing sharing of compatible bytecode for reduced codebase downloads.
> 
> I'm hoping that these things combined will assist to enable lookup over the internet.
> 
> Peter Firmstone wrote:
>> Gregg Wonderly wrote:
>>> Many of my service APIs have streaming sockets needed for I/O based activities.  For example, remote event monitoring happens through an ObjectInputStream that is proxied through the smart proxy on the client to a socket end point that the proxy construction provided the details of on the server.
>> 
>> This too is interesting Gregg,  I've done something similar with the StreamServiceRegistrar; I've created a new interface called ResultStream, to mimic an ObjectInputStream, which is returned from lookup.  The idea is to provide a simple interface and minimise network requests by allowing a smart proxy implementation to request and cache larger chunks.  The main advantage of the Stream like behaviour, is to enable incremental filtering stages and delay unmarshalling of proxy's until after initial Entry filtering, then to control the progress of unmarshalling, so your only dealing with one proxy at at time. Further filtering can be performed after each unmarshalling, such as checking method constraints.  Any unsuitable proxy's can be thrown away before the next is unmarshalled, allowing garbage collection to clean as you go and prevent memory exhaustion.
>> 
>> The StreamServiceRegistrar lookup method also takes parameters for Entry classes that are to be unmarshalled for initial filtering, allowing delayed unmarshalling of uninteresting entries.
>> 
>> Unmarshalling will still be performed by the Registrar implementation, the client just gets to chose when it happens.
>> 
>> Cheers,
>> 
>> Peter.
>> 
> 

StreamServiceRegistrar Was: Re: Jini Activation Framework - A sub project?

Posted by Peter Firmstone <ji...@zeus.net.au>.
The other thing I'm working on is a PackageVersion annotation, using the 
implementation version and package name from the Java Package Version 
spec, so developers can version their proxies, allowing sharing of 
compatible bytecode for reduced codebase downloads.

I'm hoping that these things combined will assist to enable lookup over 
the internet.
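
On the receiving side, the check for whether a local package already 
satisfies an annotated name and version can lean on the same spec.  A sketch, 
with the simplifying assumption that the annotation carries a dotted 
specification version (the form java.lang.Package knows how to compare):

public class LocalPackageCheck {

    /**
     * True if a package with the given name is already loaded locally and its
     * specification version is at least the one the annotation asks for.
     */
    public static boolean locallySatisfied(String packageName, String requiredVersion) {
        Package local = Package.getPackage(packageName); // null if nothing from it is loaded yet
        if (local == null || local.getSpecificationVersion() == null) {
            return false;
        }
        try {
            // Dotted-decimal comparison as defined by the Package Versioning spec.
            return local.isCompatibleWith(requiredVersion);
        } catch (NumberFormatException badVersion) {
            return false; // malformed version string; fall back to a codebase download
        }
    }
}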

Peter Firmstone wrote:
> Gregg Wonderly wrote:
>> Many of my service APIs have streaming sockets needed for I/O based 
>> activities.  For example, remote event monitoring happens through an 
>> ObjectInputStream that is proxied through the smart proxy on the 
>> client to a socket end point that the proxy construction provided the 
>> details of on the server.
>
> This too is interesting Gregg,  I've done something similar with the 
> StreamServiceRegistrar; I've created a new interface called 
> ResultStream, to mimic an ObjectInputStream, which is returned from 
> lookup.  The idea is to provide a simple interface and minimise 
> network requests by allowing a smart proxy implementation to request 
> and cache larger chunks.  The main advantage of the Stream like 
> behaviour, is to enable incremental filtering stages and delay 
> unmarshalling of proxy's until after initial Entry filtering, then to 
> control the progress of unmarshalling, so your only dealing with one 
> proxy at at time. Further filtering can be performed after each 
> unmarshalling, such as checking method constraints.  Any unsuitable 
> proxy's can be thrown away before the next is unmarshalled, allowing 
> garbage collection to clean as you go and prevent memory exhaustion.
>
> The StreamServiceRegistrar lookup method also takes parameters for 
> Entry classes that are to be unmarshalled for initial filtering, 
> allowing delayed unmarshalling of uninteresting entries.
>
> Unmarshalling will still be performed by the Registrar implementation, 
> the client just gets to chose when it happens.
>
> Cheers,
>
> Peter.
>


Re: Jini Activation Framework - A sub project?

Posted by Peter Firmstone <ji...@zeus.net.au>.
Gregg Wonderly wrote:
> Many of my service APIs have streaming sockets needed for I/O based 
> activities.  For example, remote event monitoring happens through an 
> ObjectInputStream that is proxied through the smart proxy on the 
> client to a socket end point that the proxy construction provided the 
> details of on the server.

This too is interesting, Gregg.  I've done something similar with the 
StreamServiceRegistrar; I've created a new interface called 
ResultStream, to mimic an ObjectInputStream, which is returned from 
lookup.  The idea is to provide a simple interface and minimise network 
requests by allowing a smart proxy implementation to request and cache 
larger chunks.  The main advantage of the stream-like behaviour is to 
enable incremental filtering stages and delay unmarshalling of proxies 
until after initial Entry filtering, then to control the progress of 
unmarshalling, so you're only dealing with one proxy at a time. Further 
filtering can be performed after each unmarshalling, such as checking 
method constraints.  Any unsuitable proxies can be thrown away before 
the next is unmarshalled, allowing garbage collection to clean as you go 
and prevent memory exhaustion.

The StreamServiceRegistrar lookup method also takes parameters for Entry 
classes that are to be unmarshalled for initial filtering, allowing 
delayed unmarshalling of uninteresting entries.

Unmarshalling will still be performed by the Registrar implementation; 
the client just gets to choose when it happens.

Cheers,

Peter.

Re: Jini Activation Framework - A sub project?

Posted by Peter Firmstone <ji...@zeus.net.au>.
Thanks Gregg,

I need more of your kind insight into these issues. Very interesting 
comment about lifecycle management and Rio; I wonder if Dennis is 
reading this thread and would like to comment?  An alternative to 
Activation sounds interesting.

Longer term for Java SE, I would like to split jsk-platform.jar (see 
below) and remove the coupling of Jini service implementations to 
activation, or at least make available some that aren't coupled to it, 
then look at improving scalability and degradation under load for Reggie.

jsk-platform.jar - less activation.
activation.jar - net.jini.activation.*

I concede you're probably right; I need a separate Apache River 
Java CDC branch & release.

Even doing so, it should be possible for most if not all jar archives in 
lib-dl to be usable on Java CDC, provided they are compiled with 
source=1.5, target=jsr14, even if none of the standard Jini services 
run on CDC to begin with; they can be ported later, less activation.

I'm going to need a good build system, so other components can be 
compiled with source=1.5, target=1.5

Cheers,

Peter.

Gregg Wonderly wrote:
> Peter Firmstone wrote:
>> I'm thinking about is how to create the next distribution release 
>> artifacts, namely, creating a platform release artifact that excludes 
>> Activation.  The Java CDC release artifact (zip archive) will be 
>> identical apart from lacking any depreciated classes or methods.
>
> I think that trimming things for the sake of CDC should be focused 
> into a CDC specific spec/artifact-set.  If we splinter the release 
> artifacts that we have today into separate groups, we risk having to 
> do a lot of work to manage the impact this has on existing applications.
>
>> I had figured a Mahalo implementation should be included in the 
>> Apache Release (without Activation), I want to remove the dependency 
>> on Activation being present in the Classpath.  I'll have to do this 
>> for a Java CDC release anyway.
>
> Activation, being used, within the existing pieces of river, as 
> references to the activation framework, does represent an issue.  
> However, are you trying to manage how a CDC client references and uses 
> river artifacts, or are you trying to manage how all of river can 
> compile under CDC limitations?
>
>> Your use of Norm is interesting, I didn't want to include Norm in the 
>> base release.  It sounds like your using Norm to control the 
>> liveliness of your exports, it also sound like these proxy's aren't 
>> registered with the lookup service, tell me more?
>
> If you look at http://pastion.dev.java.net you will see a "PAM" based 
> login mechanism which allows a client to send their login credentials 
> for a particular machine in order to authenticate.  There are now 
> several different products that provide PAM plugins for linux that 
> then utilize some external authentication system such as Active 
> Directory.
>
> My customers wanted to manage access to services using that 
> mechanism.  Pastion has a per call mechanism shown, but I almost 
> always use a "factory" mechanism where the user authenticates and gets 
> back a LeasedSmartProxy subclass.  On the server side, I use 
> java.lang.reflect.InvocationHandler implementations to hook the 
> exported java.lang.reflect.Proxy object into the server.  The 
> LeasedSmartProxy wraps the java.lang.reflect.Proxy and the Lease 
> object and just provides a delegation based implementation of the 
> service interfaces for the client.
>
> Many of my service APIs have streaming sockets needed for I/O based 
> activities.  For example, remote event monitoring happens through an 
> ObjectInputStream that is proxied through the smart proxy on the 
> client to a socket end point that the proxy construction provided the 
> details of on the server.
>
>> My thoughts were:
>>
>>    * A separate release artifact for the Activation Framework.
>>    * Platform release artifacts for Java SE and Java CDC.
>>
>> Your comments are making me think:
>>
>>    * A separate release artifact for the Activation Framework (phoenix,
>>      specific to Java SE).
>>    * Basic platform release artifact (With a very wide platform support
>>      base, one for Java SE 5+, one for Java CDC 1.11).
>>    * One or more Service release artifacts, I want Reggie to take
>>      advantage of the latest java language and concurrency features,
>>      however I want to be able to install other services without
>>      requiring the Activation Framework release to also be installed,
>>      I'm still figuring this bit out.
>
> There are references to the activation framework in the services 
> because they interact with it, rather than being contained by it.  
> Rio, for example shows a way that the lifecycle and deployment 
> constraints can be separated from the service itself.  I don't know if 
> we want to completely separate Activation or if we should just remove 
> the activation framework and make the effort to point at Rio as a 
> lifecycle management system that provides features that manage this 
> issue in a way that allows a simpler POJO kind of service development.
>
> Gregg Wonderly
>
>
>> Cheers,
>>
>> Peter.
>>
>> Gregg Wonderly wrote:
>>> I use norm and mahalo all the time without activation.  I use a 
>>> leased smart proxy instead of DGC so that all of the details of 
>>> proxy management are under my control and I use transactions without 
>>> activation for mahalos lifecycle.
>>>
>>> Gregg wonderly
>>>
>>> Sent from my iPad
>>>
>>> On May 2, 2010, at 8:37 PM, Peter Firmstone <ji...@zeus.net.au> wrote:
>>>
>>>  
>>>> My reasoning for removal from the platform spec or making it 
>>>> optional: Activation is a Service implementation detail.
>>>>
>>>> If there are no objections, I'd like to move it in the near future.
>>>>
>>>> Regards,
>>>>
>>>> Peter.
>>>>
>>>> Peter Firmstone wrote:
>>>>   
>>>>> Can we move the Activation Framework to a subproject of Apache 
>>>>> River?  So it isn't part of the platform?
>>>>>
>>>>> The Activation Framework could be optional and include the following:
>>>>>
>>>>>   * Phoenix - Activation Service
>>>>>   * Norm - Lease Service (This doesn't make much sense outside 
>>>>> Activation)
>>>>>   * Activatable Fiddler - Lookup Discovery Service
>>>>>   * Activatable Reggie - Service Registrar
>>>>>   * Activatable Javaspaces - Outrigger FrontEndSpace.
>>>>>   * Mahalo - Transaction Service (We can create a Non-Activatable
>>>>>     implementation for the platform)
>>>>>   * Mercury - Event mailbox (We can create a Non-Activatable
>>>>>     implementation for the platform)
>>>>>
>>>>> These could be bundled together as an Activation Framework Release
>>>>>
>>>>> Existing interfaces that are specific to Activation in the 
>>>>> net.jini namespace (exclusive of net.jini.activation) could be 
>>>>> depreciated and copied to another package namespace, giving 
>>>>> existing applications time to transition.
>>>>>
>>>>> Then the activation framework becomes something that runs on top 
>>>>> of Jini / Apache River, rather than part of it, making Jini / 
>>>>> River conceptually simpler to new application developers.
>>>>>
>>>>> What are your thoughts?
>>>>>
>>>>> Regards,
>>>>>
>>>>> Peter.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>       
>>>
>>>   
>>
>>
>
>


Re: Jini Activation Framework - A sub project?

Posted by Gregg Wonderly <ge...@cox.net>.
Peter Firmstone wrote:
> I'm thinking about how to create the next distribution release 
> artifacts, namely, creating a platform release artifact that excludes 
> Activation.  The Java CDC release artifact (zip archive) will be 
> identical apart from lacking any deprecated classes or methods.

I think that trimming things for the sake of CDC should be focused into a CDC 
specific spec/artifact-set.  If we splinter the release artifacts that we have 
today into separate groups, we risk having to do a lot of work to manage the 
impact this has on existing applications.

> I had figured a Mahalo implementation should be included in the Apache 
> Release (without Activation), I want to remove the dependency on 
> Activation being present in the Classpath.  I'll have to do this for a 
> Java CDC release anyway.

Activation, being used within the existing pieces of River as references to 
the activation framework, does represent an issue.  However, are you trying to 
manage how a CDC client references and uses River artifacts, or are you trying 
to manage how all of River can compile under CDC limitations?

> Your use of Norm is interesting, I didn't want to include Norm in the 
> base release.  It sounds like your using Norm to control the liveliness 
> of your exports, it also sound like these proxy's aren't registered with 
> the lookup service, tell me more?

If you look at http://pastion.dev.java.net you will see a "PAM" based login 
mechanism which allows a client to send their login credentials for a particular 
machine in order to authenticate.  There are now several different products that 
provide PAM plugins for linux that then utilize some external authentication 
system such as Active Directory.

My customers wanted to manage access to services using that mechanism.  Pastion 
has a per call mechanism shown, but I almost always use a "factory" mechanism 
where the user authenticates and gets back a LeasedSmartProxy subclass.  On the 
server side, I use java.lang.reflect.InvocationHandler implementations to hook 
the exported java.lang.reflect.Proxy object into the server.  The 
LeasedSmartProxy wraps the java.lang.reflect.Proxy and the Lease object and just 
provides a delegation based implementation of the service interfaces for the client.
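
A stripped-down sketch of that factory shape, using only java.lang.reflect 
(the service interface is invented, and the lease handling and export step 
are left out; Pastion has the real thing):

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.rmi.RemoteException;

// Hypothetical service interface, for illustration only.
interface MonitorService {
    String status() throws RemoteException;
}

// Server-side handler hooking the dynamic proxy's calls into the real implementation.
class MonitorHandler implements InvocationHandler {
    private final MonitorService backend;

    MonitorHandler(MonitorService backend) {
        this.backend = backend;
    }

    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        // Authentication and lease checks would go here before delegating.
        return method.invoke(backend, args);
    }
}

class MonitorFactory {
    /** Returns the dynamic proxy that would be exported and wrapped in a leased smart proxy. */
    static MonitorService newProxy(MonitorService backend) {
        return (MonitorService) Proxy.newProxyInstance(
                MonitorService.class.getClassLoader(),
                new Class[] { MonitorService.class },
                new MonitorHandler(backend));
    }
}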

Many of my service APIs have streaming sockets needed for I/O based activities. 
  For example, remote event monitoring happens through an ObjectInputStream that 
is proxied through the smart proxy on the client to a socket end point that the 
proxy construction provided the details of on the server.

> My thoughts were:
> 
>    * A separate release artifact for the Activation Framework.
>    * Platform release artifacts for Java SE and Java CDC.
> 
> Your comments are making me think:
> 
>    * A separate release artifact for the Activation Framework (phoenix,
>      specific to Java SE).
>    * Basic platform release artifact (With a very wide platform support
>      base, one for Java SE 5+, one for Java CDC 1.11).
>    * One or more Service release artifacts, I want Reggie to take
>      advantage of the latest java language and concurrency features,
>      however I want to be able to install other services without
>      requiring the Activation Framework release to also be installed,
>      I'm still figuring this bit out.

There are references to the activation framework in the services because they 
interact with it, rather than being contained by it.  Rio, for example shows a 
way that the lifecycle and deployment constraints can be separated from the 
service itself.  I don't know if we want to completely separate Activation or if 
we should just remove the activation framework and make the effort to point at 
Rio as a lifecycle management system that provides features that manage this 
issue in a way that allows a simpler POJO kind of service development.

Gregg Wonderly


> Cheers,
> 
> Peter.
> 
> Gregg Wonderly wrote:
>> I use norm and mahalo all the time without activation.  I use a leased 
>> smart proxy instead of DGC so that all of the details of proxy 
>> management are under my control and I use transactions without 
>> activation for mahalos lifecycle.
>>
>> Gregg wonderly
>>
>> Sent from my iPad
>>
>> On May 2, 2010, at 8:37 PM, Peter Firmstone <ji...@zeus.net.au> wrote:
>>
>>  
>>> My reasoning for removal from the platform spec or making it 
>>> optional: Activation is a Service implementation detail.
>>>
>>> If there are no objections, I'd like to move it in the near future.
>>>
>>> Regards,
>>>
>>> Peter.
>>>
>>> Peter Firmstone wrote:
>>>    
>>>> Can we move the Activation Framework to a subproject of Apache 
>>>> River?  So it isn't part of the platform?
>>>>
>>>> The Activation Framework could be optional and include the following:
>>>>
>>>>   * Phoenix - Activation Service
>>>>   * Norm - Lease Service (This doesn't make much sense outside 
>>>> Activation)
>>>>   * Activatable Fiddler - Lookup Discovery Service
>>>>   * Activatable Reggie - Service Registrar
>>>>   * Activatable Javaspaces - Outrigger FrontEndSpace.
>>>>   * Mahalo - Transaction Service (We can create a Non-Activatable
>>>>     implementation for the platform)
>>>>   * Mecury - Event mailbox (We can create a Non-Activatable
>>>>     implementation for the platform)
>>>>   * Mercury - Event mailbox (We can create a Non-Activatable
>>>> These could be bundled together as an Activation Framework Release
>>>>
>>>> Existing interfaces that are specific to Activation in the net.jini 
>>>> namespace (exclusive of net.jini.activation) could be depreciated 
>>>> and copied to another package namespace, giving existing 
>>>> applications time to transition.
>>>>
>>>> Then the activation framework becomes something that runs on top of 
>>>> Jini / Apache River, rather than part of it, making Jini / River 
>>>> conceptually simpler to new application developers.
>>>>
>>>> What are your thoughts?
>>>>
>>>> Regards,
>>>>
>>>> Peter.
>>>>
>>>>
>>>>
>>>>
>>>>       
>>
>>   
> 
> 


Re: Jini Activation Framework - A sub project?

Posted by Peter Firmstone <ji...@zeus.net.au>.
Hi Gregg,

I'm thinking about how to create the next distribution release 
artifacts, namely creating a platform release artifact that excludes 
Activation.  The Java CDC release artifact (zip archive) will be 
identical apart from lacking any deprecated classes or methods.

I had figured a Mahalo implementation should be included in the Apache 
release (without Activation); I want to remove the dependency on 
Activation being present in the classpath.  I'll have to do this for a 
Java CDC release anyway.

Your use of Norm is interesting; I didn't want to include Norm in the 
base release.  It sounds like you're using Norm to control the liveness 
of your exports, and it also sounds like these proxies aren't registered 
with the lookup service - tell me more?

My thoughts were:

    * A separate release artifact for the Activation Framework.
    * Platform release artifacts for Java SE and Java CDC.

Your comments are making me think:

    * A separate release artifact for the Activation Framework (phoenix,
      specific to Java SE).
    * Basic platform release artifact (With a very wide platform support
      base, one for Java SE 5+, one for Java CDC 1.11).
    * One or more Service release artifacts, I want Reggie to take
      advantage of the latest java language and concurrency features,
      however I want to be able to install other services without
      requiring the Activation Framework release to also be installed,
      I'm still figuring this bit out.

Cheers,

Peter.

Gregg Wonderly wrote:
> I use norm and mahalo all the time without activation.  I use a leased smart proxy instead of DGC so that all of the details of proxy management are under my control and I use transactions without activation for mahalos lifecycle.
>
> Gregg wonderly
>
> Sent from my iPad
>
> On May 2, 2010, at 8:37 PM, Peter Firmstone <ji...@zeus.net.au> wrote:
>
>   
>> My reasoning for removal from the platform spec or making it optional: Activation is a Service implementation detail.
>>
>> If there are no objections, I'd like to move it in the near future.
>>
>> Regards,
>>
>> Peter.
>>
>> Peter Firmstone wrote:
>>     
>>> Can we move the Activation Framework to a subproject of Apache River?  So it isn't part of the platform?
>>>
>>> The Activation Framework could be optional and include the following:
>>>
>>>   * Phoenix - Activation Service
>>>   * Norm - Lease Service (This doesn't make much sense outside Activation)
>>>   * Activatable Fiddler - Lookup Discovery Service
>>>   * Activatable Reggie - Service Registrar
>>>   * Activatable Javaspaces - Outrigger FrontEndSpace.
>>>   * Mahalo - Transaction Service (We can create a Non-Activatable
>>>     implementation for the platform)
>>>   * Mecury - Event mailbox (We can create a Non-Activatable
>>>     implementation for the platform)
>>>
>>> These could be bundled together as an Activation Framework Release
>>>
>>> Existing interfaces that are specific to Activation in the net.jini namespace (exclusive of net.jini.activation) could be depreciated and copied to another package namespace, giving existing applications time to transition.
>>>
>>> Then the activation framework becomes something that runs on top of Jini / Apache River, rather than part of it, making Jini / River conceptually simpler to new application developers.
>>>
>>> What are your thoughts?
>>>
>>> Regards,
>>>
>>> Peter.
>>>
>>>
>>>
>>>
>>>       
>
>   


Re: Jini Activation Framework - A sub project?

Posted by Peter Firmstone <ji...@zeus.net.au>.
Zsolt Kúti wrote:
> On Mon, 3 May 2010 11:41:58 -0500
> Gregg Wonderly <gr...@gmail.com> wrote:
>
> Hi Gregg,
>
>   
>> I use norm and mahalo all the time without activation.  I use a
>> leased smart proxy instead of DGC so that all of the details of proxy
>>     
> Is there an example for this in any of your public projects?
>
>   
>> management are under my control and I use transactions without
>> activation for mahalos lifecycle.
>>     
> Would you explain this latter sentence, I dont really get it.
>   
Activation means that Mahalo isn't started until it is required, but 
this requires Phoenix to run as a background process.
Mahalo can be constructed to run without activation.

    /**
     * Constructs a non-activatable transaction manager.
     *
     * @param args Service configuration options
     *
     * @param lc <code>LifeCycle</code> reference used for callback
     *
     * @param persistent whether the transaction manager persists its state
     */
    TxnManagerImpl(String[] args, LifeCycle lc, boolean persistent)
        throws Exception
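
For what it's worth, here's a minimal sketch of starting Mahalo that way 
programmatically through com.sun.jini.start.  The codebase URL, the file paths 
and the TransientMahaloImpl class name are illustrative assumptions, not 
something prescribed in this thread, so adjust them to your own installation:

import com.sun.jini.start.NonActivatableServiceDescriptor;
import net.jini.config.EmptyConfiguration;

public class StartTransientMahalo {
    public static void main(String[] args) throws Exception {
        NonActivatableServiceDescriptor mahalo =
            new NonActivatableServiceDescriptor(
                "http://example.host:8080/mahalo-dl.jar",  // download codebase (assumed)
                "/opt/river/policy/mahalo.policy",         // security policy file (assumed)
                "/opt/river/lib/mahalo.jar",               // service classpath (assumed)
                "com.sun.jini.mahalo.TransientMahaloImpl", // assumed non-activatable impl class
                new String[] { "/opt/river/config/mahalo.config" });
        // create() instantiates the service in this JVM - no Phoenix required.
        mahalo.create(EmptyConfiguration.INSTANCE);
    }
}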

Re: Jini Activation Framework - A sub project?

Posted by Zsolt Kúti <la...@gmail.com>.
On Tue, 04 May 2010 12:00:38 -0500
Gregg Wonderly <gr...@wonderly.org> wrote:

> Zsolt Kúti wrote:
> > On Mon, 3 May 2010 11:41:58 -0500
> > Gregg Wonderly <gr...@gmail.com> wrote:
> > 
> > Hi Gregg,
> > 
> >> I use norm and mahalo all the time without activation.  I use a
> >> leased smart proxy instead of DGC so that all of the details of
> >> proxy
> 
> > Is there an example for this in any of your public projects?
> 
> http://pastion.dev.java.net has a version of LeasedSmartProxy and
> related classes visible.  It's not well documented, I just pushed it
> out, hoping to get back to cleaning it up.  That hasn't happened...
> 
> >> management are under my control and I use transactions without
> >> activation for mahalos lifecycle.
>  >
> > Would you explain this latter sentence, I dont really get it.
> 
> I just start mahalo using com.sun.jini.start, without activation.
> The use of the Lease in the smart proxy simulates DGCs features.
> There are APIs to listen to the activities of DGC on the server so
> you can see a proxy become unexported. I prefer to get the DGC
> conversation out of the JERI stream that my service interface is
> using.  I've seen cases where DGC has become stuck for long period of
> times doing checks for liveness.  The use of a Lease, for me, unifies
> the use of a feature already available, and it fits into my needs for
> debugging as well, as I can see the lease renewals from the client
> and see the lease renewal fail on the client etc.

Thanks for the link and explanation, Gregg!

Zsolt

Re: Jini Activation Framework - A sub project?

Posted by Gregg Wonderly <gr...@wonderly.org>.
Zsolt Kúti wrote:
> On Mon, 3 May 2010 11:41:58 -0500
> Gregg Wonderly <gr...@gmail.com> wrote:
> 
> Hi Gregg,
> 
>> I use norm and mahalo all the time without activation.  I use a
>> leased smart proxy instead of DGC so that all of the details of proxy

> Is there an example for this in any of your public projects?

http://pastion.dev.java.net has a version of LeasedSmartProxy and related 
classes visible.  It's not well documented, I just pushed it out, hoping to get 
back to cleaning it up.  That hasn't happened...

>> management are under my control and I use transactions without
>> activation for mahalos lifecycle.
 >
> Would you explain this latter sentence, I dont really get it.

I just start mahalo using com.sun.jini.start, without activation.  The use of 
the Lease in the smart proxy simulates DGC's features.  There are APIs to listen 
to the activities of DGC on the server so you can see a proxy become unexported. 
I prefer to get the DGC conversation out of the JERI stream that my service 
interface is using.  I've seen cases where DGC has become stuck for long periods 
of time doing checks for liveness.  The use of a Lease, for me, reuses a feature 
that is already available, and it fits my needs for debugging as well, as I can 
see the lease renewals from the client and see a lease renewal fail on the 
client, etc.
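
To make the pattern concrete, a rough sketch (the interface and class names 
below are invented for illustration - this is not the actual LeasedSmartProxy 
code): the smart proxy carries a Lease the service granted when it handed the 
proxy out, and the client keeps it alive with a LeaseRenewalManager instead of 
relying on DGC.

import java.io.Serializable;
import java.rmi.Remote;
import java.rmi.RemoteException;
import net.jini.core.lease.Lease;
import net.jini.lease.LeaseRenewalManager;

// Hypothetical service interface, used only for illustration.
interface Echo {
    String echo(String msg) throws RemoteException;
}

// The remote back end the smart proxy delegates to (exported via JERI).
interface EchoBackend extends Echo, Remote {
}

// The smart proxy serialized to the client; it carries a Lease the
// server granted when it handed the proxy out.
class LeasedEchoProxy implements Echo, Serializable {
    private final EchoBackend backend;
    private final Lease lease;

    LeasedEchoProxy(EchoBackend backend, Lease lease) {
        this.backend = backend;
        this.lease = lease;
    }

    Lease getLease() {
        return lease;
    }

    public String echo(String msg) throws RemoteException {
        return backend.echo(msg);  // plain delegation over JERI
    }
}

// Client side: renew the lease while the proxy is in use.  If a renewal
// fails, the server has unexported or reclaimed the resource - the
// liveness information DGC would otherwise keep in its own protocol.
class EchoClient {
    void use(LeasedEchoProxy proxy) throws Exception {
        LeaseRenewalManager lrm = new LeaseRenewalManager();
        lrm.renewUntil(proxy.getLease(), Lease.FOREVER, null);
        System.out.println(proxy.echo("ping"));
    }
}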

Gregg Wonderly

Re: Jini Activation Framework - A sub project?

Posted by Zsolt Kúti <la...@gmail.com>.
On Mon, 3 May 2010 11:41:58 -0500
Gregg Wonderly <gr...@gmail.com> wrote:

Hi Gregg,

> I use norm and mahalo all the time without activation.  I use a
> leased smart proxy instead of DGC so that all of the details of proxy
Is there an example for this in any of your public projects?

> management are under my control and I use transactions without
> activation for mahalos lifecycle.
Would you explain this latter sentence? I don't really get it.

Thanks!
Zsolt

Re: Jini Activation Framework - A sub project?

Posted by Gregg Wonderly <gr...@gmail.com>.
I use norm and mahalo all the time without activation.  I use a leased smart proxy instead of DGC so that all of the details of proxy management are under my control and I use transactions without activation for mahalo's lifecycle.

Gregg wonderly

Sent from my iPad

On May 2, 2010, at 8:37 PM, Peter Firmstone <ji...@zeus.net.au> wrote:

> My reasoning for removal from the platform spec or making it optional: Activation is a Service implementation detail.
> 
> If there are no objections, I'd like to move it in the near future.
> 
> Regards,
> 
> Peter.
> 
> Peter Firmstone wrote:
>> Can we move the Activation Framework to a subproject of Apache River?  So it isn't part of the platform?
>> 
>> The Activation Framework could be optional and include the following:
>> 
>>   * Phoenix - Activation Service
>>   * Norm - Lease Service (This doesn't make much sense outside Activation)
>>   * Activatable Fiddler - Lookup Discovery Service
>>   * Activatable Reggie - Service Registrar
>>   * Activatable Javaspaces - Outrigger FrontEndSpace.
>>   * Mahalo - Transaction Service (We can create a Non-Activatable
>>     implementation for the platform)
>>   * Mecury - Event mailbox (We can create a Non-Activatable
>>     implementation for the platform)
>> 
>> These could be bundled together as an Activation Framework Release
>> 
>> Existing interfaces that are specific to Activation in the net.jini namespace (exclusive of net.jini.activation) could be depreciated and copied to another package namespace, giving existing applications time to transition.
>> 
>> Then the activation framework becomes something that runs on top of Jini / Apache River, rather than part of it, making Jini / River conceptually simpler to new application developers.
>> 
>> What are your thoughts?
>> 
>> Regards,
>> 
>> Peter.
>> 
>> 
>> 
>> 
> 

Re: Jini Activation Framework - A sub project?

Posted by Peter Firmstone <ji...@zeus.net.au>.
My reasoning for removal from the platform spec or making it optional: 
Activation is a Service implementation detail.

If there are no objections, I'd like to move it in the near future.

Regards,

Peter.

Peter Firmstone wrote:
> Can we move the Activation Framework to a subproject of Apache River?  
> So it isn't part of the platform?
>
> The Activation Framework could be optional and include the following:
>
>    * Phoenix - Activation Service
>    * Norm - Lease Service (This doesn't make much sense outside 
> Activation)
>    * Activatable Fiddler - Lookup Discovery Service
>    * Activatable Reggie - Service Registrar
>    * Activatable Javaspaces - Outrigger FrontEndSpace.
>    * Mahalo - Transaction Service (We can create a Non-Activatable
>      implementation for the platform)
>    * Mecury - Event mailbox (We can create a Non-Activatable
>      implementation for the platform)
>
> These could be bundled together as an Activation Framework Release
>
> Existing interfaces that are specific to Activation in the net.jini 
> namespace (exclusive of net.jini.activation) could be depreciated and 
> copied to another package namespace, giving existing applications time 
> to transition.
>
> Then the activation framework becomes something that runs on top of 
> Jini / Apache River, rather than part of it, making Jini / River 
> conceptually simpler to new application developers.
>
> What are your thoughts?
>
> Regards,
>
> Peter.
>
>
>
>


Jini Activation Framework - A sub project?

Posted by Peter Firmstone <ji...@zeus.net.au>.
Can we move the Activation Framework to a subproject of Apache River?  
So it isn't part of the platform?

The Activation Framework could be optional and include the following:

    * Phoenix - Activation Service
    * Norm - Lease Service (This doesn't make much sense outside Activation)
    * Activatable Fiddler - Lookup Discovery Service
    * Activatable Reggie - Service Registrar
    * Activatable JavaSpaces - Outrigger FrontEndSpace.
    * Mahalo - Transaction Service (We can create a Non-Activatable
      implementation for the platform)
    * Mercury - Event mailbox (We can create a Non-Activatable
      implementation for the platform)

These could be bundled together as an Activation Framework Release

Existing interfaces that are specific to Activation in the net.jini 
namespace (exclusive of net.jini.activation) could be deprecated and 
copied to another package namespace, giving existing applications time 
to transition.

Then the activation framework becomes something that runs on top of Jini 
/ Apache River, rather than part of it, making Jini / River conceptually 
simpler to new application developers.

What are your thoughts?

Regards,

Peter.




Re: Jini Spec API changes - Design Decision - Important

Posted by Peter Firmstone <ji...@zeus.net.au>.
You might be wondering why my recent changes weren't implemented 
earlier; put simply, hindsight is 20:20.

After reading about the reasons why certain decisions were made over the 
evolution of Jini and now River, I can say that River's API is simply 
brilliant.  It was put together by some very bright minds.  This doesn't 
mean that it's perfect (anything perfect is obsolete) - it does have its 
warts - but it's the closest thing to the future of computing I'm aware of.

On the subject of Design decisions, here's a big one I'd like to propose:

    * Deprecate the Lease Service and include the Jini Surrogate
      Architecture with River.

Why?

Well, a Lease Service renews a Lease for another Service; however, if that 
service fails, the remote Lease Service continues renewing its leases.  
I don't think anything is so reliable that we can guarantee it will not 
fail, so the Lease Service contributes to unreliability by violating the 
Lease contract, which says that when something fails, its lease is not 
renewed and the resources are reclaimed.  Instead, a Surrogate could 
maintain a lease for a service on a non-Java architecture; when the 
service goes away, so can the Lease.

This discussion excludes the LeaseRenewalManager, which doesn't violate 
the Lease contract, as it renews Leases for remote resources on behalf 
of local applications, simplifying programming.
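
As a deliberately simplified illustration of that distinction (the registrar 
and the service proxy are assumed to come from discovery code not shown here), 
the renewals below run in the same JVM as the resource owner, so if that JVM 
dies the renewals stop and the lookup service reclaims the registration, 
exactly as the Lease contract intends:

import net.jini.core.lease.Lease;
import net.jini.core.lookup.ServiceItem;
import net.jini.core.lookup.ServiceRegistrar;
import net.jini.core.lookup.ServiceRegistration;
import net.jini.lease.LeaseRenewalManager;

class RegistrationExample {
    void register(ServiceRegistrar registrar, Object serviceProxy) throws Exception {
        ServiceItem item = new ServiceItem(null, serviceProxy, null);
        ServiceRegistration reg = registrar.register(item, Lease.FOREVER);
        // Renewed locally: no third party keeps this registration alive
        // if this JVM crashes, so the lookup service reclaims it.
        new LeaseRenewalManager().renewUntil(reg.getLease(), Lease.FOREVER, null);
    }
}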

Best Regards,

Peter.

Re: Jini Spec API changes - Design Decision

Posted by Peter Firmstone <ji...@zeus.net.au>.
When I wanted a way of returning marshalled or semi-marshalled 
ServiceItem results from StreamServiceRegistrar, I chose to extend 
ServiceItem and add two methods:

Object getService()
Entry[] getEntries()

I called this class MarshalledServiceItem.

Here's a new method from StreamServiceRegistrar:

ResultStream<ServiceItem> lookup(ServiceTemplate tmpl,
        Class<? extends Entry>[] unmarshalledEntries, int maxBatchSize) 
throws RemoteException;

The array of unmarshalledEntries is to request these entry classes be 
unmarshalled and available in ServiceItem.attributeSets.

ServiceItem.serviceID is always unmarshalled.
ServiceItem.service is null, if delayed unmarshalling is supported.

I chose to make MarshalledServiceItem abstract.  The reason: ServiceItem 
implements Serializable, which would otherwise cause the Registrar's 
implementation to be published in Jini's API.  So while 
MarshalledServiceItem extends ServiceItem, none of its methods are 
mutator methods; they only return the unmarshalled service and complete 
Entry objects.

I have another utility class that constructs an unmarshalled 
ServiceItem from a MarshalledServiceItem.
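
To show how I expect this to be consumed, here's a rough sketch.  It assumes 
ResultStream follows a simple pull model (a get() that returns null when the 
stream is exhausted, plus close()) - check the committed interface for the 
actual contract - and it leaves out the imports for StreamServiceRegistrar, 
ResultStream and MarshalledServiceItem, whose packages are whatever the 
committed code uses:

import net.jini.core.entry.Entry;
import net.jini.core.lookup.ServiceItem;
import net.jini.core.lookup.ServiceTemplate;
import net.jini.lookup.entry.Name;

class DelayedUnmarshallingExample {
    void find(StreamServiceRegistrar registrar) throws Exception {
        ServiceTemplate tmpl = new ServiceTemplate(null, null, null);
        // Ask for Name entries to be unmarshalled up front; everything
        // else stays marshalled until we decide we want it.
        @SuppressWarnings("unchecked")
        Class<? extends Entry>[] unmarshal = new Class[] { Name.class };
        ResultStream<ServiceItem> results = registrar.lookup(tmpl, unmarshal, 128);
        try {
            for (ServiceItem item = results.get(); item != null; item = results.get()) {
                if (item instanceof MarshalledServiceItem) {
                    MarshalledServiceItem msi = (MarshalledServiceItem) item;
                    // Filter on serviceID and the eagerly unmarshalled
                    // attributes first, then pay the unmarshalling cost.
                    Object service = msi.getService();
                    Entry[] attributes = msi.getEntries();
                    // ... use the service
                } else {
                    // Delayed unmarshalling not supported: item.service
                    // already holds the unmarshalled proxy.
                }
            }
        } finally {
            results.close();
        }
    }
}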

Check out the code.

Cheers,

Peter.

Re: Jini Spec API changes - Advise needed

Posted by Peter Firmstone <ji...@zeus.net.au>.
Christopher Dolan wrote
> The most important thing to me is that I any River 2.2.x code I write will be able talk to
> my Jini 2.1 code.
>
>
>   
I've been thinking about the impact of recent changes surrounding 
ServiceRegistrar and DiscoveryManager; it will be possible to have a 
binary compatible migration / upgrade path from 2.1 to 2.2.

Reggie's implementation for 2.2.x (currently experimental and subject to 
change) will be different from earlier versions; existing Jini 
application code will use the new implementation via a facade if running 
directly on the Apache River 2.2.x platform.

The impact: while Jini 2.1 nodes exist in a djinn, you will have to use 
at least one Reggie implementation prior to Apache River 2.2.0; the new 
nodes can utilise earlier versions of Reggie.  Application code 
(services and clients) running on 2.2.x can use the new 
StreamServiceRegistrar methods, and Apache River 2.2.x will wrap a 
facade around any existing Jini 2.1 Reggies, although results will not 
be available in marshalled form, so there won't be a performance 
advantage unless you utilise the new Apache River 2.2.x Reggie.  The 
good news is that you can write new application code for the later 
Reggie version while using the former (from a 2.2.x node), and then get 
the performance benefits when you upgrade.

Jini 2.1 nodes won't be able to join any groups that utilise a 2.2.x Reggie.

Existing application code migrated from Jini 2.1 will work on Apache 
River 2.2.x and doesn't need an earlier Reggie version, as the platform 
will provide a facade to access the new Reggie via the old interface.

Best bet: have a look at what I've done so far and raise any concerns or 
suggest improvements.

I'll post a javadoc diff in my personal apache web area, time permitting.

Cheers,

Peter.


RE: Jini Spec API changes - Advise needed

Posted by Christopher Dolan <ch...@avid.com>.
In general (I have not looked at Peter's recent changes yet) I vote for
simple binary compatibility by adding @Deprecated to old methods and
adding new methods with the altered signatures.  Instead of rearranging
method arguments, I favor changing the method name, or creating a whole
new class/interface and deprecating the old one if the changes are
extensive.  But where that's impossible (like the refactoring to reduce
java.rmi dependencies) I'm not sure what to propose.  The most important
thing to me is that any River 2.2.x code I write will be able to talk to
my Jini 2.1 code.
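
A toy illustration of the simple case (the class and its methods are invented, 
not taken from the River API): keep the old signature, mark it @Deprecated, and 
let it delegate to the replacement so existing binaries keep linking.

import java.rmi.MarshalledObject;

class EventRegistry {
    /**
     * @deprecated use {@link #register(Object, long)}, which drops the
     *             java.rmi type from the signature.
     */
    @Deprecated
    public long register(MarshalledObject<?> handback, long duration) {
        // The old descriptor stays in the class file, so binaries compiled
        // against it still link; it simply forwards to the new method.
        return register((Object) handback, duration);
    }

    /** Replacement method with the altered signature. */
    public long register(Object handback, long duration) {
        // ... real registration work would go here
        return duration;
    }
}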

I don't like the sound of the ASM post-processing technique you propose.
It sounds fragile and it will make debugging harder since the source
won't match the bytecode.  But I'll keep an open mind if others have had
positive experience with such an approach.

Sorry to be negative.

Chris

-----Original Message-----
From: Peter Firmstone [mailto:jini@zeus.net.au] 
Sent: Friday, April 23, 2010 10:13 PM
To: river-dev@incubator.apache.org
Subject: Re: Jini Spec API changes - Advise needed

Thanks Chris,

I'll look into what's needed to make it an ant build option.

On the subject of API changes, there is one particular bugbear I have 
when it comes to maintaining Binary compatibility:

    * You can't change a method signature's parameters, not even to a
      superclass - any type change breaks binary compatibility (
      exceptions aren't part of method signatures), which is annoying,
      since changing a method to a superclass doesn't break compile time
      compatibility and only requires a simple recompile for application
      upgrades.


However maintaining Binary compatibility requires maintaining the 
original method and adding a new method, often with the parameters moved

around to avoid compile time method signature ambiguity, existing 
applications now require, not only a full recompile, but editing of all 
occurrences of the old method signature in source code, which is far 
less likely to happen.

*So I pose these questions:
*

    * What sort of Compatibility do you want to maintain? 
    * Is compile time enough or do you want binary as well? 
    * Or do you want to have your cake and eat it too?

*Possible Solutions:*

    * We could create a tool that utilises ASM to rewrite method
      signatures of existing binary's to be compatible.
    * Or is there some kind of annotation that we could use to have ASM
      add the old method signature to Apache River after compilation? 
      Then we don't have to change existing application binaries, a
      simple recompile means new binaries for existing applications now
      link to the new methods.  If anyone has any ideas for such an
      annotation, or if someone has done this before, please advise.
      (This would only work for classes, not interfaces).


Breaking Binary compatibility doesn't break Serialization 
compatibility.  However it does bring with it issues for distributed 
computing, such as ensuring the local JVM has the right binary version, 
that is compatible locally, in the correct ClassLoader, but for now, 
I'll save that issue for another thread.

In River, we have three compatibility concerns:

   1. JVM local Binary Compatibility.
   2. Compile time Source Compatibility.
   3. Distributed Serialization Compatibility.

It would be preferable to maintain binary and source level compatibility

with the Jini spec, in order to prevent forklift upgrade requirements 
for existing installations, however if someone can show there is a 
significant reason not to then I'll consider that too.

Note I'm only referring to the net.jini.* namespace.

Best Regards,

Peter Firmstone.

Christopher Dolan wrote:
> I recommend http://www.jdiff.org/
> Here's an example of use:
>
> % javadoc -doclet jdiff.JDiff -docletpath 'jdiff.jar;xerces.jar'
> -apiname testng5.7 -sourcepath '..\testng-5.7\testng-5.7\src\main'
> org.testng
> % javadoc -doclet jdiff.JDiff -docletpath 'jdiff.jar;xerces.jar'
> -apiname testng5.8 -sourcepath '..\testng-5.8\testng-5.8\src\main'
> org.testng
> % javadoc -doclet jdiff.JDiff -docletpath 'jdiff.jar;xerces.jar'
-oldapi
> testng5.7 -newapi testng5.8 org.testng
> % open changes.html
>
> Chris
>
> -----Original Message-----
> From: Peter Firmstone [mailto:jini@zeus.net.au] 
> Sent: Thursday, April 22, 2010 6:31 PM
> To: river-dev@incubator.apache.org
> Subject: Jini Spec API changes - Advise needed
>
> I've created several new classes / interfaces, currently these reside
in
>
> the net.jini name space, I need community advise on which of these 
> belong in Jini's API and good places for those that don't.
>
> The changes will be committed shortly, after my qa tests results.
>
> It'd be neat if we could set up some sort of javadoc diff to monitor 
> changes, does anyone have experience with it?
>
> Regards,
>
> Peter.
>
>   


Re: Jini Spec API changes - Advise needed

Posted by Peter Firmstone <ji...@zeus.net.au>.
Thanks Chris,

I'll look into what's needed to make it an ant build option.

On the subject of API changes, there is one particular bugbear I have 
when it comes to maintaining Binary compatibility:

    * You can't change a method signature's parameters, not even to a
      superclass - any type change breaks binary compatibility
      (exceptions aren't part of method signatures), which is annoying,
      since widening a parameter to a superclass doesn't break compile
      time compatibility and only requires a simple recompile for
      application upgrades (see the sketch below).
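
To illustrate with a toy example (the types are invented):

// Version 1 of a hypothetical library class and the type it accepts.
class Widget { }

class Repository {
    public void add(Widget w) { /* ... */ }
}

// Version 2 widens the parameter to a supertype.  Source that calls
// repository.add(myWidget) still compiles unchanged, but binaries
// compiled against version 1 reference the descriptor add(LWidget;)V,
// which no longer exists in version 2, so they fail at link time with
// NoSuchMethodError until they are recompiled.
class RepositoryV2 {
    public void add(Object o) { /* ... */ }
}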


However, maintaining Binary compatibility requires keeping the 
original method and adding a new method, often with the parameters moved 
around to avoid compile time method signature ambiguity.  Existing 
applications then require not only a full recompile, but editing of all 
occurrences of the old method signature in source code, which is far 
less likely to happen.

*So I pose these questions:*

    * What sort of Compatibility do you want to maintain? 
    * Is compile time enough or do you want binary as well? 
    * Or do you want to have your cake and eat it too?

*Possible Solutions:*

    * We could create a tool that utilises ASM to rewrite method
      signatures of existing binaries to be compatible.
    * Or is there some kind of annotation that we could use to have ASM
      add the old method signature to Apache River after compilation?
      Then we don't have to change existing application binaries; a
      simple recompile means new binaries for existing applications
      link to the new methods.  If anyone has any ideas for such an
      annotation, or if someone has done this before, please advise.
      (This would only work for classes, not interfaces).


Breaking Binary compatibility doesn't break Serialization 
compatibility.  However, it does bring with it issues for distributed 
computing, such as ensuring the local JVM has the right binary version - 
one that is compatible locally, in the correct ClassLoader - but for 
now, I'll save that issue for another thread.

In River, we have three compatibility concerns:

   1. JVM local Binary Compatibility.
   2. Compile time Source Compatibility.
   3. Distributed Serialization Compatibility.

It would be preferable to maintain binary and source level compatibility 
with the Jini spec, in order to prevent forklift upgrade requirements 
for existing installations; however, if someone can show there is a 
significant reason not to, then I'll consider that too.

Note I'm only referring to the net.jini.* namespace.

Best Regards,

Peter Firmstone.

Christopher Dolan wrote:
> I recommend http://www.jdiff.org/
> Here's an example of use:
>
> % javadoc -doclet jdiff.JDiff -docletpath 'jdiff.jar;xerces.jar'
> -apiname testng5.7 -sourcepath '..\testng-5.7\testng-5.7\src\main'
> org.testng
> % javadoc -doclet jdiff.JDiff -docletpath 'jdiff.jar;xerces.jar'
> -apiname testng5.8 -sourcepath '..\testng-5.8\testng-5.8\src\main'
> org.testng
> % javadoc -doclet jdiff.JDiff -docletpath 'jdiff.jar;xerces.jar' -oldapi
> testng5.7 -newapi testng5.8 org.testng
> % open changes.html
>
> Chris
>
> -----Original Message-----
> From: Peter Firmstone [mailto:jini@zeus.net.au] 
> Sent: Thursday, April 22, 2010 6:31 PM
> To: river-dev@incubator.apache.org
> Subject: Jini Spec API changes - Advise needed
>
> I've created several new classes / interfaces, currently these reside in
>
> the net.jini name space, I need community advise on which of these 
> belong in Jini's API and good places for those that don't.
>
> The changes will be committed shortly, after my qa tests results.
>
> It'd be neat if we could set up some sort of javadoc diff to monitor 
> changes, does anyone have experience with it?
>
> Regards,
>
> Peter.
>
>   


RE: Jini Spec API changes - Advise needed

Posted by Christopher Dolan <ch...@avid.com>.
I recommend http://www.jdiff.org/
Here's an example of use:

% javadoc -doclet jdiff.JDiff -docletpath 'jdiff.jar;xerces.jar'
-apiname testng5.7 -sourcepath '..\testng-5.7\testng-5.7\src\main'
org.testng
% javadoc -doclet jdiff.JDiff -docletpath 'jdiff.jar;xerces.jar'
-apiname testng5.8 -sourcepath '..\testng-5.8\testng-5.8\src\main'
org.testng
% javadoc -doclet jdiff.JDiff -docletpath 'jdiff.jar;xerces.jar' -oldapi
testng5.7 -newapi testng5.8 org.testng
% open changes.html

Chris

-----Original Message-----
From: Peter Firmstone [mailto:jini@zeus.net.au] 
Sent: Thursday, April 22, 2010 6:31 PM
To: river-dev@incubator.apache.org
Subject: Jini Spec API changes - Advise needed

I've created several new classes / interfaces, currently these reside in

the net.jini name space, I need community advise on which of these 
belong in Jini's API and good places for those that don't.

The changes will be committed shortly, after my qa tests results.

It'd be neat if we could set up some sort of javadoc diff to monitor 
changes, does anyone have experience with it?

Regards,

Peter.