Posted to dev@river.apache.org by Sim IJskes - QCG <si...@qcg.nl> on 2010/01/12 14:44:25 UTC

Re: Lookup Service Discovery using DNS? (revised)

Peter Firmstone wrote:
> Anyone got any opinions about Lookup Service Discovery?
> 
> How could lookup service discovery be extended to encompass the 
> internet?   Could we utilise DNS to return locations of Lookup Services?
> 
> For world wide lookup services, our current lookup service might return 
> a massive array with too many service matches. Queries present the 
> opportunity to reduce the size of returned results, however security 
> issues from code execution on the lookup service present problems.

I haven't seen any world-wide deployments yet, at least not on my bench.
:-) And I would like to reserve my definite judgement until I have had
an actual production deployment of such a service.

When we have several reggies running on the internet, it would be
handy to find them via DNS SRV records. It would only add an extra
layer of indirection. With DDNS zone updates one could build some kind
of super-registry, but I would prefer just to code it in Java.
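As an illustration, SRV records carry priority, weight, port and target
fields that a discovery utility could use to order candidate reggies. The
sketch below only parses resolver output rather than performing the DNS
query itself; the class name and the `reggie*.example.com` hosts are
invented (4160 is the registered Jini discovery port).

```java
import java.util.*;

/**
 * Minimal sketch: parse DNS SRV RDATA ("priority weight port target") as a
 * resolver would return it, and order candidate lookup services by priority
 * (lowest tried first), as RFC 2782 prescribes. Names are hypothetical.
 */
public class SrvDiscovery {

    public static final class SrvRecord {
        public final int priority, weight, port;
        public final String target;

        public SrvRecord(int priority, int weight, int port, String target) {
            this.priority = priority;
            this.weight = weight;
            this.port = port;
            // strip the trailing dot of a fully qualified DNS name
            this.target = target.endsWith(".")
                    ? target.substring(0, target.length() - 1) : target;
        }
    }

    /** Parse one SRV RDATA string, e.g. "0 5 4160 reggie1.example.com." */
    public static SrvRecord parse(String rdata) {
        String[] f = rdata.trim().split("\\s+");
        if (f.length != 4) throw new IllegalArgumentException(rdata);
        return new SrvRecord(Integer.parseInt(f[0]), Integer.parseInt(f[1]),
                Integer.parseInt(f[2]), f[3]);
    }

    /** Order candidates by priority; lower priority value is tried first. */
    public static List<SrvRecord> order(List<String> rdatas) {
        List<SrvRecord> out = new ArrayList<SrvRecord>();
        for (String r : rdatas) out.add(parse(r));
        Collections.sort(out, new Comparator<SrvRecord>() {
            public int compare(SrvRecord a, SrvRecord b) {
                return a.priority - b.priority;
            }
        });
        return out;
    }
}
```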

About mDNS (Bonjour) SRV records: mDNS is a multicast protocol,
so it follows the same pattern as the multicast discovery currently
implemented. In a broad sense one could suggest that it would suffer
from the same deployment issues as any mDNS-based discovery would,
except that mDNS is more mainstream, and more sites might have a
working infrastructure (substrate?) for it (like mDNS holes in personal
firewalls). But still, mDNS does not work across the internet.

But anyhow, would this solve the problem? In order for a
world-wide registry to work, wouldn't we need to have clearly defined
services? I wouldn't connect to a registry whose source and meaning I
cannot verify. To be honest, I can only see as far as using Jini
scoped for projects under my control, and wouldn't dare to connect to
outside services unless there are clear agreements.

Running Jini on a LAN creates no problems, as the LAN is under central
authority. But to venture out on the internet, which services would I
like to expose? I guess we have to define internet-Jini. Is the internet
a global anonymous crowd, or is it a way to connect isolated
LANs? Or just a happy few, roaming the internet, having the right
certificate chain to identify them?

Gr. Sim

P.S. Just tell me, am I a scared conservative, blocking the way of progress?

-- 
QCG, Software voor het MKB, 071-5890970, http://www.qcg.nl
Quality Consultancy Group b.v., Leiderdorp, Kvk Leiden: 28088397

Re: Lookup Service Discovery using DNS? (revised)

Posted by Peter Firmstone <ji...@zeus.net.au>.
Sim IJskes - QCG wrote:
> Peter Firmstone wrote:
>> Anyone got any opinions about Lookup Service Discovery?
>>
>> How could lookup service discovery be extended to encompass the 
>> internet?   Could we utilise DNS to return locations of Lookup Services?
>>
>> For world wide lookup services, our current lookup service might 
>> return a massive array with too many service matches. Queries present 
>> the opportunity to reduce the size of returned results, however 
>> security issues from code execution on the lookup service present 
>> problems.
>
> I haven't seen any world-wide deployments yet, at least not on my bench.
> :-) And I would like to reserve my definite judgement until I have had
> an actual production deployment of such a service.
>
> When we have several reggies running on the internet, it would be
> handy to find them via DNS SRV records. It would only add an extra
> layer of indirection. With DDNS zone updates one could build some kind
> of super-registry, but I would prefer just to code it in Java.
>
> About mDNS (Bonjour) SRV records: mDNS is a multicast protocol,
> so it follows the same pattern as the multicast discovery currently
> implemented. In a broad sense one could suggest that it would suffer
> from the same deployment issues as any mDNS-based discovery would,
> except that mDNS is more mainstream, and more sites might have a
> working infrastructure (substrate?) for it (like mDNS holes in personal
> firewalls). But still, mDNS does not work across the internet.
>
> But anyhow, would this solve the problem? In order for a
> world-wide registry to work, wouldn't we need to have clearly defined
> services? I wouldn't connect to a registry whose source and meaning I
> cannot verify. To be honest, I can only see as far as using Jini
> scoped for projects under my control, and wouldn't dare to connect to
> outside services unless there are clear agreements.
>
> Running Jini on a LAN creates no problems, as the LAN is under central
> authority. But to venture out on the internet, which services would I
> like to expose? I guess we have to define internet-Jini. Is the
> internet a global anonymous crowd, or is it a way to connect isolated
> LANs? Or just a happy few, roaming the internet, having the right
> certificate chain to identify them?
>
Internet-Jini:  My thoughts are that it is a Global anonymous crowd, 
where trust relationships are used to allow collaboration to occur in 
proportion to the degree of trust established.

A global registry service, provided its interface only uses built-in
Java or core River types, will never require remote code to be executed.

For Jini services to be distributed over the internet, they will require
signed bundles provided by codebase services, and trust agreements.
Bundle signing creates an identity on which a level of trust is based,
predefined by agreement between parties. If a service client uses
signed, trusted bytecode that doesn't originate from the untrusted
service (all services are initially untrusted until identified), then
the client can be assured that the bytecode used for unmarshalling is
trustworthy within the bounds of the predefined trust agreement. Although
for full security we also need to check that serialized objects defensively
copy and verify unmarshalled data (we need a source-code auditing process
for serialization).
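The defensive copy-and-verify idiom for serialized objects might look
like this sketch (the LeasePeriod class is invented for illustration):
readObject() treats the incoming stream as untrusted input, copying
mutable fields and re-checking invariants before the object can escape.

```java
import java.io.*;
import java.util.Date;

/**
 * Sketch of the defensive deserialization pattern discussed above:
 * readObject() must treat the stream as untrusted input, defensively
 * copying mutable fields and validating invariants before the object
 * is visible to callers. The class and its invariant are hypothetical.
 */
public class LeasePeriod implements Serializable {
    private static final long serialVersionUID = 1L;

    private Date start;   // mutable, so it must be copied defensively
    private Date expiry;

    public LeasePeriod(Date start, Date expiry) {
        // defensive copies on the way in, then validate
        this.start = new Date(start.getTime());
        this.expiry = new Date(expiry.getTime());
        check();
    }

    private void check() {
        if (start.after(expiry))
            throw new IllegalArgumentException("start after expiry");
    }

    private void readObject(ObjectInputStream in)
            throws IOException, ClassNotFoundException {
        in.defaultReadObject();
        // never trust the deserialized references: copy, then verify
        start = new Date(start.getTime());
        expiry = new Date(expiry.getTime());
        try {
            check();
        } catch (IllegalArgumentException e) {
            throw new InvalidObjectException(e.getMessage());
        }
    }

    public Date getExpiry() { return new Date(expiry.getTime()); }
}
```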

The client of a codebase service would have a trust relationship with
that codebase service (a company might run their own codebase service),
and the codebase service would itself have trust relationships with
signers who upload code. A company might require that source code pass an
audit process for client code of service implementations prior to use.
The codebase service would itself sign the bundles if they are already
signed by fully trusted parties and have the correct message digest.
The trust relationships are based on the OSGi model of trust agreements.

Using bundles for client service code (currently provided in a single
jar file) might mean better code reuse; additional bundles can be
downloaded based on dependency requirements and package imports.
General-purpose components could be vetted for security, performing
common functions for untrusted client code, enough to make code usable
without breaching security.

Clear trust agreements can be made between parties, companies, etc., and
the bundles downloaded can contain required permissions. Untrusted bundles
would be sandboxed, and all downloaded bytecode would have to pass the
JVM verification process.

Once bytecode is verified, other verification processes can be used, 
such as user authentication certificates, if required.

For this to succeed, type must become divorced from originating codebase
URLs, hence the requirement for a codebase service. Marshalled object
type must be compared based on fully qualified class names and
originating bundles, using OSGi's versioning system. Therefore, if the
bundle version is compatible and the fully qualified class names are equal,
then two marshalled objects have the same type.
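The type-equality rule just described could be sketched as below. Treating
"compatible" as "same major version, provider minor not older" is our
assumption here, loosely following OSGi major.minor.micro semantics; the
class name is invented.

```java
/**
 * Sketch of the proposed rule: two marshalled types match if their fully
 * qualified class names are equal and their bundle versions are compatible
 * in the OSGi major.minor.micro sense. "Same major, provider minor >=
 * required minor" is an assumption made for this illustration.
 */
public class MarshalledType {
    public final String className;
    public final int major, minor, micro;

    public MarshalledType(String className, String version) {
        this.className = className;
        String[] p = version.split("\\.");
        major = Integer.parseInt(p[0]);
        minor = p.length > 1 ? Integer.parseInt(p[1]) : 0;
        micro = p.length > 2 ? Integer.parseInt(p[2]) : 0;
    }

    /** Can a client expecting 'this' type unmarshal an object of 'other'? */
    public boolean compatibleWith(MarshalledType other) {
        return className.equals(other.className)
                && major == other.major
                && other.minor >= minor;
    }
}
```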

Perhaps upon establishing the software infrastructure, we could create a
staging area for submission of client bundle source code, for vetting
and signing by multiple parties?

Cheers,

Peter.



Re: roadmap

Posted by Peter Firmstone <ji...@zeus.net.au>.
Sim IJskes - QCG wrote:
> Peter Firmstone wrote:
>> Similar to a VPN, but without the private part. VPNs use IPsec,
>> which is a low-level OS kernel and router implementation that TCP/IP
>> utilises, and which requires forward planning and administration.
>
> I'm not clear on what the point is here, but it is my intention to 
> create something that can be used in a CUG deployment. So that the 
> owner of the infrastructure can prohibit use of their 'proxyservers' 
> to systems outside the CUG.
>
>> Instead we could communicate over ordinary public networks without
>> any special network admin intervention.
>
> I'm with you!
>
>> This does raise privacy issues though for serialization or message
>> streams, however secure JERI has mechanisms to handle those.
>
> Exactly, you cannot provide proxies in the wild without some kind of
> verification. That's why I keep returning to things analogous to
> tunnels/VPNs etc., because I want a TLS session end to end.
>
>> That is of course if Serialization is used for communication. 
>> Compressed Serialized binary data is the fastest way to communicate
>> over bandwidth restricted and high latency networks.
>
> BTW, pluggable marshallers: this could provide us with a place to put an
> auto-exporter in. We could signal and verify the intent with
> annotations/interfaces. (I'm sure I'm not the first one thinking that.)
>
>
>> For smart proxy implementations we would want the client to be able 
>> to download the marshalled smart proxy from a lookup service and 
>> download the bytecode from a codebase service (it would be easier if 
>> the code base is public), where the smart proxy itself uses its 
>> internal reflective proxy (RMI JERI) to communicate with the private 
>> service via the listening post.  The listening post would just be 
>> relaying the methods / messages while keeping the communication lines 
>> (NAT gateway ports) open between it, the smart proxy and its server.
>
>> Perhaps JERI itself could utilise some sort of Relay listening post 
>> service?
>
> Exactly, i was only talking about a proxy in terms of JERI. I think we 
> now have the proper name for it. Jeri Relay Service (JRS)?
>
> We can also create a standard codebase service, which can (of course)
> also be exported over the JRS.
>
> Gr. Sim
>


Re: roadmap NAT-PMP vs uPnP

Posted by Peter Firmstone <ji...@zeus.net.au>.
NAT-PMP is another protocol, developed by Apple and released in 2005, that
allows routers to assign external ports, advise of changes, etc.; it isn't
as prevalent as UPnP is today.

If we were to pick one or the other, it's probably smarter to go with
NAT-PMP longer term. Which brings me back to where I started.

For now, I think I'll take the other options, p2p TCP and a Jeri Relay 
service, see my earlier posts for details.

Excerpt from the Internet draft at: 
http://files.dns-sd.org/draft-cheshire-nat-pmp.txt

Internet Draft          NAT Port Mapping Protocol        16th April 2008


9.  Noteworthy Features of NAT Port Mapping Protocol

[Temporary Authors' Note (not to be included in published RFC):
The intent of this section is not to bash UPnP, but to be a fair and
accurate comparison of NAT-PMP and IGD. NAT-PMP is frequently compared
to IGD, because superficially it might appear that they perform much the
same task, so it would be an omission for this document to ignore that
and try to pretend the issue doesn't exist. The purpose of this section
is to point out the relevant differences so that implementors can make
an informed decision. If we have any errors or omissions in our
descriptions of how IGD works for creating port mappings, we invite and
welcome feedback from IGD experts who can help us correct those
mistakes.]

   Some readers have asked how NAT-PMP compares to other similar
   solutions, particularly the UPnP Forum's Internet Gateway Device
   (IGD) Device Control Protocol [IGD].

   The answer is that although UPnP IGD is often used as a way for
   client devices to create port mappings programmatically, that's not
   what it was created for. Whereas NAT-PMP was designed to be used
   primarily by software entities managing their own port mappings, UPnP
   IGD was designed to be used primarily by humans configuring all the
   settings of their gateway using some user interface tool. This
   different target audience leads to protocol differences. For example,
   while it is reasonable and sensible to require software entities to
   renew their mappings periodically to prove that they are still there,
   it's not reasonable to require the same thing of a human user. When
   a human user configures their gateway, they expect it to stay
   configured that way until they decide to change it. If they configure
   a port mapping, they expect it to stay configured until they decide
   to delete it.

   Because of this focus on being a general administration protocol for
   all aspects of home gateway configuration, UPnP IGD is a large and
   complicated collection of protocols (360 pages of specification
   spread over 13 separate documents, not counting supporting protocol
   specifications like SSDP and XML). While it may be a fine way for
   human users to configure their home gateways, it is not especially
   suited to the task of programmatically creating port mappings.

   The requirements for a good port mapping protocol, requirements which
   are met by NAT-PMP, are outlined below:


9.1.  Simplicity

   Many home gateways, and many of the devices that connect to them,
   are small, low-cost devices, with limited RAM, flash memory, and CPU
   resources. Protocols they use should be considerate of this,
   supporting a small number of simple operations that can be ...
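The simplicity the draft describes is concrete: a complete NAT-PMP
port-mapping request fits in 12 bytes of UDP, sent to the gateway on port
5351. A hedged sketch in Java, with the field layout taken from the draft
(the class and method names are ours):

```java
import java.nio.ByteBuffer;

/**
 * Sketch of NAT-PMP's simplicity: the entire TCP port-mapping request is a
 * 12-byte UDP packet sent to the gateway on port 5351. Field layout per the
 * draft: version 0, opcode 2 (map TCP), two reserved bytes, internal port,
 * suggested external port, requested lifetime in seconds.
 */
public class NatPmp {
    public static final int GATEWAY_PORT = 5351;
    public static final byte OP_MAP_TCP = 2;

    public static byte[] mapTcpRequest(int internalPort, int externalPort,
                                       long lifetimeSeconds) {
        ByteBuffer b = ByteBuffer.allocate(12);   // big-endian by default
        b.put((byte) 0);                 // version: 0
        b.put(OP_MAP_TCP);               // opcode: 2 = map TCP
        b.putShort((short) 0);           // reserved, must be zero
        b.putShort((short) internalPort);
        b.putShort((short) externalPort);
        b.putInt((int) lifetimeSeconds); // requested mapping lifetime
        return b.array();
    }
}
```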




Re: roadmap

Posted by Peter Firmstone <ji...@zeus.net.au>.
This is why UPnP IGD will remain a home-gateway implementation in the
near future: Cisco doesn't support UPnP.

Information from http://www.sbbi.net/site/upnp/index.html

Security problems

Some security problems have been found with some UPNP™ implementations
(guess who :o) ). Most of the security flaws are implementation
independent and do not concern UPNPLib. However, a DDOS attack can be
achieved due to a protocol flaw. UPNPLib has been developed to not
allow (or at least limit) such kinds of attacks. You can read more
about it here <http://www.goland.org/Tech/upnp_security_flaws.htm>. The
official MS bug report is here
<http://www.microsoft.com/technet/security/bulletin/MS01-059.mspx> and
the security bulletin
<http://www.eeye.com/html/Research/Advisories/AD20011220.html> is from the
company that discovered the issue.

UPNPLib is not concerned with these flaws; time will tell whether other
UPNPLib security issues will be found.

Devices security

Another problem with UPNP™ is that there is no built-in protocol ACL to
define who can access and send orders to UPNP™ devices.

The UPNP™ forum came up with a solution
<http://www.upnp.org/standardizeddcps/security.asp> to fix this issue,
but unfortunately we did not find devices compliant with this spec to
integrate this ACL and security layer into the library. We hope we will
be able to do it sometime soon with some other tools.

This means that this library will not work with devices implementing and 
using such security services.



Peter Firmstone wrote:
> Good call Gregg, an Apache v1.1 library for Upnp already exists, this 
> will be a good start: http://www.sbbi.net/site/upnp/index.html
>
> How's this for a Preferred order for publicly visible services:
>
>   1. Public Address
>   2. Upnp NAT - All the home routers
>   3. STUN TCP - The majority of Enterprise NAT / Firewalls
>   4. TURN TCP - Whatever is left over.
>
> Where / how should this integrate with secure JERI and the utility 
> services (DnsSdRegistrar, JeriUpnp, JeriRendezvous, JeriRelay), 
> Abstracted from any Service utilising it?
>
> Should it be an SPI?
>
> Cheers,
>
> Peter.


Re: roadmap

Posted by Peter Firmstone <ji...@zeus.net.au>.
Good call Gregg, an Apache v1.1 library for Upnp already exists, this 
will be a good start: http://www.sbbi.net/site/upnp/index.html

How's this for a Preferred order for publicly visible services:

   1. Public Address
   2. Upnp NAT - All the home routers
   3. STUN TCP - The majority of Enterprise NAT / Firewalls
   4. TURN TCP - Whatever is left over.
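A rough sketch of how that preferred order might look behind an SPI; the
Transport interface and all names below are invented for illustration:

```java
import java.util.*;

/**
 * Sketch of the preferred connectivity order above behind an SPI-style
 * plug-in point: try each technique in order until a provider reports a
 * publicly reachable endpoint. Interface and names are hypothetical.
 */
public class EndpointSelector {

    /** Candidate techniques, in the preference order proposed above. */
    public enum Technique { PUBLIC_ADDRESS, UPNP_NAT, STUN_TCP, TURN_TCP }

    /** A pluggable provider (SPI-style) for one technique. */
    public interface Transport {
        Technique technique();
        boolean available();   // e.g. probe the router or a STUN server
    }

    /** Pick the first available transport in preference order. */
    public static Technique select(List<Transport> providers) {
        for (Technique t : Technique.values()) {
            for (Transport p : providers) {
                if (p.technique() == t && p.available()) return t;
            }
        }
        throw new IllegalStateException("no transport available");
    }
}
```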

Where / how should this integrate with secure JERI and the utility 
services (DnsSdRegistrar, JeriUpnp, JeriRendezvous, JeriRelay), 
Abstracted from any Service utilising it?

Should it be an SPI?

Cheers,

Peter.

Peter Firmstone wrote:
> It's a good idea to support the UPNP IGD Standardized Device Control
> Protocol V, since it would allow the service to be supported directly;
> it would be the preferred option where it exists, but we still need
> some fallback for enterprise environments where it's usually disabled.
>
> I'll look into it further.
>
> Peter.
>
>> Maybe we need an endpoint implementation which knows how to use uPnP 
>> for port forwarding configuration on consumer routers?  More and more 
>> software is using uPnP for port forwarding.
>>
>> Microsoft's Home Server knows how to do this, and there are others
>> that I've seen doing this to provide appropriate port forwarding
>> changes.
>>
>> Gregg Wonderly
>>
>
>


Re: roadmap

Posted by Peter Firmstone <ji...@zeus.net.au>.
It's a good idea to support the UPNP IGD Standardized Device Control
Protocol V, since it would allow the service to be supported directly;
it would be the preferred option where it exists, but we still need some
fallback for enterprise environments where it's usually disabled.

I'll look into it further.

Peter.

> Maybe we need an endpoint implementation which knows how to use uPnP 
> for port forwarding configuration on consumer routers?  More and more 
> software is using uPnP for port forwarding.
>
> Microsoft's Home Server knows how to do this, and there are others that
> I've seen doing this to provide appropriate port forwarding changes.
>
> Gregg Wonderly
>


Re: sketches

Posted by Sim IJskes - QCG <si...@qcg.nl>.
Patrick Wright wrote:
> I personally think that the concurrency libraries in 1.5 are a good
> reason to move to 1.5 as a minimum. And personally I'm in favor of 1.6
> now that 1.5 has been EOL'd.

+1

Gr. Sim

Re: sketches

Posted by Patrick Wright <pd...@gmail.com>.
On Sat, Feb 6, 2010 at 4:10 PM, Christopher Dolan
<ch...@avid.com> wrote:
> Honest question: considering that Sun end-of-lifed Java 1.5 back in
> October 2009, what's the value in continuing to support the Java 1.4
> platform in River?

We discussed this on the list awhile back. I was of the opinion that
Jini was oriented towards communication between all sorts of
Java-enabled devices, including high-end servers on the one end and
mobile and embedded devices on the other. Hence, if it were important
to the community to support the SDK on devices that didn't include 1.5
features, we should continue to support 1.4.

However, the discussion on that thread indicated there weren't really
strong advocates for keeping 1.4 support. I was raising it more as an
item to discuss and make a decision about, specifically regarding Jini
compatibility across the wider Java VM landscape; we ourselves use
Jini in a server environment, already 1.6 across the board for several
years now, and low-end devices aren't in our game plan at all.

I personally think that the concurrency libraries in 1.5 are a good
reason to move to 1.5 as a minimum. And personally I'm in favor of 1.6
now that 1.5 has been EOL'd.
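For context, the 1.5 concurrency libraries referred to here are
java.util.concurrent; a minimal sketch of the kind of code they replace
hand-rolled thread management with (the example task is invented):

```java
import java.util.concurrent.*;

/**
 * Small illustration of the java.util.concurrent facilities (new in 1.5):
 * a fixed thread pool plus Futures, replacing hand-written Thread code.
 * The parallel-sum task is purely illustrative.
 */
public class PoolDemo {
    public static int sumInParallel(final int[] xs)
            throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        try {
            Future<Integer> lo = pool.submit(new Callable<Integer>() {
                public Integer call() {
                    int s = 0;
                    for (int i = 0; i < xs.length / 2; i++) s += xs[i];
                    return s;
                }
            });
            Future<Integer> hi = pool.submit(new Callable<Integer>() {
                public Integer call() {
                    int s = 0;
                    for (int i = xs.length / 2; i < xs.length; i++) s += xs[i];
                    return s;
                }
            });
            return lo.get() + hi.get();   // blocks until both halves finish
        } finally {
            pool.shutdown();
        }
    }
}
```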


Patrick

Re: River Core Platform [was: Re: sketches]

Posted by Peter Firmstone <ji...@zeus.net.au>.
Sim IJskes - QCG wrote:
> Peter Firmstone wrote:
>> Christopher Dolan wrote:
>>> Honest question: considering that Sun end-of-lifed Java 1.5 back in
>>> October 2009, what's the value in continuing to support the Java 1.4
>>> platform in River?
>>>   
>> Honest Answer: just a few billion Blu-ray players, set-top boxes,
>> multifunction network printers and any other device that runs Java CDC.
>
> Shall we go for a full use of JDK6 facilities, with a pre-processor to 
> create a (or multiple) minimal version?
Might as well; you might want to consider supporting 1.4 with any proxy
code, your choice though. Feel free to use generics. You can still use
@Override too. In other words, the server-side service is free to use
those features.

We need to consider what the minimum requirements are to consume or
export a Jini service (protocols, classes) and make that the core
platform, which has to support Java 1.4.

From there it would be recommended to make smart proxy classes and
service interfaces Java 1.4 compliant, but to use later language
classes where it makes sense to do so. We need to annotate the class
version (bytecode) and package version into MarshalledInstance to ensure
incompatible clients and servers aren't mixed.

Cheers,

Peter.

>
> We can prototype this with an ant/sed/awk script; we could strip the
> @Overrides, selectively include JDK6-dependent facilities, etc.
>
Wouldn't worry about it just yet; let's see how we go first.

Retrotranslator can convert Java 5 bytecode; however, it needs custom
object marshaller input/output streams written for serialization
support. It does support annotations, generics and the concurrency
utilities.

JSR 14 seems a more sensible option to begin with.

> Really, I do sometimes miss something like the C preprocessor, in
> order to postprocess for different deployment scenarios. A //#ifdef
> maybe?
>
> Gr. Sim
>



Re: River Core Platform [was: Re: sketches]

Posted by Sim IJskes - QCG <si...@qcg.nl>.
Peter Firmstone wrote:
> Christopher Dolan wrote:
>> Honest question: considering that Sun end-of-lifed Java 1.5 back in
>> October 2009, what's the value in continuing to support the Java 1.4
>> platform in River?
>>   
> Honest Answer: just a few billion Blu-ray players, set-top boxes,
> multifunction network printers and any other device that runs Java CDC.

Shall we go for a full use of JDK6 facilities, with a pre-processor to 
create a (or multiple) minimal version?

We can prototype this with an ant/sed/awk script; we could strip the
@Overrides, selectively include JDK6-dependent facilities, etc.

Really, I do sometimes miss something like the C preprocessor, in order
to postprocess for different deployment scenarios. A //#ifdef maybe?

Gr. Sim


Re: River Core Platform [was: Re: sketches]

Posted by Peter Firmstone <ji...@zeus.net.au>.
That's good news Bob, heartwarming news actually.  I truly hope both of 
you are able to participate.

Thank you & Welcome to Apache River,

Peter.

Bob Craig wrote:
> Peter,
>  
> I just wanted to chime in privately to pass on two things.  1) Chris and I are both grateful to see the leadership you are providing to River after the long period since Sun dropped active involvement.  We are both heavily committed to Jini within our Corporate environment and appreciate the fact that the "community" is coming alive with your leadership.
>  
> 2) Chris is a solid "open-source" citizen (he is active in several other open-source projects) and sincerely wants to contribute back to River in whatever way he can (he has discussed it with me personally).  The work we've done on the Jini codestream internally is something that Chris wants to contribute back - I am working through the Corporate hoops that we need to jump through here at Avid to get official sign-off to make that happen (and have our asses covered).
>  
> Bob Craig
> Avid Technology
>  
>  
>
> ________________________________
>
> From: Peter Firmstone [mailto:jini@zeus.net.au]
> Sent: Sat 2/6/2010 5:19 PM
> To: river-dev@incubator.apache.org
> Subject: River Core Platform [was: Re: sketches]
>
>
>
> Christopher Dolan wrote:
>   
>> Honest question: considering that Sun end-of-lifed Java 1.5 back in
>> October 2009, what's the value in continuing to support the Java 1.4
>> platform in River?
>>  
>>     
> Honest Answer: just a few billion Blu-ray players, set-top boxes,
> multifunction network printers and any other device that runs Java CDC.
>
> Let me make one thing very clear: No code will be rejected because it
> uses a later JAVA version.
>
> What we do need to define is what constitutes the core platform, so that
> when I want to run River on a CDC device that provides or consumes a
> simple service, I can. Which again highlights another problem,
> modularity: the Jini Specification is supposed to be able to have
> multiple implementations.
>
> I receive an endless stream of resistance about modularity, or
> versioning, every time I try to get the blessing of the River dev list
> to start working towards something, so here I sit doing the menial
> tasks, getting out the new release, posting discussions to river-dev,
> hesitant to make changes in case it turns out to be a waste of my
> valuable time. I have children and have paid work to do also. This
> isn't directed at you Chris, I want to see you participate; I just
> happened to pick up your thread response, it's directed at the list in
> general.
>
> Modularity allows us to have multiple implementations: if we provide a
> service that uses Java 5 language features and its proxy is a smart
> proxy that also uses Java 5 or later, then it isn't
> available to Java 1.4 J2SE or CDC.
>
> However, if it is a simple reflective proxy with typical classes, its
> bytecode is generated dynamically at runtime.
>
> We need a way to annotate what Java version a service provides; well,
> that's what you call modularity. It should be handled for you
> automatically by the River platform; it can be annotated in
> MarshalledObject.
>
>
>       Compatibility Checkers
>
> Some tools used by Apache projects:
>
>     * Java
>           o Clirr <http://clirr.sourceforge.net//> works on binaries
>           o JDiff <http://jdiff.sourceforge.net/> uses sources
>
>
> Once a core platform has been decided upon, then that is all we check
> for Java 1.4 compatibility and those that want the Java 1.4 platform
> support are obliged to maintain that compatibility, namely me, but it
> was me that pushed later Java language feature support in the first
> place and still support it, I want as large an adoption and user pool as
> possible.
>
> So if we put the effort into defining just what the minimum requirements
> are for producing or consuming a basic service, we have the core
> platform, this can then be a small download.
>
> E.g., the JERI Relay service that Sim is working on is not part of the
> core platform, so why not take advantage of the concurrency libraries?
> Reggie services shouldn't be restricted either; you wouldn't provide a
> Reggie service with Java CDC, however its proxy stub would need to be
> compiled with the -target jsr14 option. (Actually Reggie still uses rmic;
> we need to convert to JERI.)
>
> NO CODE WILL BE REJECTED BECAUSE IT USES LATER JAVA LANGUAGE FEATURES
>
> If you want to prove to yourself the -target jsr14 compiler option works,
> edit build.xml, add the option, use JDK 1.6 to compile the current
> codebase and specify source=5 instead of 1.4.
>
> I'm actually starting to wonder if we need two Releases of Apache River:
>
>    1. River for Trusted Intranet Networks
>    2. River for Untrusted Networks
>
> Both could use the same core platform; one would be secure by default
> and require more configuration, the other simple, with few concerns
> about security or codebase evolution (preferred class loading
> mechanisms will suffice).
>
> BR Peter.
>   
>> I've found it to be tricky to avoid using new methods when striving for
>> backward runtime compatibility.  Extensive unit testing or static
>> analysis are the only ways to ensure you've found all the problems,
>> since the compiler won't help you.
>>
>> Googling for "-target jsr14" revealed this less-than-inspiring quote:
>>   "It is convenient, if unsupported, and the compiler generates mostly
>> compatible bytecode in a single pass."
>>   http://twit88.com/blog/2008/08/26/java-understanding-jsr14/
>>
>>
>> Chris
>>
>> -----Original Message-----
>> From: Peter Firmstone [mailto:jini@zeus.net.au]
>> Sent: Friday, February 05, 2010 5:16 PM
>> To: river-dev@incubator.apache.org
>> Subject: Re: sketches
>>
>> Yes, Java 5 language features next release.
>>
>> Although I'd like to find just what our core jini platform should be and
>>
>> compile it with -jsr14 to produce Java 1.4 bytecode from Java 5 source.
>>
>> In that core platform, we couldn't use any new methods or libraries not
>> in Java 1.4, however we could use generics.
>>
>> So for instance the Java cdc platform can consume and provide simple
>> services.
>>
>> Cheers,
>>
>> Peter.
>>
>>  
>>     
>
>
>
>
>   


RE: River Core Platform [was: Re: sketches]

Posted by Bob Craig <bo...@avid.com>.
Peter,
 
I just wanted to chime in privately to pass on two things.  1) Chris and I are both grateful to see the leadership you are providing to River after the long period since Sun dropped active involvement.  We are both heavily committed to Jini within our Corporate environment and appreciate the fact that the "community" is coming alive with your leadership.
 
2) Chris is a solid "open-source" citizen (he is active in several other open-source projects) and sincerely wants to contribute back to River in whatever way he can (he has discussed it with me personally).  The work we've done on the Jini codestream internally is something that Chris wants to contribute back - I am working through the Corporate hoops that we need to jump through here at Avid to get official sign-off to make that happen (and have our asses covered).
 
Bob Craig
Avid Technology
 
 

________________________________

From: Peter Firmstone [mailto:jini@zeus.net.au]
Sent: Sat 2/6/2010 5:19 PM
To: river-dev@incubator.apache.org
Subject: River Core Platform [was: Re: sketches]



Christopher Dolan wrote:
> Honest question: considering that Sun end-of-lifed Java 1.5 back in
> October 2009, what's the value in continuing to support the Java 1.4
> platform in River?
>  
Honest Answer: just a few billion Blu-ray players, set-top boxes,
multifunction network printers and any other device that runs Java CDC.

Let me make one thing very clear: No code will be rejected because it
uses a later JAVA version.

What we do need to define is what constitutes the core platform, so that
when I want to run River on a CDC device that provides or consumes a
simple service, I can. Which again highlights another problem,
modularity: the Jini Specification is supposed to be able to have
multiple implementations.

I receive an endless stream of resistance about modularity, or
versioning, every time I try to get the blessing of the River dev list
to start working towards something, so here I sit doing the menial
tasks, getting out the new release, posting discussions to river-dev,
hesitant to make changes in case it turns out to be a waste of my
valuable time. I have children and have paid work to do also. This
isn't directed at you Chris, I want to see you participate; I just
happened to pick up your thread response, it's directed at the list in
general.

Modularity allows us to have multiple implementations: if we provide a
service that uses Java 5 language features and its proxy is a smart
proxy that also uses Java 5 or later, then it isn't
available to Java 1.4 J2SE or CDC.

However, if it is a simple reflective proxy with typical classes, its
bytecode is generated dynamically at runtime.

We need a way to annotate what Java version a Service provides; that is
what modularity means here. It should be handled automatically by the
River platform, and it can be annotated in MarshalledObject.
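
A minimal sketch of what such an annotation could look like. All names here (VersionedMarshalledObject, the class-file-version field) are hypothetical, not existing River API; the idea is simply that the marshalled proxy carries a minimum-version tag alongside its bytes:

```java
import java.io.Serializable;

/** Hypothetical envelope: a marshalled proxy tagged with the minimum
 *  class-file version it needs, so an incompatible platform can skip it
 *  instead of failing during deserialization. */
public final class VersionedMarshalledObject implements Serializable {
    private static final long serialVersionUID = 1L;

    private final int minClassFileVersion; // 48 = Java 1.4, 49 = Java 5, 50 = Java 6
    private final byte[] marshalledBytes;  // the serialized proxy

    public VersionedMarshalledObject(int minClassFileVersion, byte[] marshalledBytes) {
        this.minClassFileVersion = minClassFileVersion;
        this.marshalledBytes = marshalledBytes.clone();
    }

    /** True if a platform whose VM accepts the given class-file version
     *  can load this proxy. */
    public boolean usableOn(int platformClassFileVersion) {
        return platformClassFileVersion >= minClassFileVersion;
    }

    public byte[] bytes() {
        return marshalledBytes.clone();
    }
}
```

A lookup service, or the client-side lookup utilities, could then filter out services whose proxies the local platform cannot load, before any unmarshalling is attempted.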


      Compatibility Checkers

Some tools used by Apache projects:

    * Java
          o Clirr <http://clirr.sourceforge.net/> works on binaries
          o JDiff <http://jdiff.sourceforge.net/> uses sources


Once a core platform has been decided upon, that is all we check for
Java 1.4 compatibility, and those who want Java 1.4 platform support
(namely me) are obliged to maintain that compatibility. It was me who
pushed later Java language feature support in the first place, and I
still support it; I want as large an adoption and user pool as
possible.

So if we put the effort into defining just what the minimum requirements
are for producing or consuming a basic service, we have the core
platform; this can then be a small download.

Eg, the JERI Relay service that Sim is working on is not part of the
core platform, so why not take advantage of the concurrency libraries?
Reggie services shouldn't be restricted either: you wouldn't provide a
Reggie service with Java CDC, however its proxy stub would need to be
compiled with the -jsr14 option. (Actually Reggie still uses rmic; we
need to convert to JERI.)

NO CODE WILL BE REJECTED BECAUSE IT USES LATER JAVA LANGUAGE FEATURES

If you want to prove to yourself the -jsr14 compiler option works, edit
build.xml, add the option, use JDK1.6 to compile the current codebase
and specify source=5 instead of 1.4.

I'm actually starting to wonder if we need two releases of Apache River:

   1. River for Trusted Intranet Networks
   2. River for Untrusted Networks

Both could use the same core platform, one would be secure by default
and require more configuration, the other simple, with few concerns
about security, or codebase evolution (Preferred Class loading
mechanisms will suffice).

BR Peter.
> I've found it to be tricky to avoid using new methods when striving for
> backward runtime compatibility.  Extensive unit testing or static
> analysis are the only ways to ensure you've found all the problems,
> since the compiler won't help you.
>
> Googling for "-target jsr14" revealed this less-than-inspiring quote:
>   "It is convenient, if unsupported, and the compiler generates mostly
> compatible bytecode in a single pass."
>   http://twit88.com/blog/2008/08/26/java-understanding-jsr14/
>
>
> Chris
>
> -----Original Message-----
> From: Peter Firmstone [mailto:jini@zeus.net.au]
> Sent: Friday, February 05, 2010 5:16 PM
> To: river-dev@incubator.apache.org
> Subject: Re: sketches
>
> Yes, Java 5 language features next release.
>
> Although I'd like to find just what our core jini platform should be and
>
> compile it with -jsr14 to produce Java 1.4 bytecode from Java 5 source.
>
> In that core platform, we couldn't use any new methods or libraries not
> in Java 1.4, however we could use generics.
>
> So for instance the Java cdc platform can consume and provide simple
> services.
>
> Cheers,
>
> Peter.
>
>  





Re: sketches

Posted by Peter Firmstone <ji...@zeus.net.au>.
The converse is that we just keep a copy of the complete API for the 
Java CDC Foundation Profile, collected using static analysis, and compare 
against it.   That copy might just be the serialised form of the 
dependency analysis results.

As per your suggestion, static analysis is the way to go; we might also 
need to copy a subset of the QA tests for testing the core River 
platform on Java CDC.

Cheers,

Peter.

Peter Firmstone wrote:
> Christopher Dolan wrote:
>> Honest question: considering that Sun end-of-lifed Java 1.5 back in
>> October 2009, what's the value in continuing to support the Java 1.4
>> platform in River?
>>
>> I've found it to be tricky to avoid using new methods when striving for
>> backward runtime compatibility.  Extensive unit testing or static
>> analysis are the only ways to ensure you've found all the problems,
>> since the compiler won't help you.
>>   
> Hi Chris,
>
> I've been considering the above, we have a tool called ClassDep, 
> (ClassDepend - the implementation) that uses the ASM bytecode library 
> to perform dependency analysis.  One of the options -edges picks up 
> dependencies between packages.  Now we currently don't have a way of 
> showing all the methods etc, however every static dependency link is 
> analyzed, I have thought about recording all the API signatures from 
> the dependencies.  We can perform an analysis using Clirr to find the 
> unsupported methods in Java Foundation Profile, in comparison to Java 
> 1.6 (I thought about J2SE 1.4.2, but I think 6 is more appropriate).  
> The results of this analysis can be stored in the trunk on svn.
>
> Once we have a baseline on what classes or methods cannot be used, 
> we'll want to provide an ant build task that finds any illegal API 
> with ClassDepend, only the River core component that supports the Java 
> Foundation Profile, will be checked, we can still utilise generics and 
> some other Java 5 Language features in this core component.
>
> We can then make our ClassDepend tool available for checking Service 
> Interfaces and Smart Proxy compatibility with Foundation Profile.  
> This would only apply to the client side.
>
> People wishing to create a service for Foundation Profile would need 
> to use the Java ME SDK
>
> People that don't want to support Foundation Profile, need not concern 
> themselves.
>
> As I said this would only apply to the very core of Apache River, the 
> absolute minimum requirement to produce and consume a service, that 
> would of course include some parts of JERI.  We'll need ways of 
> managing compatibility information at runtime, by annotating the 
> bytecode version and package metadata into our MarshalledObjInstance.  
> So services aren't matched by platforms that cannot support them.
>
> More investigation is required, will keep you posted.
>
> Cheers,
>
> Peter.
>
>
>
>
>> Googling for "-target jsr14" revealed this less-than-inspiring quote:
>>   "It is convenient, if unsupported, and the compiler generates mostly
>> compatible bytecode in a single pass."
>>   http://twit88.com/blog/2008/08/26/java-understanding-jsr14/
>>
>>
>> Chris
>>
>> -----Original Message-----
>> From: Peter Firmstone [mailto:jini@zeus.net.au] Sent: Friday, 
>> February 05, 2010 5:16 PM
>> To: river-dev@incubator.apache.org
>> Subject: Re: sketches
>>
>> Yes, Java 5 language features next release.
>>
>> Although I'd like to find just what our core jini platform should be and
>>
>> compile it with -jsr14 to produce Java 1.4 bytecode from Java 5 source.
>>
>> In that core platform, we couldn't use any new methods or libraries 
>> not in Java 1.4, however we could use generics.
>>
>> So for instance the Java cdc platform can consume and provide simple 
>> services.
>>
>> Cheers,
>>
>> Peter.
>>
>>   
>
>


Re: sketches

Posted by Peter Firmstone <ji...@zeus.net.au>.
Christopher Dolan wrote:
> Honest question: considering that Sun end-of-lifed Java 1.5 back in
> October 2009, what's the value in continuing to support the Java 1.4
> platform in River?
>
> I've found it to be tricky to avoid using new methods when striving for
> backward runtime compatibility.  Extensive unit testing or static
> analysis are the only ways to ensure you've found all the problems,
> since the compiler won't help you.
>   
Hi Chris,

I've been considering the above. We have a tool called ClassDep 
(ClassDepend is the implementation) that uses the ASM bytecode library to 
perform dependency analysis.  One of its options, -edges, picks up 
dependencies between packages.  We currently don't have a way of 
showing all the methods etc., however every static dependency link is 
analyzed, and I have thought about recording all the API signatures from the 
dependencies.  We can perform an analysis using Clirr to find the 
unsupported methods in the Java Foundation Profile, in comparison to Java 
1.6 (I thought about J2SE 1.4.2, but I think 6 is more appropriate).  
The results of this analysis can be stored in the trunk on svn.

Once we have a baseline on what classes or methods cannot be used, we'll 
want to provide an Ant build task that finds any illegal API with 
ClassDepend.  Only the River core component that supports the Java 
Foundation Profile will be checked; we can still utilise generics and 
some other Java 5 language features in this core component.
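
The compile-time check is the important one, since -target jsr14 will not flag missing APIs, but the same question can be asked at runtime with plain reflection. A hedged sketch (ApiCheck and its method are invented for illustration; this is not part of ClassDepend):

```java
/** Hypothetical runtime probe: does the running platform actually have
 *  this public method?  javac -target jsr14 will happily compile a call
 *  to a method that a Foundation Profile VM lacks, so this is the kind
 *  of question a baseline check has to answer. */
public final class ApiCheck {

    private ApiCheck() {}

    public static boolean hasMethod(String className, String method, Class<?>... params) {
        try {
            Class.forName(className).getMethod(method, params);
            return true;
        } catch (ClassNotFoundException e) {
            return false;   // class missing from this profile
        } catch (NoSuchMethodException e) {
            return false;   // class present, method missing
        }
    }
}
```

For example, String.isEmpty() exists from Java 6 on but not in 1.4, so hasMethod("java.lang.String", "isEmpty") distinguishes the two platforms at runtime.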

We can then make our ClassDepend tool available for checking Service 
Interfaces and Smart Proxy compatibility with Foundation Profile.  This 
would only apply to the client side.

People wishing to create a service for Foundation Profile would need to 
use the Java ME SDK.

People that don't want to support Foundation Profile, need not concern 
themselves.

As I said this would only apply to the very core of Apache River, the 
absolute minimum requirement to produce and consume a service, that 
would of course include some parts of JERI.  We'll need ways of managing 
compatibility information at runtime, by annotating the bytecode version 
and package metadata into our MarshalledObjInstance, so services aren't 
matched by platforms that cannot support them.

More investigation is required, will keep you posted.

Cheers,

Peter.




> Googling for "-target jsr14" revealed this less-than-inspiring quote:
>   "It is convenient, if unsupported, and the compiler generates mostly
> compatible bytecode in a single pass."
>   http://twit88.com/blog/2008/08/26/java-understanding-jsr14/
>
>
> Chris
>
> -----Original Message-----
> From: Peter Firmstone [mailto:jini@zeus.net.au] 
> Sent: Friday, February 05, 2010 5:16 PM
> To: river-dev@incubator.apache.org
> Subject: Re: sketches
>
> Yes, Java 5 language features next release.
>
> Although I'd like to find just what our core jini platform should be and
>
> compile it with -jsr14 to produce Java 1.4 bytecode from Java 5 source.
>
> In that core platform, we couldn't use any new methods or libraries not 
> in Java 1.4, however we could use generics.
>
> So for instance the Java cdc platform can consume and provide simple 
> services.
>
> Cheers,
>
> Peter.
>
>   


RE: sketches

Posted by Christopher Dolan <ch...@avid.com>.
Honest question: considering that Sun end-of-lifed Java 1.5 back in
October 2009, what's the value in continuing to support the Java 1.4
platform in River?

I've found it to be tricky to avoid using new methods when striving for
backward runtime compatibility.  Extensive unit testing or static
analysis are the only ways to ensure you've found all the problems,
since the compiler won't help you.

Googling for "-target jsr14" revealed this less-than-inspiring quote:
  "It is convenient, if unsupported, and the compiler generates mostly
compatible bytecode in a single pass."
  http://twit88.com/blog/2008/08/26/java-understanding-jsr14/


Chris

-----Original Message-----
From: Peter Firmstone [mailto:jini@zeus.net.au] 
Sent: Friday, February 05, 2010 5:16 PM
To: river-dev@incubator.apache.org
Subject: Re: sketches

Yes, Java 5 language features next release.

Although I'd like to find just what our core jini platform should be and

compile it with -jsr14 to produce Java 1.4 bytecode from Java 5 source.

In that core platform, we couldn't use any new methods or libraries not 
in Java 1.4, however we could use generics.

So for instance the Java cdc platform can consume and provide simple 
services.

Cheers,

Peter.

Re: sketches

Posted by Peter Firmstone <ji...@zeus.net.au>.
Yes, Java 5 language features next release.

Although I'd like to find just what our core jini platform should be and 
compile it with -jsr14 to produce Java 1.4 bytecode from Java 5 source.  
In that core platform, we couldn't use any new methods or libraries not 
in Java 1.4, however we could use generics.

So for instance the Java cdc platform can consume and provide simple 
services.

Cheers,

Peter.

> Did I hear Peter say that Java 1.5 is a goal for post-River 2.1.2?  If
> so, it's a fairly simple chore to change 1.6 code to work with 1.5.  The
> biggest changes that get me in trouble when I switch between the two
> versions are @Override, String.isEmpty(), and a few Swing changes
> (JTable, etc).
>
> Chris
>
> -----Original Message-----
> From: Sim IJskes - QCG [mailto:sim@qcg.nl] 
> Sent: Friday, February 05, 2010 2:24 AM
> To: river-dev@incubator.apache.org
> Subject: Re: sketches
>
> Can i use @Override and the collections framework? At least for the 
> prototyping stage? If it works out, i can donate the code to the ASF and
>
>   the committers can put it in a contrib directory, with the specific 
> remark that it only compiles with jdk6.
>
>   


RE: sketches

Posted by Christopher Dolan <ch...@avid.com>.
If you use @Override only on methods that override superclass methods
and not on methods that implement interface methods, then it will likely
work on JDK5, and the JDK6 dependency will not be needed.  The java.util
collections API barely changed at all between 5 and 6.
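
The distinction can be shown with a small made-up example (Base, Task and Worker are illustrative names only):

```java
class Base {
    void run() {}
}

interface Task {
    void execute();
}

class Worker extends Base implements Task {
    @Override
    void run() {}             // overrides a superclass method: legal under javac 1.5 and 1.6

    public void execute() {}  // implements an interface method: adding @Override here
                              // only compiles from javac 1.6 on, so omit it for 1.5
}
```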

Did I hear Peter say that Java 1.5 is a goal for post-River 2.1.2?  If
so, it's a fairly simple chore to change 1.6 code to work with 1.5.  The
biggest changes that get me in trouble when I switch between the two
versions are @Override, String.isEmpty(), and a few Swing changes
(JTable, etc).

Chris

-----Original Message-----
From: Sim IJskes - QCG [mailto:sim@qcg.nl] 
Sent: Friday, February 05, 2010 2:24 AM
To: river-dev@incubator.apache.org
Subject: Re: sketches

Can i use @Override and the collections framework? At least for the 
prototyping stage? If it works out, i can donate the code to the ASF and

  the committers can put it in a contrib directory, with the specific 
remark that it only compiles with jdk6.

Re: sketches

Posted by Sim IJskes - QCG <si...@qcg.nl>.
Peter Firmstone wrote:
> Cool, once you've got your ProxiedEndpoint, can I do the Serialized form 
> for you?
> 
> Cheers,
> 
> Peter.

Sorry, but i'm ahead of you! :-) No, just kidding!

I've prefixed everything with Relay. Right now i've got:

MsgConnection.java
MsgOutboundRequest.java
RelayConnectionManager.java
RelayEndpointImpl.java
RelayEndpoint.java
RelayServiceImpl.java
RelayService.java

I've delegated the functionality from RelayEndpoint to 
RelayEndpointImpl, and generalized the OutboundRequest function to 
MsgOutboundRequest, which is generic byte-array oriented. It works 
together with an interface MsgConnection which is the base for the 
RelayConnectionManager.

Can i use @Override and the collections framework? At least for the 
prototyping stage? If it works out, i can donate the code to the ASF and 
  the committers can put it in a contrib directory, with the specific 
remark that it only compiles with jdk6.

Gr. Sim


-- 
QCG, Software voor het MKB, 071-5890970, http://www.qcg.nl
Quality Consultancy Group b.v., Leiderdorp, Kvk Leiden: 28088397

Re: sketches

Posted by Peter Firmstone <ji...@zeus.net.au>.
Cool, once you've got your ProxiedEndpoint, can I do the Serialized form 
for you?

Cheers,

Peter.

Sim IJskes - QCG wrote:
> Peter Firmstone wrote:
>> Ok sounds interesting, let's work together, we need both types of 
>> functionality, one for scalability and the other for reliability, if 
>> NAT p2p TCP fails we need to fall back on a routing service.
>
> I've a few notes:
>
> interface TransportService
>    extends Remote
> {
>    MsgBlock poll();
>
>    void send( MsgBlock mb );
> }
>
> This is to support bidirectional message flow. (think bosh).
>
> class ProxiedEndpoint
>    implements Serializable
> {
>     private SomeId id ;
>
>     private Endpoint transportEndpoint ;
>
>     ProxiedEndpoint( SomeId id, Endpoint transportEndpoint )
>     {
>       ...
>     }
> }
>
> This is for the implementation and registration.
>
> interface ProxyServer
>    extends Remote
> {
>    ProxiedEndpoint createEndpoint();
> }
>
> Gr. Sim
>


sketches

Posted by Sim IJskes - QCG <si...@qcg.nl>.
Peter Firmstone wrote:
> Ok sounds interesting, let's work together, we need both types of 
> functionality, one for scalability and the other for reliability, if NAT 
> p2p TCP fails we need to fall back on a routing service.

I've a few notes:

interface TransportService
    extends Remote
{
    MsgBlock poll();

    void send( MsgBlock mb );
}

This is to support bidirectional message flow. (think bosh).

class ProxiedEndpoint
    implements Serializable
{
     private SomeId id ;

     private Endpoint transportEndpoint ;

     ProxiedEndpoint( SomeId id, Endpoint transportEndpoint )
     {
       ...
     }
}

This is for the implementation and registration.

interface ProxyServer
    extends Remote
{
    ProxiedEndpoint createEndpoint();
}

Gr. Sim

-- 
QCG, Software voor het MKB, 071-5890970, http://www.qcg.nl
Quality Consultancy Group b.v., Leiderdorp, Kvk Leiden: 28088397

Re: Implementing means to allow NAT-ed clients to provide services [was roadmap]

Posted by Peter Firmstone <ji...@zeus.net.au>.
Ok, sounds interesting, let's work together. We need both types of 
functionality, one for scalability and the other for reliability; if NAT 
p2p TCP fails we need to fall back on a routing service.

As soon as the latest release is approved, we're free to work on the 
codebase again.  I've got concurrent library replacements for Policy and 
Permissions, including a wrapper object for a PermissionCollection that 
allows multiple concurrent reads but locks on write 
(ReentrantReadWriteLock), which I'd like to add to the codebase to fix 
some of the problems Gregg has been experiencing with the single-threaded 
synchronization of the Java platform implementations.

In my library it is advantageous not to add any synchronization code to 
a PermissionCollection (unless it mutates on read); the Java 
implementation of Permissions uses synchronisation anyway (albeit 
poorly), so PermissionCollection should never have been required to be 
synchronized.  A parallel to Vector, ArrayList and the Collections 
libraries.
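
A sketch of what such a wrapper might look like, assuming it simply delegates to an existing PermissionCollection; the class name and the details below are mine, not the actual library code being described:

```java
import java.security.Permission;
import java.security.PermissionCollection;
import java.util.Enumeration;
import java.util.concurrent.locks.ReentrantReadWriteLock;

/** Hypothetical wrapper: many threads may call implies()/elements()
 *  concurrently under the shared read lock, while add() takes the
 *  exclusive write lock.  The wrapped collection then needs no
 *  synchronization of its own. */
public class ConcurrentPermissionCollection extends PermissionCollection {
    private static final long serialVersionUID = 1L;

    private final PermissionCollection delegate;
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    public ConcurrentPermissionCollection(PermissionCollection delegate) {
        this.delegate = delegate;
    }

    @Override
    public void add(Permission permission) {
        lock.writeLock().lock();
        try {
            delegate.add(permission);
        } finally {
            lock.writeLock().unlock();
        }
    }

    @Override
    public boolean implies(Permission permission) {
        lock.readLock().lock();
        try {
            return delegate.implies(permission);
        } finally {
            lock.readLock().unlock();
        }
    }

    @Override
    public Enumeration<Permission> elements() {
        lock.readLock().lock();
        try {
            return delegate.elements();
        } finally {
            lock.readLock().unlock();
        }
    }
}
```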

I also have a Concurrent Weak Hash map utility library.

Cheers,

Peter.

Sim IJskes - QCG wrote:
> Peter Firmstone wrote:
>> I would like to start by implementing the Natblaster technique of 
>> traversing NATs with TCP, using Java.  I'm not 100% sure how to fit 
>> this in with JERI, however the following classes appear relevant.
>
> For my deployment i'm going to prototype a (Server)Endpoint factory as 
> a jini service. clients behind NAT can expose their services within 
> the scope of this factory. I'm not so worried about the performance.
>
> For me it's the shortest path to fulfill my requirements.
>
> Maybe it works... :-)
>
> Gr. Sim
>


Re: Implementing means to allow NAT-ed clients to provide services [was roadmap]

Posted by Sim IJskes - QCG <si...@qcg.nl>.
Peter Firmstone wrote:
I would like to start by implementing the Natblaster technique of 
traversing NATs with TCP, using Java.  I'm not 100% sure how to fit this in 
with JERI, however the following classes appear relevant.

For my deployment i'm going to prototype a (Server)Endpoint factory as a 
jini service. Clients behind NAT can expose their services within the 
scope of this factory. I'm not so worried about the performance.

For me it's the shortest path to fulfill my requirements.

Maybe it works... :-)

Gr. Sim

-- 
QCG, Software voor het MKB, 071-5890970, http://www.qcg.nl
Quality Consultancy Group b.v., Leiderdorp, Kvk Leiden: 28088397

Re: Implementing means to allow NAT-ed clients to provide services [was roadmap]

Posted by Peter Firmstone <ji...@zeus.net.au>.
The initial implementation will include the URL of the relevant ICMS, 
however I'd like to make the location of the ICMS discoverable using 
lookup so the ICMS location can change over time.

Peter Firmstone wrote:
> Hi Sim & Gregg,
>
> I just wanted to break this issue out of the original thread and 
> summarise our findings to provide a way forward, for solving one of 
> the original questions posed.
>
> I want the implementation to be invisible to existing client and 
> service implementation and to just work.
>
> I would like to start by implementing the Natblaster technique of 
> traversing NATs with TCP, using Java.  I'm not 100% sure how to fit this 
> in with JERI, however the following classes appear relevant.
>
>    * net.jini.jeri.connection.ServerConnectionManager
>    * net.jini.jeri.connection.ConnectionManager
>
>
> I will need new ServerEndpoint and Endpoint implementations, some 
> observations:
>
>    * The Server will need to be aware of the Client's Public NAT IP
>      Address and receive port.
>    * The Client will need to be aware of the Server's Public NAT IP
>      Address and receive port.
>    * A public service will need to be provided - Initial Connection
>      Mediator Service (ICMS)?
>    * The Client will need to contact the Initial Connection Mediator
>      Service and request a connection to a Nat-ed Service.  This would
>      need to be performed as an implementation detail of the
>      Endpoint.newRequest() method.
>    * The NAT-ed Service would need to be listening using the
>      ServerEndpoint implementation, which would be polling the ICMS to
>      maintain a connection.
>    * The ServerEndpoint implementation would listen for and intercept
>      connection negotiation requests.
>    * The ServerEndpoint implementation would pass the communication
>      through to the callback object once the connection is established.
>    * Once a connection session was negotiated successfully, the client
>      Endpoint.newRequest() method would return.
>    * TLS will need to be utilised.
>
>
> Any helpful advice would be much appreciated; this should allow the 
> marshalled object to find its way home.
>
> BR,
>
> Peter.
>
> Resources:
>
>    * http://netresearch.ics.uci.edu/kfujii/jpcap/doc/index.html --
>      Multi-platform Java library for reading and writing packets, the
>      Java equivalent of libpcap and libnet.
>    * http://natblaster.sourceforge.net/paper/natblaster.pdf --
>      Techniques for traversing NAT with TCP peer to peer, includes an
>      implementation in C.
>
>
>
>


Implementing means to allow NAT-ed clients to provide services [was roadmap]

Posted by Peter Firmstone <ji...@zeus.net.au>.
Hi Sim & Gregg,

I just wanted to break this issue out of the original thread and 
summarise our findings to provide a way forward, for solving one of the 
original questions posed.

I want the implementation to be invisible to existing client and service 
implementation and to just work.

I would like to start by implementing the Natblaster technique of 
traversing NATs with TCP, using Java.  I'm not 100% sure how to fit this in 
with JERI, however the following classes appear relevant.

    * net.jini.jeri.connection.ServerConnectionManager
    * net.jini.jeri.connection.ConnectionManager


I will need new ServerEndpoint and Endpoint implementations, some 
observations:

    * The Server will need to be aware of the Client's Public NAT IP
      Address and receive port.
    * The Client will need to be aware of the Server's Public NAT IP
      Address and receive port.
    * A public service will need to be provided - Initial Connection
      Mediator Service (ICMS)?
    * The Client will need to contact the Initial Connection Mediator
      Service and request a connection to a Nat-ed Service.  This would
      need to be performed as an implementation detail of the
      Endpoint.newRequest() method.
    * The NAT-ed Service would need to be listening using the
      ServerEndpoint implementation, which would be polling the ICMS to
      maintain a connection.
    * The ServerEndpoint implementation would listen for and intercept
      connection negotiation requests.
    * The ServerEndpoint implementation would pass the communication
      through to the callback object once the connection is established.
    * Once a connection session was negotiated successfully, the client
      Endpoint.newRequest() method would return.
    * TLS will need to be utilised.
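
The rendezvous at the heart of the list above can be sketched in-memory. Every name below (MediatorSketch, ConnectRequest) is hypothetical; a real ICMS would be a remote Jini service rather than a local queue, and the TLS and NAT-hole-punching steps are omitted:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

/** In-memory stand-in for the rendezvous an ICMS would perform between
 *  a client's Endpoint.newRequest() and the NAT-ed service's polling
 *  ServerEndpoint. */
public class MediatorSketch {

    /** What the client registers: its public (NAT) address and port. */
    public static final class ConnectRequest {
        public final String clientPublicIp;
        public final int clientPort;

        public ConnectRequest(String clientPublicIp, int clientPort) {
            this.clientPublicIp = clientPublicIp;
            this.clientPort = clientPort;
        }
    }

    private final BlockingQueue<ConnectRequest> pending = new LinkedBlockingQueue<>();

    /** Client side (inside Endpoint.newRequest()): ask for a connection. */
    public void requestConnection(ConnectRequest request) {
        pending.add(request);
    }

    /** Service side (the ServerEndpoint polling loop): wait for the next
     *  request; returns null on timeout so the loop can re-poll and keep
     *  the NAT mapping alive. */
    public ConnectRequest poll(long timeoutMs) throws InterruptedException {
        return pending.poll(timeoutMs, TimeUnit.MILLISECONDS);
    }
}
```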


Any helpful advice would be much appreciated; this should allow the 
marshalled object to find its way home.

BR,

Peter.

Resources:

    * http://netresearch.ics.uci.edu/kfujii/jpcap/doc/index.html --
      Multi-platform Java library for reading and writing packets, the
      Java equivalent of libpcap and libnet.
    * http://natblaster.sourceforge.net/paper/natblaster.pdf --
      Techniques for traversing NAT with TCP peer to peer, includes an
      implementation in C.




Re: roadmap

Posted by Peter Firmstone <ji...@zeus.net.au>.
Gregg Wonderly wrote:
> Peter Firmstone wrote:
>> Sim IJskes - QCG wrote:
>>>
>>> BTW pluggable marshallers, this could provide us for a place to put 
>>> an auto-exporter in. We could with annotations/interfaces signal 
>>> verify the intent. (i'm sure i'm not the first one thinking that).
>> This is going to be interesting, especially considering NAT's will 
>> change ports randomly, the Marshalled Object / Proxy instance won't 
>> know their way home, they'll probably need to find their location on 
>> an event queue or something like that.
>
> To break through unrouted paths due to NAT, it would probably be 
> better to rely on connectivity reversal in the endpoint 
> implementations. A call through the endpoints in one direction, could 
> cause traffic in the opposing direction to request a remote inbound 
> connection, and then use that connection.
Thanks Gregg, I thought that too, but there are some issues; see the 
paper below.  Both NATs have to think that they initiated the connection, 
and there are a number of tricks to get the connection started that 
require a public third party.

>
> The problem is that when a service exports a marshalled proxy instance 
> into a lookup server, the unmarshalling of (an instance of) the proxy 
> is invisible to the service.

So I might have to delay obtaining a marshalled proxy instance until the 
connection is set up?  How do I request a new proxy instance directly 
from the service?

A DNS-SD Registrar (GlobalLookupService, I need a good name) smart proxy 
could potentially download the marshalled proxy directly from the 
service to the client (not sure how to do that either, got any ideas?).

My earlier comment about having a Marshalled ServiceItem Service with a 
hash lookup based on serviceID, might need to perform the Entry 
Comparisons for a DNS-SD Registrar (DNS-SD can't match entries), the 
Marshalled Proxy contained within would be useless.

Perhaps we need our own implementation of a reflective proxy that can 
find its way home?  I have the basic reflective proxy object 
implementation that I stripped from Harmony; I could alter that.  That 
way we'd be using local code to find the way home.  Maybe I should call 
him ET?
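
A minimal sketch of such a homing reflective proxy, using the standard java.lang.reflect.Proxy rather than the Harmony code; Greeter and the resolver are invented for illustration. On every call it re-resolves the current target, so the real endpoint can move (e.g. after a NAT port change) between invocations:

```java
import java.lang.reflect.Proxy;
import java.util.function.Supplier;

/** Hypothetical "homing" proxy: each invocation asks a resolver for the
 *  current live target before delegating, instead of caching a single,
 *  possibly stale, endpoint. */
public class HomingProxy {

    public interface Greeter {
        String greet(String name);
    }

    /** The resolver is assumed to be refreshed out of band, e.g. by
     *  re-discovering the service through lookup when a call fails. */
    public static Greeter create(Supplier<Greeter> resolver) {
        return (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[] { Greeter.class },
                // delegate every call to whatever target is current right now
                (proxy, method, args) -> method.invoke(resolver.get(), args));
    }
}
```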

By utilising the OSGi Conditions for Permissions, certain permissions 
can be denied once the connection is lost, until the Service can 
re-verify its proxy and be authenticated.

>
> I haven't been able to read all of the details of what you all have 
> discussed because some of the words are not sinking in.
Let me know which ones and I'll try to better explain them.

>
> However, the bigger issue is the NAT traversal issue.  If there are 
> not fixed port numbers and port forwarding through the NATing device, 
> I'm not sure there is a solution that doesn't involve a proxying host 
> (which you all did discuss).
It appears that the TCP/IP link can keep the connection alive by advising 
either side of the dynamic port changes.  See the report (link below). 
I'm not 100% confident that I have interpreted this correctly; I hope 
it can actually do this, because if it does, it will save a lot of hassle.

>
> That becomes a bottle neck and a resource that is difficult to manage.
My thoughts exactly. Read this report (see link): there is a TCP p2p 
alternative that provides a high degree of success with most NAT 
routers / firewalls.  There's a C implementation for Linux that requires 
root permissions (it involves the administrator, which is why adoption 
is low).  It only needs a third party to get the connection started.  We 
need a Java implementation; see my earlier posting.  The proxying host 
could be a fallback if this fails.  This method would have no trouble 
with the typical home NAT device, but it also addresses enterprise NAT 
devices, and that is a major concern.

This report details how to create a reliable TCP p2p NAT link between 
private networks that handles dynamic port changes at both ends (the 
endpoints notify each other of the changes over TCP).

http://natblaster.sourceforge.net/paper/natblaster.pdf

See my earlier post "Re: roadmap - ICE Interactive Connectivity 
Establishment" for other references also.
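The core local-socket trick in this family of techniques (a TCP simultaneous open: both peers bind a fixed local port and connect outward at the same time, so each NAT sees an outbound connection) can be sketched in Java. This is only an illustration of the socket side; the coordination through the public third party, and the lower-level tricks the paper describes, are omitted, and all names are hypothetical.

```java
import java.net.InetSocketAddress;
import java.net.Socket;

public class SimultaneousOpen {
    // Bind a fixed local port (so the NAT mapping stays predictable) and
    // connect out to the peer's public endpoint. Both peers do this at the
    // same time, each NAT believing its side initiated the connection.
    static Socket attempt(InetSocketAddress local, InetSocketAddress peer,
                          int timeoutMs) throws Exception {
        Socket s = new Socket();
        s.setReuseAddress(true);         // allow rebinding the same local port on retry
        s.bind(local);
        try {
            s.connect(peer, timeoutMs);  // outbound SYN punches a hole in our NAT
            return s;
        } catch (Exception e) {
            s.close();
            throw e;                     // caller retries; the third party coordinates timing
        }
    }

    public static void main(String[] args) throws Exception {
        // Demonstrate only the bind/option step on loopback; no peer exists here.
        Socket s = new Socket();
        s.setReuseAddress(true);
        s.bind(new InetSocketAddress("127.0.0.1", 0));
        System.out.println("reuse=" + s.getReuseAddress() + " bound=" + s.isBound());
        s.close();
    }
}
```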
>
> Maybe we need an endpoint implementation which knows how to use uPnP 
> for port forwarding configuration on consumer routers?  More and more 
> software is using uPnP for port forwarding.
Checked it out: home routers are the easiest to break through; it's 
the enterprise equipment that's difficult, as their uPnP is usually turned off.
>
> Microsofts Home Server knows how to do this, and there are others that 
> I've seen doing this to provide appropriate port forwarding changes.
>
> Gregg Wonderly
>
Cheers,

Peter.

Re: roadmap

Posted by Gregg Wonderly <gr...@wonderly.org>.
Peter Firmstone wrote:
> Sim IJskes - QCG wrote:
>>
>> BTW pluggable marshallers, this could provide us for a place to put an 
>> auto-exporter in. We could with annotations/interfaces signal verify 
>> the intent. (i'm sure i'm not the first one thinking that).
> This is going to be interesting, especially considering NAT's will 
> change ports randomly, the Marshalled Object / Proxy instance won't know 
> their way home, they'll probably need to find their location on an event 
> que or something like that.

To break through unrouted paths due to NAT, it would probably be better to rely 
on connectivity reversal in the endpoint implementations. A call through the 
endpoints in one direction, could cause traffic in the opposing direction to 
request a remote inbound connection, and then use that connection.

The problem is that when a service exports a marshalled proxy instance into a 
lookup server, the unmarshalling of (an instance of) the proxy is invisible to 
the service.

I haven't been able to read all of the details of what you all have discussed 
because some of the words are not sinking in.

However, the bigger issue is the NAT traversal issue.  If there are not fixed 
port numbers and port forwarding through the NATing device, I'm not sure there 
is a solution that doesn't involve a proxying host (which you all did discuss).

That becomes a bottle neck and a resource that is difficult to manage.

Maybe we need an endpoint implementation which knows how to use uPnP for port 
forwarding configuration on consumer routers?  More and more software is using 
uPnP for port forwarding.

Microsofts Home Server knows how to do this, and there are others that I've seen 
doing this to provide appropriate port forwarding changes.

Gregg Wonderly

Re: roadmap

Posted by Peter Firmstone <ji...@zeus.net.au>.
Sim IJskes - QCG wrote:
>
> BTW pluggable marshallers, this could provide us for a place to put an 
> auto-exporter in. We could with annotations/interfaces signal verify 
> the intent. (i'm sure i'm not the first one thinking that).
This is going to be interesting, especially considering NAT's will 
change ports randomly, the Marshalled Object / Proxy instance won't know 
their way home, they'll probably need to find their location on an event 
que or something like that.

Cheers,

Peter.

Re: roadmap - ICE Interactive Connectivity Establishment

Posted by Peter Firmstone <ji...@zeus.net.au>.
Hi Sim,

I've found an interesting draft IETF spec that discusses all the issues 
we've just been considering; they call it ICE.

Terms:

    * ICE Interactive Connectivity Establishment
    * TURN Traversal Using Relay NAT
    * STUN Session Traversal Utilities for NAT

Links:

   1. http://code.google.com/apis/talk/libjingle/index.html --  Google
      code has a c++ ICE implementation with a BSD license.
   2. http://www.isoc.org/tools/blogs/ietfjournal/?p=117  - good
      overview & history
   3. http://tools.ietf.org/html/draft-ietf-mmusic-ice-tcp-08
   4. http://tools.ietf.org/html/rfc4091
   5. http://tools.ietf.org/html/draft-ietf-behave-turn-16  --  This one
      is most relevant to us to begin with.
   6. http://natblaster.sourceforge.net/paper/natblaster.pdf  --
      techniques for traversing NAT with TCP peer to peer. includes a c
      implementation.

It looks like the relay is the most reliable but the least scalable 
option. The standard everyone seems to be working toward is to try 
peer-to-peer NAT traversal first, using the relay to assist in opening 
the connection, and then fall back on the relay if that fails.  Items 
5 and 6 appear most relevant, as these allow TLS connections. However, 
given our resources, we should get a working relay before we worry too 
much about p2p TCP.

The Perl code mentioned earlier uses what is termed STUN, utilising UDP; 
it is not effective on corporate NATs, where UDP is typically blocked.

Here's an interesting video from a Yahoo Engineer:

http://www.youtube.com/watch?v=9MWYw0fltr0&eurl=http%3A%2F%2Fwww.voip-news.com%2Ffeature%2Ftop-voip-videos-051707%2F

Let me know what you think.

Cheers,

Peter.


Sim IJskes - QCG wrote:
> Peter Firmstone wrote:
>>> This is not needed, a connection to a JR is on a polling basis, 
>>> initiated from the service (acting as a client from an RPC 
>>> perspective). (think XMPP, XEP-0124 BOSH).
>> Ok, so if it doesn't poll (empty packet) within a certain time, it's 
>> lease expires?
>
> Ah. That's a good one. Probably. The expectation is that the device 
> immediately starts another poll after retrieving a result. But if 
> there are processing constraints / connection interruptions and this 
> cannot be met, it should not be fatal. After a timeout, the relay 
> should stop. Or maybe after having a filled (not full) queue for a 
> certain timeout. Depending on the trust (or deployment decision) 
> between server and relay, the relay could offer a receive queue with a 
> depth of more than 0. And to the client of the service-over-relay the 
> takedown-rebuild cycle should be transparent.
>
> Gr. Sim
>


Re: roadmap

Posted by Sim IJskes - QCG <si...@qcg.nl>.
Peter Firmstone wrote:
>> This is not needed, a connection to a JR is on a polling basis, 
>> initiated from the service (acting as a client from an RPC 
>> perspective). (think XMPP, XEP-0124 BOSH).
> Ok, so if it doesn't poll (empty packet) within a certain time, it's 
> lease expires?

Ah. That's a good one. Probably. The expectation is that the device 
immediately starts another poll after retrieving a result. But if there 
are processing constraints / connection interruptions and this cannot be 
met, it should not be fatal. After a timeout, the relay should stop. Or 
maybe after having a filled (not full) queue for a certain timeout. 
Depending on the trust (or deployment decision) between server and relay, 
the relay could offer a receive queue with a depth of more than 0. And 
to the client of the service-over-relay the takedown-rebuild cycle should 
be transparent.

Gr. Sim

-- 
QCG, Software voor het MKB, 071-5890970, http://www.qcg.nl
Quality Consultancy Group b.v., Leiderdorp, Kvk Leiden: 28088397

Re: roadmap

Posted by Peter Firmstone <ji...@zeus.net.au>.
Sim IJskes - QCG wrote:
> Peter Firmstone wrote:
>>> Exactly, you cannot provide proxies in the wild without some kind of 
>>> verification. Thats why i keep returning to things analogous to 
>>> tunnels/VPNs etc. Because i want a TLS session end to end.
>> IPSec (VPN) packets cannot be mangled, it breaks the checksum, so the 
>> IPSec packet is wrapped in another packet that gets mangled, this 
>> provides the encryption end to end, unfortunately it requires low 
>> level Kernel support as it's below the TCP/IP Layer, so it isn't 
>> available without network admin intervention.
>>
>> We have to be careful to avoid a man in the middle with our Relay, 
>> see below I have some more thoughts.
>
> By providing a transparent packet relay, we can still provide for an 
> end-to-end TLS session, implemented with the 
> javax.net.ssl.SSLEngine.{wrap,unwrap} functions. And conceptually i 
> still see this as tunneled JERI session. And because the session is 
> running end-to-end, the change of a MITM is equivalent to other 
> solutions. The javax.net.ssl.SSLSession. getPeerPrincipal() provides 
> us with the Principals needed for our JERI security.
>
> The JERI Relay is only a public reachable bi-directional mailbox for 
> JERI packets/messages.
>
>> Perhaps JR can provide these functions:
>>   2. Keeps a line of communication open to the private Service
>>      utilising a heartbeat
>
> This is not needed, a connection to a JR is on a polling basis, 
> initiated from the service (acting as a client from an RPC 
> perspective). (think XMPP, XEP-0124 BOSH).
Ok, so if it doesn't poll (empty packet) within a certain time, its 
lease expires?

>
>>   3. Read's ByteBuffer's from a Socket Channel, and write it's content
>>      to another Socket Channel without decrypting it.  If it isn't
>>      visible to the Relay and the Server and client can verify the JRS
>>      code, then there is potential there to trust the Relay.
>
> There is no trust of the Relay. End to end TLS session is all the 
> security we need. If you want to relay my service, be my guest. You 
> cannot inspect, you cannot intercept.
That's much simpler; use authentication instead to avoid the MITM.
>
>>   4. SSL Handshake: Not sure about how the handshake process from
>>      actual client to private service will handle the redirect.
>
> There is no redirect. The test used in HTTPS where the CN is compared 
> with the hostname is outside the scope of the TLS protocol IMHO.
Ok, good that solves that problem.
>
>> Perhaps the BasicJeriExporter can inspect the IP Address, and if it 
>> lies within a private range, check a system property to determine if 
>> JR Services are to be utilised?
>
> Can i reserve judgement on that? I cannot oversee the effect right now.
Sure.
>
>> JR could be looked up just like any other service, it only executes 
>> local code, doesn't need to know the details but can be trusted 
>> enough by the Service and client not to peek at the details.  
>
> Indeed, only local code, and it may peek at the stream just like any 
> other router.
Ok.
>
>> JR probably needs to be identified by a public domain certificate,
>> although this is a weak defence (DNS cache poisoning), it adds
>> another layer.
>
> No, no, no. I cannot cooperate with an implementation where we 
> include the requirement that one MUST use a domain certificate. We 
> want to free ourselves of the network admin, but chain ourselves to a 
> costly certificate authority?
>
> I want to spray the internet with certificates, and cannot find the 
> money to fund that!

Just checking you're paying attention ; ) good point, just use 
authentication!  Actually, having thought about this a little more, 
allowing public domain certificates would probably weaken security, as 
some clients would rely on this flawed mechanism (exposing us to DNS 
cache poisoning).  Even though it checks the box of the marketing manager!
>
> Gr. Sim
>
> Apache river: The Data is the Code! (and vice-versa)
>
Nice catch phrase.  All about getting back to the original concepts 
around Objects.

Cheers,

Peter.

Re: roadmap

Posted by Sim IJskes - QCG <si...@qcg.nl>.
Peter Firmstone wrote:
>> Exactly, you cannot provide proxies in the wild without some kind of 
>> verification. Thats why i keep returning to things analogous to 
>> tunnels/VPNs etc. Because i want a TLS session end to end.
> IPSec (VPN) packets cannot be mangled, it breaks the checksum, so the 
> IPSec packet is wrapped in another packet that gets mangled, this 
> provides the encryption end to end, unfortunately it requires low level 
> Kernel support as it's below the TCP/IP Layer, so it isn't available 
> without network admin intervention.
> 
> We have to be careful to avoid a man in the middle with our Relay, see 
> below I have some more thoughts.

By providing a transparent packet relay, we can still provide for an 
end-to-end TLS session, implemented with the 
javax.net.ssl.SSLEngine.{wrap,unwrap} functions. And conceptually i 
still see this as a tunneled JERI session. And because the session is 
running end-to-end, the chance of a MITM is equivalent to other 
solutions. The javax.net.ssl.SSLSession.getPeerPrincipal() method provides 
us with the Principals needed for our JERI security.
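A minimal sketch of the SSLEngine side, assuming the JDK's default SSLContext. It only shows that after beginHandshake() the client engine wants to wrap (produce) its first handshake record, which the application would then forward through the relay as opaque bytes; the full wrap/unwrap pump is omitted.

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;

public class EndToEndTls {
    public static void main(String[] args) throws Exception {
        // The relay never sees plaintext: each end drives its own SSLEngine
        // and ships the wrapped TLS records through the relay unmodified.
        SSLEngine client = SSLContext.getDefault().createSSLEngine();
        client.setUseClientMode(true);   // this end initiates the handshake
        client.beginHandshake();
        // NEED_WRAP: the engine wants to produce the ClientHello record,
        // which the application forwards via the relay as opaque bytes.
        System.out.println(client.getHandshakeStatus());
    }
}
```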

The JERI Relay is only a publicly reachable bi-directional mailbox for 
JERI packets/messages.

> Perhaps JR can provide these functions:
>   2. Keeps a line of communication open to the private Service
>      utilising a heartbeat

This is not needed, a connection to a JR is on a polling basis, 
initiated from the service (acting as a client from an RPC perspective). 
(think XMPP, XEP-0124 BOSH).

>   3. Read's ByteBuffer's from a Socket Channel, and write it's content
>      to another Socket Channel without decrypting it.  If it isn't
>      visible to the Relay and the Server and client can verify the JRS
>      code, then there is potential there to trust the Relay.

There is no trust of the Relay. End to end TLS session is all the 
security we need. If you want to relay my service, be my guest. You 
cannot inspect, you cannot intercept.

>   4. SSL Handshake: Not sure about how the handshake process from
>      actual client to private service will handle the redirect.

There is no redirect. The test used in HTTPS where the CN is compared 
with the hostname is outside the scope of the TLS protocol IMHO.

> Perhaps the BasicJeriExporter can inspect the IP Address, and if it lies 
> within a private range, check a system property to determine if JR 
> Services are to be utilised?

Can i reserve judgement on that? I cannot oversee the effect right now.

> JR could be looked up just like any other service, it only executes 
> local code, doesn't need to know the details but can be trusted enough 
> by the Service and client not to peek at the details.  

Indeed, only local code, and it may peek at the stream just like any 
other router.

> JR probably needs to be identified by a public domain certificate,
> although this is a weak defence (DNS cache poisoning), it adds
> another layer.

No, no, no. I cannot cooperate with an implementation where we include 
the requirement that one MUST use a domain certificate. We want to free 
ourselves of the network admin, but chain ourselves to a costly 
certificate authority?

I want to spray the internet with certificates, and cannot find the 
money to fund that!

Gr. Sim

Apache river: The Data is the Code! (and vice-versa)

-- 
QCG, Software voor het MKB, 071-5890970, http://www.qcg.nl
Quality Consultancy Group b.v., Leiderdorp, Kvk Leiden: 28088397

Re: roadmap

Posted by Peter Firmstone <ji...@zeus.net.au>.
>> Instead we could communicate over ordinary public networks without
>> any special network admin intervention.
>
> I'm with you!
>
>> This does raise privacy issues though for serialization or message
>> streams, however secure JERI has mechanisms to handle those.
>
> Exactly, you cannot provide proxies in the wild without some kind of 
> verification. Thats why i keep returning to things analogous to 
> tunnels/VPNs etc. Because i want a TLS session end to end.
IPSec (VPN) packets cannot be mangled (it breaks the checksum), so the 
IPSec packet is wrapped in another packet that gets mangled. This 
provides the encryption end to end; unfortunately it requires low-level 
kernel support, as it sits below the TCP/IP layer, so it isn't available 
without network admin intervention.

We have to be careful to avoid a man in the middle with our Relay; see 
below, I have some more thoughts.
>
>> That is of course if Serialization is used for communication. 
>> Compressed Serialized binary data is the fastest way to communicate
>> over bandwidth restricted and high latency networks.
>
> BTW pluggable marshallers, this could provide us for a place to put an 
> auto-exporter in. We could with annotations/interfaces signal verify 
> the intent. (i'm sure i'm not the first one thinking that).
>
>
>> For smart proxy implementations we would want the client to be able 
>> to download the marshalled smart proxy from a lookup service and 
>> download the bytecode from a codebase service (it would be easier if 
>> the code base is public), where the smart proxy itself uses its 
>> internal reflective proxy (RMI JERI) to communicate with the private 
>> service via the listening post.  The listening post would just be 
>> relaying the methods / messages while keeping the communication lines 
>> (NAT gateway ports) open between it, the smart proxy and its server.
>
>> Perhaps JERI itself could utilise some sort of Relay listening post 
>> service?
>
> Exactly, i was only talking about a proxy in terms of JERI. I think we 
> now have the proper name for it. Jeri Relay Service (JRS)?
>
> We can also create a standard codebase service, which can (off-course) 
> also be exported over the JRS.
>
> Gr. Sim
>
JRS, I like that name.  Maybe just JR?  Even though it's a service, the 
semantics make it difficult to distinguish from the private service.

Perhaps JR can provide these functions:

   1. Be verified by a TrustVerifier from the private Service node as
      well as the Client node, to verify the piece of code that handles
      the Socket Channels is the genuine JERI Relay.
   2. Keeps a line of communication open to the private Service
      utilising a heartbeat
   3. Reads ByteBuffers from a Socket Channel and writes their content
      to another Socket Channel without decrypting it.  If the content
      isn't visible to the Relay, and the Server and client can verify
      the JRS code, then there is potential to trust the Relay.
   4. SSL Handshake: Not sure about how the handshake process from
      actual client to private service will handle the redirect.
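Function 3 above might look roughly like this. It is a sketch only: java.nio.channels.Pipe stands in for the two TCP legs (in the JR these would be SocketChannels, one per peer), and the helper name and payload are invented for illustration.

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.ReadableByteChannel;
import java.nio.channels.WritableByteChannel;
import java.nio.charset.StandardCharsets;

public class BlindRelay {
    // Copies whatever bytes arrive on 'in' to 'out' without interpreting
    // them; the relay never decrypts, so end-to-end TLS stays intact.
    static int relayOnce(ReadableByteChannel in, WritableByteChannel out)
            throws Exception {
        ByteBuffer buf = ByteBuffer.allocate(8192);
        int n = in.read(buf);
        buf.flip();
        while (buf.hasRemaining()) out.write(buf);
        return n;
    }

    public static void main(String[] args) throws Exception {
        Pipe a = Pipe.open(), b = Pipe.open();      // stand-ins for the two TCP legs
        byte[] record = "opaque TLS record".getBytes(StandardCharsets.UTF_8);
        a.sink().write(ByteBuffer.wrap(record));    // service -> relay
        relayOnce(a.source(), b.sink());            // relay copies blindly
        ByteBuffer got = ByteBuffer.allocate(64);
        b.source().read(got);                       // relay -> client
        got.flip();
        System.out.println(StandardCharsets.UTF_8.decode(got).toString());
    }
}
```

A real relay would run this copy loop in both directions under a Selector, but the essential property is the same: the bytes pass through unchanged.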

Before a Relay can provide those functions between a client and service 
it must register the private Service proxy with a publicly visible 
registrar.

Perhaps the BasicJeriExporter can inspect the IP Address, and if it lies 
within a private range, check a system property to determine if JR 
Services are to be utilised?

JR could be looked up just like any other service; it only executes 
local code, doesn't need to know the details, and can be trusted enough 
by the Service and client not to peek at the details.  JR probably needs 
to be identified by a public domain certificate; although this is a weak 
defence (DNS cache poisoning), it adds another layer.

Cheers,

Peter.





Re: roadmap

Posted by Sim IJskes - QCG <si...@qcg.nl>.
Peter Firmstone wrote:
> Similar to a VPN, but without the Private Part, VPN's use IPSec, which 
> is a low level OS kernel and router implementation that TCP/IP utilises 
> which require forward planning and administration.  

I'm not clear on what the point is here, but it is my intention to 
create something that can be used in a CUG (closed user group) 
deployment, so that the owner of the infrastructure can prohibit use of 
their 'proxyservers' by systems outside the CUG.

> Instead we could communicate over ordinary public networks without
> any special network admin intervention.

I'm with you!

> This does raise privacy issues though for serialization or message
> streams, however secure JERI has mechanisms to handle those.

Exactly, you cannot provide proxies in the wild without some kind of 
verification. Thats why i keep returning to things analogous to 
tunnels/VPNs etc. Because i want a TLS session end to end.

> That is of course if Serialization is used for communication. 
> Compressed Serialized binary data is the fastest way to communicate
> over bandwidth restricted and high latency networks.

BTW pluggable marshallers, this could provide us for a place to put an 
auto-exporter in. We could with annotations/interfaces signal verify the 
intent. (i'm sure i'm not the first one thinking that).


> For smart proxy 
> implementations we would want the client to be able to download the 
> marshalled smart proxy from a lookup service and download the bytecode 
> from a codebase service (it would be easier if the code base is public), 
> where the smart proxy itself uses its internal reflective proxy (RMI 
> JERI) to communicate with the private service via the listening post.  
> The listening post would just be relaying the methods / messages while 
> keeping the communication lines (NAT gateway ports) open between it, the 
> smart proxy and its server.

> Perhaps JERI itself could utilise some sort of Relay listening post 
> service?

Exactly, i was only talking about a proxy in terms of JERI. I think we 
now have the proper name for it. Jeri Relay Service (JRS)?

We can also create a standard codebase service, which can (of course) 
also be exported over the JRS.

Gr. Sim

-- 
QCG, Software voor het MKB, 071-5890970, http://www.qcg.nl
Quality Consultancy Group b.v., Leiderdorp, Kvk Leiden: 28088397

Re: roadmap

Posted by Peter Firmstone <ji...@zeus.net.au>.
Sim IJskes - QCG wrote:
> Peter Firmstone wrote:
>
>> If we have a private service behind a NAT gateway open a connection 
>> to a public remote host and keep it open by utilising a heartbeat 
>> (empty packet sent on a regular basis during idle periods), the 
>> public host can maintain the connection also by using a heartbeat. 
>> While the private service is in contact with the host, the public 
>> host can be a proxy service for the host. By utilising DNS-SD the 
>> public host can utilise all of its available free ports to act as 
>> proxy services for private service instances, these could be 
>> registered as DNS-SD Jini services where they can be discovered. We 
>> could call this a listening post Service. The private services could 
>> upload simple reflective proxies to the listening post service. The 
>> DNS-SD could be maintained using Dynamic Update Leases When a 
>> connection is lost, the private service can re instantiate it and re 
>> register it with a DNS Dynamic Update Lease.
>
> Thanks. You raised some interesting issues. Never thought about the 
> serviceproxies. You only need a serviceproxy when you want to switch 
> endpoint technologies. If we have a new message based endpoint 
> implementation, you could create a endpointproxy with a receive queue 
> of size 0, and not bother with a serviceproxy.
>
> (hmm, again it sounds like a VPN implementation over Jini.)
>
> Gr. Sim
>
Similar to a VPN, but without the private part. VPNs use IPSec, which 
is a low-level OS kernel and router implementation below TCP/IP, and it 
requires forward planning and administration.  Instead we could 
communicate over ordinary public networks without any special network 
admin intervention.  This does raise privacy issues though for 
serialization or message streams; however, secure JERI has mechanisms to 
handle those.

That is of course if Serialization is used for communication.  
Compressed Serialized binary data is the fastest way to communicate over 
bandwidth restricted and high latency networks.  For smart proxy 
implementations we would want the client to be able to download the 
marshalled smart proxy from a lookup service and download the bytecode 
from a codebase service (it would be easier if the code base is public), 
where the smart proxy itself uses its internal reflective proxy (RMI 
JERI) to communicate with the private service via the listening post.  
The listening post would just be relaying the methods / messages while 
keeping the communication lines (NAT gateway ports) open between it, the 
smart proxy and its server.

Perhaps JERI itself could utilise some sort of Relay listening post service?

I have no idea at the moment how the Java interfaces for this might be 
implemented.

Keep thinking.

Cheers,

Peter.

Re: roadmap

Posted by Sim IJskes - QCG <si...@qcg.nl>.
Peter Firmstone wrote:

> If we have a private service behind a NAT gateway open a connection to a 
> public remote host and keep it open by utilising a heartbeat (empty 
> packet sent on a regular basis during idle periods), the public host can 
> maintain the connection also by using a heartbeat. While the private 
> service is in contact with the host, the public host can be a proxy 
> service for the host. By utilising DNS-SD the public host can utilise 
> all of its available free ports to act as proxy services for private 
> service instances, these could be registered as DNS-SD Jini services 
> where they can be discovered. We could call this a listening post 
> Service. The private services could upload simple reflective proxies to 
> the listening post service. The DNS-SD could be maintained using Dynamic 
> Update Leases When a connection is lost, the private service can re 
> instantiate it and re register it with a DNS Dynamic Update Lease.

Thanks. You raised some interesting issues. Never thought about the 
serviceproxies. You only need a serviceproxy when you want to switch 
endpoint technologies. If we have a new message-based endpoint 
implementation, you could create an endpointproxy with a receive queue of 
size 0, and not bother with a serviceproxy.

(hmm, again it sounds like a VPN implementation over Jini.)

Gr. Sim


-- 
QCG, Software voor het MKB, 071-5890970, http://www.qcg.nl
Quality Consultancy Group b.v., Leiderdorp, Kvk Leiden: 28088397

Re: roadmap

Posted by Peter Firmstone <ji...@zeus.net.au>.
Hi Sim,

Here's an idea for No.1.

The NAT-PMP protocol allows client software to request a port mapping 
from the NAT Gateway, however this is for simple NAT networks, not 
nested NAT and not everything supports it.

Connections from an internal host appear on the NAT gateway as random 
ephemeral ports (ports not assigned to protocols by IANA) for reply 
packets; however, these ports close after a short idle period. Then 
there's the problem of network filters. We could compress the 
serialization byte stream using deflate compression; I don't know whether 
this would disguise the stream, but it would be faster.
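Wrapping the serialization stream in deflate is straightforward with the JDK's java.util.zip classes. A minimal roundtrip sketch (the payload is illustrative):

```java
import java.io.*;
import java.util.zip.DeflaterOutputStream;
import java.util.zip.InflaterInputStream;

public class CompressedSerialization {
    static byte[] serialize(Serializable obj) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        // Deflate sits between the object stream and the wire.
        try (ObjectOutputStream oos =
                 new ObjectOutputStream(new DeflaterOutputStream(bytes))) {
            oos.writeObject(obj);
        }   // closing finishes the deflater so the stream is complete
        return bytes.toByteArray();
    }

    static Object deserialize(byte[] data) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(
                 new InflaterInputStream(new ByteArrayInputStream(data)))) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        String msg = "heartbeat ".repeat(50);   // repetitive data compresses well
        byte[] wire = serialize(msg);
        System.out.println("roundtrip=" + msg.equals(deserialize(wire))
                + " compressed=" + (wire.length < msg.length()));
    }
}
```

Note that deflate output still has a recognisable header, so this is a speed optimisation rather than a way to disguise the stream from filters.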

If we have a private service behind a NAT gateway open a connection to a 
public remote host and keep it open by utilising a heartbeat (an empty 
packet sent on a regular basis during idle periods), the public host can 
maintain the connection too by using a heartbeat. While the private 
service is in contact with the host, the public host can act as a proxy 
service for it. By utilising DNS-SD, the public host can use all of its 
available free ports to act as proxy services for private service 
instances; these could be registered as DNS-SD Jini services where they 
can be discovered. We could call this a listening post Service. The 
private services could upload simple reflective proxies to the listening 
post service. The DNS-SD records could be maintained using Dynamic Update 
Leases. When a connection is lost, the private service can re-instantiate 
it and re-register it with a DNS Dynamic Update Lease.

Then all I need is a method of utilising the DNS-SD from Jini / River.
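One possible route is the JNDI DNS provider that ships with the JDK, which can query SRV records directly. A hedged sketch: the service type `_jini._tcp` and the domain are purely illustrative, and the lookup is guarded because it needs a working resolver.

```java
import java.util.Hashtable;
import javax.naming.directory.Attribute;
import javax.naming.directory.InitialDirContext;

public class SrvDiscovery {
    // Builds the DNS-SD / SRV query name for a service type in a domain,
    // e.g. _jini._tcp.example.com (names here are illustrative only).
    static String srvName(String service, String proto, String domain) {
        return "_" + service + "._" + proto + "." + domain;
    }

    public static void main(String[] args) {
        System.out.println(srvName("jini", "tcp", "example.com"));
        try {
            Hashtable<String, String> env = new Hashtable<>();
            env.put("java.naming.factory.initial",
                    "com.sun.jndi.dns.DnsContextFactory");
            Attribute srv = new InitialDirContext(env)
                    .getAttributes(srvName("jini", "tcp", "example.com"),
                                   new String[] {"SRV"})
                    .get("SRV");
            System.out.println(srv);   // "priority weight port target" records, if any
        } catch (Exception e) {
            System.out.println("lookup unavailable: " + e.getMessage());
        }
    }
}
```

Each SRV record resolves to a host and port where a lookup service (or listening post) could be reached, giving the extra layer of indirection discussed earlier in the thread.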

Cheers,

Peter.

 From http://mindprod.com/jgloss/tcpip.html


    Disconnect Detection

Since TCP/IP sends no packets except when there is traffic, without 
Socket.setKeepAlive( true ), it has no way of noticing a disconnect 
until you start trying to send (or to a certain extent receive) traffic 
again. Java has the Socket.setKeepAlive( true ) method to ask TCP/IP to 
handle heartbeat probing without any data packets or application 
programming. Unfortunately, you can’t tell it how frequently to send the 
heartbeat probes. If the other end does not respond in time, you will 
get a socket exception on your pending read. Heartbeat packets in both 
directions let the other end know you are still there. A heartbeat 
packet is just an ordinary TCP/IP ack packet without any piggybacking data.

When the applications are idling, your applications could periodically 
send tiny heartbeat messages to each other. The receiver could just 
ignore them. However, they force the TCP/IP protocol to check if the 
other end is still alive. These are not part of the TCP/IP protocol. You 
would have to build them into your application protocols. They act as 
are-you-still-alive? messages. I have found Java’s connection continuity 
testing to be less than 100% reliable. My bullet-proof technique to 
detect disconnect is to have the server send an application-level 
heartbeat packet if it has not sent some packet in the last 30 seconds. 
It has to send some message every 30 seconds, not necessarily a dummy 
heartbeat packet. The heartbeat packets thus only appear when the server 
is idling. Otherwise normal traffic acts as the heartbeat. The Applet 
detects the lack of traffic on disconnect and automatically restarts the 
connection. The downside is your applications have to be aware of these 
heartbeats and they have to fit into whatever other protocol you are 
using, unlike relying on TCP/IP level heartbeats.

However, it is simpler to use the built-in Socket.setKeepAlive( true ) 
method to ask TCP/IP to handle the heartbeat probing without any data 
packets or application programming. Each end with nothing to say just 
periodically sends an empty data packet with its current sequence, 
acknowledgement and window numbers.

The advantage of application level heartbeats is they let you know the 
applications at both ends are alive, not just the communications software.




QCG - Sim IJskes wrote:
> I'm a bit swamped at the moment, but my requirements for jini look 
> like this:
>
> 1) provide means to allow NAT-ed clients to provide services.
> 2) create an identity provisioning service
>
> I have a way to provide issue 1 right now, but i'm not happy about it. 
> Its a star network with HTTP as a transport layer. The intention is to 
> create a service to act as a nat-service-proxy, with a mailbox style 
> rendezvous. The NAT-ed service polls the mailbox. The client connects 
> to the service-proxy. The protocol would be message based, with 
> messages method-call and method-reply. I'm thinking about abstracting 
> the serialization from a suitable transport in order to find the 
> message boundaries. (i'm a little suspicious: why didn't the sun jini 
> team do this?)
>
> The intention for issue 2 is to provide a service whereby a client can 
> request an identity (or group membership) certificate and use this 
> certificate for incoming and outgoing connections from that point on. 
> Acceptance of the identity request will be done by the GUI or another 
> outside system, providing the acceptor with a secret in order to 
> verify identity via outside channels (think of bluetooth pairing).
>
> My JXTA for Jini attempt is shelved. The JXTA production release from 
> a few months ago was non-functional for my deployment scenario 
> (HTTP-only), the HEAD release had stall problems during connection 
> setup. The effort seems to be big compared to building the 
> functionality needed with Jini alone.
>
> Gr. Sim
>
> P.S. for UDP a message type method-call-without-reply might be possible.
>


roadmap

Posted by QCG - Sim IJskes <si...@qcg.nl>.
I'm a bit swamped at the moment, but my requirements for jini look like 
this:

1) provide means to allow NAT-ed clients to provide services.
2) create an identity provisioning service

I have a way to provide issue 1 right now, but i'm not happy about it. 
It's a star network with HTTP as a transport layer. The intention is to 
create a service to act as a nat-service-proxy, with a mailbox style 
rendezvous. The NAT-ed service polls the mailbox. The client connects to 
the service-proxy. The protocol would be message based, with messages 
method-call and method-reply. I'm thinking about abstracting the 
serialization from a suitable transport in order to find the message 
boundaries. (i'm a little suspicious: why didn't the sun jini team do this?)

The intention for issue 2 is to provide a service whereby a client can 
request an identity (or group membership) certificate and use this 
certificate for incoming and outgoing connections from that point on. 
Acceptance of the identity request will be done by the GUI or another 
outside system, providing the acceptor with a secret in order to verify 
identity via outside channels (think of bluetooth pairing).
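The bluetooth-pairing-style check described above could be sketched as 
follows. The class and method names are hypothetical, not an existing 
River API; a real system would first exchange the secret over the 
outside channel, then verify it like this:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Hypothetical sketch of the out-of-band secret check: both sides hash
// the shared secret and compare digests in constant time, as in
// bluetooth-style pairing.
public class PairingVerifier {
    static byte[] digest(String secret) throws NoSuchAlgorithmException {
        return MessageDigest.getInstance("SHA-256")
                .digest(secret.getBytes(StandardCharsets.UTF_8));
    }

    // MessageDigest.isEqual compares in constant time, avoiding leaking
    // how many leading bytes matched.
    public static boolean verify(String offered, String expected)
            throws NoSuchAlgorithmException {
        return MessageDigest.isEqual(digest(offered), digest(expected));
    }

    public static void main(String[] args) throws Exception {
        System.out.println(verify("1234-5678", "1234-5678"));
        System.out.println(verify("1234-5678", "0000-0000"));
    }
}
```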

My JXTA for Jini attempt is shelved. The JXTA production release from a 
few months ago was non-functional for my deployment scenario 
(HTTP-only), the HEAD release had stall problems during connection 
setup. The effort seems to be big compared to building the functionality 
needed with Jini alone.

Gr. Sim

P.S. for UDP a message type method-call-without-reply might be possible.

Re: Lookup Service Discovery using DNS? - Java Implementation

Posted by Peter Firmstone <ji...@zeus.net.au>.
There's a pure Java DNS-SD implementation that also includes Multicast 
DNS, just like the Bonjour library. I don't yet know if it supports 
DNSSEC, which I believe is important.

While I think link-local IP addresses are interesting and may be useful 
for large mobile plant (e.g. a Dragline, Shovel or Bucket Wheel) to have 
a backup in case of DHCP failure, it really is a low-level OS function 
and not our concern.

Here's the site:

http://code.google.com/p/waiter/


    Waiter as a Java DNS library replacement

The core of /waiter/ is a clean, modern Java DNS library which can be 
used explicitly instead of the very weak services in the JDK. It can 
be used with or without caching, and provides access to far more of the 
DNS messages, allowing Java programmers to truly leverage DNS in service 
clusters and distributed systems.

Cheers,

Peter.

Peter Firmstone wrote:
> Sim IJskes - QCG wrote:
>> When we have several reggies running on the internet, it would be
>> handy to find them via DNS SRV records. It would only create an extra 
>> layer of indirection. With DDNS zone updates one could build some 
>> kind of super registry, but i would prefer just to code it in java.
> Sim, I hear you when you say you'd just prefer to code it in Java, has 
> anyone any experience with dnsjava?  Their website is so simple, I 
> decided to attach it at the end of this message, however it is 
> available at http://www.dnsjava.org/
>
> When I think about a GlobalLookupService, I want something that can 
> coexist with Reggie.
>
> Daniel emailed me to let me know he's passed on my request to add the 
> service ID and group set record keys to the Jini DNS SRV Service Type.
>
> The Bonjour daemon supports Multicast DNS Service Discovery as well 
> as Unicast DNS-SD; however, it is coded in C.
>
> Support for link-local IP addresses is really a low-level operating 
> system function; it doesn't ship with the Bonjour daemon, but OSs 
> will probably pick this up in future.
>
> I'd recommend anybody interested pick up a copy of "Zero Configuration 
> Networking" by Stuart Cheshire & Daniel H. Steinberg.
>
> In fact I'll personally purchase and send a copy to anyone ready to 
> give me a hand to create a GlobalLookupService.
>
> Cheers,
>
> Peter.
>
>


Re: Lookup Service Discovery using DNS?

Posted by Peter Firmstone <ji...@zeus.net.au>.
The tricky part is how to enumerate multiple domains.

Perhaps this could be done by finding other lookup services globally via 
DNS-SD directly, through two new methods in LookupDiscovery:

public void addDomains(String[] domains) throws IOException {...}
public void removeDomains(String[] domains) throws IOException {...}

We might then have another class that enumerates the domains for us.
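A minimal in-memory sketch of such a domain-enumerating helper (all 
names here are hypothetical; nothing like this exists in River yet):

```java
import java.util.Collections;
import java.util.Set;
import java.util.TreeSet;

// Hypothetical helper tracking the DNS domains a LookupDiscovery
// extension would query for Jini SRV records.
public class DomainSet {
    private final Set<String> domains = new TreeSet<String>();

    public synchronized void addDomains(String[] names) {
        for (String name : names) domains.add(name.toLowerCase());
    }

    public synchronized void removeDomains(String[] names) {
        for (String name : names) domains.remove(name.toLowerCase());
    }

    // Snapshot of the current domains, normalised to lower case.
    public synchronized Set<String> getDomains() {
        return Collections.unmodifiableSet(new TreeSet<String>(domains));
    }

    public static void main(String[] args) {
        DomainSet ds = new DomainSet();
        ds.addDomains(new String[]{"Example.org", "river.apache.org"});
        ds.removeDomains(new String[]{"example.org"});
        System.out.println(ds.getDomains());
    }
}
```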

Then we have to figure out how to retrieve the remote ServiceItems for 
each result, for ServiceAttribute Entry comparisons.

This could require an array of smart proxies that each process their 
domain's results, putting the load and the risk back onto the client.  
Perhaps it would be best to perform most entry matches on the lookup 
server to reduce remote network bandwidth requirements.

Anyway, I don't think I've got the interactions conceptualised properly; 
it doesn't feel right.  Feel free to come up with ideas.

Cheers,

Peter.


Peter Firmstone wrote:
> QCG - Sim IJskes wrote:
>> Peter Firmstone wrote:
>>> Sim IJskes - QCG wrote:
>>>> When we have several reggies running on the internet, it would be
>>>> handy to find them via DNS SRV records. It would only create an 
>>>> extra layer of indirection. With DDNS zone updates one could build 
>>>> some kind of super registry, but i would prefer just to code it in 
>>>> java.
>>> Sim, I hear you when you say you'd just prefer to code it in Java, 
>>> has anyone any experience with dnsjava?  Their website is so simple, I 
>>
>> Sorry, i was unclear. I think i was expressing that i prefer to code 
>> it in Jini services, instead of DDNS etc. This is only for the 
>> prototyping phase. I've noticed that i can work much faster when 
>> everything is java and nothing is unmodifiable. When it works, and 
>> behaves like expected then we can refactor it to fit the DDNS 
>> service, and put in a factory for the implementation to enable to 
>> switch between DDNS and the Java Ref Implementation. Much easier for 
>> testing and other bench activities. As we can code the ref. impl. in 
>> any form we want, we can also detect mismatches between the DDNS and 
>> what we need, early during development.
>>
>> Gr. Sim
>>
>>
> Ok , that's a good idea, I'm just trying to imagine the way it might 
> work, the GlobalLookupService should be a smart proxy service that 
> allows current jini services to be registered to DNS without requiring 
> any changes to those services , perhaps my semantics are poor, a 
> better name might be DNSLookupService that also implements 
> ServiceRegistrar.  This could be discovered by the usual Jini 
> Multicast Request, Announcement or Unicast Discovery methods currently 
> utilised by jini services.  We could also have some kind of boot strap 
> utility that finds DNSLookupService's using DNS-SD returning the URL's 
> to LookupLocator instances when we need to avoid Jini Multicast 
> Discovery or just use the existing LookupLocator with a public URL (we 
> might do this if we're roaming outside a Jini network).  Later our 
> DNSLookupService smart proxy implementations (after prototyping) are 
> free to be re-implemented or compared for improvement.  The Marshalled 
> Entry instances will have to be cached at the DNSLookupService Server 
> at Registration as DNS-SD cannot support Entries.  I had in mind that 
> the client does most of the compute work for comparing entries rather 
> than the server.
>
> The Sequence might work like this:
>
>   1. Jini Service discovers a LookupService registrar obtained through
>      discovery.
>   2. Jini Service uses the ServiceRegistrar to register its service.
>   3. The LookupService registrar accepts the ServiceItem and caches it
>      at the Server and updates the DNS-SD (one dns service record for
>      each ServiceType interface)
>   4. A client utilises a LookupService smart proxy to find the DNS-SD
>      from the ServiceTemplate object passed in, we can utilise DNS-SD
>      to retrieve a service based on a ServiceID or the ServiceType.
>   5. Any entry matches are performed for the LookupService server by
>      its smart proxy after first filtering the results against
>      ServiceType and ServiceID, processed at the client. Alternately
>      they could be matched at the server using marshalled object
>      comparisons as Reggie currently does.
>
> Initially, as you point out, we don't need to actually utilise DNS-SD 
> to start with, we could prototype with a simple file behind the 
> DNSLookupService registrar with similar semantics to DNS-SD until 
> we've got a partial implementation working.
>
> Cheers,
>
> Peter.
>
>


Re: Lookup Service Discovery using DNS?

Posted by Peter Firmstone <ji...@zeus.net.au>.
QCG - Sim IJskes wrote:
> Peter Firmstone wrote:
>> Sim IJskes - QCG wrote:
>>> When we have several reggies running on the internet, it would be
>>> handy to find them via DNS SRV records. It would only create an 
>>> extra layer of indirection. With DDNS zone updates one could build 
>>> some kind of super registry, but i would prefer just to code it in 
>>> java.
>> Sim, I hear you when you say you'd just prefer to code it in Java, 
>> has anyone any experience with dnsjava?  Their website is so simple, I 
>
> Sorry, i was unclear. I think i was expressing that i prefer to code 
> it in Jini services, instead of DDNS etc. This is only for the 
> prototyping phase. I've noticed that i can work much faster when 
> everything is java and nothing is unmodifiable. When it works, and 
> behaves like expected then we can refactor it to fit the DDNS service, 
> and put in a factory for the implementation to enable to switch 
> between DDNS and the Java Ref Implementation. Much easier for testing 
> and other bench activities. As we can code the ref. impl. in any form 
> we want, we can also detect mismatches between the DDNS and what we 
> need, early during development.
>
> Gr. Sim
>
>
Ok , that's a good idea, I'm just trying to imagine the way it might 
work, the GlobalLookupService should be a smart proxy service that 
allows current jini services to be registered to DNS without requiring 
any changes to those services , perhaps my semantics are poor, a better 
name might be DNSLookupService that also implements ServiceRegistrar.  
This could be discovered by the usual Jini Multicast Request, 
Announcement or Unicast Discovery methods currently utilised by jini 
services.  We could also have some kind of boot strap utility that finds 
DNSLookupService's using DNS-SD returning the URL's to LookupLocator 
instances when we need to avoid Jini Multicast Discovery or just use the 
existing LookupLocator with a public URL (we might do this if we're 
roaming outside a Jini network).  Later our DNSLookupService smart proxy 
implementations (after prototyping) are free to be re-implemented or 
compared for improvement.  The Marshalled Entry instances will have to 
be cached at the DNSLookupService Server at Registration as DNS-SD 
cannot support Entries.  I had in mind that the client does most of the 
compute work for comparing entries rather than the server.

The Sequence might work like this:

   1. Jini Service discovers a LookupService registrar obtained through
      discovery.
   2. Jini Service uses the ServiceRegistrar to register its service.
   3. The LookupService registrar accepts the ServiceItem and caches it
      at the Server and updates the DNS-SD (one dns service record for
      each ServiceType interface)
   4. A client utilises a LookupService smart proxy to find the DNS-SD
      from the ServiceTemplate object passed in, we can utilise DNS-SD
      to retrieve a service based on a ServiceID or the ServiceType.
   5. Any entry matches are performed for the LookupService server by
      its smart proxy after first filtering the results against
      ServiceType and ServiceID, processed at the client. Alternately
      they could be matched at the server using marshalled object
      comparisons as Reggie currently does.

Initially, as you point out, we don't need to actually utilise DNS-SD to 
start with, we could prototype with a simple file behind the 
DNSLookupService registrar with similar semantics to DNS-SD until we've 
got a partial implementation working.
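As a starting point for that prototype, the server-side cache from steps 
3-5 could be little more than a map from service-type name to registered 
IDs. This is purely an illustrative sketch, not River API:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative stand-in for step 3: cache registrations server-side,
// keyed by service-type name (one record per interface), since DNS-SD
// itself cannot carry Entry objects.
public class PrototypeRegistrarStore {
    private final Map<String, List<String>> byType =
            new HashMap<String, List<String>>();

    public synchronized void register(String serviceType, String serviceId) {
        List<String> ids = byType.get(serviceType);
        if (ids == null) {
            ids = new ArrayList<String>();
            byType.put(serviceType, ids);
        }
        ids.add(serviceId);
    }

    // Steps 4-5: first filter by type (what DNS-SD could answer);
    // entry matching would happen afterwards on the cached items.
    public synchronized List<String> lookupByType(String serviceType) {
        List<String> ids = byType.get(serviceType);
        return ids == null ? new ArrayList<String>() : new ArrayList<String>(ids);
    }

    public static void main(String[] args) {
        PrototypeRegistrarStore store = new PrototypeRegistrarStore();
        store.register("net.jini.core.lookup.ServiceRegistrar", "id-1");
        System.out.println(store.lookupByType("net.jini.core.lookup.ServiceRegistrar"));
    }
}
```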

Cheers,

Peter.


Re: Lookup Service Discovery using DNS?

Posted by QCG - Sim IJskes <si...@qcg.nl>.
Peter Firmstone wrote:
> Sim IJskes - QCG wrote:
>> When we have several reggies running on the internet, it would be
>> handy to find them via DNS SRV records. It would only create an extra 
>> layer of indirection. With DDNS zone updates one could build some kind 
>> of super registry, but i would prefer just to code it in java.
> Sim, I hear you when you say you'd just prefer to code it in Java, has 
> anyone any experience with dnsjava?  Their website is so simple, I 

Sorry, i was unclear. I think i was expressing that i prefer to code it 
in Jini services, instead of DDNS etc. This is only for the prototyping 
phase. I've noticed that i can work much faster when everything is java 
and nothing is unmodifiable. When it works, and behaves like expected 
then we can refactor it to fit the DDNS service, and put in a factory 
for the implementation to enable to switch between DDNS and the Java Ref 
Implementation. Much easier for testing and other bench activities. As 
we can code the ref. impl. in any form we want, we can also detect 
mismatches between the DDNS and what we need, early during development.

Gr. Sim


Re: Lookup Service Discovery using DNS?

Posted by Peter Firmstone <ji...@zeus.net.au>.
Sim IJskes - QCG wrote:
> When we have several reggies running on the internet, it would be
> handy to find them via DNS SRV records. It would only create an extra 
> layer of indirection. With DDNS zone updates one could build some kind 
> of super registry, but i would prefer just to code it in java.
Sim, I hear you when you say you'd just prefer to code it in Java, has 
anyone any experience with dnsjava?  Their website is so simple, I 
decided to attach it at the end of this message, however it is available 
at http://www.dnsjava.org/

When I think about a GlobalLookupService, I want something that can 
coexist with Reggie.

Daniel emailed me to let me know he's passed on my request to add the 
service ID and group set record keys to the Jini DNS SRV Service Type.
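For context on what SRV-based discovery would involve once the records 
come back (e.g. via dnsjava): SRV answers carry priority and weight 
fields that a client must honour when ordering candidate reggies. Here 
is a self-contained sketch; the record values and class names are made 
up, and a real client would use RFC 2782's weighted random pick within a 
priority rather than this simplified sort:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

// Minimal SRV record model: real code would populate these from the
// SRV answers for a Jini service type in a given domain.
public class SrvSelector {
    public static final class Srv {
        final int priority, weight, port;
        final String target;
        public Srv(int priority, int weight, int port, String target) {
            this.priority = priority; this.weight = weight;
            this.port = port; this.target = target;
        }
    }

    // Order candidates: lowest priority first; within a priority,
    // higher weight first (a simplification of the RFC 2782 rules).
    public static List<Srv> order(List<Srv> records) {
        List<Srv> sorted = new ArrayList<Srv>(records);
        Collections.sort(sorted, new Comparator<Srv>() {
            public int compare(Srv a, Srv b) {
                if (a.priority != b.priority) return a.priority - b.priority;
                return b.weight - a.weight;
            }
        });
        return sorted;
    }

    public static void main(String[] args) {
        List<Srv> records = new ArrayList<Srv>();
        records.add(new Srv(10, 5, 4160, "backup.example.org"));
        records.add(new Srv(0, 1, 4160, "primary.example.org"));
        for (Srv r : order(records)) System.out.println(r.target + ":" + r.port);
    }
}
```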

The Bonjour daemon supports Multicast DNS Service Discovery as well as 
Unicast DNS-SD; however, it is coded in C.

Support for link-local IP addresses is really a low-level operating 
system function; it doesn't ship with the Bonjour daemon, but OSs will 
probably pick this up in future.

I'd recommend anybody interested pick up a copy of "Zero Configuration 
Networking" by Stuart Cheshire & Daniel H. Steinberg.

In fact I'll personally purchase and send a copy to anyone ready to give 
me a hand to create a GlobalLookupService.

Cheers,

Peter.


  dnsjava (2.0.8)

*New:* 2.0.8 released: Fixes to NSEC, NSEC3, and RRSIG records.

dnsjava is an implementation of DNS in Java. It supports all defined 
record types (including the DNSSEC types), and unknown types. It can be 
used for queries, zone transfers, and dynamic updates. It includes a 
cache which can be used by clients, and a minimal implementation of a 
server. It supports TSIG authenticated messages, partial DNSSEC 
verification, and EDNS0.

dnsjava provides functionality above and beyond that of the InetAddress 
class. Since it is written in pure Java, dnsjava is fully threadable, 
and in many cases is faster than using InetAddress.

dnsjava provides both high and low level access to DNS. The high level 
functions perform queries for records of a given name, type, and class, 
and return the answer or reason for failure. There are also functions 
similar to those in the InetAddress class. A cache is used to reduce the 
number of DNS queries sent. The low level functions allow direct 
manipulation of DNS messages and records, as well as allowing additional 
resolver properties to be set.

A simple tool for doing DNS lookups, a 'dig' clone and a dynamic update 
client are included, as well as a simple authoritative-only server.

For more information, see the README 
<http://www.dnsjava.org/dnsjava-current/README> file in the source 
distribution.

For information on the sample programs included, see the USAGE 
<http://www.dnsjava.org/dnsjava-current/USAGE> file in the source 
distribution.

For API documentation, see the Javadoc documentation 
<http://www.dnsjava.org/dnsjava-current/doc> in the source distribution 
or the examples.html 
<http://www.dnsjava.org/dnsjava-current/examples.html> file.

Please read this documentation before asking questions.

dnsjava is under the BSD license.

*Other software using dnsjava*

    * Muffin <http://muffin.doit.org/> - a filtering proxy server
    * JaMS <http://www.kimble.easynet.co.uk/jams/> - a mail server
    * JAMES <http://james.apache.org/> - Java Apache Mail Server
    * CustomDNS <http://customdns.sourceforge.net/> - a configurable DNS
      server
    * CRSMail <http://crsemail.sourceforge.net/> - a mail server
    * JacORB <http://www.jacorb.org/> - a Java ORB
    * Jackpot <http://jackpot.uk.net/> - an SMTP relay honeypot
    * Scarab <http://scarab.tigris.org/> - a defect tracking system
    * Java Email Server <http://www.ericdaugherty.com/java/mailserver/>
    * Jsmtpd <http://sourceforge.net/projects/jsmtpd/> - a java smtp daemon
    * jSPF <http://james.apache.org/jspf/index.html> - a pure java SPF
      implementation
    * SecSpider <http://secspider.cs.ucla.edu> - DNSSEC monitoring software
    * Rabbit4 <http://www.khelekore.org/rabbit/> - a web proxy
    * Netifera <http://netifera.com/> - a platform for creating network
      security tools
    * Eagle DNS <http://www.unlogic.se/projects/eagledns> - an
      authoritative DNS server

*Similar software*

    * dnspython <http://www.dnspython.org/>

View the Changelog <http://www.dnsjava.org/dnsjava-current/Changelog>

Download dnsjava: HTTP <http://www.dnsjava.org/download/>

See the project <http://sourceforge.net/projects/dnsjava/> page at 
Sourceforge. SourceForge Logo <http://sourceforge.net>

Brian Wellington (bwelling@xbill.org <ma...@xbill.org>)

------------------------------------------------------------------------




Re: Lookup Service Discovery using DNS? (revised)

Posted by Peter Firmstone <ji...@zeus.net.au>.
> Gr. Sim
>
> P.S. Just tell me, am i a scared conservative, blocking the way of 
> progress?
>
No; problems need to be identified in order to be solved, and no one can 
envision all problems, so consider it an experiment.  Security should 
still be a concern inside LANs too.

Somehow we need to make security easier.  If developers or a tool wrote 
the required permissions, stored within each bundle, and a UI trust 
relationship tool were constructed to assist administrators and 
developers in selecting from a list of recommended permissions, security 
could be easier and enabled by default, so the administrator or 
developer wouldn't have to set the permissions for everything manually.  
In lieu of trust, for bytecode signed by a developer or codebase service 
without a trust relationship, a user could be presented with a list of 
required permissions (provided the user possesses the rights to grant 
them) and the ramifications of granting them.  The software might be 
able to find a list of friends who trust that developer; the user might 
contact one, or decide to utilise the service based on this information.  
Bundles that cause problems (keys compromised, bad bugs, etc.) could be 
reported, and this information could be highlighted at the time a user 
makes the trust decision.

Alternatively if the user cannot grant the required permissions, the 
trust request could be sent to an administrator, who could follow up on 
authorising trust.

This would be far better than current circumstances where users download 
free applications all the time, without even so much as a checksum.

Who knows what malware could be lurking within LANs.

Cheers,

Peter.