Posted to dev@qpid.apache.org by Carl Trieloff <cc...@redhat.com> on 2006/09/15 20:31:49 UTC

[Fwd: RE: Binding Context?]


There is debate on another thread about the JMS and Qpid APIs. I
believe that Peter brings up some good use cases that are not limited
to SCA and that the JMS API does not meet.

I believe that
- we need to meet JMS TCK compliance for those who care about a Java API
- we also need to be able to handle these other use cases

(I don't have an opinion yet on whether we should create two layers, an
API with JMS on top, or just extensions to the JMS API.)

Carl.

-------- Original Message --------
Subject: 	RE: Binding Context?
Date: 	Thu, 14 Sep 2006 14:34:31 -0700
From: 	Peter Cousins <pe...@itemfield.com>
Reply-To: 	tuscany-dev@ws.apache.org
To: 	<tu...@ws.apache.org>



These are interesting ideas.  There are a few more things I think
should be covered.

I understand your motivation for reducing coupling by raising the
abstraction above the underlying artifacts.

One usage scenario would be logging the credentials of the requester
into a database for audit-trail purposes.  Given that a service
component implementation could have multiple middleware bindings, and
each binding could carry multiple types of credentials, it would be
helpful to have a declarative way for the service to specify which
credentials are authoritative.

For example, if I am using AMQP/Blaze with SOAP, what do I log?  It
could be any one of the following: 
 * "user id" message property
 * "app id" message property for trusted applications
 * X.509 certificate used to sign the message
 * WS-Security credentials in the SOAP header

Whereas if I am using HTTPS with SOAP, it could be a similar set:
 * "Authorization" http header
 * "Proxy Authorization" http header
 * X.509 certificate used for TLS/SSL
 * WS-Security credentials in the SOAP header

These bindings have no way of knowing which credential is authoritative
when multiple credentials are supplied, so it seems the best approach is
for bindings to register a namespace with a binding context manager and
present whatever they have received in their binding-specific context.
That way an intermediate component, like the "aspects" we discussed,
could be responsible for interpreting the policy, selecting credentials
at the lower level of abstraction, and transforming them into the higher
level of abstraction.
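
As a rough sketch of that idea, the manager could be little more than a
map from namespace to the name/value pairs each binding received. All
class and method names below are invented for illustration; this is not
an existing Qpid or Tuscany API.

import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical "binding context manager": each binding registers a
// namespace and publishes whatever it received under that namespace;
// aspects read values by namespace + path.
public class BindingContextManager {

    // namespace URI -> (path -> value)
    private final Map<String, Map<String, Object>> contexts = new LinkedHashMap<>();

    /** Called by a binding when a request arrives; registers its namespace. */
    public Map<String, Object> register(String namespace) {
        return contexts.computeIfAbsent(namespace, ns -> new LinkedHashMap<>());
    }

    /** Called by aspects; returns null if the binding did not supply the value. */
    public Object get(String namespace, String path) {
        Map<String, Object> ctx = contexts.get(namespace);
        return ctx == null ? null : ctx.get(path);
    }

    public static void main(String[] args) {
        BindingContextManager mgr = new BindingContextManager();

        // An HTTP binding publishes what it actually saw on the wire.
        mgr.register("http://www.osoa.org/schemas/contexts/binding/http.xsd")
           .put("authorization/userId", "alice");

        // A SOAP binding publishes the WS-Security credentials, if any.
        mgr.register("http://www.osoa.org/schemas/contexts/binding/soap.xsd")
           .put("wsseSecurity/userId", "alice@example.com");

        // Neither binding decides which credential is authoritative; that
        // is left to a policy aspect reading from the manager.
        System.out.println(mgr.get(
                "http://www.osoa.org/schemas/contexts/binding/http.xsd",
                "authorization/userId"));
    }
}

A policy aspect then reads from the manager rather than from any one
binding.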

In this example, there could be three namespaces in the context manager:


http://www.osoa.org/schemas/contexts/security/authorization.xsd
	which could define a structure like:
		authorizationContext
			userId
			authorizationData

http://www.osoa.org/schemas/contexts/binding/amqp.xsd
	which could define a structure like:
		amqpContext
			contentEncoding
			headers
				someCustomHeader
				anotherCustomHeader
			deliveryMode
			priority
			correlationId
			replyTo
			expiration
			messageId
			timestamp
			type
			userId
			appId
			clustered

http://www.osoa.org/schemas/contexts/binding/http.xsd
	which could define a structure like:
		httpContext
			authorization
				userId
				password
			cookie
			certificate
				issuedTo
				expiration
				issuingAuthority
			proxyAuthentication
				userId
				password
				
http://www.osoa.org/schemas/contexts/binding/soap.xsd
	which could define a structure like:
		soapContext
			wsseSecurity
				userId
				password
			
			wsTransaction
				tid
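
To make the amqpContext structure above concrete, here is a sketch of
how an AMQP binding built on the JMS API could flatten a received
message into those paths before publishing them to the context manager.
The path-to-JMS-field mapping is my assumption, not a defined contract,
and fields with no direct JMS equivalent (contentEncoding, custom
headers, clustered) are omitted.

import javax.jms.JMSException;
import javax.jms.Message;
import java.util.LinkedHashMap;
import java.util.Map;

// Flattens a JMS message into the proposed amqpContext paths.
public final class AmqpContextExtractor {

    public static Map<String, Object> extract(Message msg) throws JMSException {
        Map<String, Object> ctx = new LinkedHashMap<>();
        ctx.put("deliveryMode",  msg.getJMSDeliveryMode());
        ctx.put("priority",      msg.getJMSPriority());
        ctx.put("correlationId", msg.getJMSCorrelationID());
        ctx.put("replyTo",       msg.getJMSReplyTo());
        ctx.put("expiration",    msg.getJMSExpiration());
        ctx.put("messageId",     msg.getJMSMessageID());
        ctx.put("timestamp",     msg.getJMSTimestamp());
        ctx.put("type",          msg.getJMSType());
        // JMSXUserID / JMSXAppID are optional JMS-defined properties and may be null.
        ctx.put("userId",        msg.getStringProperty("JMSXUserID"));
        ctx.put("appId",         msg.getStringProperty("JMSXAppID"));
        return ctx;
    }

    private AmqpContextExtractor() {}
}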


A security policy aspect could then have the following mapping:
  Source       soapContext/wsseSecurity/userId
  Destination  authorizationContext/userId

Or it could list several sources in order of interest:
  Source       soapContext/wsseSecurity/userId
  Source       httpContext/authorization/userId
  Destination  authorizationContext/userId

The application then only needs to pull authorizationContext/userId.
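
A minimal sketch of such an aspect, assuming the namespace -> (path ->
value) maps published by the bindings (the class name and constructor
shape are invented for illustration):

import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Tries the configured source paths in order of interest and copies the
// first value found into the destination path, so the application only
// ever reads the higher-level authorization context.
public class CredentialSelectionAspect {

    private final List<String[]> sources;   // {namespace, path} pairs, in order of interest
    private final String[] destination;     // {namespace, path}

    public CredentialSelectionAspect(List<String[]> sources, String[] destination) {
        this.sources = sources;
        this.destination = destination;
    }

    /** contexts: namespace -> (path -> value), as published by the bindings. */
    public void apply(Map<String, Map<String, Object>> contexts) {
        for (String[] src : sources) {
            Map<String, Object> ctx = contexts.get(src[0]);
            Object value = (ctx == null) ? null : ctx.get(src[1]);
            if (value != null) {
                contexts.computeIfAbsent(destination[0], ns -> new LinkedHashMap<>())
                        .put(destination[1], value);
                return;   // first match in order of interest wins
            }
        }
    }
}

Configured with soapContext/wsseSecurity/userId first and
httpContext/authorization/userId second, whichever credential was
actually supplied ends up in authorizationContext/userId.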


Likewise this helps with the context propagation usage scenario you
alluded to in your mail:

An aspect can be defined for call chaining that handles AMQP context
propagation, because it could take
	amqpContext/correlationId
	or
	amqpContext/messageId
from the inbound call and move it to the outbound call context as
	amqpContext/correlationId

For one-way short-circuiting, it could be configured to take
	amqpContext/replyTo
from the inbound call and put it in the same place on the outbound
call, whereas if the application is going to handle the callback
itself, the replyTo value could be changed before it is set as
amqpContext/replyTo on the outbound call context.

The same approach could be used to conditionally propagate
soapContext/wsTransaction/tid.
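
Sketching that propagation aspect over the same flattened amqpContext
maps (again an illustration, not an existing Qpid interface); the
wsTransaction/tid case would follow the same shape:

import java.util.Map;

// Copies the inbound correlationId (falling back to messageId) to the
// outbound correlationId, and optionally short-circuits the reply by
// propagating replyTo.
public class AmqpPropagationAspect {

    private final boolean propagateReplyTo;   // true for one-way short-circuiting

    public AmqpPropagationAspect(boolean propagateReplyTo) {
        this.propagateReplyTo = propagateReplyTo;
    }

    public void propagate(Map<String, Object> inbound, Map<String, Object> outbound) {
        Object correlation = inbound.get("correlationId");
        if (correlation == null) {
            correlation = inbound.get("messageId");   // fall back to the message id
        }
        outbound.put("correlationId", correlation);

        if (propagateReplyTo) {
            // One-way short-circuiting: replies go straight back to the
            // original caller.
            outbound.put("replyTo", inbound.get("replyTo"));
        }
        // Otherwise the application (or another aspect) sets replyTo itself
        // before the outbound call is made, e.g. when it handles the callback.
    }
}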

I know this was a long message; thanks for reading it all...PC


-----Original Message-----
From: Jeremy Boynes [mailto:jboynes@apache.org] 
Sent: Wednesday, September 13, 2006 2:13 PM
To: tuscany-dev@ws.apache.org
Subject: Re: Binding Context?


On Sep 13, 2006, at 4:48 AM, Peter Cousins wrote:

>
> I agree that business application logic should not use this context
> information.  Ideally, there would be support for users to write simple
> plugins that could run inside the call context as aspects (in the AOP
> sense).  Only the aspects would access the context, and these would
> cross-cut but supplement the application code.  This is useful not only
> for routing but also for security, version management, compression,
> context propagation, HA and load-balancing strategies, pay-per-use
> billing, and so on.
>
>
>
> This would allow a middle ground between application components that
> shouldn't use this, and "being managed totally by the framework", which
> is a less flexible way to manage such information...PC
>

[[ rolling in the audit use case as well ]]

Yes - I see a lot of similarity to aspects in the AOP sense; the
important thing is how we define the pointcuts in the global sense
and how the information from the activation is passed into the
advice. I think programming an aspect this close to the join point
tends to involve a different set of skills from programming a normal
application component, and so the programming model should be
different. The trick is to be able to map the "advice" programming
model back into a normal "component" programming model so that it
becomes easier for people to write their own implementations of the
things you mention above. I think this is where things like framework
interceptors, message handlers etc. break down - their information
model is based on the interaction rather than on the data in the
interaction that the programmer is actually trying to use.
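
To make that distinction concrete, here are two invented shapes (neither
is an existing Tuscany or Qpid interface): the first is roughly what
framework interceptors and message handlers look like, the second is the
"advice written as a normal component" shape described above.

import java.util.Map;

public final class AdviceProgrammingModels {

    // Interaction-centric: the handler is handed the whole exchange and has
    // to dig the data it cares about out of the raw message and context.
    interface MessageHandler {
        void handle(Object rawMessage, Map<String, Object> bindingContext);
    }

    // Data-centric: the contract names only the data the advice needs; the
    // runtime extracts it from the wire, so this can be written and tested
    // like a normal application component.
    interface AuditAdvice {
        void recordAccess(String userId, String serviceName, long timestamp);
    }

    private AdviceProgrammingModels() {}
}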

So far we have focused on the programming model for traditional  
application components - the things providing the business  
application logic - with the goal of abstracting away from them the  
details of the lower-level infrastructure. We've wanted to let people  
write applications in the language/programming model/data format of  
their choice (Java, Spring, JavaScript, Ruby, and eventually XSLT,  
BPEL etc.; SDO, JAXB, JSON, AXIOM etc.) based on suitability for the  
business problem they are trying to solve. We have tried to maintain
isolation between this kind of application code and lower-level
infrastructure concerns, so that we can reuse/migrate/rewire these
components as part of assembly.

This programming model is based on IoC principles, a key one being  
that components clearly declare their dependencies. For application  
code we want to express those dependencies in terms of "business"  
level artifacts - orders, customers and the services that act on them  
rather than plumbing. But that's just one domain. If you move down  
the stack, I believe the same programming model can work with  
"infrastructure" domain artifacts such as messages, principals, xids  
and so forth.

I also think there's a level between these for "business application  
infrastructure" - things like audit (compliance), authorization, QoS,  
chargebacks and so on. A programmer there is aware of business
concerns and wants to deal in those terms rather than in the real
low-level things. For example, routing based on customer rather than
source IP address. The IoC principles still apply, just the data  
types are different.
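
As an illustration of that middle level (the directory lookup and the
endpoint names are assumptions made up for the example), a
customer-based routing aspect might look like:

import java.util.Map;

// Routes on the customer, not on the low-level source IP; the runtime is
// assumed to supply the IP-to-customer lookup.
public class CustomerRoutingAspect {

    interface CustomerDirectory {
        String customerForIp(String sourceIp);   // assumed lookup service
    }

    private final CustomerDirectory directory;
    private final Map<String, String> customerToEndpoint;   // e.g. "acme" -> "orders-premium"

    public CustomerRoutingAspect(CustomerDirectory directory,
                                 Map<String, String> customerToEndpoint) {
        this.directory = directory;
        this.customerToEndpoint = customerToEndpoint;
    }

    /** Picks a target endpoint in business terms. */
    public String selectEndpoint(String sourceIp, String defaultEndpoint) {
        String customer = directory.customerForIp(sourceIp);
        return customerToEndpoint.getOrDefault(customer, defaultEndpoint);
    }
}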

My hope therefore is that we can reuse the SCA programming models for  
implementing these "aspects" - the same models, just with the service  
contract and data types mapped to the appropriate constructs. For  
this to work we would need to extend the SCA assembly model to  
support a pointcut language that allowed users to specify the rules  
for attaching these behaviours to the wires.
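
The pointcut language itself would live in the assembly model and is
still to be designed; purely as an illustration of the runtime side, a
registry of pointcut rules over wire metadata might look like the
following sketch (all types here are invented):

import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// A rule pairs a predicate over wire metadata with the aspect to attach;
// the runtime applies every matching rule when it builds a wire.
public class PointcutRegistry {

    public record Wire(String sourceComponent, String targetService, String binding) {}

    public interface Aspect {
        void attachTo(Wire wire);
    }

    private record Rule(Predicate<Wire> pointcut, Aspect aspect) {}

    private final List<Rule> rules = new ArrayList<>();

    public void addRule(Predicate<Wire> pointcut, Aspect aspect) {
        rules.add(new Rule(pointcut, aspect));
    }

    /** Called by the runtime while wiring the assembly. */
    public void applyTo(Wire wire) {
        for (Rule rule : rules) {
            if (rule.pointcut().test(wire)) {
                rule.aspect().attachTo(wire);
            }
        }
    }

    public static void main(String[] args) {
        PointcutRegistry registry = new PointcutRegistry();
        // Attach an audit aspect to every wire that uses the AMQP binding.
        registry.addRule(w -> "amqp".equals(w.binding()),
                         w -> System.out.println("audit attached to " + w.targetService()));
        registry.applyTo(new Wire("OrderClient", "OrderService", "amqp"));
    }
}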

With that in place it becomes the runtime's job to attach the  
appropriate hooks to the wiring and convert the raw data on the wire  
(from as low a level as necessary) into the data expected by the  
aspect implementation (in a similar way to how we convert data to the  
format expected by an application component). Perhaps not an easy  
job, but an interesting one :-) At least, that's one of the things  
that interests me (he says, going back to being build monkey).

--
Jeremy

---------------------------------------------------------------------
To unsubscribe, e-mail: tuscany-dev-unsubscribe@ws.apache.org
For additional commands, e-mail: tuscany-dev-help@ws.apache.org

