Posted to dev@river.apache.org by Peter Firmstone <ji...@zeus.net.au> on 2010/01/16 23:51:16 UTC

Application Code Auditing & Signing Area [Re: Lookup Service Discovery using DNS?]

When all is said & done, there is enough compute power; heck, River is a 
distributed system.  It all comes down to trusting code.  We have the 
tools to sign, we have the tools to check integrity.  If we can clarify 
proxy verification from the server side, we've got it made.  How do we 
make sure the client side isn't compromised?

If only publicly audited, signed code is permitted for distribution, and 
it is granted no more than a minimal set of permissions as described in 
a jar bundle format, then what issues remain in executing it?

Code auditing would focus on the Serialization interface, required 
Permissions, and defensive construction, that is, proper encapsulation.

Anyone up for an experiment?

Cheers,

Peter.

Peter Firmstone wrote:
> Gregg Wonderly wrote:
>> I think that there are lots of choices about service location and 
>> activation of proxies.  The MarshalledObject mechanism allows one to 
>> wrap proxies at any point and make them available for remote 
>> consumption.
>>
>> Downloading a proxy via http instead of Reggie works fine.
> Good point.
>>
>> We need to document things and provide the convenience methods and 
>> classes that will promote standard practices.
> We can set up an experimental documents area on svn, for scanned 
> scratchings etc. while we're hashing things out.  The crawler lookup 
> service might discard non-conformant proxies to assist in promoting 
> standard practices.  The lookup service might also advertise which 
> River platform versions it's compatible with.
>
> Global Lookup has some interesting compromises, due to the sheer 
> possible volume of entries: whether to provide convenience methods to 
> perform querying at the server to reduce network bandwidth 
> consumption, or to prohibit it for the sake of security or memory and 
> cpu resources.  Lookup on the net would really be a search engine for 
> proxies; the potential is there to make it very powerful.  If the 
> security issues associated with remote code execution can be 
> understood properly, a paradigm shift that overcomes the issues with 
> current search engines could occur.
>
> How did the net evolve in the beginning?  As the number of web pages 
> increased, internet search engines were created.
>
> My gut feeling is that a River Search Engine / Service Index (Lookup 
> Service) would need to address the security issues of remote code 
> execution.
>
> Cheers,
>
> Peter.
>


Re: Application Code Auditing & Signing Area (revised)

Posted by Peter Firmstone <ji...@zeus.net.au>.
Thanks Sim,

The drawback of a root cert is that it becomes a very attractive target 
to an attacker.  A compromised root certificate could cause catastrophic 
damage.

You have a very valid point regarding old trust relationships too.

I have in mind, a process of repeatability, where the production of a
jar file can be verified from source to end product.

Example:

I submit some java source code to the code staging area for auditing 
(we might only allow Apache-compatible licenses; others can set up 
their own staging areas too), along with a comment describing the 
process (javac version, vendor and environment) required to recreate 
it.  I also submit two bundles (jar files) that I have signed: one 
containing the Service Interfaces, the other a client implementation 
containing the proxy for my service.  There could be any number of 
volunteer auditors (any willing person, company or entity) who can 
first verify the process is repeatable, then check the code for 
vulnerabilities, and finally also sign the submitted bundle.
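
To make the repeatability check concrete, here's a minimal Java sketch 
of how an auditor might compare digests of the rebuilt jar against the 
submitted one (the file paths are hypothetical, and in practice the 
comparison has to be made against the bundle's pre-signing content, 
since adding signatures alters the archive):

import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.MessageDigest;

public class RepeatabilityCheck {
    // Digest a jar so the rebuilt artifact can be compared with the
    // artifact recorded in the submission.
    static String sha256(Path jar) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        try (InputStream in = Files.newInputStream(jar)) {
            byte[] buf = new byte[8192];
            for (int n; (n = in.read(buf)) != -1; )
                md.update(buf, 0, n);
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : md.digest())
            hex.append(String.format("%02x", b));
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        String rebuilt   = sha256(Paths.get("rebuilt/service-proxy.jar"));
        String submitted = sha256(Paths.get("staging/service-proxy.jar"));
        System.out.println(rebuilt.equals(submitted)
                ? "repeatable build confirmed"
                : "MISMATCH - do not sign");
    }
}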

Companies that wish to make a public service available could submit 
their source code and bundles to the staging area, while other companies 
wanting to utilise this service can sign it themselves, without granting 
permissions to any third party based on certificate chains.  If a 
company is pedantic about security, it may wish to utilise only code it 
has audited and signed.

Someone less pedantic about security might just accept the signed bundle
until that certificate is revoked or is known to be compromised.
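
As a rough sketch of what accepting a signed bundle involves 
mechanically, the standard java.util.jar API verifies signatures as 
entries are read; something like the following (the file name is 
hypothetical, and deciding which signer certificates to trust is left 
out):

import java.io.InputStream;
import java.security.cert.Certificate;
import java.util.Enumeration;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

public class BundleCheck {
    public static void main(String[] args) throws Exception {
        try (JarFile jar = new JarFile("service-proxy.jar", true)) {
            Enumeration<JarEntry> entries = jar.entries();
            while (entries.hasMoreElements()) {
                JarEntry entry = entries.nextElement();
                if (entry.isDirectory()) continue;
                // An entry must be fully read before getCertificates()
                // reports its signers; a tampered entry throws
                // SecurityException while being read.
                try (InputStream in = jar.getInputStream(entry)) {
                    byte[] buf = new byte[8192];
                    while (in.read(buf) != -1) { /* drain */ }
                }
                Certificate[] signers = entry.getCertificates();
                if (signers == null
                        && !entry.getName().startsWith("META-INF/"))
                    throw new SecurityException(
                        "unsigned entry: " + entry.getName());
            }
        }
    }
}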

*Publicly available Service Interfaces might be useful to other 
companies, while the actual service (server) implementations may remain 
private.*

Another company might want to provide this service as well, but utilise 
a different client proxy implementation, so they create a bundle, sign 
and upload it, along with the source and instructions to recreate it 
for auditing.  They make sure that their client implementation depends 
upon the common Service interface bundle (this ensures that services 
are interchangeable and comparable by sharing a common interface that 
resides within its own class loader, visible to both implementations, 
each of which resides in its own class loader locally).
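
A minimal sketch of that class loader arrangement (all URLs and class 
names are hypothetical): the shared interface bundle sits in a parent 
loader, and each vendor's proxy implementation loads in its own child, 
so both resolve the one common interface class:

import java.net.URL;
import java.net.URLClassLoader;

public class SharedInterfaceLoaders {
    public static void main(String[] args) throws Exception {
        // The common Service interface bundle loads in a parent loader.
        URLClassLoader iface = new URLClassLoader(new URL[] {
            new URL("http://codebase.example/service-iface.jar") });
        // Each vendor's proxy implementation loads in its own child,
        // delegating to the shared parent for the interface class.
        URLClassLoader vendorA = new URLClassLoader(new URL[] {
            new URL("http://codebase.example/vendor-a-proxy.jar") },
            iface);
        URLClassLoader vendorB = new URLClassLoader(new URL[] {
            new URL("http://codebase.example/vendor-b-proxy.jar") },
            iface);
        Class<?> shared = iface.loadClass("com.example.ServiceInterface");
        // Proxy classes from vendorA and vendorB are both assignable to
        // 'shared', which is what makes the implementations
        // interchangeable behind the one interface.
    }
}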

An Auditor would be able to satisfy themselves that serialised streams
are unmarshalled defensively, so an attacker cannot retain a reference
to the internal state of a proxy or any of the objects returned by the
proxy.  See Effective Java 2nd Edition's Chapter on Serialization.
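
For illustration, this is the kind of defensive readObject an auditor 
would look for, along the lines of the Period example in that chapter:

import java.io.IOException;
import java.io.InvalidObjectException;
import java.io.ObjectInputStream;
import java.io.Serializable;
import java.util.Date;

public final class Period implements Serializable {
    private Date start;  // not final: readObject must reassign them
    private Date end;

    public Period(Date start, Date end) {
        this.start = new Date(start.getTime());
        this.end = new Date(end.getTime());
        if (this.start.compareTo(this.end) > 0)
            throw new IllegalArgumentException("start after end");
    }

    private void readObject(ObjectInputStream in)
            throws IOException, ClassNotFoundException {
        in.defaultReadObject();
        // Defensive copies first, so a hostile stream cannot hand an
        // attacker a reference into our internal state; then validate.
        start = new Date(start.getTime());
        end = new Date(end.getTime());
        if (start.compareTo(end) > 0)
            throw new InvalidObjectException("start after end");
    }

    public Date start() { return new Date(start.getTime()); }
    public Date end()   { return new Date(end.getTime()); }
}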

The service would also be able to use the existing TrustVerifier 
interface to verify that the unmarshalled proxy at the client belongs 
to that service.
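
The shape of such a verifier might look like the sketch below (class 
and field names are mine, and the canonical proxy is typed Object only 
to keep the example self-contained; River's full proxy trust 
bootstrapping involves more machinery than shown here):

import java.io.Serializable;
import java.rmi.RemoteException;
import net.jini.security.TrustVerifier;

public class MyProxyVerifier implements TrustVerifier, Serializable {
    private final Object canonical; // the proxy the service advertises

    public MyProxyVerifier(Object canonical) {
        this.canonical = canonical;
    }

    // Trust an unmarshalled object only if it equals the proxy the
    // service itself claims as its own.
    public boolean isTrustedObject(Object obj, TrustVerifier.Context ctx)
            throws RemoteException {
        return canonical.equals(obj);
    }
}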

The submitted bundles would be made available on public codebase
servers, which would be refreshed on a regular basis to capture audits
and updates.

If a vulnerability is later found in any client proxy implementation, a 
new version can be submitted containing the fix, and the process repeats 
itself.  The compromised version is reported to a Global Vulnerability 
Notice Board Service.

The OSGi framework can be utilised to control local node JVM 
classloading to load the latest signed version, subject to local 
security policy.  The OSGi r4.2 compendium has a security mechanism 
(ConditionalPermissionAdmin) that looks like it can assist in solving 
some issues.  Conditions (an OSGi concept) simplify the use of 
permissions.  ConditionalPermissionAdmin would allow us to dynamically 
deny any permission to a bundle that has a known vulnerability, even 
one signed by our own certificate.
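
A sketch of the kind of deny rule meant here, against the r4.2 
ConditionalPermissionAdmin API (the rule name and bundle location are 
hypothetical):

import java.util.List;
import org.osgi.service.condpermadmin.ConditionInfo;
import org.osgi.service.condpermadmin.ConditionalPermissionAdmin;
import org.osgi.service.condpermadmin.ConditionalPermissionInfo;
import org.osgi.service.condpermadmin.ConditionalPermissionUpdate;
import org.osgi.service.permissionadmin.PermissionInfo;

public class VulnerableBundleDenier {
    // Prepend a deny-everything rule for the bundle at the given
    // location; rules are evaluated in order, so this one wins even if
    // the bundle is signed by a certificate we otherwise trust.
    static void denyAll(ConditionalPermissionAdmin cpa,
                        String bundleLocation) {
        ConditionalPermissionUpdate update =
            cpa.newConditionalPermissionUpdate();
        List rules = update.getConditionalPermissionInfos(); // raw in r4.2
        rules.add(0, cpa.newConditionalPermissionInfo(
            "deny-vulnerable-bundle",
            new ConditionInfo[] {
                new ConditionInfo(
                    "org.osgi.service.condpermadmin.BundleLocationCondition",
                    new String[] { bundleLocation })
            },
            new PermissionInfo[] {
                new PermissionInfo("java.security.AllPermission", "*", "*")
            },
            ConditionalPermissionInfo.DENY));
        if (!update.commit())
            throw new IllegalStateException(
                "permission table changed concurrently; retry");
    }
}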

This security framework will take some time to set up and construct, 
but the benefits would be substantial.

Such a structure would be tolerant to attack.  I'm not saying immune, 
but due to its distributed nature, where coupling (dependency) has been 
abstracted away, attacking it would be rather difficult.  By not 
depending upon any one signing algorithm, by having multiple keys, 
etc., redundancy would be built in.

Cheers,

Peter.


Sim IJskes - QCG wrote:
> Sim IJskes - QCG wrote:
>> So in practice I foresee the following.  There is a central deployment 
>> source for code & rootcerts.  One rootcert identifies the deployment 
>> cloud/cluster/environment.  Every node identifies itself by an 
>> individual cert signed by this rootcert.  There is a cert generation 
>> facility running on the central deployment source that allows for 
>> generation of new certs based on a cert request, signed with an 
>> external identification.  The cert generation facility accepts this 
>> request either implicitly or by some other external verification.
>
> And this central deployment facility with its own rootcert is run by 
> anybody who wants to source executable code, either by being the 
> author or by being a clearing house for code vetting.
>
> Gr. Sim
>



Re: Application Code Auditing & Signing Area

Posted by Sim IJskes - QCG <si...@qcg.nl>.
Sim IJskes - QCG wrote:
> So in practice I foresee the following.  There is a central deployment 
> source for code & rootcerts.  One rootcert identifies the deployment 
> cloud/cluster/environment.  Every node identifies itself by an 
> individual cert signed by this rootcert.  There is a cert generation 
> facility running on the central deployment source that allows for 
> generation of new certs based on a cert request, signed with an 
> external identification.  The cert generation facility accepts this 
> request either implicitly or by some other external verification.

And this central deployment facility with its own rootcert is run by 
anybody who wants to source executable code, either by being the author 
or by being a clearing house for code vetting.

Gr. Sim

-- 
QCG, Software voor het MKB, 071-5890970, http://www.qcg.nl
Quality Consultancy Group b.v., Leiderdorp, Kvk Leiden: 28088397

Re: Application Code Auditing & Signing Area

Posted by Sim IJskes - QCG <si...@qcg.nl>.
Peter Firmstone wrote:
> When all is said & done, there is enough compute power; heck, River is a 
> distributed system.  It all comes down to trusting code.  We have the 
> tools to sign, we have the tools to check integrity.  If we can clarify 
> proxy verification from the server side, we've got it made.  How do we 
> make sure the client side isn't compromised?

We verify trust on both sides by having the public part of the root of 
the certificate chain in our keystore (right?).

During first-time deployment, the public root cert is deployed together 
with the program code we trust implicitly.

Then we download code from an unknown (or potentially rogue) source, and 
verify trust with the cert chain.

I would not trust code that is signed in the 3rd degree.  I'm not sure 
if this is enforceable (currently?).  And I don't like to depend on cert 
providers (for cost reasons, for instance).

So in practice I foresee the following.  There is a central deployment 
source for code & rootcerts.  One rootcert identifies the deployment 
cloud/cluster/environment.  Every node identifies itself by an 
individual cert signed by this rootcert.  There is a cert generation 
facility running on the central deployment source that allows for 
generation of new certs based on a cert request, signed with an 
external identification.  The cert generation facility accepts this 
request either implicitly or by some other external verification.

By having all certs that I trust directly signed by a rootcert that I 
trust, I make sure everybody plays for the same team.
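
A minimal Java sketch of that check, assuming the node cert and the 
deployment rootcert are already in hand as X509Certificates:

import java.security.cert.X509Certificate;

public class DirectRootCheck {
    // Accept a node cert only if it was issued *directly* by our
    // deployment rootcert: its signature verifies against the root's
    // public key, it is within its validity period, and its issuer is
    // the root's subject (no intermediates allowed).
    static boolean signedDirectlyByRoot(X509Certificate node,
                                        X509Certificate root) {
        try {
            node.verify(root.getPublicKey());
            node.checkValidity();
            return node.getIssuerX500Principal()
                       .equals(root.getSubjectX500Principal());
        } catch (Exception e) {
            return false;
        }
    }
}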

Anyhow, I believe we need to detach ourselves from the notion that 
trustable certs are only generated by companies that charge a lot of money.

Also, would I trust all code signed by an Apache cert?  Or just the code 
signed by a delegate of an Apache cert?

The problem with executing signed code is that one becomes an actor.  We 
become responsible for the actions of that code.  And the defence that 
one was an unwilling agent for malignant code becomes very thin when 
people find out it was based on a trust relationship established some 
time ago.  (A judge in court would say: you should have been a better 
judge of character, but you're still responsible.)

I can see parallels between downloading a program from a website and 
executing it, but we are talking here about pre-vetted code, and I would 
like to judge that personally on a case-by-case basis, instead of via an 
automated process that can happen even while I sleep.

Gr. Sim

-- 
QCG, Software voor het MKB, 071-5890970, http://www.qcg.nl
Quality Consultancy Group b.v., Leiderdorp, Kvk Leiden: 28088397