Posted to dev@httpd.apache.org by Paul Querna <pa...@querna.org> on 2013/08/06 19:24:15 UTC

breach attack

Hiya,

Has anyone given much thought to changes in httpd to help mitigate the
recently publicized breach attack:

http://breachattack.com/

From an httpd perspective, looking at the mitigations
<http://breachattack.com/#mitigations>

1) Disabling HTTP compression
2) Separating secrets from user input
3) Randomizing secrets per request
4) Masking secrets (effectively randomizing by XORing with a random
secret per request)
5) Protecting vulnerable pages with CSRF
6) Length hiding (by adding random amount of bytes to the responses)
7) Rate-limiting the requests


Many of these are firmly in the domain of an application developer,
but I think we should work out if there are ways we can either change
default configurations or add new features to help application
developers be successful:

1) Has anyone given any thought to changing how we do chunked
encoding?    Chunking is kinda like arbitrary padding we can insert
into a response, without having to know anything about the actual
content of the response.  What if we increased the number of chunks we
create, and randomly placed them -- this wouldn't completely ruin some
of the attacks, but could potentially increase the number of requests
needed significantly. (We should figure out the math here?  How many
random chunks of how big are an effective mitigation?)

2) Disabling TLS Compression by default, even in older versions.
Currently we changed the default to SSLCompression off in >=2.4.3, I'd
like to see us back port this to 2.2.x.

3) Disable mod_deflate compression for X content types by default on
encrypted connections.  Likely HTML, maybe JSON, maybe Javascript
content types?
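
A minimal config sketch of what (2) and (3) could look like on 2.4.x --
the directives are real, the content-type policy is just one guess at a
sane default:

  # (2) TLS-level compression off (already the default in >= 2.4.3)
  SSLCompression off

  <VirtualHost *:443>
      # usual SSLEngine / certificate directives here
      # (3) over TLS, only compress types unlikely to mix secrets
      # with reflected user input -- no HTML, JSON or JavaScript
      AddOutputFilterByType DEFLATE text/css text/plain
  </VirtualHost>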

Thoughts? Other Ideas?

Thanks,

Paul

Re: breach attack

Posted by Dirk-Willem van Gulik <di...@webweaving.org>.
On 12 Aug 2013, at 01:35, Eric Covener <co...@gmail.com> wrote:

> 
> > What do you think of including a header? Is there a way to find out
> > from the encrypted traffic where the header ends and where the body
> > starts?
> 
> For a typical request they are in separate SSL records and someone running a packet capture can tell when the headers or body has grown.  We could arrange for the headers to always span an SSL record, and put a variable length one at the bottom  -- but that only helps if the secret and request data are in the first frame.
Not sure - I am fairly sure we nicely cut on headers - and have the (SSL) packets go out at or very near the end of the header. 

So I guess we'd intentionally have to sub-optimize this somewhat - or use some default chunked/mime-type boundary trickery outside the traditional header instead.

Dw.


Re: breach attack

Posted by Eric Covener <co...@gmail.com>.
> What do you think of including a header? Is there a way to find out
> from the encrypted traffic where the header ends and where the body
> starts?

For a typical request they are in separate SSL records and someone running
a packet capture can tell when the headers or body has grown.  We could
arrange for the headers to always span an SSL record, and put a variable
length one at the bottom  -- but that only helps if the secret and request
data are in the first frame.

Re: breach attack

Posted by Joe Orton <jo...@redhat.com>.
On Sat, Aug 10, 2013 at 09:28:21PM +0200, Stefan Fritsch wrote:
> What do you think of including a header? Is there a way to find out 
> from the encrypted traffic where the header ends and where the body 
> starts? See my other mail, which I have sent before reading this one, 
> unfortunately.

Eric is right, HTTP response headers are currently always a separate TLS 
message so the observer can simply discard the first message.

It is simple to try harder to hide that in mod_ssl, at least with the 
coalesce filter, since the headers are just a brigade with a HEAP 
bucket, which mod_ssl can hang on to and merge with the response body 
before sending to OpenSSL.

But in the case of a dynamically generated or proxied response it is 
likely we'll get a FLUSH directly after the headers' HEAP, as the 
handler waits for content.  So in that case we'd have to push out a TLS 
message with just-the-headers anyway.
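
Roughly the shape such a holding filter could take -- a sketch only,
with bucket setaside, error handling and filter registration left out,
and hold_ctx_t a made-up context type:

  #include "httpd.h"
  #include "util_filter.h"
  #include "apr_buckets.h"

  typedef struct {
      apr_bucket_brigade *held;    /* headers parked here */
  } hold_ctx_t;

  static apr_status_t hold_headers_filter(ap_filter_t *f,
                                          apr_bucket_brigade *bb)
  {
      hold_ctx_t *ctx = f->ctx;
      apr_bucket *b;
      apr_off_t len = 0;
      int must_send = 0;

      if (!ctx) {
          f->ctx = ctx = apr_pcalloc(f->c->pool, sizeof(*ctx));
          ctx->held = apr_brigade_create(f->c->pool, f->c->bucket_alloc);
      }

      /* FLUSH means the handler is waiting for content, EOS means the
       * response is done; either way the held headers must go out now,
       * even as a headers-only TLS record. */
      for (b = APR_BRIGADE_FIRST(bb); b != APR_BRIGADE_SENTINEL(bb);
           b = APR_BUCKET_NEXT(b)) {
          if (APR_BUCKET_IS_FLUSH(b) || APR_BUCKET_IS_EOS(b))
              must_send = 1;
      }

      APR_BRIGADE_CONCAT(ctx->held, bb);     /* merge headers + body */
      apr_brigade_length(ctx->held, 1, &len);

      if (must_send || len > 8192)           /* enough for one record */
          return ap_pass_brigade(f->next, ctx->held);
      return APR_SUCCESS;                    /* keep holding */
  }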

After playing with code & tcpdumping, I'm more sceptical that we can do 
anything simple + effective without hurting performance, or at least be 
99% confident we aren't hurting performance.

Regards, Joe

Re: breach attack

Posted by Dirk-Willem van Gulik <di...@webweaving.org>.
On 10 Aug 2013, at 21:35, Reindl Harald <h....@thelounge.net> wrote:
> 
> IMHO that is all on the wrong track
> 
> "victim's browser to visit the targeted website thousands of times"
> for me says clearly that a proper server with rate-controls based
> on iptables or a firewall in front of the machine would stop this,
> and honestly these days I would not connect any production server
> to the world wide web without rate-controls
> 
> so I am strictly against mangling the protocol and risking making
> mod_deflate less effective; protections against such attacks do
> not belong in the application layer

A couple of years ago - I would have wholeheartedly agreed with you.

However the world is changing. And several things combined mean that the assumption that such vigilance is likely (or even possible) is no longer a safe one.

At the higher layers - developers work increasingly with abstraction layers, languages and frameworks which are far removed from HTTP; and which no longer require the developer to look at things like field completion, sessions and ajax-dynamic population of fields, gradual disclosure and what not at the GET level. Instead some (js) library or framework is used which just does it.

So technically it is no longer realistic for the average (or even a crack) programmer to tell you what GETs to normally expect. And organizationally it is totally unrealistic; there are simply too few incentives for that information to flow. That 'layer' of the org is not paid/expected to care.

Increasingly the http layer moves to the very bottom of the stack - well out of sight; tended to by people very far removed (and commonly at arm's length) from the web product. It may even be mostly in some sort of proxy/pass-through type of mode - where the ops people have no real idea as to what request goes to what app - and the volume is such that only automated tools help make sense of it.

Whether we like it or not - Apache is now commonly deployed in that world.

So our 'new' users can fairly expect us to tune Apache to that sort of 'blind' and 'bulky' deployment. I think it somewhat behooves us to have options to mangle the protocol where and when needed - and to focus on these 'bulk and blind' modes.

Dw.

Re: breach attack

Posted by Reindl Harald <h....@thelounge.net>.

On 10.08.2013 21:28, Stefan Fritsch wrote:
> On Friday, 9 August 2013 22:04:22, Joe Orton wrote:
>> On Fri, Aug 09, 2013 at 09:14:51AM -0700, Paul Querna wrote:
>>> In this case, I don't know if any of the proposed mitigations
>>> help;
>>> I'd love to have an easy way to validate that, so we could bring
>>> data to the discussion:  If it increases the attack by multiple
>>> hours, and causes a <1% performance drop, isn't that the kind of
>>> thing that is useful?
>>
>> I sympathise with Stefan but I agree we should do something if we
>> can find something cheap, effective and reliable.
> 
> Effective is difficult when done on the server. OTOH, browsers could 
> just not send "Accept-Encoding: gzip" if a request is cross-domain and 
> contains some sort of credentials (HTTP-auth, cookies with the 
> 'secure' attribute, client certificate, ...). I think that would stop 
> the vast majority of attack scenarios. I very much doubt that any 
> measure at the server side can achieve a comparable level of 
> protection.

IMHO that is all on the wrong track

"victim's browser to visit the targeted website thousands of times"
for me says clearly that a proper server with rate-controls based
on iptables or a firewall in front of the machine would stop this,
and honestly these days I would not connect any production server
to the world wide web without rate-controls

so I am strictly against mangling the protocol and risking making
mod_deflate less effective; protections against such attacks do
not belong in the application layer

http://www.theregister.co.uk/2013/08/02/breach_crypto_attack/
>> The attacker's booby-trapped website hosts a script that runs the second
>> phase of the attack: this forces the victim's browser to visit the targeted
>> website thousands of times, over and over, each time appending a different
>> combination of extra data. When the attacker-controlled bytes match any
>> bytes originally encrypted in the stream, the browser's compression kicks
>> in and reduces the size of the transmission, a subtle change the eavesdropper
>> can detect
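
For reference, the kind of rate-control meant above can be done with
e.g. the iptables "recent" match (numbers arbitrary; note that per-IP
limits are blunt behind shared proxies, and that the attack's requests
can also ride a few keep-alive connections):

  # drop a source that opened more than 20 new connections to :443
  # within 10 seconds, otherwise record it and accept
  iptables -A INPUT -p tcp --dport 443 -m state --state NEW \
    -m recent --update --seconds 10 --hitcount 20 --name https_rate -j DROP
  iptables -A INPUT -p tcp --dport 443 -m state --state NEW \
    -m recent --set --name https_rate -j ACCEPT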


Re: breach attack

Posted by Stefan Fritsch <sf...@sfritsch.de>.
On Friday, 9 August 2013 22:04:22, Joe Orton wrote:
> On Fri, Aug 09, 2013 at 09:14:51AM -0700, Paul Querna wrote:
> > In this case, I don't know if any of the proposed mitigations
> > help;
> > I'd love to have an easy way to validate that, so we could bring
> > data to the discussion:  If it increases the attack by multiple
> > hours, and causes a <1% performance drop, isn't that the kind of
> > thing that is useful?
> 
> I sympathise with Stefan but I agree we should do something if we
> can find something cheap, effective and reliable.

Effective is difficult when done on the server. OTOH, browsers could 
just not send "Accept-Encoding: gzip" if a request is cross-domain and 
contains some sort of credentials (HTTP-auth, cookies with the 
'secure' attribute, client certificate, ...). I think that would stop 
the vast majority of attack scenarios. I very much doubt that any 
measure at the server side can achieve a comparable level of 
protection.

> Length hiding seems the most promising avenue.  The paper notes
> that simply adding rand(0..n) bytes to the response only increases
> the cost (time/requests) of executing the attack.
> 
> Adding a random number of leading zeroes to the chunk-size line
> would perhaps be reliable (i.e. least likely to have interop
> issues), though we could only introduce relatively small
> variability of the total response length.  We could maybe add 0-5
> leading zeroes per chunk, safely? Possibly that breaks some client
> already.  It's probably not effective.

What do you think of including a header? Is there a way to find out 
from the encrypted traffic where the header ends and where the body 
starts? See my other mail, which I have sent before reading this one, 
unfortunately.


Re: breach attack

Posted by Joe Orton <jo...@redhat.com>.
On Fri, Aug 09, 2013 at 09:14:51AM -0700, Paul Querna wrote:
> In this case, I don't know if any of the proposed mitigations help;
> I'd love to have an easy way to validate that, so we could bring data
> to the discussion:  If it increases the attack by multiple hours, and
> causes a <1% performance drop, isn't that the kind of thing that is
> useful?

I sympathise with Stefan but I agree we should do something if we can 
find something cheap, effective and reliable.

Length hiding seems the most promising avenue.  The paper notes
that simply adding rand(0..n) bytes to the response only increases the 
cost (time/requests) of executing the attack.

Adding a random number of leading zeroes to the chunk-size line would 
perhaps be reliable (i.e. least likely to have interop issues), though 
we could only introduce relatively small variability of the total 
response length.  We could maybe add 0-5 leading zeroes per chunk, safely?  
Possibly that breaks some client already.  It's probably not effective.
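
As a sketch (assuming rand() is good enough here), the change would be
confined to how the chunk-size line is formatted:

  #include <stdio.h>
  #include <stdlib.h>

  /* Pad the chunk-size line with 0-5 leading zeroes, e.g. "0001a3\r\n"
   * instead of "1a3\r\n".  Still valid hex per the chunked grammar --
   * *if* clients tolerate the zeroes, which is the open question. */
  static int format_chunk_size(char *buf, size_t bufsz, size_t n)
  {
      int digits = 1;
      int zeros = rand() % 6;        /* 0-5 bytes of length hiding */
      size_t v = n;

      while (v >>= 4)                /* hex digits actually needed */
          digits++;
      return snprintf(buf, bufsz, "%0*zx\r\n", digits + zeros, n);
  }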

We could randomly vary the maximum bytes of application data per TLS 
message using the 2.4 mod_ssl "coalesce" filter too.  I'm not sure if 
that actually produces length hiding at the right level though, and it 
hurts performance.  (Crypto experts listening?)

It's kind of really a TLS problem.  Crypto experts should solve this in 
TLS! :)

Regards, Joe

Re: breach attack

Posted by Paul Querna <pa...@querna.org>.
On Fri, Aug 9, 2013 at 12:11 AM, Ruediger Pluem <rp...@apache.org> wrote:
>
>
> Stefan Fritsch wrote:
>> On Tuesday, 6 August 2013 10:24:15, Paul Querna wrote:
>>> 1) Disabling HTTP compression
>>> 2) Separating secrets from user input
>>> 3) Randomizing secrets per request
>>> 4) Masking secrets (effectively randomizing by XORing with a random
>>> secret per request)
>>> 5) Protecting vulnerable pages with CSRF
>>> 6) Length hiding (by adding random amount of bytes to the responses)
>>> 7) Rate-limiting the requests
>>>
>>>
>>> Many of these are firmly in the domain of an application developer,
>>> but I think we should work out if there are ways we can either
>>> change default configurations or add new features to help
>>> application developers be successful:
>>
>> IMNSHO, we are way past the point where we should patch up even more
>> issues that are caused by the broken security model of web browsers.
>> Instead, browser vendors should fix this issue, for example by
>> offering a way to easily opt out of sending credentials with cross-
>> domain requests (maybe analogous to Strict Transport Security).
>>
>> I am against putting any mitigation measures into httpd that adversely
>> affect normal use, like adding chunk extensions that will likely break
>> lots of clients, or like making mod_deflate much less efficient.
>> Though if somebody comes up with a clever scheme that has no negative
>> side effects, that would be of course fine. Or if we add some rate
>> limiting facility that would be useful for many purposes.
>>
>>
>
> +1. Well put.
>

I strongly disagree.

We are a component in an ecosystem that consists of Browsers and Servers.

We are part of that broken security model, even if the root cause is
from the browsers.

In this case, I don't know if any of the proposed mitigations help;
I'd love to have an easy way to validate that, so we could bring data
to the discussion:  If it increases the attack by multiple hours, and
causes a <1% performance drop, isn't that the kind of thing that is
useful?

We should strive as a community to help, not just throw the browsers
under the bus.

Re: breach attack

Posted by Ruediger Pluem <rp...@apache.org>.

Stefan Fritsch wrote:
> On Tuesday, 6 August 2013 10:24:15, Paul Querna wrote:
>> 1) Disabling HTTP compression
>> 2) Separating secrets from user input
>> 3) Randomizing secrets per request
>> 4) Masking secrets (effectively randomizing by XORing with a random
>> secret per request)
>> 5) Protecting vulnerable pages with CSRF
>> 6) Length hiding (by adding random amount of bytes to the responses)
>> 7) Rate-limiting the requests
>>
>>
>> Many of these are firmly in the domain of an application developer,
>> but I think we should work out if there are ways we can either
>> change default configurations or add new features to help
>> application developers be successful:
> 
> IMNSHO, we are way past the point where we should patch up even more 
> issues that are caused by the broken security model of web browsers. 
> Instead, browser vendors should fix this issue, for example by 
> offering a way to easily opt out of sending credentials with cross-
> domain requests (maybe analogous to Strict Transport Security).
> 
> I am against putting any mitigation measures into httpd that adversely 
> affect normal use, like adding chunk extensions that will likely break 
> lots of clients, or like making mod_deflate much less efficient. 
> Though if somebody comes up with a clever scheme that has no negative 
> side effects, that would be of course fine. Or if we add some rate 
> limiting facility that would be useful for many purposes.
> 
> 

+1. Well put.

Regards

Rüdiger

Re: breach attack

Posted by Stefan Fritsch <sf...@sfritsch.de>.
On Tuesday, 6 August 2013 10:24:15, Paul Querna wrote:
> 1) Disabling HTTP compression
> 2) Separating secrets from user input
> 3) Randomizing secrets per request
> 4) Masking secrets (effectively randomizing by XORing with a random
> secret per request)
> 5) Protecting vulnerable pages with CSRF
> 6) Length hiding (by adding random amount of bytes to the responses)
> 7) Rate-limiting the requests
> 
> 
> Many of these are firmly in the domain of an application developer,
> but I think we should work out if there are ways we can either
> change default configurations or add new features to help
> application developers be successful:

IMNSHO, we are way past the point where we should patch up even more 
issues that are caused by the broken security model of web browsers. 
Instead, browser vendors should fix this issue, for example by 
offering a way to easily opt out of sending credentials with cross-
domain requests (maybe analogous to Strict Transport Security).

I am against putting any mitigation measures into httpd that adversely 
affect normal use, like adding chunk extensions that will likely break 
lots of clients, or like making mod_deflate much less efficient. 
Though if somebody comes up with a clever scheme that has no negative 
side effects, that would be of course fine. Or if we add some rate 
limiting facility that would be useful for many purposes.


Re: breach attack

Posted by Rainer Jung <ra...@kippdata.de>.
On 06.08.2013 19:36, Paul Querna wrote:
> On Tue, Aug 6, 2013 at 10:32 AM, Eric Covener <co...@gmail.com> wrote:
>> On Tue, Aug 6, 2013 at 1:24 PM, Paul Querna <pa...@querna.org> wrote:
>>> Hiya,
>>>
>>> Has anyone given much thought to changes in httpd to help mitigate the
>>> recently publicized breach attack:
>>>
>>> http://breachattack.com/
>>>
>>> From an httpd perspective, looking at the mitigations
>>> <http://breachattack.com/#mitigations>
>>>
>>> 1) Disabling HTTP compression
>>> 2) Separating secrets from user input
>>> 3) Randomizing secrets per request
>>> 4) Masking secrets (effectively randomizing by XORing with a random
>>> secret per request)
>>> 5) Protecting vulnerable pages with CSRF
>>> 6) Length hiding (by adding random amount of bytes to the responses)
>>> 7) Rate-limiting the requests

The attack seems to focus on response bodies. So secrets transferred via
cookies seem to be out of scope here.

Any one-time tokens should be OK too, e.g. using nonces to protect
against CSRF like we do in Apache or Tomcat for some of the pages that
need protection. The nonce is protected itself, and by reducing the rate
with which one can probe a site it should also protect the rest of the
page against the attack.

The examples I saw were attacking a user id that was included in app
pages after login. The user id itself was not sufficient for a login, so
you would still need to attack the password. Of course you are now down
to one unknown token instead of two, but the login name is often pretty
easy to get by other means.

I think there are two different cases:

a) Secrets in the response body which are not displayed to the user by
the user agent. Those are typically secrets related to credentials, like
session IDs. An example are URL-encoded session IDs, as e.g. Java apps
support them when cookies are off (;jsessionid=...). They are part of
the response bodies and you can misuse them for session takeover. Or
some continuation id contained in the body of a data request that's not
meant to get rendered. Or hidden fields in forms.

This kind of information could in principle be twisted by the server, as
long as it can undo the twist when the data is sent back (URI, query
string, form param) and before it is handed over to the application. As
far as I understood the attack, this twist doesn't have to be
cryptographically secure; it would only be used to increase the
combinatorial complexity of the attack. E.g. a session id could be
prefixed by a random token that is used to XOR the rest of the session
id. Didn't yet think deeper about that though.
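
A rough sketch of that idea (illustrative only, not httpd code; rand()
on purpose, since it doesn't need to be cryptographically strong):

  #include <stdio.h>
  #include <stdlib.h>

  /* Hex-encode each session-id byte as a random mask byte followed by
   * the id byte XORed with that mask.  The on-the-wire bytes then
   * differ on every response, so a reflected guess can never compress
   * against a stable secret.  The server reverses this on the way
   * back in, before the application sees the id. */
  static void twist_session_id(const unsigned char *id, size_t len,
                               char *out /* at least 4 * len + 1 */)
  {
      size_t i;
      out[0] = '\0';
      for (i = 0; i < len; i++) {
          unsigned char mask = (unsigned char)(rand() & 0xff);
          sprintf(out + 4 * i, "%02x%02x",
                  mask, (unsigned)(id[i] ^ mask));
      }
  }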

b) Private data that gets rendered and displayed. Here the twisting
without cooperation by the user agent would break the application
display. Examples could be the login name of a user displayed on every
page, or simply some private data like the account number or the amount
of money in your bank account when doing online banking.

Either you would need a cooperating piece of JavaScript etc., or we
could only try to do some twist to the full response that's transparent
to the UA but not transparent for the exact compressed byte content. The
answer could be something generic - for that I don't know enough about
the exact algorithm in deflate - or it could be something depending on
mime type - adding stuff like random comments etc.
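
For HTML, the mime-type-dependent variant could be as dumb as appending
a comment of random length and random content (random content, because
a constant pad would just compress away again):

  #include <stdio.h>
  #include <stdlib.h>

  /* Build an HTML comment holding 0-63 random alphanumeric bytes,
   * to be appended to the body as cheap length hiding. */
  static int padding_comment(char *buf, size_t bufsz)
  {
      static const char cs[] = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
                               "abcdefghijklmnopqrstuvwxyz0123456789";
      char pad[64];
      int i, n = rand() % 64;

      for (i = 0; i < n; i++)
          pad[i] = cs[rand() % (sizeof(cs) - 1)];
      pad[n] = '\0';
      return snprintf(buf, bufsz, "<!-- %s -->", pad);
  }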

From your list I find the rate limiting interesting. It would also have
more uses than "only" making the attack less feasible. Here we would
need to find a good pattern though, e.g. would it be enough to focus on
the client IP, with people using proxies needing to switch the
protection off behind a proxy.

>>> Many of these are firmly in the domain of an application developer,
>>> but I think we should work out if there are ways we can either change
>>> default configurations or add new features to help application
>>> developers be successful:
>>>
>>> 1) Has anyone given any thought to changing how we do chunked
>>> encoding?    Chunking is kinda like arbitrary padding we can insert
>>> into a response, without having to know anything about the actual
>>> content of the response.  What if we increased the number of chunks we
>>> create, and randomly placed them -- this wouldn't completely ruin some
>>> of the attacks, but could potentially increase the number of requests
>>> needed significantly. (We should figure out the math here?  How many
>>> random chunks of how big are an effective mitigation?)

>> Another option in this neighborhood is small/varying deflate blocks.
>> But that probably limits the usefulness of deflate to the same extent
>> that it helps.  The idea is to make it less likely that the user input
>> and secret get compressed together.
>>
>>>
>>> 2) Disabling TLS Compression by default, even in older versions.
>>> Currently we changed the default to SSLCompression off in >=2.4.3, I'd
>>> like to see us back port this to 2.2.x.
>>
>> I think it recently made it to 2.2.x, post the last release.

Yes, last Saturday but after 2.2.25:

http://svn.apache.org/viewvc?view=revision&revision=1510043

>>> 3) Disable mod_deflate compression for X content types by default on
>>> encrypted connections.  Likely HTML, maybe JSON, maybe Javascript
>>> content types?

Or: for non-static content?

>> In the code, or default conf / manual? It's the cautious thing to do,
>> but I'm not yet sure if making everyone opt back in would really mean
>> much (e.g. how many would give their app the required scrutiny
>> before opting back in)
> 
> In the code.

But currently mod_deflate is off by default, and TLS compression is off
by default in 2.4 and the next 2.2. So apart from releasing another 2.2,
we are already at disabled by default.

Regards,

Rainer


Re: breach attack

Posted by Paul Querna <pa...@querna.org>.
On Tue, Aug 6, 2013 at 10:32 AM, Eric Covener <co...@gmail.com> wrote:
> On Tue, Aug 6, 2013 at 1:24 PM, Paul Querna <pa...@querna.org> wrote:
>> Hiya,
>>
>> Has anyone given much thought to changes in httpd to help mitigate the
>> recently publicized breach attack:
>>
>> http://breachattack.com/
>>
>> From an httpd perspective, looking at the mitigations
>> <http://breachattack.com/#mitigations>
>>
>> 1) Disabling HTTP compression
>> 2) Separating secrets from user input
>> 3) Randomizing secrets per request
>> 4) Masking secrets (effectively randomizing by XORing with a random
>> secret per request)
>> 5) Protecting vulnerable pages with CSRF
>> 6) Length hiding (by adding random amount of bytes to the responses)
>> 7) Rate-limiting the requests
>>
>>
>> Many of these are firmly in the domain of an application developer,
>> but I think we should work out if there are ways we can either change
>> default configurations or add new features to help application
>> developers be successful:
>>
>> 1) Has anyone given any thought to changing how we do chunked
>> encoding?    Chunking is kinda like arbitrary padding we can insert
>> into a response, without having to know anything about the actual
>> content of the response.  What if we increased the number of chunks we
>> create, and randomly placed them -- this wouldn't completely ruin some
>> of the attacks, but could potentially increase the number of requests
>> needed significantly. (We should figure out the math here?  How many
>> random chunks of how big are an effective mitigation?)
>
> Another option in this neighborhood is small/varying deflate blocks.
> But that probably limits the usefulness of deflate to the same extent
> that it helps.  The idea is to make it less likely that the user input
> and secret get compressed together.
>
>>
>> 2) Disabling TLS Compression by default, even in older versions.
>> Currently we changed the default to SSLCompression off in >=2.4.3, I'd
>> like to see us back port this to 2.2.x.
>
> I think it recently made it to 2.2.x, post the last release.
>
>>
>> 3) Disable mod_deflate compression for X content types by default on
>> encrypted connections.  Likely HTML, maybe JSON, maybe Javascript
>> content types?
>
> In the code, or default conf / manual? It's the cautious thing to do,
> but I'm not yet sure if making everyone opt back in would really mean
> much (e.g. how many would give their app the required scrutiny
> before opting back in)

In the code.

Configuration changes take an order of magnitude longer to trickle down
to vendors than a code change... :-)

Re: breach attack

Posted by Jeff Trawick <tr...@gmail.com>.
On Tue, Aug 6, 2013 at 1:32 PM, Eric Covener <co...@gmail.com> wrote:

> On Tue, Aug 6, 2013 at 1:24 PM, Paul Querna <pa...@querna.org> wrote:
> > Hiya,
> >
> > Has anyone given much thought to changes in httpd to help mitigate the
> > recently publicized breach attack:
> >
> > http://breachattack.com/
> >
> > From an httpd perspective, looking at the mitigations
> > <http://breachattack.com/#mitigations>
> >
> > 1) Disabling HTTP compression
> > 2) Separating secrets from user input
> > 3) Randomizing secrets per request
> > 4) Masking secrets (effectively randomizing by XORing with a random
> > secret per request)
> > 5) Protecting vulnerable pages with CSRF
> > 6) Length hiding (by adding random amount of bytes to the responses)
> > 7) Rate-limiting the requests
> >
> >
> > Many of these are firmly in the domain of an application developer,
> > but I think we should work out if there are ways we can either change
> > default configurations or add new features to help application
> > developers be successful:
> >
> > 1) Has anyone given any thought to changing how we do chunked
> > encoding?    Chunking is kinda like arbitrary padding we can insert
> > into a response, without having to know anything about the actual
> > content of the response.  What if we increased the number of chunks we
> > create, and randomly placed them -- this wouldn't completely ruin some
> > of the attacks, but could potentially increase the number of requests
> > needed significantly. (We should figure out the math here?  How many
> > random chunks of how big are an effective mitigation?)
>
> Another option in this neighborhood is small/varying deflate blocks.
> But that probably limits the usefulness of deflate to the same extent
> that it helps.  The idea is to make it less likely that the user input
> and secret get compressed together.
>

A filter could vary the size of what goes down the filter chain,
impacting deflate blocks or chunk sizes or SSL compression, at some
extra expense.  Some rule-based introspection into the response could
provide guidance in some situations ???   Isn't that a strength of
mod_security?


> >
> > 2) Disabling TLS Compression by default, even in older versions.
> > Currently we changed the default to SSLCompression off in >=2.4.3, I'd
> > like to see us back port this to 2.2.x.
>
> I think it recently made it to 2.2.x, post the last release.
>
> >
> > 3) Disable mod_deflate compression for X content types by default on
> > encrypted connections.  Likely HTML, maybe JSON, maybe Javascript
> > content types?
>
> In the code, or default conf / manual? It's the cautious thing to do,
> but I'm not yet sure if making everyone opt back in would really mean
> much (e.g. how many would give their app the required scrutiny
> before opting back in)
>



-- 
Born in Roswell... married an alien...
http://emptyhammock.com/

Re: breach attack

Posted by Stefan Fritsch <sf...@sfritsch.de>.
On Saturday, 10 August 2013 18:11:09, Dirk-Willem van Gulik wrote:
> So the only fundamental thing we can do (i.e. if we want to go
> beyond guessing (future) browser and developer introduced
> vulnerabilities at higher layers) is a wee bit of
> padding/random*-cruft insertion in key places. Perhaps even doing
> so by default.

I think in general, an attacker should not be able to find out where 
headers end and where the body starts just by looking at the encrypted 
traffic. Therefore adding some header of random length should be just 
as effective as doing the padding somewhere in the body. Shouldn't it?

To defend against spurious flush buckets at the end of the headers, 
one could do some padding by doing random chunking at the start of the 
body. I think the chunk header is allowed to start with leading zeroes, 
and it seems unlikely that an implementation will misinterpret that 
(at least much less likely than clients being broken by chunk 
extensions or chunk trailers). Maybe we could add a random number of 
leading zeroes to some chunk headers?

However, before we go this way, someone should do the math on how many 
more requests are necessary for which length of padding.
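
A first back-of-the-envelope, assuming uniform padding of 0..n bytes
per response and an attacker who averages k responses per candidate
guess (someone please check the statistics):

  Var(padding) = n*(n+2)/12, so sigma ~ n/sqrt(12).  To tell apart two
  guesses whose true lengths differ by one byte, the standard error
  sigma/sqrt(k) has to be small compared to 1, i.e.

      k ~ (2*z*sigma)^2 = z^2 * n*(n+2)/3        (z ~ 2-3)

  For n = 32 and z = 3 that is roughly 3300 responses per guess -- a
  real slowdown, but only quadratic in n, so not a hard barrier.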

Re: breach attack

Posted by Dirk-Willem van Gulik <di...@webweaving.org>.
On 10 Aug 2013, at 18:14, "Steinar H. Gunderson" <sg...@bigfoot.com> wrote:

> On Sat, Aug 10, 2013 at 06:11:09PM +0200, Dirk-Willem van Gulik wrote:
>> I'd keep in mind that compression is simply an amplifier for this type of
>> attack. It makes the approach more effective. But it is not essential when
>> you have in essence a largely known plaintext surrounding a short secret
>> and an oracle. And the latter is not going to go away - current dominant
>> site development models will make this worse; as do current operational
>> models w.r.t. picking such things up early.
> 
> Wait, what's the oracle if there's no compression?


As ultimately before - the origin server (and/or the traffic you compare it to). Granted - doing this raw is not that feasible for large key lengths - but even some slight weakness elsewhere (it could be as silly as a render/timing change in the browser) will help.

Dw.

Re: breach attack

Posted by "Steinar H. Gunderson" <sg...@bigfoot.com>.
On Sat, Aug 10, 2013 at 06:11:09PM +0200, Dirk-Willem van Gulik wrote:
> I'd keep in mind that compression is simply an amplifier for this type of
> attack. It makes the approach more effective. But it is not essential when
> you have in essence a largely known plaintext surrounding a short secret
> and an oracle. And the latter is not going to go away - current dominant
> site development models will make this worse; as do current operational
> models w.r.t. picking such things up early.

Wait, what's the oracle if there's no compression?

/* Steinar */
-- 
Homepage: http://www.sesse.net/

Re: breach attack

Posted by Dirk-Willem van Gulik <di...@webweaving.org>.
On 10 Aug 2013, at 00:37, Eric Covener <co...@gmail.com> wrote:

> On Fri, Aug 9, 2013 at 5:24 PM, Steinar H. Gunderson
> <sg...@bigfoot.com> wrote:
>> On Tue, Aug 06, 2013 at 01:32:00PM -0400, Eric Covener wrote:
>>> Another option in this neighborhood is small/varying deflate blocks.
>>> But that probably limits the usefulness of deflate to the same extent
>>> that it helps.  The idea is to make it less likely that the user input
>>> and secret get compressed together.
>> 
>> It would be interesting to see how feasible “barriers” in mod_deflate would
>> be. E.g., if my application outputs
>> 
>>  <input type="hidden" name="csrftoken" DEFLATE_BARRIER_START value="1234" DEFLATE_BARRIER_END>
>> 
>> maybe mod_deflate could be taught not to compress the parts in-between.
> 
> For this attack, it would be enough to compress that section by itself
> -- a barrier between the reflected user input and the "secret".

I'd keep in mind that compression is simply an amplifier for this type of attack. It makes the approach more effective. But it is not essential when you have in essence a largely known plaintext surrounding a short secret and an oracle. And the latter is not going to go away - current dominant site development models will make this worse; as do current operational models w.r.t. picking such things up early.

So the only fundamental thing we can do (i.e. if we want to go beyond guessing (future) browser and developer introduced vulnerabilities at higher layers) is a wee bit of padding/random*-cruft insertion in key places. Perhaps even doing so by default.

And whilst on this topic - it may be good to also consider a slow migration away from RSA to at least DH and perhaps ECC where possible as our 'defaults'.

Dw.

*: or only very pseudo-random - so as not to make it simply a no-op statistically.

Re: breach attack

Posted by "Steinar H. Gunderson" <sg...@bigfoot.com>.
On Fri, Aug 09, 2013 at 06:37:50PM -0400, Eric Covener wrote:
>> It would be interesting to see how feasible “barriers” in mod_deflate would
>> be. E.g., if my application outputs
>>
>>   <input type="hidden" name="csrftoken" DEFLATE_BARRIER_START value="1234" DEFLATE_BARRIER_END>
>>
>> maybe mod_deflate could be taught not to compress the parts in-between.
> For this attack, it would be enough to compress that section by itself
> -- a barrier between the reflected user input and the "secret".

Indeed. (Or just avoid compressing it altogether.) But there's no simple
way I know of to send that signal to mod_deflate right now.

/* Steinar */
-- 
Homepage: http://www.sesse.net/

Re: breach attack

Posted by Eric Covener <co...@gmail.com>.
On Fri, Aug 9, 2013 at 5:24 PM, Steinar H. Gunderson
<sg...@bigfoot.com> wrote:
> On Tue, Aug 06, 2013 at 01:32:00PM -0400, Eric Covener wrote:
>> Another option in this neighborhood is small/varying deflate blocks.
>> But that probably limits the usefulness of deflate to the same extent
>> that it helps.  The idea is to make it less likely that the user input
>> and secret get compressed together.
>
> It would be interesting to see how feasible “barriers” in mod_deflate would
> be. E.g., if my application outputs
>
>   <input type="hidden" name="csrftoken" DEFLATE_BARRIER_START value="1234" DEFLATE_BARRIER_END>
>
> maybe mod_deflate could be taught not to compress the parts in-between.

For this attack, it would be enough to compress that section by itself
-- a barrier between the reflected user input and the "secret".
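
With zlib, that barrier already exists as a flush mode: Z_FULL_FLUSH
resets the dictionary so that nothing after it can back-reference what
came before. A sketch (output buffer setup and error handling are left
to the caller):

  #include <zlib.h>

  /* Compress the reflected user input, put up a barrier, then the
   * part containing the secret.  Z_FULL_FLUSH resets the window, so
   * the two parts can never share back-references -- at some cost in
   * compression ratio per barrier. */
  static int deflate_with_barrier(z_stream *strm,
                                  unsigned char *user, unsigned ulen,
                                  unsigned char *secret, unsigned slen)
  {
      strm->next_in = user;
      strm->avail_in = ulen;
      if (deflate(strm, Z_FULL_FLUSH) != Z_OK)   /* the barrier */
          return -1;

      strm->next_in = secret;
      strm->avail_in = slen;
      return deflate(strm, Z_NO_FLUSH) == Z_OK ? 0 : -1;
  }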

Re: breach attack

Posted by "Steinar H. Gunderson" <sg...@bigfoot.com>.
On Tue, Aug 06, 2013 at 01:32:00PM -0400, Eric Covener wrote:
> Another option in this neighborhood is small/varying deflate blocks.
> But that probably limits the usefulness of deflate to the same extent
> that it helps.  The idea is to make it less likely that the user input
> and secret get compressed together.

It would be interesting to see how feasible “barriers” in mod_deflate would
be. E.g., if my application outputs

  <input type="hidden" name="csrftoken" DEFLATE_BARRIER_START value="1234" DEFLATE_BARRIER_END>

maybe mod_deflate could be taught not to compress the parts in-between.

It's all rather speculative, though, and it only works when you know exactly
what you protect (there might be other sensitive data than the CSRF tokens)
or where the dangerous data comes from (easy to miss, for the same reasons
that XSS is easy to miss).

/* Steinar */
-- 
Homepage: http://www.sesse.net/

Re: breach attack

Posted by Eric Covener <co...@gmail.com>.
On Tue, Aug 6, 2013 at 1:24 PM, Paul Querna <pa...@querna.org> wrote:
> Hiya,
>
> Has anyone given much thought to changes in httpd to help mitigate the
> recently publicized breach attack:
>
> http://breachattack.com/
>
> From an httpd perspective, looking at the mitigations
> <http://breachattack.com/#mitigations>
>
> 1) Disabling HTTP compression
> 2) Separating secrets from user input
> 3) Randomizing secrets per request
> 4) Masking secrets (effectively randomizing by XORing with a random
> secret per request)
> 5) Protecting vulnerable pages with CSRF
> 6) Length hiding (by adding random amount of bytes to the responses)
> 7) Rate-limiting the requests
>
>
> Many of these are firmly in the domain of an application developer,
> but I think we should work out if there are ways we can either change
> default configurations or add new features to help application
> developers be successful:
>
> 1) Has anyone given any thought to changing how we do chunked
> encoding?    Chunking is kinda like arbitrary padding we can insert
> into a response, without having to know anything about the actual
> content of the response.  What if we increased the number of chunks we
> create, and randomly placed them -- this wouldn't completely ruin some
> of the attacks, but could potentially increase the number of requests
> needed significantly. (We should figure out the math here?  How many
> random chunks of how big are an effective mitigation?)

Another option in this neighborhood is small/varying deflate blocks.
But that probably limits the usefulness of deflate to the same extent
that it helps.  The idea is to make it less likely that the user input
and secret get compressed together.

>
> 2) Disabling TLS Compression by default, even in older versions.
> Currently we changed the default to SSLCompression off in >=2.4.3, I'd
> like to see us back port this to 2.2.x.

I think it recently made it to 2.2.x, post the last release.

>
> 3) Disable mod_deflate compression for X content types by default on
> encrypted connections.  Likely HTML, maybe JSON, maybe Javascript
> content types?

In the code, or default conf / manual? It's the cautious thing to do,
but I'm not yet sure if making everyone opt back in would really mean
much (e.g. how many would give their app the required scrutiny
before opting back in)