Posted to dev@mina.apache.org by Lyor Goldstein <lg...@apache.org> on 2018/11/16 16:53:10 UTC

Q: your take on SSHD-868 - a.k.a. the dangers of RLE...

The issue recognizes the fact that since SSH packets are RLE (read-length
encoded), it is possible to craft malicious packets that cause memory
allocation errors by declaring extremely large data lengths (up to 32
bits' worth). We could easily implement some mechanism that executes a
sanity check whenever it decodes a "string" (a byte array preceded by a
length specification) or allocates an array based on some packet "num-of"
field (e.g., keyboard-interactive challenge count, SFTP number of listed
names, etc.).
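A minimal sketch of such a guarded decode (names and the 1 MB cap are made
up for illustration; this is not the actual SSHD API):

```java
import java.nio.ByteBuffer;

public class GuardedDecoder {
    // Hypothetical cap; not mandated by any RFC - which is exactly the problem.
    static final int MAX_STRING_LENGTH = 1 << 20; // 1 MB

    // Decode an SSH "string": a 32-bit length followed by that many bytes.
    static byte[] getString(ByteBuffer buf) {
        int len = buf.getInt();
        // Reject negative (i.e. > 2^31-1 unsigned), over-cap, or lengths
        // exceeding the data actually present - BEFORE allocating anything.
        if (len < 0 || len > MAX_STRING_LENGTH || len > buf.remaining()) {
            throw new IllegalArgumentException(
                "Bad declared string length: " + (len & 0xFFFFFFFFL));
        }
        byte[] data = new byte[len];
        buf.get(data);
        return data;
    }

    public static void main(String[] args) {
        // Well-formed packet fragment: length 5 + "hello".
        ByteBuffer ok = ByteBuffer.allocate(9).putInt(5).put("hello".getBytes());
        ok.flip();
        System.out.println(new String(getString(ok)));

        // Crafted fragment: claims ~2 GB of data but carries none.
        ByteBuffer evil = ByteBuffer.allocate(4).putInt(0x7FFFFFFF);
        evil.flip();
        try {
            getString(evil);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected");
        }
    }
}
```

The key point is that the check runs before `new byte[len]`, so the crafted
length never triggers the allocation.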

The problem with such a mechanism is that we would have to use *hardwired*
internal limitations that, while reasonably "generous", are not part of
any RFC; so in theory (if not in practice) we may be too "stingy" and
encounter some client/server that exceeds our internal limitations
without being malicious.

My question is whether the benefit of "hardening" the code against
malicious packets is worth the cost in lost flexibility and compatibility.
Especially since such a limitation would not prevent malicious packets
100% - e.g., in the case of keyboard-interactive authentication, even if
we limit the number of challenges to several hundred and the size of each
challenge or response to several KBs (...generous - remember?...), it can
still come out to several MBs - not to mention the fact that we might
block some perfectly "legal" mechanism from working just because it uses
larger numbers.
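To put rough numbers on that point, a back-of-the-envelope calculation
(the concrete limits here are picked purely for illustration):

```java
public class LimitMath {
    public static void main(String[] args) {
        int maxChallenges = 200;          // "several 100's" of challenges
        int maxChallengeBytes = 8 * 1024; // "several KB's" per challenge
        long worstCase = (long) maxChallenges * maxChallengeBytes;
        // Even these "generous but sane" caps still let a single malicious
        // keyboard-interactive exchange claim ~1.6 MB of allocations.
        System.out.println(worstCase); // prints 1638400
    }
}
```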

In view of this - should we embark on what are definitely extensive code
changes for this feature?

Re: Q: your take on SSHD-868 - a.k.a. the dangers of RLE...

Posted by Emmanuel Lecharny <el...@apache.org>.
Interesting problem...

In ApacheDS, we decided to limit the size of a PDU to prevent crazy big
(and crafted) messages from being processed. This is of course
configurable. I guess you could do the same. Note that I don't think it
makes sense to send a big chunk of data over SSH, IMO.

Otherwise, I'm not an SSH specialist, but it seems that the SSH maximum
packet size is 32 KB (https://tools.ietf.org/html/rfc4253, par. 6.1).

Is it relevant?
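A sketch of that ApacheDS-style configurable limit applied to SSH packet
lengths (the property name and default are invented for this example; note
that RFC 4253 par. 6.1 only obliges implementations to accept packets up
to 32768 bytes of payload - it does not define a hard maximum):

```java
public class PacketGuard {
    // Configurable rather than hardwired: overridable at deployment time
    // via a system property. The property name is hypothetical.
    static final int MAX_PACKET_SIZE =
        Integer.getInteger("sshd.maxPacketSize", 256 * 1024);

    static void checkPacketLength(long declaredLength) {
        if (declaredLength < 0 || declaredLength > MAX_PACKET_SIZE) {
            throw new IllegalStateException(
                "Declared packet length " + declaredLength
                + " exceeds configured limit " + MAX_PACKET_SIZE);
        }
    }

    public static void main(String[] args) {
        checkPacketLength(32 * 1024); // within the RFC 4253 mandatory support
        System.out.println("accepted");
        try {
            checkPacketLength(0xFFFFFFFFL); // crafted packet claiming ~4 GB
        } catch (IllegalStateException e) {
            System.out.println("rejected");
        }
    }
}
```

Because the limit is a deployment-time setting rather than a compiled-in
constant, a site talking to an unusual but legitimate peer can raise it
instead of being hard-blocked.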

On Fri, Nov 16, 2018 at 5:47 PM Lyor Goldstein <lg...@apache.org>
wrote:

> The issue recognizes the fact that since SSH packets are RLE (read-length
> encoded), it is possible to craft malicious packets that cause memory
> allocation errors by declaring extremely large data lengths (up to 32
> bits' worth). We could easily implement some mechanism that executes a
> sanity check whenever it decodes a "string" (a byte array preceded by a
> length specification) or allocates an array based on some packet "num-of"
> field (e.g., keyboard-interactive challenge count, SFTP number of listed
> names, etc.).
>
> The problem with such a mechanism is that we would have to use *hardwired*
> internal limitations that, while reasonably "generous", are not part of
> any RFC; so in theory (if not in practice) we may be too "stingy" and
> encounter some client/server that exceeds our internal limitations
> without being malicious.
>
> My question is whether the benefit of "hardening" the code against
> malicious packets is worth the cost in lost flexibility and compatibility.
> Especially since such a limitation would not prevent malicious packets
> 100% - e.g., in the case of keyboard-interactive authentication, even if
> we limit the number of challenges to several hundred and the size of each
> challenge or response to several KBs (...generous - remember?...), it can
> still come out to several MBs - not to mention the fact that we might
> block some perfectly "legal" mechanism from working just because it uses
> larger numbers.
>
> In view of this - should we embark on what are definitely extensive code
> changes for this feature?
>


-- 
Regards,
Cordialement,
Emmanuel Lécharny
www.iktek.com