Posted to dev@mina.apache.org by "Lyor Goldstein (Jira)" <ji...@apache.org> on 2022/08/17 17:32:00 UTC

[jira] [Comment Edited] (SSHD-1287) Use the maximum packet size of the communication partner

    [ https://issues.apache.org/jira/browse/SSHD-1287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17580907#comment-17580907 ] 

Lyor Goldstein edited comment on SSHD-1287 at 8/17/22 5:31 PM:
---------------------------------------------------------------

[~twolf] While I agree in principle with your analysis, I am not sure we want to use the window sizes without some internal limit. My concern is that we could introduce an OOM "attack" by someone - intentionally, or due to misconfiguration - requesting a read window size of hundreds of MB or even GBs (BTW - what about the virtually unlimited window size of 2**32-1 that we support...). When the code tries to accommodate such a request, it may eventually run out of memory - perhaps not in a single allocation, but as a cumulative queue of pending packets. This is why we impose (sometimes artificial) limits whenever an input value serves as an allocation size. We could use a larger (configurable) limit - e.g., 256K by default instead of 64K - and also address the behavior you described
{quote}
10 read requests to read 100kB at offsets 0, 100kB, 200kB, 300kB ....
{quote}
and also
{quote}
If a bufferSize > 126kB (twice the server's read size limit) is used, SftpInputStreamAsync has a bug that overwrites data during this synchronous re-load of missing data, so the file is corrupted in the transfer.
{quote}
I believe this will achieve the desired balance between optimized throughput and protecting the code from inadvertent (or intentional) OOM.
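The limit discussed above can be sketched as follows. This is a hypothetical illustration, not actual Apache MINA SSHD code - the class name, method, and the 256K cap are assumptions made up for this example:

```java
// Hypothetical sketch: clamp a peer-requested read window size to a
// configurable internal limit, so that a huge (or virtually unlimited,
// 2^32-1) request cannot drive the process toward OOM. All names here
// are illustrative, not part of the SSHD API.
public class WindowLimitSketch {

    // Assumed configurable cap - e.g., 256K instead of the 64K default.
    static final long MAX_READ_WINDOW = 256 * 1024L;

    static long effectiveWindow(long requestedByPeer) {
        // Treat the request as an unsigned 32-bit value (RFC 4254 allows
        // up to 2^32-1) and never allocate more than our own cap.
        return Math.min(requestedByPeer & 0xFFFFFFFFL, MAX_READ_WINDOW);
    }

    public static void main(String[] args) {
        System.out.println(effectiveWindow(32 * 1024L));  // small request honored: 32768
        System.out.println(effectiveWindow(0xFFFFFFFFL)); // "unlimited" request clamped: 262144
    }
}
```

The point of the clamp is that the peer's announcement only influences, and never dictates, how much memory we commit.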



> Use the maximum packet size of the communication partner
> --------------------------------------------------------
>
>                 Key: SSHD-1287
>                 URL: https://issues.apache.org/jira/browse/SSHD-1287
>             Project: MINA SSHD
>          Issue Type: Bug
>    Affects Versions: 2.8.0
>            Reporter: Ryosuke Kanda
>            Assignee: Thomas Wolf
>            Priority: Minor
>         Attachments: ClientMain.java, ConsoleLog, ServerMain.java
>
>
> It appears that SSHD uses the maximum packet size presented by its communication partner when sizing its own reception requests.
> RFC 4254 contains the following statement, where "maximum packet size" can be read as "the maximum packet size that the sender is willing to accept" (as in "initial window size").
> {code:java}
> Section 5.1 Opening a Channel
> The 'maximum packet size' specifies the maximum size of 
> an individual data packet that can be sent to the sender.{code}
> That is, when requesting data to be sent to itself, the client/server should respect its own declared maximum packet size, not the one declared by its peer.
>  
> I encountered this issue when I was using "freeSSHd" as an SFTP server.
> The version of freeSSHd is 1.3.1.
> The behavior can also be reproduced with a server built on SSHD.
> Attached are the client and server sources, as well as the client debug logs (up to the first read request).
> When communicating with these, the client knows that its maximum packet size is "32768" and the server's maximum packet size is "65536".
> {code:java}
> // client window
> [main] DEBUG org.apache.sshd.common.channel.Window - init(Window[client/local](SftpChannelSubsystem[id=0, recipient=-1]-ClientSessionImpl[user@localhost/127.0.0.1:10022][sftp])) size=2097152, max=2097152, packet=32768
> // server window
> [sshd-SshClient[5702b3b1]-nio2-thread-3] DEBUG org.apache.sshd.common.channel.Window - init(Window[client/remote](SftpChannelSubsystem[id=0, recipient=0]-ClientSessionImpl[user@localhost/127.0.0.1:10022][sftp])) size=2097152, max=2097152, packet=65536 {code}
> And when requesting to receive a file, it issues read requests of 65536 bytes at a time. This means it asks the server to send packets that exceed its own declared limit of 32768 bytes.
> {code:java}
> [main] TRACE org.apache.sshd.sftp.client.impl.SftpInputStreamAsync - sendRequests(SftpInputStreamAsync[ClientSessionImpl[user@localhost/127.0.0.1:10022]][/data4M.txt]) enqueue pending ack: SftpAckData[id=103, offset=0, length=65536] {code}
>  
> I hope this report is helpful.
> This text was machine-translated, so please excuse any unnatural phrasing.
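The behavior the reporter describes can be sketched as follows. This is a hypothetical illustration, not the actual SftpInputStreamAsync code - the class and method names are made up; only the 32768/65536 values come from the attached logs:

```java
// Hypothetical sketch: when issuing SFTP read requests, bound the
// per-request length by the maximum packet size the *receiving* side
// declared (the data flows back to the requester), rather than by the
// peer's declared maximum. Names here are illustrative only.
public class ReadChunkSketch {

    static long readChunk(long desired, long localMaxPacket) {
        // The response data is sent to us, so our own declared
        // maximum packet size is the limit that applies.
        return Math.min(desired, localMaxPacket);
    }

    public static void main(String[] args) {
        // Values from the logs above: local packet=32768, remote packet=65536.
        // Using the remote limit (65536) is the reported bug; the request
        // should instead be capped at the local limit.
        System.out.println(readChunk(65536L, 32768L)); // 32768
    }
}
```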



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@mina.apache.org
For additional commands, e-mail: dev-help@mina.apache.org