Posted to issues@guacamole.apache.org by "Mike Jumper (Jira)" <ji...@apache.org> on 2021/04/05 20:49:00 UTC

[jira] [Commented] (GUACAMOLE-1320) Guacamole file upload with cloudflare chunking requests

    [ https://issues.apache.org/jira/browse/GUACAMOLE-1320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17315094#comment-17315094 ] 

Mike Jumper commented on GUACAMOLE-1320:
----------------------------------------

If CloudFlare restricts the maximum file upload size to 100 MB, that's an explicit choice by CloudFlare having its intended effect, not a bug in Guacamole. That said, providing an additional chunked transfer mechanism could make sense. Allowing HTTP file uploads to be disabled entirely, relying instead on the previous (slower) mechanism of tunneling uploads over the existing WebSocket connection, would also avoid such limits.
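
For illustration, a minimal client-side sketch of the "multiple requests" idea follows: the browser slices the file and sends each slice as its own HTTP request, so no single request body exceeds the proxy's limit. The endpoint path, query parameters, and chunk size below are hypothetical and are not part of Guacamole's actual REST API; a real implementation would also need server-side support to reassemble the chunks in order.

    // Hedged sketch (TypeScript): splitting a browser File into multiple HTTP
    // requests so each request body stays below a proxy's size limit, such as
    // Cloudflare's 100 MB cap. The "/api/upload-chunk" endpoint and its query
    // parameters are hypothetical, not an existing Guacamole API.

    const CHUNK_SIZE = 64 * 1024 * 1024; // 64 MB per request, safely below 100 MB

    async function uploadInChunks(file: File, uploadUrl: string): Promise<void> {
        const totalChunks = Math.ceil(file.size / CHUNK_SIZE);
        for (let index = 0; index < totalChunks; index++) {

            // Slice the next chunk out of the File without reading the whole
            // file into memory.
            const start = index * CHUNK_SIZE;
            const chunk = file.slice(start, Math.min(start + CHUNK_SIZE, file.size));

            // One HTTP request per chunk; the server would reassemble the
            // chunks in order using the chunk index and total count.
            const response = await fetch(
                uploadUrl
                    + '?name=' + encodeURIComponent(file.name)
                    + '&chunk=' + index
                    + '&total=' + totalChunks,
                { method: 'POST', body: chunk }
            );

            if (!response.ok)
                throw new Error('Chunk ' + index + ' failed with HTTP ' + response.status);
        }
    }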

> Guacamole file upload with cloudflare chunking requests
> -------------------------------------------------------
>
>                 Key: GUACAMOLE-1320
>                 URL: https://issues.apache.org/jira/browse/GUACAMOLE-1320
>             Project: Guacamole
>          Issue Type: Bug
>          Components: guacamole-client
>            Reporter: Joe Chlanda
>            Priority: Trivial
>
> When using the file upload feature of the Guacamole client behind Cloudflare, we are unable to upload files larger than 100 MB (the Cloudflare limit) because the request stays open and Cloudflare tracks its total size even though the transfer is chunked.
>
> Uploading a file larger than 100 MB results in an HTTP 413 response. Is there any way we can improve chunking to use multiple requests instead of a single HTTP request?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)