Posted to dev@tomcat.apache.org by bu...@apache.org on 2002/07/24 09:44:59 UTC

DO NOT REPLY [Bug 11117] New: - Coyote connector does not correctly deal with large PUT when using chunked transfer encoding

DO NOT REPLY TO THIS EMAIL, BUT PLEASE POST YOUR BUG 
RELATED COMMENTS THROUGH THE WEB INTERFACE AVAILABLE AT
<http://nagoya.apache.org/bugzilla/show_bug.cgi?id=11117>.
ANY REPLY MADE TO THIS MESSAGE WILL NOT BE COLLECTED AND 
INSERTED IN THE BUG DATABASE.

http://nagoya.apache.org/bugzilla/show_bug.cgi?id=11117

           Summary: Coyote connector does not correctly deal with large PUT
                    when using chunked transfer encoding
           Product: Tomcat 4
           Version: 4.1.7
          Platform: PC
        OS/Version: Linux
            Status: NEW
          Severity: Major
          Priority: Other
         Component: Connector:Coyote HTTP/1.1
        AssignedTo: tomcat-dev@jakarta.apache.org
        ReportedBy: msmith@ns.xn.com.au


I've built a small test servlet (included below) that shows this behaviour.

If I upload data to Tomcat (configured to use the HTTP/1.1 Coyote connector)
using chunked transfer encoding on a PUT, then the result (i.e. what I read
using the servlet's request.getInputStream()) is corrupted.
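
For anyone wanting to reproduce this, a client along these lines generates the
chunked PUT (a sketch only - the URL and file path are placeholders, and it
assumes a JDK whose HttpURLConnection supports setChunkedStreamingMode):

import java.io.*;
import java.net.*;

public class ChunkedPutClient
{
    public static void main(String[] args) throws IOException
    {
        // Placeholders: adjust to the deployed servlet and the test file.
        URL url = new URL("http://localhost:8080/servlet/Test");
        File file = new File("/tmp/testdata");

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("PUT");
        conn.setDoOutput(true);
        // Force chunked transfer encoding (4 kB chunks) instead of
        // buffering the body and sending a Content-Length header.
        conn.setChunkedStreamingMode(4096);

        OutputStream os = conn.getOutputStream();
        FileInputStream fis = new FileInputStream(file);
        byte[] buf = new byte[2000];
        int ret;
        while ((ret = fis.read(buf)) > 0) {
            os.write(buf, 0, ret);
        }
        fis.close();
        os.close();

        // Read the servlet's response so the request actually completes.
        System.out.println("Response code: " + conn.getResponseCode());
        conn.disconnect();
    }
}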

This _only_ happens on large PUTs. It happened on every attempt (about 10)
with files of roughly 700 kB and 1 MB, though the data is not always corrupted
in the same way. I did not see any corruption on small files (~50 kB and
smaller), though I didn't test those extensively.

The total length of the data read is exactly correct (712080 bytes in my first
test). At the first point of corruption, a chunk header ("\r\n1000\r\n")
appears in the output, followed by data that belongs about 700 bytes later in
the input. That same run of data then appears a second time, at what I think
is its correct position. So the output looks like: correct data, then the
chunk header and a repeated run of data, then the same run again in the right
place - i.e. the first instance is being produced _instead_ of the correct
data for that point in the input.
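
For clarity, "1000" is a hex chunk-size line from the chunked encoding itself
(0x1000 = 4096 bytes); the connector is supposed to consume these size lines
when decoding the body, not pass them through to the servlet. A byte search
along these lines locates the stray headers in the saved file (a sketch; the
path matches the servlet below):

import java.io.*;

public class FindChunkHeader
{
    public static void main(String[] args) throws IOException
    {
        // The stray chunk-size line observed in the corrupted output.
        byte[] needle = "\r\n1000\r\n".getBytes("ISO-8859-1");

        // Slurp the file the servlet wrote.
        FileInputStream fis = new FileInputStream("/tmp/servlet-out");
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        byte[] buf = new byte[2000];
        int ret;
        while ((ret = fis.read(buf)) > 0) {
            baos.write(buf, 0, ret);
        }
        fis.close();

        byte[] data = baos.toByteArray();
        // Naive byte search - fine for a ~700 kB test file.
        for (int i = 0; i <= data.length - needle.length; i++) {
            int j = 0;
            while (j < needle.length && data[i + j] == needle[j]) {
                j++;
            }
            if (j == needle.length) {
                System.out.println("Chunk header found at offset " + i);
            }
        }
    }
}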

Test servlet follows:

import java.io.*;

import javax.servlet.*;
import javax.servlet.http.*;

public class Test extends HttpServlet
{

    public void doPut(HttpServletRequest req, HttpServletResponse res)
                throws ServletException, IOException
    {
        // Copy the request body verbatim to disk so it can be compared
        // against the file that was uploaded.
        FileOutputStream fos = new FileOutputStream("/tmp/servlet-out");

        // By this point the connector should already have decoded the
        // chunked transfer encoding; the stream should contain body
        // bytes only, never chunk headers.
        InputStream is = req.getInputStream();

        byte[] buf = new byte[2000];
        int ret;

        // read() returns -1 at end of stream.
        while ((ret = is.read(buf)) > 0) {
            fos.write(buf, 0, ret);
        }

        fos.close();
        is.close();

        // Trivial response so the client knows the PUT completed.
        PrintWriter pw = res.getWriter();
        pw.println("Done");
        pw.flush();
        pw.close();
    }
}
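
Any client that forces chunked encoding on the upload should work; for
example, curl with something like

    curl -T /tmp/testdata -H "Transfer-Encoding: chunked" http://localhost:8080/servlet/Test

(URL and file path again placeholders) should also trigger the behaviour.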
