Posted to dev@httpd.apache.org by "Takashima, Makoto" <ta...@poem.ocn.ne.jp> on 2002/10/13 13:40:21 UTC

Problem with non-blocking write to pipe

Hi,

I found a problem with non-blocking writes to a pipe.

The current code (2.0.43) is as follows.

------------------------------------------------------------
httpd-2.0.43/srclib/apr/file_io/unix/readwrite.c:apr_file_write()
------------------------------------------------------------
        do {
            rv = write(thefile->filedes, buf, *nbytes);
        } while (rv == (apr_size_t)-1 && errno == EINTR);
#ifdef USE_WAIT_FOR_IO
        if (rv == (apr_size_t)-1 &&
            (errno == EAGAIN || errno == EWOULDBLOCK) &&
            thefile->timeout != 0) {
            apr_status_t arv = apr_wait_for_io_or_timeout(thefile, NULL, 0);
            if (arv != APR_SUCCESS) {
                *nbytes = 0;
                return arv;
            }
            else {
                do {
                    rv = write(thefile->filedes, buf, *nbytes);
                } while (rv == (apr_size_t)-1 && errno == EINTR);
            }
        }
#endif
------------------------------------------------------------

This seems to assume that the write request can never fail once
apr_wait_for_io_or_timeout() has succeeded, but that is not true
for a pipe.

"The Single UNIX® Specification, Version 2" says:

------------------------------------------------------------
Write requests to a pipe or FIFO will be handled the same as
a regular file with the following exceptions: 

[snip]

If the O_NONBLOCK flag is set, write() requests will be
handled differently, in the following ways: 

  - The write() function will not block the thread. 

  - A write request for {PIPE_BUF} or fewer bytes will have
    the following effect: If there is sufficient space
    available in the pipe, write() will transfer all the
    data and return the number of bytes requested. Otherwise,
    write() will transfer no data and return -1 with errno
    set to [EAGAIN]. 
------------------------------------------------------------

This means that if the room left in the pipe is smaller than *nbytes
and *nbytes is less than or equal to {PIPE_BUF}, write() can return
-1 with errno set to EAGAIN, and apr_file_write() simply fails.
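
Here is a small stand-alone test, just a rough sketch to show the
behaviour (the calls are standard POSIX; the exact point at which the
pipe becomes full depends on the platform's buffering, but the final
{PIPE_BUF}-sized write must fail whole per the spec):

------------------------------------------------------------
/* Sketch: fill a non-blocking pipe, free up less than {PIPE_BUF}
 * bytes of room, then try to write {PIPE_BUF} bytes.  Per the spec
 * quoted above, the write transfers nothing and fails with EAGAIN;
 * it never comes back short. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>
#include <limits.h>

#ifndef PIPE_BUF
#define PIPE_BUF 512    /* _POSIX_PIPE_BUF lower bound */
#endif

int main(void)
{
    int fds[2];
    char buf[PIPE_BUF];
    ssize_t rv;

    memset(buf, 'x', sizeof(buf));
    if (pipe(fds) == -1)
        return 1;
    fcntl(fds[1], F_SETFL, O_NONBLOCK);

    /* fill the pipe: big chunks first, then single bytes */
    while ((rv = write(fds[1], buf, sizeof(buf))) > 0)
        ;
    while ((rv = write(fds[1], buf, 1)) > 0)
        ;

    /* drain a little, so there is some room, but less than {PIPE_BUF} */
    if (read(fds[0], buf, PIPE_BUF / 2) == -1)
        return 1;

    /* the atomic {PIPE_BUF}-byte request still fails outright */
    rv = write(fds[1], buf, PIPE_BUF);
    printf("write of %d bytes: rv=%ld, errno=%d (EAGAIN=%d)\n",
           PIPE_BUF, (long)rv, errno, EAGAIN);
    return 0;
}
------------------------------------------------------------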

I found this problem on HP-UX 11.0, whose PIPE_BUF is 8192, with a
CGI that receives a POST request of more than 8 kbytes.

This problem can be fixed with the following code; however, I do not
know whether there is a better solution than looping.

------------------------------------------------------------
        do {
            rv = write(thefile->filedes, buf, *nbytes);
        } while (rv == (apr_size_t)-1 && errno == EINTR);
#ifdef USE_WAIT_FOR_IO
        if (rv == (apr_size_t)-1 &&
            (errno == EAGAIN || errno == EWOULDBLOCK) &&
            thefile->timeout != 0) {
            apr_status_t arv = apr_wait_for_io_or_timeout(thefile, NULL, 0);
            if (arv != APR_SUCCESS) {
                *nbytes = 0;
                return arv;
            }
            else {
                do {
                    rv = write(thefile->filedes, buf, *nbytes);

                    /* write request of {PIPE_BUF} bytes or less may fail */
                    /* because it is atomic when writing to pipe or FIFO  */
                    while (rv == (apr_size_t)-1 &&
                           *nbytes < PIPE_BUF && errno == EAGAIN)
                    {
                        apr_sleep(10000);       /* sleep ~10ms */
                        rv = write(thefile->filedes, buf, *nbytes);
                    }
                } while (rv == (apr_size_t)-1 && errno == EINTR);
            }
        }
#endif
------------------------------------------------------------


--
takasima@poem.ocn.ne.jp


Re: Problem with non-blocking write to pipe

Posted by "Takashima, Makoto" <ta...@poem.ocn.ne.jp>.
One correction.

On Sun, 13 Oct 2002 20:40:21 +0900, takasima@poem.ocn.ne.jp wrote:
> 
>                     /* write request of {PIPE_BUF} bytes or less may fail */
>                     /* because it is atomic when writing to pipe or FIFO  */
>                     while (rv == (apr_size_t)-1 &&
>                            *nbytes < PIPE_BUF && errno == EAGAIN)

it should be:

                    while (rv == (apr_size_t)-1 &&
                           *nbytes <= PIPE_BUF && errno == EAGAIN)

--
takasima@poem.ocn.ne.jp


Re: Problem with non-blocking write to pipe

Posted by "Takashima, Makoto" <ta...@poem.ocn.ne.jp>.
Hi,

On 13 Oct 2002 08:47:08 -0400, trawick@attglobal.net wrote:

> > httpd-2.0.43/srclib/apr/file_io/unix/readwrite.c:apr_file_write()
> 
> FYI...  this discussion belongs on dev@apr.apache.org...  the
> srclib/apr tree in the httpd-2.0 directory is a copy of the APR
> project code...

Sorry, I will be careful not to send to the wrong mailing list.

> note that most existing users of APR pipes don't care about atomic
> writes...

I agree.

> I wonder if it is appropriate to have a pipe setting that
> says that atomic is important...  if really important, I guess we'd
> have to sleep before retry...  otherwise maybe we should try to write
> a smaller amount to the pipe...  it would be a shame to waste our
> timeslice, which could cause the reader to have to block too once the
> other side is empty...

I do not think atomicity is important for Apache, because it does
not share the pipe with another process (or thread).

However, all Unix systems should behave the same way here, so we
need a solution anyway.


--
takasima@poem.ocn.ne.jp


Re: Problem with non-blocking write to pipe

Posted by Jeff Trawick <tr...@attglobal.net>.
"Takashima, Makoto" <ta...@poem.ocn.ne.jp> writes:

> Hi,
> 
> I found a problem with non-blocking writes to a pipe.
> 
> The current code (2.0.43) is as follows.
> 
> ------------------------------------------------------------
> httpd-2.0.43/srclib/apr/file_io/unix/readwrite.c:apr_file_write()

FYI...  this discussion belongs on dev@apr.apache.org...  the
srclib/apr tree in the httpd-2.0 directory is a copy of the APR
project code...

> This seems to assume that the write request can never fail once
> apr_wait_for_io_or_timeout() has succeeded, but that is not true
> for a pipe.
...
>   - A write request for {PIPE_BUF} or fewer bytes will have
>     the following effect: If there is sufficient space
>     available in the pipe, write() will transfer all the
>     data and return the number of bytes requested. Otherwise,
>     write() will transfer no data and return -1 with errno
>     set to [EAGAIN]. 
...

boy, this sucks :)  no syscall to block until timeout occurs or we can
write the whole message...

note that most existing users of APR pipes don't care about atomic
writes...  I wonder if it is appropriate to have a pipe setting that
says that atomic is important...  if really important, I guess we'd
have to sleep before retry...  otherwise maybe we should try to write
a smaller amount to the pipe...  it would be a shame to waste our
timeslice, which could cause the reader to have to block too once the
other side is empty...
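
the "smaller amount" thing might look something like this rough,
untested sketch (same names as the snippet quoted below, just to show
the shape of it, not a real patch)...  a short write is OK since
apr_file_write() hands the byte count actually written back through
*nbytes anyway...

------------------------------------------------------------
        do {
            rv = write(thefile->filedes, buf, *nbytes);

            /* write() got EAGAIN even though the wait said "writable"
             * (the atomic {PIPE_BUF} rule); fall back to successively
             * smaller amounts instead of sleeping */
            if (rv == (apr_size_t)-1 && errno == EAGAIN) {
                apr_size_t try_bytes = *nbytes / 2;

                while (rv == (apr_size_t)-1 && errno == EAGAIN &&
                       try_bytes > 0) {
                    rv = write(thefile->filedes, buf, try_bytes);
                    try_bytes /= 2;
                }
            }
        } while (rv == (apr_size_t)-1 && errno == EINTR);
------------------------------------------------------------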

> I found this problem on HP-UX 11.0, whose PIPE_BUF is 8192, with a
> CGI that receives a POST request of more than 8 kbytes.
> 
> This problem can be fixed with the following code; however, I do not
> know whether there is a better solution than looping.
> 
> ------------------------------------------------------------
>         do {
>             rv = write(thefile->filedes, buf, *nbytes);
>         } while (rv == (apr_size_t)-1 && errno == EINTR);
> #ifdef USE_WAIT_FOR_IO
>         if (rv == (apr_size_t)-1 &&
>             (errno == EAGAIN || errno == EWOULDBLOCK) &&
>             thefile->timeout != 0) {
>             apr_status_t arv = apr_wait_for_io_or_timeout(thefile, NULL, 0);
>             if (arv != APR_SUCCESS) {
>                 *nbytes = 0;
>                 return arv;
>             }
>             else {
>                 do {
>                     rv = write(thefile->filedes, buf, *nbytes);
> 
>                     /* write request of {PIPE_BUF} bytes or less may fail */
>                     /* because it is atomic when writing to pipe or FIFO  */
>                     while (rv == (apr_size_t)-1 &&
>                            *nbytes < PIPE_BUF && errno == EAGAIN)
>                     {
>                         apr_sleep(10000);       /* sleep ~10ms */
>                         rv = write(thefile->filedes, buf, *nbytes);
>                     }
>                 } while (rv == (apr_size_t)-1 && errno == EINTR);
>             }
>         }
> #endif
> ------------------------------------------------------------
> 
> 
> --
> takasima@poem.ocn.ne.jp
> 

-- 
Jeff Trawick | trawick@attglobal.net
Born in Roswell... married an alien...
