Posted to dev@serf.apache.org by Evgeny Kotkov <ev...@visualsvn.com> on 2017/03/17 11:54:29 UTC

[PATCH] HTTP/2: Avoid accessing a destroyed allocator for HPACK buckets

Hi,

This patch prevents Serf from accessing an already destroyed allocator
when freeing HTTP/2 HPACK buckets in error scenarios.  One possible
reproduction script for the issue is:

    (This reproduction is not 100% reliable, but close to it.
    DEBUG_DOUBLE_FREE should be defined.)

    > serf_get -c4 -n4 --http2 https://www.cloudflare.com
    Error running context: (120153) HTTP2 flow control limits exceeded
    Aborted (core dumped)

See serf_http2__stream_setup_next_request() around http2_stream.c:378.
The HPACK bucket is created using the request->allocator, and is then
added into the pump's output queue.  Certain error conditions, such as
the one above, can cause the request to be destroyed _before_ the pump
with the bucket still alive.  Attempting to destroy this bucket will then
result in accessing an already destroyed request->allocator.

While the HTTP/2 implementation can handle error situations where the
enqueued output contains buckets from a request, it requires the request
to be marked with SERF_WRITING_STARTED so that the special cleanup logic
in outgoing_request.c:47 can kick in.  This is how things currently work
for HTTP/2 requests with bodies, where the body bucket originates from
a serf_request_t.

Luckily, in this particular case with HPACK buckets and header-only
requests, it's possible to avoid using the request->allocator, and
allocate the new bucket using the HTTP/2 stream's allocator.  Thus,
the lifetime of the new bucket will no longer depend on the underlying
serf_request_t.
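
To make the lifetime problem and the proposed fix concrete, here is a
minimal, self-contained C sketch.  Every name in it (toy_allocator_t,
bucket_create, and so on) is a hypothetical stand-in, not the real serf
API; it models only the ownership relationship between a bucket and the
allocator it was created from.

```c
#include <stdlib.h>

/* Toy allocator: stands in for a serf bucket allocator owned by some
 * object (a request or a stream).  Hypothetical, simplified types. */
typedef struct toy_allocator_t {
    int alive;        /* 1 while the owning object still exists */
    int live_buckets; /* outstanding allocations */
} toy_allocator_t;

typedef struct toy_bucket_t {
    toy_allocator_t *allocator; /* buckets remember their allocator */
} toy_bucket_t;

toy_bucket_t *bucket_create(toy_allocator_t *a)
{
    toy_bucket_t *b = malloc(sizeof(*b));
    b->allocator = a;
    a->live_buckets++;
    return b;
}

/* Returns 0 on success, -1 if the allocator is already gone (the real
 * code would touch freed memory here instead of failing cleanly). */
int bucket_destroy(toy_bucket_t *b)
{
    int result = b->allocator->alive ? 0 : -1;
    if (result == 0)
        b->allocator->live_buckets--;
    free(b);
    return result;
}

/* The ordering from the report: the HPACK bucket is created from the
 * request's allocator and queued on the pump, the request is destroyed
 * first, and draining the queue afterwards hits a dead allocator. */
int demo_request_allocator_bug(void)
{
    toy_allocator_t request_allocator = { 1, 0 };
    toy_bucket_t *queued_hpack_bucket = bucket_create(&request_allocator);

    request_allocator.alive = 0;                /* request destroyed first */
    return bucket_destroy(queued_hpack_bucket); /* too late: fails */
}

/* The proposed fix: take the bucket from the stream's allocator, which
 * outlives the request, so the destruction order no longer matters. */
int demo_stream_allocator_fix(void)
{
    toy_allocator_t stream_allocator = { 1, 0 };
    toy_allocator_t request_allocator = { 1, 0 };
    toy_bucket_t *queued_hpack_bucket = bucket_create(&stream_allocator);

    request_allocator.alive = 0;                /* request dies, harmlessly */
    return bucket_destroy(queued_hpack_bucket); /* allocator still alive */
}
```

Driven from a small main(), demo_request_allocator_bug() returns -1 while
demo_stream_allocator_fix() returns 0, mirroring the before/after behavior.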

Please see the attached patch with the fix described above.  The log
message is included in the beginning of the patch file.


Regards,
Evgeny Kotkov

Re: [PATCH] HTTP/2: Avoid accessing a destroyed allocator for HPACK buckets

Posted by Evgeny Kotkov <ev...@visualsvn.com>.
Evgeny Kotkov <ev...@visualsvn.com> writes:

> Luckily, in this particular case with HPACK buckets and header-only
> requests, it's possible to avoid using the request->allocator, and
> allocate the new bucket using the HTTP/2 stream's allocator.  Thus,
> the lifetime of the new bucket will no longer depend on the underlying
> serf_request_t.
>
> Please see the attached patch with the fix described above.  The log
> message is included in the beginning of the patch file.

After giving it a fresh look, I think there is a downside to the
proposed fix.

While it solves the issue, the HTTP/2 stream's allocator is currently
set to conn->allocator.  Thus, with the fix, the per-request HPACK
buckets would be allocated using the per-connection allocator.
That is error-prone, since such buckets might not be destroyed in some
edge cases (for example, on errors or when a request is cancelled),
and that could result in a memory leak.
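
A hypothetical sketch of that leak concern, again using toy types rather
than the real serf API: if per-request buckets come from a
connection-lifetime pool and an error path skips their destruction, the
memory stays pinned until the whole connection is torn down.

```c
#include <stdlib.h>

/* Toy connection-lifetime pool; hypothetical names.  Allocations are
 * reclaimed individually, or en masse when the pool itself dies. */
typedef struct conn_pool_t {
    size_t live; /* outstanding allocations */
} conn_pool_t;

void *conn_pool_alloc(conn_pool_t *pool, size_t size)
{
    pool->live++;
    return malloc(size);
}

void conn_pool_free(conn_pool_t *pool, void *p)
{
    pool->live--;
    free(p);
}

/* Simulate several cancelled requests on one connection, each of which
 * skips destroying its HPACK bucket.  Returns how many buckets were
 * still pinned to the connection after all the requests were gone. */
size_t demo_conn_pool_leak(void)
{
    enum { NREQ = 4 };
    conn_pool_t conn_pool = { 0 };
    void *pinned[NREQ];
    size_t live_after_requests;
    int i;

    for (i = 0; i < NREQ; i++) {
        /* per-request bucket taken from the per-connection pool */
        void *hpack_bucket = conn_pool_alloc(&conn_pool, 64);
        /* the request is cancelled; the bucket is never destroyed */
        pinned[i] = hpack_bucket;
    }
    live_after_requests = conn_pool.live; /* NREQ buckets still pinned */

    /* only tearing down the whole connection reclaims them */
    for (i = 0; i < NREQ; i++)
        conn_pool_free(&conn_pool, pinned[i]);

    return live_after_requests;
}
```

Here demo_conn_pool_leak() returns 4: all four per-request buckets
remain charged to the connection until its teardown, which is exactly
the growth pattern that would worry a long-lived HTTP/2 connection.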

With this in mind, I would like to think about it for some time.


Regards,
Evgeny Kotkov