Posted to dev@httpd.apache.org by Kyriakos Zarifis <ky...@gmail.com> on 2016/12/29 01:07:29 UTC

HTTP/2 frame prioritization not honored

I am experimenting with contention between lower- and higher-priority HTTP/2
streams, and I think I am noticing that high-priority frames are not given
high priority (or at least not quickly enough).

The process is: download dummy page A, which, after onLoad, prefetches 8
objects (5 MB each) at low priority. While the first few of those are
downloading, click a link to page B (a high-priority stream), and measure
how long it takes to receive.

The client-server RTT is 20ms, and ideally, because of its higher priority,
we would want the second HTML to be received within a timeframe as close to
that RTT as possible. However, I am noticing that the second HTML is
delayed (although not completely HoL-blocked) by the objects that are being
prefetched at lower priority.

This <https://plot.ly/~kyriakos/95266/chrome-trace-tcp-notsent-lowat-1/> plots
the client-side trace (Chrome log): HTML A is stream_id 1, the 8 prefetched
objects are streams 3-17, and HTML B (the object that is delayed) is
stream_id 19.  Dots below the x axis are the GET requests for the
respectively colored streams. Dots above the x axis are received DATA
frames. GETs for the prefetched objects are sent after onLoad, and that of
stream_id 19 is sent when I click on the link, around t=1900.  As this
shows, the reply for the high-priority stream (19) arrives ~1.5s later.

Apache's log shows that it sent the reply to stream_id 19 (HTML B) ~500ms
after it got its request. During those 500ms, it wrote another ~350 frames
(~5MB) from other streams. So it honors the higher priority in the sense
that it interleaves the HTML with the objects currently being written, but it
continues to write many lower-priority frames before doing so, delaying
that high-prio HTML. In contrast, when I wait until all prefetched objects
are served and then click on the link, the logs show that the reply is
written within a few ms of the request's arrival, as expected, and the
second page loads observably faster.
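For intuition, the behavior above can be contrasted with a scheduler that re-checks stream priorities before every frame it writes. The following is a toy model, not mod_http2 code: the function name, tick-based arrivals, and priority numbers are all invented for illustration (the stream IDs just echo this experiment).

```python
from collections import deque

def strict_priority_writer(streams, arrivals):
    """Toy model: streams maps stream_id -> (priority, deque of frames),
    where a lower priority number means more urgent. arrivals maps a tick
    to a (stream_id, priority, frames) tuple for streams that show up
    mid-transfer. Returns the order in which frames hit the wire."""
    wire = []
    tick = 0
    while any(queue for _, queue in streams.values()):
        if tick in arrivals:
            sid, prio, frames = arrivals[tick]
            streams[sid] = (prio, deque(frames))
        # Re-evaluate priorities before *every* frame, so a newly arrived
        # high-priority stream preempts already-buffered low-priority data.
        sid = min((s for s, (_, queue) in streams.items() if queue),
                  key=lambda s: streams[s][0])
        wire.append((sid, streams[sid][1].popleft()))
        tick += 1
    return wire

# Stream 5: a low-priority 10-frame prefetch; stream 19: the high-priority
# HTML arriving while the prefetch is in flight.
wire = strict_priority_writer(
    {5: (16, deque(f"data{i}" for i in range(10)))},
    {3: (19, 1, ["HTML B"])})
# wire[3] == (19, "HTML B"): the HTML preempts after only 3 low-prio frames
```

Under this model, the high-priority frame goes out immediately at position 3 instead of waiting behind the remaining buffered low-priority frames, which is roughly the behavior one would expect from the server here.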

This seems to break any default prioritization or prioritization imposed by
the client, and leads to unexpected performance in different
prioritization-dependent scenarios that I have tried.

Has anyone experienced this before, and do you know what causes it and what
the right way to fix it would be?

Thanks

Re: HTTP/2 frame prioritization not honored

Posted by Eric Covener <co...@gmail.com>.
On Sat, Jan 28, 2017 at 11:01 AM, Kyriakos Zarifis
<ky...@gmail.com> wrote:
> Captures it perfectly - thanks, and thanks again for being super responsive all along and digging into this so quickly. (Also, nimble is a great word :))

+1

Re: HTTP/2 frame prioritization not honored

Posted by Kyriakos Zarifis <ky...@gmail.com>.
Captures it perfectly - thanks, and thanks again for being super responsive all along and digging into this so quickly. (Also, nimble is a great word :))


> On Jan 28, 2017, at 6:09 AM, Stefan Eissing <st...@greenbytes.de> wrote:
> 
> Kyriakos, I published a page using *your* graphics: https://icing.github.io/mod_h2/nimble.html
> Please have a look. Anything you'd like to add or change, just make a PR!
> 
> Thanks for the investigations and data!
> 
> Cheers,
> 
> Stefan
> 
>> Am 21.01.2017 um 11:50 schrieb Stefan Eissing <st...@greenbytes.de>:
>> 
>> 
>>> Am 20.01.2017 um 03:57 schrieb Kyriakos Zarifis <ky...@gmail.com>:
>>> 
>>> * ... and by "here" I meant "here" 
>> 
>> :)
>> 
>>> On Thu, Jan 19, 2017 at 6:54 PM, Kyriakos Zarifis <ky...@gmail.com> wrote:
>>> Sounds great!
>>> 
>>> 
>>> Very interested. I'd like to add a page over at https://icing.github.io/mod_h2/ about it, so that people can easily grasp what the advantages are. For that, your numbers (do you have screenshots of browser timelines maybe?) would be very welcome. Also that someone besides the module author has measured it adds credibility. :-)
>>> 
>>> If you write yourself somewhere about it, I am happy to link that.
>>> 
>>> Since anything I write would be incomplete without your description of what caused it and how you resolved it, I put together a WIP write up here, with screenshots / link to logs*. Feel free to use it as you want or let me know if you'd like more details; I'm happy to help write the complete story for a page on https://icing.github.io/mod_h2/ , which is probably the most reasonable place to gather the relevant bits.
>>> 
>>> Cheers
>>> 
>>> 
>>> 
>>> * I re-ran the tests to capture timeline screenshots, so the server logs don't exactly correspond to those screenshots, but the behaviors were the same
>>> * Note that the delays in the timeline pictures are worse than those seen in the server logs, which have been more helpful for understanding application-layer behavior. I think what's causing this is a bloated output buffer in the case where the server aggressively writes low-prio data (verified by monitoring the buffer size, which keeps increasing during the test)
>> 
>> Certainly an area to improve upon. mod_http2 is still writing so much into the socket that responsiveness suffers. This gives the best throughput performance, though. I know that the h2o server guys also experimented with interrogating TCP windows to prevent bloat. Have to look at that again.
>> 
>> Cheers,
>> 
>> Stefan Eissing
>> 
>> <green/>bytes GmbH
>> Hafenstrasse 16
>> 48155 Münster
>> www.greenbytes.de
>> 
> 
> Stefan Eissing
> 
> <green/>bytes GmbH
> Hafenstrasse 16
> 48155 Münster
> www.greenbytes.de
> 

Re: HTTP/2 frame prioritization not honored

Posted by Stefan Eissing <st...@greenbytes.de>.
Kyriakos, I published a page using *your* graphics: https://icing.github.io/mod_h2/nimble.html
Please have a look. Anything you'd like to add or change, just make a PR!

Thanks for the investigations and data!

Cheers,

Stefan

> Am 21.01.2017 um 11:50 schrieb Stefan Eissing <st...@greenbytes.de>:
> 
> 
>> Am 20.01.2017 um 03:57 schrieb Kyriakos Zarifis <ky...@gmail.com>:
>> 
>> * ... and by "here" I meant "here" 
> 
> :)
> 
>> On Thu, Jan 19, 2017 at 6:54 PM, Kyriakos Zarifis <ky...@gmail.com> wrote:
>> Sounds great!
>> 
>> 
>> Very interested. I'd like to add a page over at https://icing.github.io/mod_h2/ about it, so that people can easily grasp what the advantages are. For that, your numbers (do you have screenshots of browser timelines maybe?) would be very welcome. Also that someone besides the module author has measured it adds credibility. :-)
>> 
>> If you write yourself somewhere about it, I am happy to link that.
>> 
>> Since anything I write would be incomplete without your description of what caused it and how you resolved it, I put together a WIP write up here, with screenshots / link to logs*. Feel free to use it as you want or let me know if you'd like more details; I'm happy to help write the complete story for a page on https://icing.github.io/mod_h2/ , which is probably the most reasonable place to gather the relevant bits.
>> 
>> Cheers
>> 
>> 
>> 
>> * I re-ran the tests to capture timeline screenshots, so the server logs don't exactly correspond to those screenshots, but the behaviors were the same
>> * Note that the delays in the timeline pictures are worse than those seen in the server logs, which have been more helpful for understanding application-layer behavior. I think what's causing this is a bloated output buffer in the case where the server aggressively writes low-prio data (verified by monitoring the buffer size, which keeps increasing during the test)
> 
> Certainly an area to improve upon. mod_http2 is still writing so much into the socket that responsiveness suffers. This gives the best throughput performance, though. I know that the h2o server guys also experimented with interrogating TCP windows to prevent bloat. Have to look at that again.
> 
> Cheers,
> 
> Stefan Eissing
> 
> <green/>bytes GmbH
> Hafenstrasse 16
> 48155 Münster
> www.greenbytes.de
> 

Stefan Eissing

<green/>bytes GmbH
Hafenstrasse 16
48155 Münster
www.greenbytes.de


Re: HTTP/2 frame prioritization not honored

Posted by Stefan Eissing <st...@greenbytes.de>.
> Am 20.01.2017 um 03:57 schrieb Kyriakos Zarifis <ky...@gmail.com>:
> 
> * ... and by "here" I meant "here" 

:)

> On Thu, Jan 19, 2017 at 6:54 PM, Kyriakos Zarifis <ky...@gmail.com> wrote:
> Sounds great!
> 
> 
> Very interested. I'd like to add a page over at https://icing.github.io/mod_h2/ about it, so that people can easily grasp what the advantages are. For that, your numbers (do you have screenshots of browser timelines maybe?) would be very welcome. Also that someone besides the module author has measured it adds credibility. :-)
> 
> If you write yourself somewhere about it, I am happy to link that.
> 
> Since anything I write would be incomplete without your description of what caused it and how you resolved it, I put together a WIP write up here, with screenshots / link to logs*. Feel free to use it as you want or let me know if you'd like more details; I'm happy to help write the complete story for a page on https://icing.github.io/mod_h2/ , which is probably the most reasonable place to gather the relevant bits.
> 
> Cheers
> 
> 
> 
> * I re-ran the tests to capture timeline screenshots, so the server logs don't exactly correspond to those screenshots, but the behaviors were the same
> * Note that the delays in the timeline pictures are worse than those seen in the server logs, which have been more helpful for understanding application-layer behavior. I think what's causing this is a bloated output buffer in the case where the server aggressively writes low-prio data (verified by monitoring the buffer size, which keeps increasing during the test)

Certainly an area to improve upon. mod_http2 is still writing so much into the socket that responsiveness suffers. This gives the best throughput performance, though. I know that the h2o server guys also experimented with interrogating TCP windows to prevent bloat. Have to look at that again.
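The backlog problem can be made concrete with a toy model (hypothetical code, not from mod_http2 or any server): a kernel send buffer drains at the link rate, and the server either writes everything it has or only tops the buffer up to a low-water mark, in the spirit of Linux's TCP_NOTSENT_LOWAT option mentioned in the Chrome trace earlier in this thread.

```python
def max_backlog(pending, link_rate, ticks, lowat=None):
    """Toy model: the kernel drains `link_rate` bytes of socket buffer per
    tick; each tick the server refills it, either greedily (lowat=None) or
    only up to a low-water mark. Returns the worst-case backlog that a
    newly queued high-priority frame could sit behind."""
    backlog, worst = 0, 0
    for _ in range(ticks):
        backlog = max(0, backlog - link_rate)  # kernel sends some bytes
        if lowat is None:
            take = pending                     # write everything available
        else:
            take = min(pending, max(0, lowat - backlog))
        backlog += take
        pending -= take
        worst = max(worst, backlog)
    return worst

# 5MB of low-prio data pending, ~16KB sent per tick:
greedy = max_backlog(5_000_000, 16_384, ticks=50)                # 5_000_000
bounded = max_backlog(5_000_000, 16_384, ticks=50, lowat=32_768) # 32_768
```

A frame queued behind the backlog waits roughly backlog/link_rate, so bounding the backlog bounds responsiveness: greedy writing puts the whole 5MB in front of a new high-priority frame, while the low-water-mark writer never has more than two links' worth of data buffered.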

Cheers,

Stefan Eissing

<green/>bytes GmbH
Hafenstrasse 16
48155 Münster
www.greenbytes.de


Re: HTTP/2 frame prioritization not honored

Posted by Kyriakos Zarifis <ky...@gmail.com>.
* ... and by "here" I meant "here <https://docs.google.com/document/d/1KZyWfaLrXVvwG6REjJ_TT0KiV1u1rTLsedgN74oCGTw/edit?usp=sharing>"

On Thu, Jan 19, 2017 at 6:54 PM, Kyriakos Zarifis <ky...@gmail.com>
wrote:

> Sounds great!
>
>
>> Very interested. I'd like to add a page over at
>> https://icing.github.io/mod_h2/ about it, so that people can easily
>> grasp what the advantages are. For that, your numbers (do you have
>> screenshots of browser timelines maybe?) would be very welcome. Also that
>> someone besides the module author has measured it adds credibility. :-)
>>
>> If you write yourself somewhere about it, I am happy to link that.
>>
>
> Since anything I write would be incomplete without your description of
> what caused it and how you resolved it, I put together a WIP write up here,
> with screenshots / link to logs*. Feel free to use it as you want or let me
> know if you'd like more details; I'm happy to help write the complete story
> for a page on https://icing.github.io/mod_h2/, which is probably the most
> reasonable place to gather the relevant bits.
>
> Cheers
>
>
>
> * I re-ran the tests to capture timeline screenshots, so the server logs
> don't exactly correspond to those screenshots, but the behaviors were the
> same
> * Note that the delays in the timeline pictures are worse than those seen
> in the server logs, which have been more helpful for understanding
> application-layer behavior. I think what's causing this is a bloated output
> buffer in the case where the server aggressively writes low-prio data
> (verified by monitoring the buffer size, which keeps increasing during
> the test)
>

Re: HTTP/2 frame prioritization not honored

Posted by Kyriakos Zarifis <ky...@gmail.com>.
Sounds great!


> Very interested. I'd like to add a page over at
> https://icing.github.io/mod_h2/ about it, so that people can easily grasp
> what the advantages are. For that, your numbers (do you have screenshots of
> browser timelines maybe?) would be very welcome. Also that someone besides
> the module author has measured it adds credibility. :-)
>
> If you write yourself somewhere about it, I am happy to link that.
>

Since anything I write would be incomplete without your description of what
caused it and how you resolved it, I put together a WIP write up here, with
screenshots / link to logs*. Feel free to use it as you want or let me know
if you'd like more details; I'm happy to help write the complete story for
a page on https://icing.github.io/mod_h2/, which is probably the most
reasonable place to gather the relevant bits.

Cheers



* I re-ran the tests to capture timeline screenshots, so the server logs
don't exactly correspond to those screenshots, but the behaviors were the
same
* Note that the delays in the timeline pictures are worse than those seen
in the server logs, which have been more helpful for understanding
application-layer behavior. I think what's causing this is a bloated output
buffer in the case where the server aggressively writes low-prio data
(verified by monitoring the buffer size, which keeps increasing during
the test)

Re: HTTP/2 frame prioritization not honored

Posted by Stefan Eissing <st...@greenbytes.de>.
> Am 17.01.2017 um 20:52 schrieb Kyriakos Zarifis <ky...@gmail.com>:
> 
> Hi Stefan,
> 
> Sorry for the delay, I just got back from traveling. I just tried your new patch and indeed it gets rid of the 100ms delay: the server now serves the high-priority object only ~5-20ms (over a few runs) after it receives its request, sending only 5-6 lower-prio frames in between!

Happy to hear that!

> That's a dramatic improvement compared to what I was observing in the first experiments (~500ms delay), and I think it affects not only the scenario I was testing, but any scenario where objects of different priorities conflict. To verify this, I also tested another simple scenario in which I aggressively Server-Push several big objects when the server gets the base HTML file. Without the patch, objects embedded in the HTML (requested normally) are backed up behind a large fraction of the pushed payload and delayed considerably (500ms). With the patch this is avoided (embedded objects are served within a few ms after their request arrives, preempting Pushed objects).
> 
> If you are interested, I have logs comparing the v1.8.8 performance to the baseline, for both the scenarios ( 1: "prefetched" objects triggered at the end of a page load delaying normal objects from the next navigation, and 2: "server-pushed" objects conflicting with embedded objects on the current page)

Very interested. I'd like to add a page over at https://icing.github.io/mod_h2/ about it, so that people can easily grasp what the advantages are. For that, your numbers (do you have screenshots of browser timelines maybe?) would be very welcome. Also that someone besides the module author has measured it adds credibility. :-)

If you write yourself somewhere about it, I am happy to link that.

> Would this patch eventually make it upstream? I'd be very interested in some details on what was causing this and how you resolved it. 

This version sits in the 2.4.x maintenance branch and will be part of the next server release. We target releases every quarter and the last one was recent, so it might take a while.

The details of what happened are:
 - the callback writing out on the client connection needs to tell the nghttp2 library explicitly to stop, otherwise it will be called over and over again, as long as DATA is available.
 - once the write-out stops, mod_http2 checks whether new responses became available or streams need to be resumed. It also checks whether there are new frames from the client to read. So, basically, there are three events that need to be served/checked. I have ideas to make that more event-triggered, but it is basically polling now, with timed waits in between if nothing is happening.

The change I made first was to never write for more than 100ms at a time and then check the other things again. That is what you saw in v1.8.7.
In v1.8.8 there is a new atomic flag that signals new response data becoming available. This is checked every time before a new frame is assembled for writing out. Once the write-out itself is called, there is no more interruption at the moment until it is done. That explains the 5-20ms delay you observe now.

Ideally, this all would run poll triggered and I have some ideas how to do that. Now I need to find the time to do it.
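The mechanism can be sketched like this (illustrative only; the class and method names are invented and this is not the real mod_http2/nghttp2 code): a flag set when a new response arrives is consulted before each frame is assembled, so queued low-priority data is preempted at frame granularity rather than after a timed write budget.

```python
class Session:
    """Toy model of the v1.8.8 per-frame flag check."""

    def __init__(self, low_prio_frames):
        self.low = list(low_prio_frames)
        self.high = []
        self.new_response_ready = False  # stands in for the atomic flag

    def response_arrived(self, frame):
        # Called (from a worker thread, in the real server) when a response
        # for a new stream becomes available.
        self.high.append(frame)
        self.new_response_ready = True

    def next_frame(self):
        # The flag is consulted before *each* frame is assembled, so a
        # response that arrived since the last frame preempts the queued
        # low-priority data immediately.
        if self.new_response_ready and self.high:
            self.new_response_ready = False
            return ("high", self.high.pop(0))
        if self.low:
            return ("low", self.low.pop(0))
        return None

s = Session([f"data{i}" for i in range(5)])
frames = [s.next_frame(), s.next_frame()]  # two low-prio frames go out
s.response_arrived("HTML B")               # high-prio response shows up
frames.append(s.next_frame())              # -> ("high", "HTML B")
```

Checking before every frame (v1.8.8) instead of only after a ~100ms write budget (v1.8.7) is what shrinks the preemption delay from ~100ms to the observed ~5-20ms.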

Nevertheless, thanks to your investigation, we now have much nicer behaviour in this regard and a more responsive server!

Cheers,

Stefan
 

> On Fri, Jan 13, 2017 at 8:43 AM, Stefan Eissing <st...@greenbytes.de> wrote:
> Hi Kyriakos,
> 
> maybe you can give https://github.com/icing/mod_h2/releases/tag/v1.8.8 a try in your setup? I would be interested if it gets rid of the 100ms delay in response processing. Thanks!
> 
> Cheers,
> 
> Stefan
> 
> > Am 04.01.2017 um 19:27 schrieb Kyriakos Zarifis <ky...@gmail.com>:
> >
> > Hi Stefan,
> >
> > Yes, this is making a big, observable difference!
> >
> > Specifically, in all 3 repeats, the high priority stream is now served 100ms after it was received, writing ~100 frames (~1.6MB) of the currently served, lower-priority stream.   (was: 500ms, ~500 frames, ~7.5MB)
> >
> > In more detail, after the high-prio request is received, 20 more low-prio frames are served before the h2_task for it logs that it opens the output for the new stream. Then, another 80 low-frames are served before the high-prio reply is written. (relevant logs below)
> >
> > This already has an observable impact on the transition to the next page the moment I click on the link (goes from 1.5sec to less than 500ms), which I think is great because this tweak is relevant not just to this scenario, but to any higher level stream that begins while lower ones are served, even within a single page.
> >
> > I'm wondering if the change you made can be pushed harder to make the switch to the new stream even faster, e.g. avoiding even those 100 frames?
> >
> >
> > Thanks,
> > Kyriakos
> >
> >
> >
> > [Wed Jan 04 10:14:48.577687 2017] [http2:debug] [pid 24864] h2_stream.c(213): [client] AH03082: h2_stream(0-19): opened
> >
> > [Wed Jan 04 10:14:48.577758 2017] [http2:debug] [pid 24864] h2_session.c(452): [client] AH03066: h2_session(0): recv FRAME[HEADERS[length=39, hend=1, stream=19, eos=1]], frames=13/1486 (r/s)
> >
> > 20 x lower-prio frames:
> >
> > [Wed Jan 04 10:14:48.577864 2017] [http2:debug] [pid 24864] h2_session.c(685): [client] AH03068: h2_session(0): sent FRAME[DATA[length=16275, flags=0, stream=5, padlen=0]], frames=16/1486 (r/s)
> >
> > [Wed Jan 04 10:14:48.578775 2017] [http2:debug] [pid 24864] h2_task.c(106): [client] AH03348: h2_task(0-19): open output to GET 204.57.7.200 /preposition/nextnav.html
> >
> > 80 x lower-prio frames:
> > [Wed Jan 04 10:14:48.578790 2017] [http2:debug] [pid 24864] h2_session.c(685): [client] AH03068: h2_session(0): sent FRAME[DATA[length=16275, flags=0, stream=5, padlen=0]], frames=16/1504 (r/s)
> >
> > [Wed Jan 04 10:14:48.682168 2017] [http2:debug] [pid 24864] h2_session.c(685): [client] AH03068: h2_session(0): sent FRAME[HEADERS[length=87, hend=1, stream=19, eos=0]], frames=16/1587 (r/s)
> >
> >
> > [Wed Jan 04 10:14:48.682186 2017] [http2:debug] [pid 24864] h2_session.c(685): [client] AH03068: h2_session(0): sent FRAME[DATA[length=456, flags=1, stream=19, padlen=0]], frames=16/1588 (r/s)
> >
> >
> > On Wed, Jan 4, 2017 at 9:28 AM, Stefan Eissing <st...@greenbytes.de> wrote:
> > Hi Kyriakos,
> >
> > sorry for not replying earlier. I could find the issue you ran into, namely that mod_http2 is obsessed with the streams it already has and does not submit ready responses - until the existing streams are done or pause.
> >
> > I hope that the new release works much more nicely for you. You find it at https://github.com/icing/mod_h2/releases/tag/v1.8.7
> >
> > Thanks,
> >
> > Stefan
> >
> > > Am 02.01.2017 um 23:33 schrieb Kyriakos Zarifis <ky...@gmail.com>:
> > >
> > > Thanks Stefan!
> > >
> > > I just tried the tweaked version. I think I am seeing similar behavior, i.e. the higher-prio HTML reply is sent ~500ms after its request is received, writing ~500 lower-prio DATA frames (~7.5MB) in the meantime.
> > >
> > > Before any conclusions, I wanted to make sure I compiled/used the tweaked mod properly with my existing Apache/2.4.25 on Ubuntu, since I haven't done the process before: I couldn't find details on the right way to swap in/out module versions, so I ended up compiling v.1.8.6 and pointing to the created mod_http2.so in "/etc/apache2/mods-enabled/http2.load", but I'm really not sure that's the right way. The only way I verified it was seeing this in /var/log/apache2/error.log:
> > >
> > > "[http2:info] [pid 24935] AH03090: mod_http2 (v1.8.6-git, feats=CHPRIO+SHA256+INVHD, nghttp2 1.17.0), initializing..."
> > >
> > >
> > > Assuming this is an acceptable way to use the tweaked version of the module (please let me know if not), where should I share two apache log files (one trace for each module version) so you could verify what I see?
> > >
> > >
> > >
> > >
> > > A few relevant lines from the v1.8.6 run (similar to the stable module, AFAICT):
> > >
> > > [Mon Jan 02 13:59:59.636519 2017] [http2:debug] [pid 26718] h2_session.c(439): [client ] AH03066: h2_session(0): recv FRAME[HEADERS[length=39, hend=1, stream=19, eos=1]], frames=13/1721 (r/s)
> > > [Mon Jan 02 13:59:59.637099 2017] [http2:debug] [pid 26718] h2_task.c(106): [client ] AH03348: h2_task(0-19): open output to GET  /preposition/nextnav.html
> > >
> > > [ ... continue sending ~500 DATA frames for streams 7-11 ...]
> > >
> > > [Mon Jan 02 14:00:00.177350 2017] [http2:debug] [pid 26718] h2_session.c(661): [client ] AH03068: h2_session(0): sent FRAME[HEADERS[length=87, hend=1, stream=19, eos=0]], frames=16/2209 (r/s)
> > > [Mon Jan 02 14:00:00.177366 2017] [http2:debug] [pid 26718] h2_session.c(661): [client ] AH03068: h2_session(0): sent FRAME[DATA[length=456, flags=1, stream=19, padlen=0]], frames=16/2210 (r/s)8.6
> > >
> > > [ ... continue sending streams 11 onwards ...]
> > >
> > > Thanks!
> > >
> > > On Sat, Dec 31, 2016 at 5:43 AM, Stefan Eissing <st...@greenbytes.de> wrote:
> > > Hi Kyriakos,
> > >
> > > have a look at https://github.com/icing/mod_h2/releases/tag/v1.8.6
> > >
> > > That version flushes when at least 2 TLS records are ready to send. Also, frame sizes are now aligned to TLS record sizes. So they are influenced by the H2TLSWarmUpSize and H2TLSCoolDownSecs settings.
> > >
> > > Additionally, and highly experimental, I added H2TLSFlushCount to configure the number of records to flush. You may play around with it (default is 2) in your scenarios.
> > >
> > > I hope that this reduces buffering and makes the server more (another word for agile, pls) to stream changes. Please let me know if that had any effect on your tests.
> > >
> > > Thanks,
> > >
> > > Stefan
> > >
> > > > Am 29.12.2016 um 12:40 schrieb Kyriakos Zarifis <ky...@gmail.com>:
> > > >
> > > > That means the images should get a minimum of ~30% of the available bandwidth as long as they have data. My reading.
> > > >
> > > > Right. Makes sense.
> > >
> > > Stefan Eissing
> > >
> > > <green/>bytes GmbH
> > > Hafenstrasse 16
> > > 48155 Münster
> > > www.greenbytes.de
> > >
> > >
> >
> > Stefan Eissing
> >
> > <green/>bytes GmbH
> > Hafenstrasse 16
> > 48155 Münster
> > www.greenbytes.de
> >
> >
> 
> Stefan Eissing
> 
> <green/>bytes GmbH
> Hafenstrasse 16
> 48155 Münster
> www.greenbytes.de
> 
> 

Stefan Eissing

<green/>bytes GmbH
Hafenstrasse 16
48155 Münster
www.greenbytes.de


Re: HTTP/2 frame prioritization not honored

Posted by Kyriakos Zarifis <ky...@gmail.com>.
Hi Stefan,

Sorry for the delay, I just got back from traveling. I just tried your new
patch and indeed it gets rid of the 100ms delay: the server now serves the
high-priority object only ~5-20ms (over a few runs) after it receives its
request, sending only 5-6 lower-prio frames in between!

That's a dramatic improvement compared to what I was observing in the
first experiments (~500ms delay), and I think it affects not only the
scenario I was testing, but any scenario where objects of different
priorities conflict. To verify this, I also tested another simple
scenario in which I aggressively Server-Push several big objects when the
server gets the base HTML file. Without the patch, objects embedded in the
HTML (requested normally) are backed up behind a large fraction of the
pushed payload and delayed considerably (500ms). With the patch this is
avoided (embedded objects are served within a few ms after their request
arrives, preempting Pushed objects).

If you are interested, I have logs comparing the v1.8.8 performance to the
baseline, for both the scenarios ( 1: "prefetched" objects triggered at the
end of a page load delaying normal objects from the next navigation, and 2:
"server-pushed" objects conflicting with embedded objects on the current
page)

Would this patch eventually make it upstream? I'd be very interested in
some details on what was causing this and how you resolved it.


On Fri, Jan 13, 2017 at 8:43 AM, Stefan Eissing <
stefan.eissing@greenbytes.de> wrote:

> Hi Kyriakos,
>
> maybe you can give https://github.com/icing/mod_h2/releases/tag/v1.8.8 a
> try in your setup? I would be interested if it gets rid of the 100ms delay
> in response processing. Thanks!
>
> Cheers,
>
> Stefan
>
> > Am 04.01.2017 um 19:27 schrieb Kyriakos Zarifis <ky...@gmail.com>:
> >
> > Hi Stefan,
> >
> > Yes, this is making a big, observable difference!
> >
> > Specifically, in all 3 repeats, the high priority stream is now served
> 100ms after it was received, writing ~100 frames (~1.6MB) of currently
> served, lower-priority stream.   (was: 500ms, 500frames(~7.5MB))
> >
> > In more detail, after the high-prio request is received, 20 more
> low-prio frames are served before the h2_task for it logs that it opens the
> output for the new stream. Then, another 80 low-frames are served before
> the high-prio reply is written. (relevant logs below)
> >
> > This already has an observable impact on the transition to the next page
> the moment I click on the link (goes from 1.5sec to less than 500ms), which
> I think is great because this tweak is relevant not just to this scenario,
> but to any higher level stream that begins while lower ones are served,
> even within a single page.
> >
> > I'm wondering if the change you made can be pushed harder to make the
> switch to the new stream even faster, e.g. avoiding even those 100 frames?
> >
> >
> > Thanks,
> > Kyriakos
> >
> >
> >
> > [Wed Jan 04 10:14:48.577687 2017] [http2:debug] [pid 24864]
> h2_stream.c(213): [client] AH03082: h2_stream(0-19): opened
> >
> > [Wed Jan 04 10:14:48.577758 2017] [http2:debug] [pid 24864]
> h2_session.c(452): [client] AH03066: h2_session(0): recv
> FRAME[HEADERS[length=39, hend=1, stream=19, eos=1]], frames=13/1486 (r/s)
> >
> > 20 x lower-prio frames:
> >
> > [Wed Jan 04 10:14:48.577864 2017] [http2:debug] [pid 24864]
> h2_session.c(685): [client] AH03068: h2_session(0): sent
> FRAME[DATA[length=16275, flags=0, stream=5, padlen=0]], frames=16/1486 (r/s)
> >
> > [Wed Jan 04 10:14:48.578775 2017] [http2:debug] [pid 24864]
> h2_task.c(106): [client] AH03348: h2_task(0-19): open output to GET
> 204.57.7.200 /preposition/nextnav.html
> >
> > 80 x lower-prio frames:
> > [Wed Jan 04 10:14:48.578790 2017] [http2:debug] [pid 24864]
> h2_session.c(685): [client] AH03068: h2_session(0): sent
> FRAME[DATA[length=16275, flags=0, stream=5, padlen=0]], frames=16/1504 (r/s)
> >
> > [Wed Jan 04 10:14:48.682168 2017] [http2:debug] [pid 24864]
> h2_session.c(685): [client] AH03068: h2_session(0): sent
> FRAME[HEADERS[length=87, hend=1, stream=19, eos=0]], frames=16/1587 (r/s)
> >
> >
> > [Wed Jan 04 10:14:48.682186 2017] [http2:debug] [pid 24864]
> h2_session.c(685): [client] AH03068: h2_session(0): sent
> FRAME[DATA[length=456, flags=1, stream=19, padlen=0]], frames=16/1588 (r/s)
> >
> >
> > On Wed, Jan 4, 2017 at 9:28 AM, Stefan Eissing <
> stefan.eissing@greenbytes.de> wrote:
> > Hi Kyriakos,
> >
> > sorry for not replying earlier. I could find the issue you ran into,
> namely that mod_http2 is obsessed with the streams it already has and does
> not submit ready responses - until the existing streams are done or pause.
> >
> > I hope that the new release works much more nicely for you. You find it
> at https://github.com/icing/mod_h2/releases/tag/v1.8.7
> >
> > Thanks,
> >
> > Stefan
> >
> > > Am 02.01.2017 um 23:33 schrieb Kyriakos Zarifis <kyr.zarifis@gmail.com
> >:
> > >
> > > Thanks Stefan!
> > >
> > > I just tried the tweaked version. I think I am seeing similar
> behavior, i.e. the higher-prio HTML reply is sent ~500ms after its request
> is received, writing ~500 lower-prio DATA frames (~7.5MB) in the meantime.
> > >
> > > Before any conclusions, I wanted to make sure I compiled/used the
> tweaked mod properly with my existing Apache/2.4.25 on Ubuntu, since I
> haven't done the process before: I couldn't find details on the right way
> to swap in/out module versions, so I ended up compiling v.1.8.6 and
> pointing to the created mod_http2.so in "/etc/apache2/mods-enabled/http2.load",
> but I'm really not sure that's the right way. The only way I verified it
> was seeing this in /var/log/apache2/error.log:
> > >
> > > "[http2:info] [pid 24935] AH03090: mod_http2 (v1.8.6-git,
> feats=CHPRIO+SHA256+INVHD, nghttp2 1.17.0), initializing..."
> > >
> > >
> > > Assuming this is an acceptable way to use the tweaked version of the
> module (please let me know if not), where should I share two apache log
> files (one trace for each module version) so you could verify what I see?
> > >
> > >
> > >
> > >
> > > A few relevant lines from the v1.8.6 run (similar to the stable
> module, AFAICT):
> > >
> > > [Mon Jan 02 13:59:59.636519 2017] [http2:debug] [pid 26718]
> h2_session.c(439): [client ] AH03066: h2_session(0): recv
> FRAME[HEADERS[length=39, hend=1, stream=19, eos=1]], frames=13/1721 (r/s)
> > > [Mon Jan 02 13:59:59.637099 2017] [http2:debug] [pid 26718]
> h2_task.c(106): [client ] AH03348: h2_task(0-19): open output to GET
> /preposition/nextnav.html
> > >
> > > [ ... continue sending ~500 DATA frames for streams 7-11 ...]
> > >
> > > [Mon Jan 02 14:00:00.177350 2017] [http2:debug] [pid 26718]
> h2_session.c(661): [client ] AH03068: h2_session(0): sent
> FRAME[HEADERS[length=87, hend=1, stream=19, eos=0]], frames=16/2209 (r/s)
> > > [Mon Jan 02 14:00:00.177366 2017] [http2:debug] [pid 26718]
> h2_session.c(661): [client ] AH03068: h2_session(0): sent
> FRAME[DATA[length=456, flags=1, stream=19, padlen=0]], frames=16/2210
> (r/s)8.6
> > >
> > > [ ... continue sending streams 11 onwards ...]
> > >
> > > Thanks!
> > >
> > > On Sat, Dec 31, 2016 at 5:43 AM, Stefan Eissing <
> stefan.eissing@greenbytes.de> wrote:
> > > Hi Kyriakos,
> > >
> > > have a look at https://github.com/icing/mod_h2/releases/tag/v1.8.6
> > >
> > > That version flushes when at least 2 TLS records are ready to send.
> Also, frame sizes are now aligned to TLS record sizes. So they are
> influenced by the H2TLSWarmUpSize and H2TLSCoolDownSecs settings.
> > >
> > > Additionally, and highly experimental, I added H2TLSFlushCount to
> configure the number of records to flush. You may play around with it
> (default is 2) in your scenarios.
> > >
> > > I hope that this reduces buffering and makes the server more (another
> word for agile, pls) to stream changes. Please let me know if that had any
> effect on your tests.
> > >
> > > Thanks,
> > >
> > > Stefan
> > >
> > > > Am 29.12.2016 um 12:40 schrieb Kyriakos Zarifis <
> kyr.zarifis@gmail.com>:
> > > >
> > > > That means the images should get a minim of ~30% of the available
> bandwidth as long as they have data. My reading.
> > > >
> > > > Right. Makes sense.
> > >
> > > Stefan Eissing
> > >
> > > <green/>bytes GmbH
> > > Hafenstrasse 16
> > > 48155 Münster
> > > www.greenbytes.de
> > >
> > >
> >
> > Stefan Eissing
> >
> > <green/>bytes GmbH
> > Hafenstrasse 16
> > 48155 Münster
> > www.greenbytes.de
> >
> >
>
> Stefan Eissing
>
> <green/>bytes GmbH
> Hafenstrasse 16
> 48155 Münster
> www.greenbytes.de
>
>

Re: HTTP/2 frame prioritization not honored

Posted by Stefan Eissing <st...@greenbytes.de>.
Hi Kyriakos,

maybe you can give https://github.com/icing/mod_h2/releases/tag/v1.8.8 a try in your setup? I would be interested to see whether it gets rid of the 100ms delay in response processing. Thanks!

Cheers,

Stefan


Stefan Eissing

<green/>bytes GmbH
Hafenstrasse 16
48155 Münster
www.greenbytes.de


Re: HTTP/2 frame prioritization not honored

Posted by Kyriakos Zarifis <ky...@gmail.com>.
Hi Stefan,

Yes, this is making a big, observable difference!

Specifically, in all 3 repeats, the high-priority stream is now served
~100ms after its request is received, writing ~100 frames (~1.6MB) of the
currently served, lower-priority streams in the meantime (was: ~500ms, ~500
frames (~7.5MB)).

In more detail, after the high-prio request is received, 20 more low-prio
frames are served before the h2_task for it logs that it opens the output
for the new stream. Then another 80 low-prio frames are served before the
high-prio reply is written (relevant logs below).

This already has an observable impact on the transition to the next page
the moment I click on the link (it goes from ~1.5s to less than 500ms), which
I think is great, because this tweak is relevant not just to this scenario
but to any higher-priority stream that begins while lower-priority ones are
being served, even within a single page.

I'm wondering if the change you made can be pushed harder, to make the
switch to the new stream even faster, e.g. by avoiding even those 100 frames?


Thanks,
Kyriakos



[Wed Jan 04 10:14:48.577687 2017] [http2:debug] [pid 24864]
h2_stream.c(213): [client] AH03082: h2_stream(0-19): opened

[Wed Jan 04 10:14:48.577758 2017] [http2:debug] [pid 24864]
h2_session.c(452): [client] AH03066: h2_session(0): recv
FRAME[HEADERS[length=39, hend=1, stream=19, eos=1]], frames=13/1486 (r/s)

20 x lower-prio frames:

[Wed Jan 04 10:14:48.577864 2017] [http2:debug] [pid 24864]
h2_session.c(685): [client] AH03068: h2_session(0): sent
FRAME[DATA[length=16275, flags=0, stream=5, padlen=0]], frames=16/1486 (r/s)

[Wed Jan 04 10:14:48.578775 2017] [http2:debug] [pid 24864] h2_task.c(106):
[client] AH03348: h2_task(0-19): open output to GET 204.57.7.200
/preposition/nextnav.html
80 x lower-prio frames:

[Wed Jan 04 10:14:48.578790 2017] [http2:debug] [pid 24864]
h2_session.c(685): [client] AH03068: h2_session(0): sent
FRAME[DATA[length=16275, flags=0, stream=5, padlen=0]], frames=16/1504 (r/s)

[Wed Jan 04 10:14:48.682168 2017] [http2:debug] [pid 24864]
h2_session.c(685): [client] AH03068: h2_session(0): sent
FRAME[HEADERS[length=87, hend=1, stream=19, eos=0]], frames=16/1587 (r/s)

[Wed Jan 04 10:14:48.682186 2017] [http2:debug] [pid 24864]
h2_session.c(685): [client] AH03068: h2_session(0): sent
FRAME[DATA[length=456, flags=1, stream=19, padlen=0]], frames=16/1588 (r/s)
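
For anyone reproducing this, here is a small sketch of how the ~100ms figure
can be extracted from the mod_http2 debug log, by diffing the timestamps of a
stream's "recv FRAME[HEADERS" and "sent FRAME[HEADERS" lines. Log format
assumptions are based on the excerpts above; adjust the patterns for your
build.

```python
import re
from datetime import datetime

# Matches the leading "[Wed Jan 04 10:14:48.577758 2017]" timestamp.
TS_RE = re.compile(r"\[(\w{3} \w{3} \d{2} [\d:.]+ \d{4})\]")

def frame_time(log_lines, direction, stream_id):
    """Timestamp of the first recv/sent HEADERS frame for a stream."""
    needle = f"{direction} FRAME[HEADERS"
    sid = f"stream={stream_id},"
    for line in log_lines:
        if needle in line and sid in line:
            ts = TS_RE.search(line).group(1)
            return datetime.strptime(ts, "%a %b %d %H:%M:%S.%f %Y")
    return None

def response_delay_ms(log_lines, stream_id):
    """Milliseconds between receiving a request and sending its response."""
    recv = frame_time(log_lines, "recv", stream_id)
    sent = frame_time(log_lines, "sent", stream_id)
    if recv is None or sent is None:
        return None
    return (sent - recv).total_seconds() * 1000.0
```

Feeding it the two HEADERS lines for stream 19 above yields roughly the 100ms
delay discussed here.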


Re: HTTP/2 frame prioritization not honored

Posted by Stefan Eissing <st...@greenbytes.de>.
Hi Kyriakos,

sorry for not replying earlier. I was able to find the issue you ran into, namely that mod_http2 is obsessed with the streams it already has and does not submit ready responses until the existing streams are done or pause.

I hope that the new release works much better for you. You can find it at https://github.com/icing/mod_h2/releases/tag/v1.8.7

Thanks,

Stefan


Stefan Eissing

<green/>bytes GmbH
Hafenstrasse 16
48155 Münster
www.greenbytes.de


Re: HTTP/2 frame prioritization not honored

Posted by Kyriakos Zarifis <ky...@gmail.com>.
Thanks Stefan!

I just tried the tweaked version. I think I am seeing similar behavior,
i.e. the higher-prio HTML reply is sent ~500ms after its request is
received, writing ~500 lower-prio DATA frames (~7.5MB) in the meantime.

Before any conclusions, I wanted to make sure I compiled/used the tweaked
module properly with my existing Apache/2.4.25 on Ubuntu, since I haven't
done this before. I couldn't find details on the right way to swap module
versions in and out, so I ended up compiling v1.8.6 and pointing
"/etc/apache2/mods-enabled/http2.load" at the created mod_http2.so, but I'm
really not sure that's the right way. The only way I verified it was seeing
this in /var/log/apache2/error.log:

"[http2:info] [pid 24935] AH03090: mod_http2 (v1.8.6-git,
feats=CHPRIO+SHA256+INVHD, nghttp2 1.17.0), initializing..."


Assuming this is an acceptable way to use the tweaked version of the module
(please let me know if not), where should I share two apache log files (one
trace for each module version) so you could verify what I see?




A few relevant lines from the v1.8.6 run (similar to the stable module,
AFAICT):

[Mon Jan 02 13:59:59.636519 2017] [http2:debug] [pid 26718]
h2_session.c(439): [client ] AH03066: h2_session(0): recv
FRAME[HEADERS[length=39, hend=1, stream=19, eos=1]], frames=13/1721 (r/s)
[Mon Jan 02 13:59:59.637099 2017] [http2:debug] [pid 26718] h2_task.c(106):
[client ] AH03348: h2_task(0-19): open output to GET
 /preposition/nextnav.html

[ ... continue sending ~500 DATA frames for streams 7-11 ...]

[Mon Jan 02 14:00:00.177350 2017] [http2:debug] [pid 26718]
h2_session.c(661): [client ] AH03068: h2_session(0): sent
FRAME[HEADERS[length=87, hend=1, stream=19, eos=0]], frames=16/2209 (r/s)
[Mon Jan 02 14:00:00.177366 2017] [http2:debug] [pid 26718]
h2_session.c(661): [client ] AH03068: h2_session(0): sent
FRAME[DATA[length=456, flags=1, stream=19, padlen=0]], frames=16/2210 (r/s)

[ ... continue sending streams 11 onwards ...]

Thanks!


Re: HTTP/2 frame prioritization not honored

Posted by Stefan Eissing <st...@greenbytes.de>.
Hi Kyriakos,

have a look at https://github.com/icing/mod_h2/releases/tag/v1.8.6

That version flushes when at least 2 TLS records are ready to send. Also, frame sizes are now aligned to TLS record sizes. So they are influenced by the H2TLSWarmUpSize and H2TLSCoolDownSecs settings.

Additionally, and highly experimental, I added H2TLSFlushCount to configure the number of records to flush. You may play around with it (default is 2) in your scenarios.

I hope that this reduces buffering and makes the server more responsive to stream changes. Please let me know if that had any effect on your tests.
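
For reference, a minimal sketch of how these directives might be set while
experimenting. Directive names are as introduced in this thread;
H2TLSFlushCount is the experimental one and may change or disappear, and the
values below are illustrative, not recommendations:

```apache
# Experimental tuning for the v1.8.6 beta (sketch, not a supported config):
<IfModule http2_module>
    Protocols h2 http/1.1

    # TLS record sizing: small records until this many bytes have been
    # sent on the connection, re-warming after the cool-down period.
    H2TLSWarmUpSize   1048576
    H2TLSCoolDownSecs 1

    # Number of TLS records to buffer before flushing (default 2).
    H2TLSFlushCount   2
</IfModule>
```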

Thanks,

Stefan


Stefan Eissing

<green/>bytes GmbH
Hafenstrasse 16
48155 Münster
www.greenbytes.de


Re: HTTP/2 frame prioritization not honored

Posted by Kyriakos Zarifis <ky...@gmail.com>.
>
> That means the images should get a minimum of ~30% of the available
> bandwidth as long as they have data. My reading.


Right. Makes sense.

Re: HTTP/2 frame prioritization not honored

Posted by Stefan Eissing <st...@greenbytes.de>.
> Am 29.12.2016 um 12:10 schrieb Kyriakos Zarifis <ky...@gmail.com>:
> 
> 
> 1. priority dependencies: streams with lower prio, but depending on a higher-level stream, inherit this priority. This would make all preloads *together* have the same prio as page B. Last I knew, Chrome did not use dependencies. Were these resources PUSHed by the server?
> 
> Not PUSHed; they were requested by the browser upon A's onLoad using the ' link rel=”prefetch” ' directive in HTML A's <head>.
> 
> Right, I forgot to add this in the description: Chrome's log shows the priorities/dependencies as follows:
> HTML A : stream_id=1, parent_stream=0, weight=256
> 1st Prefetched image : stream_id=3, parent_stream=0, weight=110
> 2nd Prefetched image : stream_id=5, parent_stream=3, weight=110
> 3rd Prefetched image : stream_id=7, parent_stream=5, weight=110
> ...
> 8th Prefetched image : stream_id=17, parent_stream=15, weight=110
> HTML B : stream_id=19, parent_stream=0, weight=256
> 
> So my understanding was that the dependencies looked reasonable, i.e. the prefetched objects form a separate lower priority chain starting at the root and as such should not block anything with higher prio with the same parent (0). Am I reading this correctly?

That is different from how Firefox does things, but it should work. Notice, though, that the images are chained with weight 110 on stream 0 and HTML B has the max weight of 256. That means the images should get a minimum of ~30% of the available bandwidth as long as they have data. My reading.
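
As a quick sanity check of that ~30% reading (a sketch; the weights are taken
from the Chrome log quoted above):

```python
# HTTP/2 tree prioritization: siblings under the same parent share
# bandwidth in proportion to their weights. Per the Chrome log, the head
# of the prefetch chain (weight 110) and HTML B (weight 256) are both
# children of stream 0, so the whole chain competes with HTML B as one
# sibling.
chain_weight = 110    # 1st prefetched image (the chain's head)
html_b_weight = 256   # HTML B

chain_share = chain_weight / (chain_weight + html_b_weight)
print(f"prefetch chain share: {chain_share:.0%}")  # roughly 30%
```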

>  
> 2. mod_http2 is, in its current implementation, very eager to fill the core network buffers. That means a lot of frames have already been pre-packaged for sending before page B becomes ready to send. These are then not pre-empted by page B, but will be sent first.
> 
> If you are willing to test a github beta of the module, I might find the time next week to tweak the behaviour in 2) and you could verify the benefits in your setup. I will not promise anything, though.
> 
> This was my (as well as a few Chrome devs') guess, especially considering that HTML B is not blocked behind the prefetched images (in fact not even blocked by completing any single one of them: the images are sent serially and HTML B is correctly interleaved between frames of one of those images - it just doesn't seem to happen as quickly as it could have (~500ms earlier, in this particular case) ).
> 
> I'd be really interested in repeating the experiment with a tweaked version of the module. Please feel free to let me know if/how I can help.

I'll see what I can come up with. Thanks!

> 
> Thanks,
> Kyriakos
>  
> 
> -Stefan
> 
> 
> 
> 
> 

Stefan Eissing

<green/>bytes GmbH
Hafenstrasse 16
48155 Münster
www.greenbytes.de


Re: HTTP/2 frame prioritization not honored

Posted by Kyriakos Zarifis <ky...@gmail.com>.
> 1. priority dependencies: streams with lower prio, but depending on a
> higher-level stream, inherit this priority. This would make all preloads
> *together* have the same prio as page B. Last I knew, Chrome did not use
> dependencies. Were these resources PUSHed by the server?
>

Not PUSHed; they were requested by the browser upon A's onLoad using the '
link rel=”prefetch” ' directive in HTML A's <head>.
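
Concretely, the prefetch hints in HTML A's <head> look roughly like this
(the paths are placeholders, not the actual test URLs):

```html
<!-- In HTML A's <head>: hint the browser to fetch these at low priority
     once it is idle (here, after onLoad). Paths are placeholders. -->
<link rel="prefetch" href="/prefetch/object1.bin">
<link rel="prefetch" href="/prefetch/object2.bin">
<!-- ... six more links, one per 5MB object ... -->
```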

Right, I forgot to add this in the description: Chrome's log shows the
priorities/dependencies as follows:
HTML A : stream_id=1, parent_stream=0, weight=256
1st Prefetched image : stream_id=3, parent_stream=0, weight=110
2nd Prefetched image : stream_id=5, parent_stream=3, weight=110
3rd Prefetched image : stream_id=7, parent_stream=5, weight=110
...
8th Prefetched image : stream_id=17, parent_stream=15, weight=110
HTML B : stream_id=19, parent_stream=0, weight=256

So my understanding was that the dependencies looked reasonable, i.e. the
prefetched objects form a separate lower priority chain starting at the
root and as such should not block anything with higher prio with the same
parent (0). Am I reading this correctly?


> 2. mod_http2 is, in its current implementation, very eager to fill the
> core network buffers. That means a lot of frames have already been
> pre-packaged for sending before page B becomes ready to send. These are
> then not pre-empted by page B, but will be sent first.
>
> If you are willing to test a github beta of the module, I might find the
> time next week to tweak the behaviour in 2) and you could verify the
> benefits in your setup. I will not promise anything, though.
>

This was my (as well as a few Chrome devs') guess, especially considering
that HTML B is not blocked behind the prefetched images (in fact not even
blocked by completing any single one of them: the images are sent serially
and HTML B is correctly interleaved between frames of one of those images -
it just doesn't seem to happen as quickly as it could have (~500ms earlier,
in this particular case) ).

I'd be really interested in repeating the experiment with a tweaked version
of the module. Please feel free to let me know if/how I can help.

Thanks,
Kyriakos


>
> -Stefan
>
>
>
>
>

Re: HTTP/2 frame prioritization not honored

Posted by Stefan Eissing <st...@greenbytes.de>.
> Am 29.12.2016 um 02:07 schrieb Kyriakos Zarifis <ky...@gmail.com>:
> 
> I am experimenting with contention between lower- and higher-priority HTTP/2 streams, and I think I am noticing that high-priority frames are not given high priority (quickly enough).
> 
> The process is: download dummy page A, which, after onLoad, prefetches 8 objects (5MB each) at low priority. While the first few of those are downloading, click on a link to page B (a high-priority stream), and measure how long it takes to receive.
> 
> The client-server RTT is 20ms, and because of the higher prioritization we would ideally want the second HTML to be received within a timeframe as close to that RTT as possible. However, I am noticing that the second HTML is delayed (although not completely HoL-blocked) by objects that are being prefetched at lower priority.
> 
> This plots the client-side trace (Chrome log): HTML A is stream_id 1, the 8 prefetched objects are streams 3-17, and HTML B (the object that is delayed) is stream_id 19.  Dots below the x axis are the GET requests for the respectively colored streams. Dots above the x axis are received DATA frames. GETs for the prefetched objects are sent after onLoad, and that of stream_id 19 is sent when I click on the link, around t=1900.  As this shows, the reply for the high-priority stream (19) arrives ~1.5s later.
> 
> Apache's log shows that it sent the reply to stream_id 19 (HTML B) ~500ms after it got its request. During those 500ms, it wrote another ~350 frames (~5MB) from other streams. So it honors the higher priority in the sense that it interleaves the HTML between the frames of the currently written object, but it continues to write many lower-priority frames before it does so, delaying that high-prio HTML. In contrast, when I wait until all prefetched objects are served and then click on the link, the log shows that the reply is written within a few ms of the request's arrival, as expected, and the second page loads observably faster.
> 
> This seems to break any default prioritization or prioritization imposed by the client, and leads to unexpected performance in different prioritization-dependent scenarios that I have tried.
> 
> Has anyone experienced this before, and do you know what causes it and what the right way to fix it would be?

Reading your excellent description, there are two main things that can affect this:
1. priority dependencies: streams with lower prio, but depending on a higher-level stream, inherit this priority. This would make all preloads *together* have the same prio as page B. Last I knew, Chrome did not use dependencies. Were these resources PUSHed by the server?
2. mod_http2 is, in its current implementation, very eager to fill the core network buffers. That means a lot of frames have already been pre-packaged for sending before page B becomes ready to send. These are then not pre-empted by page B, but will be sent first.
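
A back-of-the-envelope illustration of point 2, using the numbers measured
earlier in this thread (a sketch of the arithmetic, not of mod_http2's
internals):

```python
# Frames already packaged into the network buffers cannot be pre-empted,
# so a late high-priority response waits for them to drain. Per the
# measurements in this thread: ~500 low-prio DATA frames (~7.5MB) were
# written during the ~500ms before HTML B's reply went out.
buffered_bytes = 7.5 * 1024 * 1024
observed_delay_s = 0.5

# Implied drain rate of the pre-packaged data:
throughput_mbit = buffered_bytes * 8 / observed_delay_s / 1e6
print(f"implied throughput: ~{throughput_mbit:.0f} Mbit/s")

# At that rate the delay scales linearly with how much is buffered ahead,
# which is why flushing in smaller units should shrink the delay:
for fraction in (1.0, 0.5, 0.1):
    delay_ms = buffered_bytes * fraction * 8 / (throughput_mbit * 1e6) * 1000
    print(f"buffer x{fraction}: ~{delay_ms:.0f} ms")
```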

If you are willing to test a github beta of the module, I might find the time next week to tweak the behaviour in 2) and you could verify the benefits in your setup. I will not promise anything, though.

-Stefan