Posted to issues@trafficserver.apache.org by GitBox <gi...@apache.org> on 2020/09/11 13:55:55 UTC

[GitHub] [trafficserver] cheluskin opened a new issue #7179: 8.1.x - 10.0.x slice.so and range plugin uses all memory on the server, but this is not a leak

cheluskin opened a new issue #7179:
URL: https://github.com/apache/trafficserver/issues/7179


   The Range plugin works well on its own, but when I add slice.so to the config something strange happens: TS consumes all memory on the server and then restarts.
   It looks like slice.so creates far more slices than the client needs, which can be seen in the squid log: over 400 slices are created in a single second, even though the client did not request them and could not even have received them.
   An equivalent nginx configuration serves the same workload without this kind of overload.
   
   ```
   memory 8G
   cpu 4 cores
   network 1Gb
   ```
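A rough back-of-the-envelope, assuming the figures reported here (each slice response in the squid log below is 1,048,938 bytes, i.e. a ~1 MiB slice block plus headers, and roughly 400 of them are generated per second): if those slices are buffered faster than the single client can drain them over a 1 Gb link, the 8 GB of RAM is exhausted in well under a minute.

```python
# Back-of-the-envelope: how fast undrained slice responses could fill RAM.
# Figures taken from the report above; this is an estimate, not a measurement.
slice_bytes = 1_048_938      # per-slice response size seen in the squid log
slices_per_sec = 400         # approximate slice creation rate reported
ram_bytes = 8 * 1024**3      # 8 GB of server memory

fill_rate = slice_bytes * slices_per_sec          # bytes buffered per second if nothing drains
seconds_to_exhaust = ram_bytes / fill_rate

print(f"{fill_rate / 1024**2:.0f} MiB/s -> ~{seconds_to_exhaust:.0f} s to exhaust RAM")
# prints: 400 MiB/s -> ~20 s to exhaust RAM
```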
   
   
   <details><summary>records.config</summary>
   
   ```
   CONFIG proxy.config.ssl.CA.cert.path STRING /opt/trafficserver/etc/trafficserver/ssl
   CONFIG proxy.config.http.enable_http_info INT 1
   CONFIG proxy.config.log.rolling_enabled INT 2
   CONFIG proxy.config.log.logging_enabled INT 3
   CONFIG proxy.config.log.rolling_size_mb INT 100
   CONFIG proxy.config.http.enable_http_stats INT 1
   CONFIG proxy.config.http.connect_attempts_timeout INT 10
   CONFIG proxy.config.diags.debug.enabled INT 0
   CONFIG proxy.config.ssl.client.private_key.path STRING /opt/trafficserver/etc/trafficserver/ssl
   CONFIG proxy.config.http.slow.log.threshold INT 60000
   CONFIG proxy.config.admin.user_id STRING ats
   CONFIG proxy.config.http.cache.required_headers INT 1
   CONFIG proxy.config.http.response_server_str STRING Nginx
   CONFIG proxy.config.log.max_space_mb_headroom INT 50
   CONFIG proxy.config.ssl.server.private_key.path STRING /opt/trafficserver/etc/trafficserver/ssl
   CONFIG proxy.config.ssl.server.ticket_key.filename STRING NULL
   CONFIG proxy.config.http.server_ports STRING 80 80:ipv6 443:proto=http:ssl 443:ipv6:proto=http:ssl
   CONFIG proxy.config.dns.round_robin_nameservers INT 0
   CONFIG proxy.config.output.logfile.rolling_enabled INT 2
   CONFIG proxy.config.ssl.client.CA.cert.path STRING /opt/trafficserver/etc/trafficserver/ssl
   CONFIG proxy.config.cache.ram_cache.algorithm INT 0
   CONFIG proxy.config.body_factory.template_sets_dir STRING /opt/trafficserver/etc/trafficserver/body_factory
   CONFIG proxy.config.output.logfile.rolling_size_mb INT 100
   CONFIG proxy.config.ssl.server.cert.path STRING /opt/trafficserver/etc/trafficserver/ssl
   CONFIG proxy.config.ssl.ocsp.enabled INT 1
   CONFIG proxy.config.url_remap.remap_required INT 1
   CONFIG proxy.config.http.cache.ignore_server_no_cache INT 0
   CONFIG proxy.config.log.max_space_mb_for_logs INT 10000
   CONFIG proxy.config.log.logfile_dir STRING /opt/trafficserver/var/log/trafficserver
   CONFIG proxy.config.ssl.client.cert.path STRING /opt/trafficserver/etc/trafficserver/ssl
   ```
   </details>
   
   <details><summary>remap.config</summary>
   
   ```
   map    https://cdn.fps.mycdn.com/     http://myorigin.com/ @plugin=cache_range_requests.so
   ```
   
   </details>
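For context, the slice plugin's documentation recommends chaining it ahead of cache_range_requests.so on the remap line, so slice-generated block requests are served as cacheable ranges. The exact failing line is not shown above (the remap.config posted is the slice-free variant), but it presumably looked something like this hypothetical reconstruction:

```
map    https://cdn.fps.mycdn.com/     http://myorigin.com/ @plugin=slice.so @plugin=cache_range_requests.so
```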
   
   <details><summary>squid log</summary>
   
   ```
   1599825124.717 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.717 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.717 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.718 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.718 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.718 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.719 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.719 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.719 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.720 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.720 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.721 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.721 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.722 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.722 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.723 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.723 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.724 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.724 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.725 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.725 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.726 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.726 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.726 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.727 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.727 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.727 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.728 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.728 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.729 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.729 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.729 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.730 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.730 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.730 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.731 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.731 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.732 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.732 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.733 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.733 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.734 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.734 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.735 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.735 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.736 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.736 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.737 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.737 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.738 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.739 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.739 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.740 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.741 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.741 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.742 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.743 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.743 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.744 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.745 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.746 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.747 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.747 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.748 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.749 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.750 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.751 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.752 0 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.753 1 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.754 1 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.755 1 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.756 1 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.757 1 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.758 1 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.759 1 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.761 1 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.762 1 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.763 1 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.764 1 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.765 1 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.767 1 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.768 1 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.769 1 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.771 1 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.772 1 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.773 1 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.775 1 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.776 1 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.778 1 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.779 1 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.781 1 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.782 1 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.784 1 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.785 1 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.787 1 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.789 1 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.790 1 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.792 1 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.794 1 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.795 1 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.797 1 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.799 1 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.801 1 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.802 1 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.804 1 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.806 1 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.808 1 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.810 1 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.812 1 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.813 1 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.815 1 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.817 1 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.819 2 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.821 2 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.823 1 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.825 2 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.827 2 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   1599825124.830 2 217.66.156.43 TCP_MEM_HIT/206 1048938 GET http://myorigin.com/contents/videos/19000/19023/19023.mp4 - NONE/- video/mp4
   ```
   
   </details>
   
   <details><summary>Full configuration records from crashlog</summary>
   
   ```
   Traffic Server Configuration Records:
   proxy.process.http.completed_requests 139811
   proxy.process.http.total_incoming_connections 47593
   proxy.process.http.total_client_connections 47593
   proxy.process.http.total_client_connections_ipv4 47593
   proxy.process.http.total_client_connections_ipv6 0
   proxy.process.http.total_server_connections 96037
   proxy.process.http.total_parent_proxy_connections 0
   proxy.process.http.total_parent_retries 0
   proxy.process.http.total_parent_switches 0
   proxy.process.http.total_parent_retries_exhausted 0
   proxy.process.http.total_parent_marked_down_count 0
   proxy.process.http.avg_transactions_per_client_connection 1.004158
   proxy.process.http.avg_transactions_per_server_connection 1.093080
   proxy.process.http.transaction_counts.errors.pre_accept_hangups 0
   proxy.process.http.transaction_totaltime.errors.pre_accept_hangups 0.000000
   proxy.process.http.incoming_requests 139252
   proxy.process.http.outgoing_requests 105493
   proxy.process.http.incoming_responses 105483
   proxy.process.http.invalid_client_requests 537
   proxy.process.http.missing_host_hdr 0
   proxy.process.http.get_requests 135875
   proxy.process.http.head_requests 3880
   proxy.process.http.trace_requests 0
   proxy.process.http.options_requests 3
   proxy.process.http.post_requests 30
   proxy.process.http.put_requests 0
   proxy.process.http.push_requests 0
   proxy.process.http.delete_requests 0
   proxy.process.http.purge_requests 0
   proxy.process.http.connect_requests 3
   proxy.process.http.extension_method_requests 52
   proxy.process.http.broken_server_connections 0
   proxy.process.http.cache_lookups 46114
   proxy.process.http.cache_writes 8410
   proxy.process.http.cache_updates 0
   proxy.process.http.cache_deletes 0
   proxy.process.http.tunnels 93138
   proxy.process.http.throttled_proxy_only 0
   proxy.process.http.parent_proxy_transaction_time 0
   proxy.process.http.user_agent_request_header_total_size 36056839
   proxy.process.http.user_agent_response_header_total_size 34116302
   proxy.process.http.user_agent_request_document_total_size 0
   proxy.process.http.user_agent_response_document_total_size 47699991828
   proxy.process.http.origin_server_request_header_total_size 32452876
   proxy.process.http.origin_server_response_header_total_size 11464218
   proxy.process.http.origin_server_request_document_total_size 0
   proxy.process.http.origin_server_response_document_total_size 12684977620
   proxy.process.http.parent_proxy_request_total_bytes 0
   proxy.process.http.parent_proxy_response_total_bytes 0
   proxy.process.http.pushed_response_header_total_size 0
   proxy.process.http.pushed_document_total_size 0
   proxy.process.http.response_document_size_100 4500
   proxy.process.http.response_document_size_1K 651
   proxy.process.http.response_document_size_3K 0
   proxy.process.http.response_document_size_5K 0
   proxy.process.http.response_document_size_10K 2
   proxy.process.http.response_document_size_1M 134598
   proxy.process.http.response_document_size_inf 60
   proxy.process.http.request_document_size_100 139811
   proxy.process.http.request_document_size_1K 0
   proxy.process.http.request_document_size_3K 0
   proxy.process.http.request_document_size_5K 0
   proxy.process.http.request_document_size_10K 0
   proxy.process.http.request_document_size_1M 0
   proxy.process.http.request_document_size_inf 0
   proxy.process.http.user_agent_speed_bytes_per_sec_100 15208
   proxy.process.http.user_agent_speed_bytes_per_sec_1K 155
   proxy.process.http.user_agent_speed_bytes_per_sec_10K 85
   proxy.process.http.user_agent_speed_bytes_per_sec_100K 71
   proxy.process.http.user_agent_speed_bytes_per_sec_1M 643
   proxy.process.http.user_agent_speed_bytes_per_sec_10M 1920
   proxy.process.http.user_agent_speed_bytes_per_sec_100M 121720
   proxy.process.http.origin_server_speed_bytes_per_sec_100 88
   proxy.process.http.origin_server_speed_bytes_per_sec_1K 1
   proxy.process.http.origin_server_speed_bytes_per_sec_10K 4
   proxy.process.http.origin_server_speed_bytes_per_sec_100K 11
   proxy.process.http.origin_server_speed_bytes_per_sec_1M 750
   proxy.process.http.origin_server_speed_bytes_per_sec_10M 1649
   proxy.process.http.origin_server_speed_bytes_per_sec_100M 26731
   proxy.process.http.total_transactions_time 4085427719986
   proxy.process.http.cache_hit_fresh 33602
   proxy.process.http.cache_hit_mem_fresh 11382
   proxy.process.http.cache_hit_revalidated 0
   proxy.process.http.cache_hit_ims 1
   proxy.process.http.cache_hit_stale_served 0
   proxy.process.http.cache_miss_cold 12222
   proxy.process.http.cache_miss_changed 0
   proxy.process.http.cache_miss_client_no_cache 0
   proxy.process.http.cache_miss_client_not_cacheable 92715
   proxy.process.http.cache_miss_ims 3
   proxy.process.http.cache_read_error 0
   proxy.process.http.tcp_hit_count_stat 33602
   proxy.process.http.tcp_hit_user_agent_bytes_stat 35043110867
   proxy.process.http.tcp_hit_origin_server_bytes_stat 0
   proxy.process.http.tcp_miss_count_stat 104937
   proxy.process.http.tcp_miss_user_agent_bytes_stat 12196592160
   proxy.process.http.tcp_miss_origin_server_bytes_stat 12198974603
   proxy.process.http.tcp_expired_miss_count_stat 0
   proxy.process.http.tcp_expired_miss_user_agent_bytes_stat 0
   proxy.process.http.tcp_expired_miss_origin_server_bytes_stat 0
   proxy.process.http.tcp_refresh_hit_count_stat 0
   proxy.process.http.tcp_refresh_hit_user_agent_bytes_stat 0
   proxy.process.http.tcp_refresh_hit_origin_server_bytes_stat 0
   proxy.process.http.tcp_refresh_miss_count_stat 0
   proxy.process.http.tcp_refresh_miss_user_agent_bytes_stat 0
   proxy.process.http.tcp_refresh_miss_origin_server_bytes_stat 0
   proxy.process.http.tcp_client_refresh_count_stat 0
   proxy.process.http.tcp_client_refresh_user_agent_bytes_stat 0
   proxy.process.http.tcp_client_refresh_origin_server_bytes_stat 0
   proxy.process.http.tcp_ims_hit_count_stat 1
   proxy.process.http.tcp_ims_hit_user_agent_bytes_stat 781
   proxy.process.http.tcp_ims_hit_origin_server_bytes_stat 0
   proxy.process.http.tcp_ims_miss_count_stat 3
   proxy.process.http.tcp_ims_miss_user_agent_bytes_stat 2665
   proxy.process.http.tcp_ims_miss_origin_server_bytes_stat 2963
   proxy.process.http.err_client_abort_count_stat 363
   proxy.process.http.err_client_abort_user_agent_bytes_stat 9538351
   proxy.process.http.err_client_abort_origin_server_bytes_stat 9570286
   proxy.process.http.err_client_read_error_count_stat 314
   proxy.process.http.err_client_read_error_user_agent_bytes_stat 520530824
   proxy.process.http.err_client_read_error_origin_server_bytes_stat 520346862
   proxy.process.http.err_connect_fail_count_stat 0
   proxy.process.http.err_connect_fail_user_agent_bytes_stat 0
   proxy.process.http.err_connect_fail_origin_server_bytes_stat 0
   proxy.process.http.misc_count_stat 591
   proxy.process.http.misc_user_agent_bytes_stat 389321
   proxy.process.http.http_misc_origin_server_bytes_stat 0
   proxy.process.http.background_fill_bytes_aborted_stat 0
   proxy.process.http.background_fill_bytes_completed_stat 0
   proxy.process.http.cache_write_errors 0
   proxy.process.http.cache_read_errors 0
   proxy.process.http.100_responses 0
   proxy.process.http.101_responses 0
   proxy.process.http.1xx_responses 0
   proxy.process.http.200_responses 95942
   proxy.process.http.201_responses 0
   proxy.process.http.202_responses 0
   proxy.process.http.203_responses 0
   proxy.process.http.204_responses 0
   proxy.process.http.205_responses 0
   proxy.process.http.206_responses 42504
   proxy.process.http.2xx_responses 138446
   proxy.process.http.300_responses 0
   proxy.process.http.301_responses 4
   proxy.process.http.302_responses 0
   proxy.process.http.303_responses 0
   proxy.process.http.304_responses 4
   proxy.process.http.305_responses 0
   proxy.process.http.307_responses 0
   proxy.process.http.308_responses 0
   proxy.process.http.3xx_responses 8
   proxy.process.http.400_responses 67
   proxy.process.http.401_responses 0
   proxy.process.http.402_responses 0
   proxy.process.http.403_responses 0
   proxy.process.http.404_responses 611
   proxy.process.http.405_responses 0
   proxy.process.http.406_responses 0
   proxy.process.http.407_responses 0
   proxy.process.http.408_responses 0
   proxy.process.http.409_responses 0
   proxy.process.http.410_responses 0
   proxy.process.http.411_responses 0
   proxy.process.http.412_responses 0
   proxy.process.http.413_responses 0
   proxy.process.http.414_responses 0
   proxy.process.http.415_responses 0
   proxy.process.http.416_responses 2
   proxy.process.http.4xx_responses 680
   proxy.process.http.500_responses 0
   proxy.process.http.501_responses 0
   proxy.process.http.502_responses 0
   proxy.process.http.503_responses 0
   proxy.process.http.504_responses 0
   proxy.process.http.505_responses 0
   proxy.process.http.5xx_responses 0
   proxy.process.http.transaction_counts.hit_fresh 33603
   proxy.process.http.transaction_totaltime.hit_fresh 364.222992
   proxy.process.http.transaction_counts.hit_fresh.process 33603
   proxy.process.http.transaction_totaltime.hit_fresh.process 349.196991
   proxy.process.http.transaction_counts.hit_revalidated 0
   proxy.process.http.transaction_totaltime.hit_revalidated 0.000000
   proxy.process.http.transaction_counts.miss_cold 12225
   proxy.process.http.transaction_totaltime.miss_cold 836.161987
   proxy.process.http.transaction_counts.miss_not_cacheable 92715
   proxy.process.http.transaction_totaltime.miss_not_cacheable 613.293030
   proxy.process.http.transaction_counts.miss_changed 0
   proxy.process.http.transaction_totaltime.miss_changed 0.000000
   proxy.process.http.transaction_counts.miss_client_no_cache 0
   proxy.process.http.transaction_totaltime.miss_client_no_cache 0.000000
   proxy.process.http.transaction_counts.errors.aborts 677
   proxy.process.http.transaction_totaltime.errors.aborts 2200.395996
   proxy.process.http.transaction_counts.errors.possible_aborts 0
   proxy.process.http.transaction_totaltime.errors.possible_aborts 0.000000
   proxy.process.http.transaction_counts.errors.connect_failed 0
   proxy.process.http.transaction_totaltime.errors.connect_failed 0.000000
   proxy.process.http.transaction_counts.errors.other 591
   proxy.process.http.transaction_totaltime.errors.other 1.807000
   proxy.process.http.transaction_counts.other.unclassified 0
   proxy.process.http.transaction_totaltime.other.unclassified 0.000000
   proxy.process.http.disallowed_post_100_continue 0
   proxy.process.http.total_x_redirect_count 0
   proxy.process.https.incoming_requests 4922
   proxy.process.https.total_client_connections 4769
   proxy.process.http.origin_connections_throttled_out 0
   proxy.process.http.post_body_too_large 0
   proxy.process.http.milestone.ua_begin 1600
   proxy.process.http.milestone.ua_first_read 4688
   proxy.process.http.milestone.ua_read_header_done 6157
   proxy.process.http.milestone.ua_begin_write 629258
   proxy.process.http.milestone.ua_close 3962584
   proxy.process.http.milestone.server_first_connect 0
   proxy.process.http.milestone.server_connect 0
   proxy.process.http.milestone.server_connect_end 219904
   proxy.process.http.milestone.server_begin_write 236207
   proxy.process.http.milestone.server_first_read 336676
   proxy.process.http.milestone.server_read_header_done 336915
   proxy.process.http.milestone.server_close 1334804
   proxy.process.http.milestone.cache_open_read_begin 0
   proxy.process.http.milestone.cache_open_read_end 235475
   proxy.process.http.milestone.cache_open_write_begin 0
   proxy.process.http.milestone.cache_open_write_end 0
   proxy.process.http.milestone.dns_lookup_begin 0
   proxy.process.http.milestone.dns_lookup_end 0
   proxy.process.http.milestone.sm_start 0
   proxy.process.http.milestone.sm_finish 4015881
   proxy.process.net.calls_to_read 338093
   proxy.process.net.calls_to_read_nodata 72888
   proxy.process.net.calls_to_readfromnet 0
   proxy.process.net.calls_to_readfromnet_afterpoll 0
   proxy.process.net.calls_to_write 162604
   proxy.process.net.calls_to_write_nodata 6508
   proxy.process.net.calls_to_writetonet 99423
   proxy.process.net.calls_to_writetonet_afterpoll 99423
   proxy.process.net.inactivity_cop_lock_acquire_failure 0
   proxy.process.net.net_handler_run 375089
   proxy.process.net.read_bytes 8811506195
   proxy.process.net.write_bytes 6641394736
   proxy.process.net.fastopen_out.attempts 0
   proxy.process.net.fastopen_out.successes 0
   proxy.process.socks.connections_successful 0
   proxy.process.socks.connections_unsuccessful 0
   proxy.process.net.connections_throttled_in 0
   proxy.process.net.connections_throttled_out 0
   proxy.process.cache.read_per_sec 87.875023
   proxy.process.cache.write_per_sec 23.965916
   proxy.process.cache.KB_read_per_sec 92534.703125
   proxy.process.cache.KB_write_per_sec 50934.136719
   proxy.process.hostdb.total_lookups 12355
   proxy.process.hostdb.total_hits 12286
   proxy.process.hostdb.ttl 0.000000
   proxy.process.hostdb.ttl_expires 180
   proxy.process.hostdb.re_dns_on_reload 0
   proxy.process.dns.total_dns_lookups 211
   proxy.process.dns.lookup_avg_time 0
   proxy.process.dns.lookup_successes 211
   proxy.process.dns.fail_avg_time 0
   proxy.process.dns.lookup_failures 0
   proxy.process.dns.retries 0
   proxy.process.dns.max_retries_exceeded 0
   proxy.process.http2.total_client_streams 0
   proxy.process.http2.total_transactions_time 0
   proxy.process.http2.total_client_connections 0
   proxy.process.http2.connection_errors 0
   proxy.process.http2.stream_errors 0
   proxy.process.http2.session_die_default 0
   proxy.process.http2.session_die_other 0
   proxy.process.http2.session_die_eos 0
   proxy.process.http2.session_die_active 0
   proxy.process.http2.session_die_inactive 0
   proxy.process.http2.session_die_error 0
   proxy.process.http2.session_die_high_error_rate 0
   proxy.process.http2.max_settings_per_frame_exceeded 0
   proxy.process.http2.max_settings_per_minute_exceeded 0
   proxy.process.http2.max_settings_frames_per_minute_exceeded 0
   proxy.process.http2.max_ping_frames_per_minute_exceeded 0
   proxy.process.http2.max_priority_frames_per_minute_exceeded 0
   proxy.process.http2.insufficient_avg_window_update 0
   proxy.process.log.event_log_error_ok 0
   proxy.process.log.event_log_error_skip 0
   proxy.process.log.event_log_error_aggr 0
   proxy.process.log.event_log_error_full 0
   proxy.process.log.event_log_error_fail 0
   proxy.process.log.event_log_access_ok 139811
   proxy.process.log.event_log_access_skip 0
   proxy.process.log.event_log_access_aggr 0
   proxy.process.log.event_log_access_full 0
   proxy.process.log.event_log_access_fail 0
   proxy.process.log.num_sent_to_network 0
   proxy.process.log.num_lost_before_sent_to_network 0
   proxy.process.log.num_received_from_network 0
   proxy.process.log.num_flush_to_disk 139763
   proxy.process.log.num_lost_before_flush_to_disk 0
   proxy.process.log.bytes_lost_before_preproc 0
   proxy.process.log.bytes_sent_to_network 0
   proxy.process.log.bytes_lost_before_sent_to_network 0
   proxy.process.log.bytes_received_from_network 0
   proxy.process.log.bytes_flush_to_disk 29170728
   proxy.process.log.bytes_lost_before_flush_to_disk 0
   proxy.process.log.bytes_written_to_disk 29170728
   proxy.process.log.bytes_lost_before_written_to_disk 0
   proxy.process.ssl.user_agent_other_errors 1570
   proxy.process.ssl.user_agent_expired_cert 0
   proxy.process.ssl.user_agent_revoked_cert 0
   proxy.process.ssl.user_agent_unknown_cert 0
   proxy.process.ssl.user_agent_cert_verify_failed 0
   proxy.process.ssl.user_agent_bad_cert 0
   proxy.process.ssl.user_agent_decryption_failed 0
   proxy.process.ssl.user_agent_wrong_version 1
   proxy.process.ssl.user_agent_unknown_ca 0
   proxy.process.ssl.origin_server_other_errors 0
   proxy.process.ssl.origin_server_expired_cert 0
   proxy.process.ssl.origin_server_revoked_cert 0
   proxy.process.ssl.origin_server_unknown_cert 0
   proxy.process.ssl.origin_server_cert_verify_failed 0
   proxy.process.ssl.origin_server_bad_cert 0
   proxy.process.ssl.origin_server_decryption_failed 0
   proxy.process.ssl.origin_server_wrong_version 0
   proxy.process.ssl.origin_server_unknown_ca 0
   proxy.process.ssl.total_handshake_time 155242018186
   proxy.process.ssl.total_success_handshake_count_in 4769
   proxy.process.ssl.total_success_handshake_count_out 0
   proxy.process.ssl.total_tickets_created 321
   proxy.process.ssl.total_tickets_verified 275
   proxy.process.ssl.total_tickets_not_found 9
   proxy.process.ssl.total_tickets_renewed 0
   proxy.process.ssl.total_tickets_verified_old_key 0
   proxy.process.ssl.total_ticket_keys_renewed 0
   proxy.process.ssl.ssl_session_cache_hit 1484
   proxy.process.ssl.ssl_session_cache_new_session 2689
   proxy.process.ssl.ssl_session_cache_miss 962
   proxy.process.ssl.ssl_session_cache_eviction 0
   proxy.process.ssl.ssl_session_cache_lock_contention 0
   proxy.process.ssl.default_record_size_count 0
   proxy.process.ssl.max_record_size_count 0
   proxy.process.ssl.redo_record_size_count 0
   proxy.process.ssl.ssl_error_want_write 7053
   proxy.process.ssl.ssl_error_want_read 14424
   proxy.process.ssl.ssl_error_want_x509_lookup 0
   proxy.process.ssl.ssl_error_syscall 419
   proxy.process.ssl.ssl_error_read_eos 0
   proxy.process.ssl.ssl_error_zero_return 4104
   proxy.process.ssl.ssl_error_ssl 1571
   proxy.process.ssl.ssl_sni_name_set_failure 0
   proxy.process.ssl.ssl_ocsp_revoked_cert_stat 0
   proxy.process.ssl.ssl_ocsp_unknown_cert_stat 0
   proxy.process.ssl.ssl_ocsp_refreshed_cert 178
   proxy.process.ssl.ssl_ocsp_refresh_cert_failure 0
   proxy.config.ssl.CA.cert.path /opt/trafficserver/etc/trafficserver/ssl
   proxy.config.http.enable_http_info 1
   proxy.config.log.rolling_enabled 2
   proxy.config.log.logging_enabled 3
   proxy.config.log.rolling_size_mb 100
   proxy.config.http.enable_http_stats 1
   proxy.config.http.connect_attempts_timeout 10
   proxy.config.diags.debug.enabled 0
   proxy.config.ssl.client.private_key.path /opt/trafficserver/etc/trafficserver/ssl
   proxy.config.http.slow.log.threshold 60000
   proxy.config.admin.user_id ats
   proxy.config.http.cache.required_headers 1
   proxy.config.http.response_server_str Nginx
   proxy.config.log.max_space_mb_headroom 50
   proxy.config.ssl.server.private_key.path /opt/trafficserver/etc/trafficserver/ssl
   proxy.config.ssl.server.ticket_key.filename NULL
   proxy.config.http.server_ports 80 80:ipv6 443:proto=http:ssl 443:ipv6:proto=http:ssl
   proxy.config.dns.round_robin_nameservers 0
   proxy.config.output.logfile.rolling_enabled 2
   proxy.config.ssl.client.CA.cert.path /opt/trafficserver/etc/trafficserver/ssl
   proxy.config.cache.ram_cache.algorithm 0
   proxy.config.body_factory.template_sets_dir /opt/trafficserver/etc/trafficserver/body_factory
   proxy.config.output.logfile.rolling_size_mb 100
   proxy.config.ssl.server.cert.path /opt/trafficserver/etc/trafficserver/ssl
   proxy.config.ssl.ocsp.enabled 1
   proxy.config.url_remap.remap_required 1
   proxy.config.http.cache.ignore_server_no_cache 0
   proxy.config.log.max_space_mb_for_logs 10000
   proxy.config.log.logfile_dir /opt/trafficserver/var/log/trafficserver
   proxy.config.ssl.client.cert.path /opt/trafficserver/etc/trafficserver/ssl
   proxy.config.product_company Apache Software Foundation
   proxy.config.product_vendor Apache
   proxy.config.product_name Traffic Server
   proxy.config.proxy_name ns3085188
   proxy.config.bin_path bin
   proxy.config.proxy_binary traffic_server
   proxy.config.manager_binary traffic_manager
   proxy.config.proxy_binary_opts -M
   proxy.config.env_prep NULL
   proxy.config.config_dir etc/trafficserver
   proxy.config.local_state_dir var/trafficserver
   proxy.config.alarm_email ats
   proxy.config.syslog_facility LOG_DAEMON
   proxy.config.core_limit -1
   proxy.config.crash_log_helper traffic_crashlog
   proxy.config.mlock_enabled 0
   proxy.config.dump_mem_info_frequency 0
   proxy.config.http_ui_enabled 0
   proxy.config.cache.max_disk_errors 5
   proxy.config.output.logfile traffic.out
   proxy.config.output.logfile_perm rw-r--r--
   proxy.config.output.logfile.rolling_interval_sec 3600
   proxy.config.res_track_memory 0
   proxy.config.memory.max_usage 0
   proxy.config.system.file_max_pct 0.900000
   proxy.config.exec_thread.autoconfig 1
   proxy.config.exec_thread.autoconfig.scale 1.500000
   proxy.config.exec_thread.limit 2
   proxy.config.exec_thread.affinity 1
   proxy.config.accept_threads 1
   proxy.config.task_threads 2
   proxy.config.thread.default.stacksize 1048576
   proxy.config.restart.active_client_threshold 0
   proxy.config.restart.stop_listening 0
   proxy.config.stop.shutdown_timeout 0
   proxy.config.thread.max_heartbeat_mseconds 60
   proxy.config.srv_enabled 0
   proxy.config.http.cache.ignore_accept_mismatch 2
   proxy.config.http.cache.ignore_accept_language_mismatch 2
   proxy.config.http.cache.ignore_accept_encoding_mismatch 2
   proxy.config.http.cache.ignore_accept_charset_mismatch 2
   proxy.config.http.websocket.max_number_of_connections -1
   proxy.config.http.number_of_redirections 0
   proxy.config.http.redirect_use_orig_cache_key 0
   proxy.config.http.redirect_host_no_port 1
   proxy.config.http.post_copy_size 2048
   proxy.config.diags.debug.tags http|dns
   proxy.config.diags.debug.client_ip NULL
   proxy.config.diags.action.enabled 0
   proxy.config.diags.action.tags NULL
   proxy.config.diags.show_location 1
   proxy.config.diags.output.diag E
   proxy.config.diags.output.debug E
   proxy.config.diags.output.status L
   proxy.config.diags.output.note L
   proxy.config.diags.output.warning L
   proxy.config.diags.output.error L
   proxy.config.diags.output.fatal L
   proxy.config.diags.output.alert L
   proxy.config.diags.output.emergency L
   proxy.config.diags.logfile_perm rw-r--r--
   proxy.config.diags.logfile.rolling_enabled 0
   proxy.config.diags.logfile.rolling_interval_sec 3600
   proxy.config.diags.logfile.rolling_size_mb 10
   proxy.config.lm.pserver_timeout_secs 1
   proxy.config.lm.pserver_timeout_msecs 0
   proxy.config.admin.autoconf.localhost_only 1
   proxy.config.admin.admin_user admin
   proxy.config.admin.number_config_bak 3
   proxy.config.admin.cli_path cli
   proxy.config.admin.api.restricted 0
   proxy.config.udp.free_cancelled_pkts_sec 10
   proxy.config.udp.periodic_cleanup 10
   proxy.config.udp.send_retries 0
   proxy.config.udp.threads 0
   proxy.config.process_manager.timeout 5
   proxy.config.alarm.bin example_alarm_bin.sh
   proxy.config.alarm.abs_path NULL
   proxy.config.alarm.script_runtime 5
   proxy.config.header.parse.no_host_url_redirect NULL
   proxy.config.http.parse.allow_non_http 1
   proxy.config.http.allow_half_open 1
   proxy.config.http.enabled 1
   proxy.config.http.wait_for_cache 0
   proxy.config.http.insert_request_via_str 1
   proxy.config.http.insert_response_via_str 0
   proxy.config.http.request_via_str ApacheTrafficServer/8.1.1
   proxy.config.http.response_via_str ApacheTrafficServer/8.1.1
   proxy.config.http.response_server_enabled 1
   proxy.config.http.no_dns_just_forward_to_parent 0
   proxy.config.http.uncacheable_requests_bypass_parent 1
   proxy.config.http.no_origin_server_dns 0
   proxy.config.http.use_client_target_addr 0
   proxy.config.http.use_client_source_port 0
   proxy.config.http.keep_alive_enabled_in 1
   proxy.config.http.keep_alive_enabled_out 1
   proxy.config.http.keep_alive_post_out 1
   proxy.config.http.chunking_enabled 1
   proxy.config.http.chunking.size 4096
   proxy.config.http.flow_control.enabled 0
   proxy.config.http.flow_control.high_water 0
   proxy.config.http.flow_control.low_water 0
   proxy.config.http.post.check.content_length.enabled 1
   proxy.config.http.strict_uri_parsing 0
   proxy.config.http.send_http11_requests 1
   proxy.config.http.send_100_continue_response 0
   proxy.config.http.disallow_post_100_continue 0
   proxy.config.http.server_session_sharing.match both
   proxy.config.http.server_session_sharing.pool thread
   proxy.config.http.default_buffer_size 8
   proxy.config.http.default_buffer_water_mark 32768
   proxy.config.http.server_max_connections 0
   proxy.config.http.server_tcp_init_cwnd 0
   proxy.config.http.origin_max_connections 0
   proxy.config.http.origin_max_connections_queue -1
   proxy.config.http.origin_min_keep_alive_connections 0
   proxy.config.http.attach_server_session_to_client 0
   proxy.config.net.max_connections_in 30000
   proxy.config.net.max_connections_active_in 10000
   proxy.config.http.referer_filter 0
   proxy.config.http.referer_format_redirect 0
   proxy.config.http.referer_default_redirect http://www.example.com/
   proxy.config.http.auth_server_session_private 1
   proxy.config.http.max_post_size 0
   proxy.config.http.parent_proxy_routing_enable 0
   proxy.config.http.parent_proxies NULL
   proxy.config.http.parent_proxy.file parent.config
   proxy.config.http.parent_proxy.retry_time 300
   proxy.config.http.parent_proxy.fail_threshold 10
   proxy.config.http.parent_proxy.total_connect_attempts 4
   proxy.config.http.parent_proxy.per_parent_connect_attempts 2
   proxy.config.http.parent_proxy.connect_attempts_timeout 30
   proxy.config.http.parent_proxy.mark_down_hostdb 0
   proxy.config.http.parent_proxy.self_detect 2
   proxy.config.http.forward.proxy_auth_to_parent 0
   proxy.config.http.doc_in_cache_skip_dns 1
   proxy.config.http.keep_alive_no_activity_timeout_in 120
   proxy.config.http.keep_alive_no_activity_timeout_out 120
   proxy.config.websocket.no_activity_timeout 600
   proxy.config.websocket.active_timeout 3600
   proxy.config.http.transaction_no_activity_timeout_in 30
   proxy.config.http.transaction_no_activity_timeout_out 30
   proxy.config.http.transaction_active_timeout_in 900
   proxy.config.http.transaction_active_timeout_out 0
   proxy.config.http.accept_no_activity_timeout 120
   proxy.config.http.background_fill_active_timeout 0
   proxy.config.http.background_fill_completed_threshold 0.000000
   proxy.config.http.connect_attempts_max_retries 3
   proxy.config.http.connect_attempts_max_retries_dead_server 1
   proxy.config.http.connect_attempts_rr_retries 3
   proxy.config.http.post_connect_attempts_timeout 1800
   proxy.config.http.down_server.cache_time 60
   proxy.config.http.down_server.abort_threshold 10
   proxy.config.http.negative_revalidating_enabled 1
   proxy.config.http.negative_revalidating_lifetime 1800
   proxy.config.http.negative_caching_enabled 0
   proxy.config.http.negative_caching_lifetime 1800
   proxy.config.http.negative_caching_list 204 305 403 404 405 414 500 501 502 503 504
   proxy.config.http.anonymize_remove_from 0
   proxy.config.http.anonymize_remove_referer 0
   proxy.config.http.anonymize_remove_user_agent 0
   proxy.config.http.anonymize_remove_cookie 0
   proxy.config.http.anonymize_remove_client_ip 0
   proxy.config.http.insert_client_ip 1
   proxy.config.http.anonymize_other_header_list NULL
   proxy.config.http.insert_squid_x_forwarded_for 1
   proxy.config.http.insert_forwarded none
   proxy.config.http.proxy_protocol_whitelist none
   proxy.config.http.insert_age_in_response 1
   proxy.config.http.allow_multi_range 0
   proxy.config.http.normalize_ae 1
   proxy.config.http.global_user_agent_header NULL
   proxy.config.http.request_header_max_size 131072
   proxy.config.http.response_header_max_size 131072
   proxy.config.http.push_method_enabled 0
   proxy.config.http.cache.http 1
   proxy.config.http.cache.generation -1
   proxy.config.http.cache.allow_empty_doc 1
   proxy.config.http.cache.ignore_client_no_cache 1
   proxy.config.http.cache.ignore_client_cc_max_age 1
   proxy.config.http.cache.ims_on_client_no_cache 1
   proxy.config.http.cache.cache_responses_to_cookies 1
   proxy.config.http.cache.ignore_authentication 0
   proxy.config.http.cache.cache_urls_that_look_dynamic 1
   proxy.config.http.cache.enable_default_vary_headers 0
   proxy.config.http.cache.post_method 0
   proxy.config.http.cache.max_open_read_retries -1
   proxy.config.http.cache.open_read_retry_time 10
   proxy.config.http.cache.max_open_write_retries 1
   proxy.config.http.cache.open_write_fail_action 0
   proxy.config.http.cache.when_to_revalidate 0
   proxy.config.http.cache.max_stale_age 604800
   proxy.config.http.cache.range.lookup 1
   proxy.config.http.cache.range.write 0
   proxy.config.http.cache.heuristic_min_lifetime 3600
   proxy.config.http.cache.heuristic_max_lifetime 86400
   proxy.config.http.cache.heuristic_lm_factor 0.100000
   proxy.config.http.cache.guaranteed_min_lifetime 0
   proxy.config.http.cache.guaranteed_max_lifetime 31536000
   proxy.config.http.cache.vary_default_text NULL
   proxy.config.http.cache.vary_default_images NULL
   proxy.config.http.cache.vary_default_other NULL
   proxy.config.http.errors.log_error_pages 1
   proxy.config.http2.connection.slow.log.threshold 0
   proxy.config.http2.stream.slow.log.threshold 0
   proxy.config.body_factory.enable_customizations 1
   proxy.config.body_factory.enable_logging 0
   proxy.config.body_factory.response_max_size 8192
   proxy.config.body_factory.response_suppression_mode 0
   proxy.config.body_factory.template_base NONE
   proxy.config.socks.socks_needed 0
   proxy.config.socks.socks_version 4
   proxy.config.socks.socks_config_file socks.config
   proxy.config.socks.socks_timeout 100
   proxy.config.socks.server_connect_timeout 10
   proxy.config.socks.per_server_connection_attempts 1
   proxy.config.socks.connection_attempts 4
   proxy.config.socks.server_retry_timeout 300
   proxy.config.socks.default_servers 
   proxy.config.socks.server_retry_time 300
   proxy.config.socks.server_fail_threshold 2
   proxy.config.socks.accept_enabled 0
   proxy.config.socks.accept_port 1080
   proxy.config.socks.http_port 80
   proxy.config.io.max_buffer_size 32768
   proxy.config.net.connections_throttle 30000
   proxy.config.net.listen_backlog -1
   proxy.config.net.defer_accept 45
   proxy.config.net.sock_recv_buffer_size_in 0
   proxy.config.net.sock_send_buffer_size_in 0
   proxy.config.net.sock_option_flag_in 5
   proxy.config.net.sock_packet_mark_in 0
   proxy.config.net.sock_packet_tos_in 0
   proxy.config.net.sock_recv_buffer_size_out 0
   proxy.config.net.sock_send_buffer_size_out 0
   proxy.config.net.sock_option_flag_out 1
   proxy.config.net.sock_packet_mark_out 0
   proxy.config.net.sock_packet_tos_out 0
   proxy.config.net.sock_mss_in 0
   proxy.config.net.poll_timeout 10
   proxy.config.net.default_inactivity_timeout 86400
   proxy.config.net.inactivity_check_frequency 1
   proxy.config.net.event_period 10
   proxy.config.net.accept_period 10
   proxy.config.net.retry_delay 10
   proxy.config.net.throttle_delay 50
   proxy.config.net.sock_option_tfo_queue_size_in 10000
   proxy.config.net.tcp_congestion_control_in 
   proxy.config.net.tcp_congestion_control_out 
   proxy.config.cache.hit_evacuate_percent 0
   proxy.config.cache.hit_evacuate_size_limit 0
   proxy.config.cache.storage_filename storage.config
   proxy.config.cache.control.filename cache.config
   proxy.config.cache.ip_allow.filename ip_allow.config
   proxy.config.cache.hosting_filename hosting.config
   proxy.config.cache.volume_filename volume.config
   proxy.config.cache.permit.pinning 0
   proxy.config.cache.ram_cache.size -1
   proxy.config.cache.ram_cache.use_seen_filter 1
   proxy.config.cache.ram_cache.compress 0
   proxy.config.cache.ram_cache.compress_percent 90
   proxy.config.cache.dir.sync_frequency 60
   proxy.config.cache.hostdb.disable_reverse_lookup 0
   proxy.config.cache.select_alternate 1
   proxy.config.cache.ram_cache_cutoff 4194304
   proxy.config.cache.limits.http.max_alts 5
   proxy.config.cache.force_sector_size 0
   proxy.config.cache.target_fragment_size 1048576
   proxy.config.cache.max_doc_size 0
   proxy.config.cache.min_average_object_size 8000
   proxy.config.cache.threads_per_disk 8
   proxy.config.cache.agg_write_backlog 5242880
   proxy.config.cache.enable_checksum 0
   proxy.config.cache.alt_rewrite_max_size 4096
   proxy.config.cache.enable_read_while_writer 1
   proxy.config.cache.mutex_retry_delay 2
   proxy.config.cache.read_while_writer.max_retries 10
   proxy.config.cache.read_while_writer_retry.delay 50
   proxy.config.dns.lookup_timeout 20
   proxy.config.dns.retries 5
   proxy.config.dns.search_default_domains 0
   proxy.config.dns.failover_number 5
   proxy.config.dns.failover_period 60
   proxy.config.dns.max_dns_in_flight 2048
   proxy.config.dns.validate_query_name 0
   proxy.config.dns.splitDNS.enabled 0
   proxy.config.dns.splitdns.filename splitdns.config
   proxy.config.dns.nameservers NULL
   proxy.config.dns.local_ipv6 NULL
   proxy.config.dns.local_ipv4 NULL
   proxy.config.dns.resolv_conf /etc/resolv.conf
   proxy.config.dns.dedicated_thread 0
   proxy.config.dns.connection.mode 0
   proxy.config.hostdb.ip_resolve NULL
   proxy.config.hostdb 1
   proxy.config.hostdb.filename host.db
   proxy.config.hostdb.max_count -1
   proxy.config.hostdb.round_robin_max_count 16
   proxy.config.hostdb.storage_path var/trafficserver
   proxy.config.hostdb.max_size 10485760
   proxy.config.hostdb.partitions 64
   proxy.config.hostdb.ttl_mode 0
   proxy.config.hostdb.lookup_timeout 30
   proxy.config.hostdb.timeout 86400
   proxy.config.hostdb.verify_after 720
   proxy.config.hostdb.fail.timeout 0
   proxy.config.hostdb.re_dns_on_reload 0
   proxy.config.hostdb.serve_stale_for 0
   proxy.config.hostdb.migrate_on_demand 0
   proxy.config.hostdb.strict_round_robin 0
   proxy.config.hostdb.timed_round_robin 0
   proxy.config.cache.hostdb.sync_frequency 120
   proxy.config.hostdb.host_file.path NULL
   proxy.config.hostdb.host_file.interval 86400
   proxy.config.disable_configuration_modification 0
   proxy.config.http.connect_ports 443
   proxy.config.config_update_interval_ms 3000
   proxy.config.raw_stat_sync_interval_ms 5000
   proxy.config.remote_sync_interval_ms 5000
   proxy.config.log.log_buffer_size 9216
   proxy.config.log.max_secs_per_buffer 5
   proxy.config.log.max_space_mb_for_orphan_logs 25
   proxy.config.log.hostname localhost
   proxy.config.log.logfile_perm rw-r--r--
   proxy.config.log.config.filename logging.yaml
   proxy.config.log.collation_host NULL
   proxy.config.log.collation_port 8085
   proxy.config.log.collation_secret foobar
   proxy.config.log.collation_host_tagged 0
   proxy.config.log.collation_retry_sec 5
   proxy.config.log.collation_max_send_buffers 16
   proxy.config.log.collation_preproc_threads 1
   proxy.config.log.collation_host_timeout 86390
   proxy.config.log.collation_client_timeout 86400
   proxy.config.log.rolling_interval_sec 86400
   proxy.config.log.rolling_offset_hr 0
   proxy.config.log.rolling_max_count 0
   proxy.config.log.rolling_allow_empty 0
   proxy.config.log.auto_delete_rolled_files 1
   proxy.config.log.sampling_frequency 1
   proxy.config.log.space_used_frequency 2
   proxy.config.log.file_stat_frequency 32
   proxy.config.log.ascii_buffer_size 36864
   proxy.config.log.max_line_size 9216
   proxy.config.log.periodic_tasks_interval 5
   proxy.config.reverse_proxy.enabled 1
   proxy.config.url_remap.filename remap.config
   proxy.config.url_remap.pristine_host_hdr 0
   proxy.config.ssl.server.session_ticket.enable 1
   proxy.config.ssl.TLSv1 1
   proxy.config.ssl.TLSv1_1 1
   proxy.config.ssl.TLSv1_2 1
   proxy.config.ssl.TLSv1_3 1
   proxy.config.ssl.client.TLSv1 1
   proxy.config.ssl.client.TLSv1_1 1
   proxy.config.ssl.client.TLSv1_2 1
   proxy.config.ssl.client.TLSv1_3 1
   proxy.config.ssl.server.cipher_suite ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-DSS-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-DSS-AES256-SHA:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA
   proxy.config.ssl.client.cipher_suite NULL
   proxy.config.ssl.server.honor_cipher_order 1
   proxy.config.ssl.client.certification_level 0
   proxy.config.ssl.server.cert_chain.filename NULL
   proxy.config.ssl.server.multicert.filename ssl_multicert.config
   proxy.config.ssl.server.multicert.exit_on_load_fail 1
   proxy.config.ssl.servername.filename ssl_server_name.yaml
   proxy.config.ssl.CA.cert.filename NULL
   proxy.config.ssl.client.verify.server 0
   proxy.config.ssl.client.cert.filename NULL
   proxy.config.ssl.client.private_key.filename NULL
   proxy.config.ssl.client.CA.cert.filename NULL
   proxy.config.ssl.session_cache 2
   proxy.config.ssl.session_cache.size 102400
   proxy.config.ssl.session_cache.num_buckets 256
   proxy.config.ssl.session_cache.skip_cache_on_bucket_contention 0
   proxy.config.ssl.max_record_size 0
   proxy.config.ssl.session_cache.timeout 0
   proxy.config.ssl.session_cache.auto_clear 1
   proxy.config.ssl.hsts_max_age -1
   proxy.config.ssl.hsts_include_subdomains 0
   proxy.config.ssl.allow_client_renegotiation 0
   proxy.config.ssl.server.dhparams_file NULL
   proxy.config.ssl.handshake_timeout_in 0
   proxy.config.ssl.sni.map.enable 0
   proxy.config.ssl.wire_trace_enabled 0
   proxy.config.ssl.wire_trace_addr NULL
   proxy.config.ssl.wire_trace_percentage 0
   proxy.config.ssl.wire_trace_server_name NULL
   proxy.config.ssl.cert.load_elevated 0
   proxy.config.ssl.server.groups_list NULL
   proxy.config.ssl.client.groups_list NULL
   proxy.config.ssl.ocsp.cache_timeout 3600
   proxy.config.ssl.ocsp.request_timeout 10
   proxy.config.ssl.ocsp.update_period 60
   proxy.config.ssl.server.TLSv1_3.cipher_suites NULL
   proxy.config.ssl.client.TLSv1_3.cipher_suites NULL
   proxy.config.wccp.addr 
   proxy.config.wccp.services 
   proxy.config.plugin.plugin_dir libexec/trafficserver
   proxy.config.plugin.load_elevated 0
   proxy.config.http.keepalive_internal_vc 0
   proxy.node.hostname_FQ vps-7d4b73ce.vps.ovh.net
   proxy.node.hostname vps-7d4b73ce
   proxy.node.restarts.manager.start_time 1599822724
   proxy.node.restarts.proxy.start_time 1599822725
   proxy.node.restarts.proxy.cache_ready_time 1599822728
   proxy.node.restarts.proxy.stop_time 0
   proxy.node.restarts.proxy.restart_count 1
   proxy.node.version.manager.short 8.1.1
   proxy.node.version.manager.long Apache Traffic Server - traffic_manager - 8.1.1 - (build # 1.el7 on Sep  9 2020 at 13:51:54)
   proxy.node.version.manager.build_number 1.el7
   proxy.node.version.manager.build_time 13:51:54
   proxy.node.version.manager.build_date Sep  9 2020
   proxy.node.version.manager.build_machine ns3085188
   proxy.node.version.manager.build_person root
   proxy.local.http.parent_proxy.disable_connect_tunneling 0
   proxy.config.http.forward_connect_method 0
   proxy.config.http2.stream_priority_enabled 0
   proxy.config.http2.max_concurrent_streams_in 100
   proxy.config.http2.min_concurrent_streams_in 10
   proxy.config.http2.max_active_streams_in 0
   proxy.config.http2.initial_window_size_in 65535
   proxy.config.http2.max_frame_size 16384
   proxy.config.http2.header_table_size 4096
   proxy.config.http2.max_header_list_size 131072
   proxy.config.http2.accept_no_activity_timeout 120
   proxy.config.http2.no_activity_timeout_in 120
   proxy.config.http2.active_timeout_in 0
   proxy.config.http2.push_diary_size 256
   proxy.config.http2.zombie_debug_timeout_in 0
   proxy.config.http2.stream_error_rate_threshold 0.100000
   proxy.config.http2.max_settings_per_frame 7
   proxy.config.http2.max_settings_per_minute 14
   proxy.config.http2.max_settings_frames_per_minute 14
   proxy.config.http2.max_ping_frames_per_minute 60
   proxy.config.http2.max_priority_frames_per_minute 120
   proxy.config.http2.min_avg_window_update 2560.000000
   proxy.config.http2.header_table_size_limit 65536
   proxy.local.incoming_ip_to_bind NULL
   proxy.local.outgoing_ip_to_bind NULL
   proxy.local.log.collation_mode 0
   proxy.config.stat_api.max_stats_allowed 256
   proxy.config.allocator.thread_freelist_size 512
   proxy.config.allocator.thread_freelist_low_watermark 32
   proxy.config.allocator.hugepages 0
   proxy.config.allocator.dontdump_iobuffers 1
   proxy.config.remap.num_remap_threads 0
   proxy.config.cache.http.compatibility.4-2-0-fixup 1
   proxy.config.ssl.async.handshake.enabled 0
   proxy.config.ssl.engine.conf_file NULL
   proxy.node.proxy_running 1
   proxy.node.config.reconfigure_time 1599822724
   proxy.node.config.reconfigure_required 0
   proxy.node.config.restart_required.proxy 0
   proxy.node.config.restart_required.manager 0
   proxy.node.config.draining 0
   proxy.process.http.user_agent_total_request_bytes 35368443
   proxy.process.http.user_agent_total_response_bytes 45897008506
   proxy.process.http.origin_server_total_request_bytes 32268925
   proxy.process.http.origin_server_total_response_bytes 12266452669
   proxy.process.user_agent_total_bytes 45932376949
   proxy.process.origin_server_total_bytes 12298721594
   proxy.process.cache_total_hits 32253
   proxy.process.cache_total_misses 104657
   proxy.process.cache_total_requests 136910
   proxy.process.cache_total_hits_bytes 33635473758
   proxy.process.cache_total_misses_bytes 11909369520
   proxy.process.cache_total_bytes 45544843278
   proxy.process.current_server_connections 40
   proxy.process.version.server.short 8.1.1
   proxy.process.version.server.long Apache Traffic Server - traffic_server - 8.1.1 - (build # 1.el7 on Sep  9 2020 at 13:51:57)
   proxy.process.version.server.build_number 1.el7
   proxy.process.version.server.build_time 13:51:57
   proxy.process.version.server.build_date Sep  9 2020
   proxy.process.version.server.build_machine ns3085188
   proxy.process.version.server.build_person root
   proxy.process.http.background_fill_current_count 0
   proxy.process.http.current_client_connections 170
   proxy.process.http.current_active_client_connections 33
   proxy.process.http.websocket.current_active_client_connections 0
   proxy.process.http.current_client_transactions 33
   proxy.process.http.current_server_transactions 33
   proxy.process.http.current_parent_proxy_connections 0
   proxy.process.http.current_server_connections 41
   proxy.process.http.current_cache_connections 4
   proxy.process.version.server.uuid 6a43413f-328a-4c81-a218-6af7e2f60dbb
   proxy.process.net.accepts_currently_open 0
   proxy.process.net.connections_currently_open 178
   proxy.process.net.default_inactivity_timeout_applied 0
   proxy.process.net.dynamic_keep_alive_timeout_in_count 27
   proxy.process.net.dynamic_keep_alive_timeout_in_total 3240
   proxy.process.socks.connections_currently_open 0
   proxy.process.tcp.total_accepts 1202
   proxy.process.cache.bytes_used 8239697920
   proxy.process.cache.bytes_total 150136348672
   proxy.process.cache.ram_cache.total_bytes 187465728
   proxy.process.cache.ram_cache.bytes_used 1874329600
   proxy.process.cache.ram_cache.hits 10515
   proxy.process.cache.ram_cache.misses 21925
   proxy.process.cache.pread_count 0
   proxy.process.cache.percent_full 5
   proxy.process.cache.lookup.active 0
   proxy.process.cache.lookup.success 0
   proxy.process.cache.lookup.failure 0
   proxy.process.cache.read.active 0
   proxy.process.cache.read.success 32441
   proxy.process.cache.read.failure 7851
   proxy.process.cache.write.active 5
   proxy.process.cache.write.success 7749
   proxy.process.cache.write.failure 15
   proxy.process.cache.write.backlog.failure 14
   proxy.process.cache.update.active 0
   proxy.process.cache.update.success 0
   proxy.process.cache.update.failure 0
   proxy.process.cache.remove.active 0
   proxy.process.cache.remove.success 0
   proxy.process.cache.remove.failure 0
   proxy.process.cache.evacuate.active 0
   proxy.process.cache.evacuate.success 0
   proxy.process.cache.evacuate.failure 0
   proxy.process.cache.scan.active 0
   proxy.process.cache.scan.success 0
   proxy.process.cache.scan.failure 0
   proxy.process.cache.direntries.total 18744544
   proxy.process.cache.direntries.used 7749
   proxy.process.cache.directory_collision 0
   proxy.process.cache.frags_per_doc.1 7749
   proxy.process.cache.frags_per_doc.2 0
   proxy.process.cache.frags_per_doc.3+ 0
   proxy.process.cache.read_busy.success 1
   proxy.process.cache.read_busy.failure 0
   proxy.process.cache.write_bytes_stat 0
   proxy.process.cache.vector_marshals 7750
   proxy.process.cache.hdr_marshals 7750
   proxy.process.cache.hdr_marshal_bytes 22708920
   proxy.process.cache.gc_bytes_evacuated 0
   proxy.process.cache.gc_frags_evacuated 0
   proxy.process.cache.wrap_count 0
   proxy.process.cache.sync.count 2
   proxy.process.cache.sync.bytes 374931456
   proxy.process.cache.sync.time 95355679065
   proxy.process.cache.span.errors.read 0
   proxy.process.cache.span.errors.write 0
   proxy.process.cache.span.failing 0
   proxy.process.cache.span.offline 0
   proxy.process.cache.span.online 1
   proxy.process.dns.success_avg_time 0
   proxy.process.dns.in_flight 0
   proxy.process.eventloop.count.10s 17422
   proxy.process.eventloop.events.10s 200862
   proxy.process.eventloop.events.min.10s 0
   proxy.process.eventloop.events.max.10s 7096
   proxy.process.eventloop.wait.10s 17422
   proxy.process.eventloop.time.min.10s 3520
   proxy.process.eventloop.time.max.10s 278610969
   proxy.process.eventloop.count.100s 160748
   proxy.process.eventloop.events.100s 2430328
   proxy.process.eventloop.events.min.100s 0
   proxy.process.eventloop.events.max.100s 8665
   proxy.process.eventloop.wait.100s 160748
   proxy.process.eventloop.time.min.100s 2960
   proxy.process.eventloop.time.max.100s 1521303972
   proxy.process.eventloop.count.1000s 374694
   proxy.process.eventloop.events.1000s 3514640
   proxy.process.eventloop.events.min.1000s 0
   proxy.process.eventloop.events.max.1000s 8665
   proxy.process.eventloop.wait.1000s 374694
   proxy.process.eventloop.time.min.1000s 2960
   proxy.process.eventloop.time.max.1000s 1521303972
   proxy.process.traffic_server.memory.rss 247361536
   proxy.process.http2.current_client_sessions 0
   proxy.process.http2.current_client_connections 0
   proxy.process.http2.current_active_client_connections 0
   proxy.process.http2.current_client_streams 0
   proxy.process.hostdb.cache.current_items 1
   proxy.process.hostdb.cache.current_size 97
   proxy.process.hostdb.cache.total_inserts 1
   proxy.process.hostdb.cache.total_failed_inserts 0
   proxy.process.hostdb.cache.total_lookups 7853
   proxy.process.hostdb.cache.total_hits 7850
   proxy.process.hostdb.cache.last_sync.time 1599823447
   proxy.process.hostdb.cache.last_sync.total_items 1
   proxy.process.hostdb.cache.last_sync.total_size 97
   proxy.process.log.log_files_open 1
   proxy.process.log.log_files_space_used 2262919069
   plugin.system_stats.loadavg.one 145792
   plugin.system_stats.loadavg.five 54496
   plugin.system_stats.loadavg.fifteen 21760
   plugin.system_stats.current_processes 170
   plugin.system_stats.net.eth0.collisions 0
   plugin.system_stats.net.eth0.multicast 0
   plugin.system_stats.net.eth0.rx_bytes 22467078
   plugin.system_stats.net.eth0.rx_compressed 0
   plugin.system_stats.net.eth0.rx_crc_errors 0
   plugin.system_stats.net.eth0.rx_dropped 0
   plugin.system_stats.net.eth0.rx_errors 0
   plugin.system_stats.net.eth0.rx_fifo_errors 0
   plugin.system_stats.net.eth0.rx_frame_errors 0
   plugin.system_stats.net.eth0.rx_length_errors 0
   plugin.system_stats.net.eth0.rx_missed_errors 0
   plugin.system_stats.net.eth0.rx_nohandler 0
   plugin.system_stats.net.eth0.rx_over_errors 0
   plugin.system_stats.net.eth0.rx_packets 4515730
   plugin.system_stats.net.eth0.tx_aborted_errors 0
   plugin.system_stats.net.eth0.tx_bytes -1853196878
   plugin.system_stats.net.eth0.tx_carrier_errors 0
   plugin.system_stats.net.eth0.tx_compressed 0
   plugin.system_stats.net.eth0.tx_dropped 0
   plugin.system_stats.net.eth0.tx_errors 0
   plugin.system_stats.net.eth0.tx_fifo_errors 0
   plugin.system_stats.net.eth0.tx_heartbeat_errors 0
   plugin.system_stats.net.eth0.tx_packets 3617422
   plugin.system_stats.net.eth0.tx_window_errors 0
   proxy.process.ssl.user_agent_sessions 827
   proxy.process.ssl.user_agent_session_hit 298
   proxy.process.ssl.user_agent_session_miss 0
   proxy.process.ssl.user_agent_session_timeout 0
   proxy.process.ssl.cipher.user_agent.ECDHE-RSA-AES256-GCM-SHA384 823
   proxy.process.ssl.cipher.user_agent.ECDHE-ECDSA-AES256-GCM-SHA384 0
   proxy.process.ssl.cipher.user_agent.ECDHE-RSA-AES256-SHA384 0
   proxy.process.ssl.cipher.user_agent.ECDHE-ECDSA-AES256-SHA384 0
   proxy.process.ssl.cipher.user_agent.ECDHE-RSA-AES256-SHA 0
   proxy.process.ssl.cipher.user_agent.ECDHE-ECDSA-AES256-SHA 0
   proxy.process.ssl.cipher.user_agent.DH-DSS-AES256-GCM-SHA384 0
   proxy.process.ssl.cipher.user_agent.DHE-DSS-AES256-GCM-SHA384 0
   proxy.process.ssl.cipher.user_agent.DH-RSA-AES256-GCM-SHA384 0
   proxy.process.ssl.cipher.user_agent.DHE-RSA-AES256-GCM-SHA384 0
   proxy.process.ssl.cipher.user_agent.DHE-RSA-AES256-SHA256 0
   proxy.process.ssl.cipher.user_agent.DHE-DSS-AES256-SHA256 0
   proxy.process.ssl.cipher.user_agent.DH-RSA-AES256-SHA256 0
   proxy.process.ssl.cipher.user_agent.DH-DSS-AES256-SHA256 0
   proxy.process.ssl.cipher.user_agent.DHE-RSA-AES256-SHA 0
   proxy.process.ssl.cipher.user_agent.DHE-DSS-AES256-SHA 0
   proxy.process.ssl.cipher.user_agent.DH-RSA-AES256-SHA 0
   proxy.process.ssl.cipher.user_agent.DH-DSS-AES256-SHA 0
   proxy.process.ssl.cipher.user_agent.DHE-RSA-CAMELLIA256-SHA 0
   proxy.process.ssl.cipher.user_agent.DHE-DSS-CAMELLIA256-SHA 0
   proxy.process.ssl.cipher.user_agent.DH-RSA-CAMELLIA256-SHA 0
   proxy.process.ssl.cipher.user_agent.DH-DSS-CAMELLIA256-SHA 0
   proxy.process.ssl.cipher.user_agent.ECDH-RSA-AES256-GCM-SHA384 0
   proxy.process.ssl.cipher.user_agent.ECDH-ECDSA-AES256-GCM-SHA384 0
   proxy.process.ssl.cipher.user_agent.ECDH-RSA-AES256-SHA384 0
   proxy.process.ssl.cipher.user_agent.ECDH-ECDSA-AES256-SHA384 0
   proxy.process.ssl.cipher.user_agent.ECDH-RSA-AES256-SHA 0
   proxy.process.ssl.cipher.user_agent.ECDH-ECDSA-AES256-SHA 0
   proxy.process.ssl.cipher.user_agent.AES256-GCM-SHA384 0
   proxy.process.ssl.cipher.user_agent.AES256-SHA256 0
   proxy.process.ssl.cipher.user_agent.AES256-SHA 0
   proxy.process.ssl.cipher.user_agent.CAMELLIA256-SHA 0
   proxy.process.ssl.cipher.user_agent.PSK-AES256-CBC-SHA 0
   proxy.process.ssl.cipher.user_agent.ECDHE-RSA-AES128-GCM-SHA256 4
   proxy.process.ssl.cipher.user_agent.ECDHE-ECDSA-AES128-GCM-SHA256 0
   proxy.process.ssl.cipher.user_agent.ECDHE-RSA-AES128-SHA256 0
   proxy.process.ssl.cipher.user_agent.ECDHE-ECDSA-AES128-SHA256 0
   proxy.process.ssl.cipher.user_agent.ECDHE-RSA-AES128-SHA 0
   proxy.process.ssl.cipher.user_agent.ECDHE-ECDSA-AES128-SHA 0
   proxy.process.ssl.cipher.user_agent.DH-DSS-AES128-GCM-SHA256 0
   proxy.process.ssl.cipher.user_agent.DHE-DSS-AES128-GCM-SHA256 0
   proxy.process.ssl.cipher.user_agent.DH-RSA-AES128-GCM-SHA256 0
   proxy.process.ssl.cipher.user_agent.DHE-RSA-AES128-GCM-SHA256 0
   proxy.process.ssl.cipher.user_agent.DHE-RSA-AES128-SHA256 0
   proxy.process.ssl.cipher.user_agent.DHE-DSS-AES128-SHA256 0
   proxy.process.ssl.cipher.user_agent.DH-RSA-AES128-SHA256 0
   proxy.process.ssl.cipher.user_agent.DH-DSS-AES128-SHA256 0
   proxy.process.ssl.cipher.user_agent.DHE-RSA-AES128-SHA 0
   proxy.process.ssl.cipher.user_agent.DHE-DSS-AES128-SHA 0
   proxy.process.ssl.cipher.user_agent.DH-RSA-AES128-SHA 0
   proxy.process.ssl.cipher.user_agent.DH-DSS-AES128-SHA 0
   proxy.process.ssl.cipher.user_agent.DHE-RSA-SEED-SHA 0
   proxy.process.ssl.cipher.user_agent.DHE-DSS-SEED-SHA 0
   proxy.process.ssl.cipher.user_agent.DH-RSA-SEED-SHA 0
   proxy.process.ssl.cipher.user_agent.DH-DSS-SEED-SHA 0
   proxy.process.ssl.cipher.user_agent.DHE-RSA-CAMELLIA128-SHA 0
   proxy.process.ssl.cipher.user_agent.DHE-DSS-CAMELLIA128-SHA 0
   proxy.process.ssl.cipher.user_agent.DH-RSA-CAMELLIA128-SHA 0
   proxy.process.ssl.cipher.user_agent.DH-DSS-CAMELLIA128-SHA 0
   proxy.process.ssl.cipher.user_agent.ECDH-RSA-AES128-GCM-SHA256 0
   proxy.process.ssl.cipher.user_agent.ECDH-ECDSA-AES128-GCM-SHA256 0
   proxy.process.ssl.cipher.user_agent.ECDH-RSA-AES128-SHA256 0
   proxy.process.ssl.cipher.user_agent.ECDH-ECDSA-AES128-SHA256 0
   proxy.process.ssl.cipher.user_agent.ECDH-RSA-AES128-SHA 0
   proxy.process.ssl.cipher.user_agent.ECDH-ECDSA-AES128-SHA 0
   proxy.process.ssl.cipher.user_agent.AES128-GCM-SHA256 0
   proxy.process.ssl.cipher.user_agent.AES128-SHA256 0
   proxy.process.ssl.cipher.user_agent.AES128-SHA 0
   proxy.process.ssl.cipher.user_agent.SEED-SHA 0
   proxy.process.ssl.cipher.user_agent.CAMELLIA128-SHA 0
   proxy.process.ssl.cipher.user_agent.PSK-AES128-CBC-SHA 0
   proxy.process.ssl.cipher.user_agent.ECDHE-RSA-DES-CBC3-SHA 0
   proxy.process.ssl.cipher.user_agent.ECDHE-ECDSA-DES-CBC3-SHA 0
   proxy.process.ssl.cipher.user_agent.EDH-RSA-DES-CBC3-SHA 0
   proxy.process.ssl.cipher.user_agent.EDH-DSS-DES-CBC3-SHA 0
   proxy.process.ssl.cipher.user_agent.DH-RSA-DES-CBC3-SHA 0
   proxy.process.ssl.cipher.user_agent.DH-DSS-DES-CBC3-SHA 0
   proxy.process.ssl.cipher.user_agent.ECDH-RSA-DES-CBC3-SHA 0
   proxy.process.ssl.cipher.user_agent.ECDH-ECDSA-DES-CBC3-SHA 0
   proxy.process.ssl.cipher.user_agent.DES-CBC3-SHA 0
   proxy.process.ssl.cipher.user_agent.IDEA-CBC-SHA 0
   proxy.process.ssl.cipher.user_agent.PSK-3DES-EDE-CBC-SHA 0
   proxy.process.ssl.cipher.user_agent.KRB5-IDEA-CBC-SHA 0
   proxy.process.ssl.cipher.user_agent.KRB5-DES-CBC3-SHA 0
   proxy.process.ssl.cipher.user_agent.KRB5-IDEA-CBC-MD5 0
   proxy.process.ssl.cipher.user_agent.KRB5-DES-CBC3-MD5 0
   proxy.process.ssl.cipher.user_agent.ECDHE-RSA-RC4-SHA 0
   proxy.process.ssl.cipher.user_agent.ECDHE-ECDSA-RC4-SHA 0
   proxy.process.ssl.cipher.user_agent.ECDH-RSA-RC4-SHA 0
   proxy.process.ssl.cipher.user_agent.ECDH-ECDSA-RC4-SHA 0
   proxy.process.ssl.cipher.user_agent.RC4-SHA 0
   proxy.process.ssl.cipher.user_agent.RC4-MD5 0
   proxy.process.ssl.cipher.user_agent.PSK-RC4-SHA 0
   proxy.process.ssl.cipher.user_agent.KRB5-RC4-SHA 0
   proxy.process.ssl.cipher.user_agent.KRB5-RC4-MD5 0
   proxy.process.cache.volume_0.bytes_used 8239697920
   proxy.process.cache.volume_0.bytes_total 150136348672
   proxy.process.cache.volume_0.ram_cache.total_bytes 187465728
   proxy.process.cache.volume_0.ram_cache.bytes_used 1874329600
   proxy.process.cache.volume_0.ram_cache.hits 10515
   proxy.process.cache.volume_0.ram_cache.misses 21925
   proxy.process.cache.volume_0.pread_count 0
   proxy.process.cache.volume_0.percent_full 5
   proxy.process.cache.volume_0.lookup.active 0
   proxy.process.cache.volume_0.lookup.success 0
   proxy.process.cache.volume_0.lookup.failure 0
   proxy.process.cache.volume_0.read.active 0
   proxy.process.cache.volume_0.read.success 32441
   proxy.process.cache.volume_0.read.failure 7851
   proxy.process.cache.volume_0.write.active 5
   proxy.process.cache.volume_0.write.success 7749
   proxy.process.cache.volume_0.write.failure 15
   proxy.process.cache.volume_0.write.backlog.failure 14
   proxy.process.cache.volume_0.update.active 0
   proxy.process.cache.volume_0.update.success 0
   proxy.process.cache.volume_0.update.failure 0
   proxy.process.cache.volume_0.remove.active 0
   proxy.process.cache.volume_0.remove.success 0
   proxy.process.cache.volume_0.remove.failure 0
   proxy.process.cache.volume_0.evacuate.active 0
   proxy.process.cache.volume_0.evacuate.success 0
   proxy.process.cache.volume_0.evacuate.failure 0
   proxy.process.cache.volume_0.scan.active 0
   proxy.process.cache.volume_0.scan.success 0
   proxy.process.cache.volume_0.scan.failure 0
   proxy.process.cache.volume_0.direntries.total 18744544
   proxy.process.cache.volume_0.direntries.used 7749
   proxy.process.cache.volume_0.directory_collision 0
   proxy.process.cache.volume_0.frags_per_doc.1 7749
   proxy.process.cache.volume_0.frags_per_doc.2 0
   proxy.process.cache.volume_0.frags_per_doc.3+ 0
   proxy.process.cache.volume_0.read_busy.success 1
   proxy.process.cache.volume_0.read_busy.failure 0
   proxy.process.cache.volume_0.write_bytes_stat 0
   proxy.process.cache.volume_0.vector_marshals 0
   proxy.process.cache.volume_0.hdr_marshals 0
   proxy.process.cache.volume_0.hdr_marshal_bytes 0
   proxy.process.cache.volume_0.gc_bytes_evacuated 0
   proxy.process.cache.volume_0.gc_frags_evacuated 0
   proxy.process.cache.volume_0.wrap_count 0
   proxy.process.cache.volume_0.sync.count 2
   proxy.process.cache.volume_0.sync.bytes 374931456
   proxy.process.cache.volume_0.sync.time 95355679065
   proxy.process.cache.volume_0.span.errors.read 0
   proxy.process.cache.volume_0.span.errors.write 0
   proxy.process.cache.volume_0.span.failing 0
   proxy.process.cache.volume_0.span.offline 0
   proxy.process.cache.volume_0.span.online 0
   ```
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [trafficserver] cheluskin commented on issue #7179: 8.1.x - 10.0.x slice.so and range plugin uses all memory on the server, but this is not a leak

Posted by GitBox <gi...@apache.org>.
cheluskin commented on issue #7179:
URL: https://github.com/apache/trafficserver/issues/7179#issuecomment-692075443


   Sorry, I don't have a privileged email with \@apache.org. 
   I've thought about this problem for a long time. The throttling approach seems correct from a technical point of view, and using it does help, but judging by the logs it is not a very efficient solution. The ideal solution would be to transmit the next fragments only after the client has received the previous ones, keeping a small read-ahead margin, rather than trying to stop an avalanche of fragment creation after the fact. In the latter case we have significant overhead.
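   The read-ahead scheme described above (fetch the next fragment only when the client has nearly caught up) can be sketched as simple byte accounting over a fixed window. This is an illustrative model, not slice.so's actual code; all names and numbers are hypothetical.
   
   ```cpp
   #include <cstdint>
   #include <iostream>
   
   // Hypothetical windowed prefetcher: issue the next slice fetch only
   // while the bytes buffered ahead of the client stay under a margin.
   struct SlicePrefetcher {
     int64_t slice_bytes;   // size of one slice (e.g. 1 MiB)
     int64_t margin_bytes;  // maximum read-ahead allowed
     int64_t fetched = 0;   // bytes fetched from the origin so far
     int64_t consumed = 0;  // bytes delivered to the client so far
   
     // True when another slice may be requested from the origin.
     bool can_fetch_next() const {
       return fetched - consumed + slice_bytes <= margin_bytes;
     }
   
     void on_slice_fetched()        { fetched += slice_bytes; }
     void on_client_read(int64_t n) { consumed += n; }
   };
   
   int main() {
     SlicePrefetcher p{1 << 20, 4 << 20};  // 1 MiB slices, 4 MiB margin
     int fetches = 0;
     while (p.can_fetch_next()) { p.on_slice_fetched(); ++fetches; }
     std::cout << fetches << "\n";  // prints 4: the window fills, fetching stops
     p.on_client_read(1 << 20);     // client drains one slice...
     std::cout << p.can_fetch_next() << "\n";  // prints 1: fetching may resume
   }
   ```
   
   With this model, a client that stops reading simply freezes `consumed`, and origin fetches stall after at most `margin_bytes` of read-ahead instead of running to the end of the file.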





[GitHub] [trafficserver] traeak commented on issue #7179: 8.1.x - 10.0.x slice.so and range plugin uses all memory on the server, but this is not a leak

Posted by GitBox <gi...@apache.org>.
traeak commented on issue #7179:
URL: https://github.com/apache/trafficserver/issues/7179#issuecomment-691174708


   Can you try the --throttle option?  It's supposed to keep the in-memory cache blocks from stacking up.





[GitHub] [trafficserver] traeak commented on issue #7179: 8.1.x - 10.0.x slice.so and range plugin uses all memory on the server, but this is not a leak

Posted by GitBox <gi...@apache.org>.
traeak commented on issue #7179:
URL: https://github.com/apache/trafficserver/issues/7179#issuecomment-691682537


   What do your remap rule lines look like?  It sounds like something is
   wrong with the cache key per slice
   
   On Sun, Sep 13, 2020, 08:44 cheluskin <no...@github.com> wrote:
   
   > debug.log
   > <https://github.com/apache/trafficserver/files/5214285/debug.log>
   >
   > Guys, look what he does? I once opened in chrome for one second at the
   > file address and immediately closed it. Why did slice.so split the whole
   > file into chunks and put them in the cache? Does anyone understand this
   > behavior?. If 10 web clients open 10 files, they will start creating
   > thousands of fragments until they crash the server.
   >
   > —
   > You are receiving this because you commented.
   > Reply to this email directly, view it on GitHub
   > <https://github.com/apache/trafficserver/issues/7179#issuecomment-691680389>,
   > or unsubscribe
   > <https://github.com/notifications/unsubscribe-auth/AFSNPDLSHNDP4VRFFRUIBFTSFTLEXANCNFSM4RHRINMQ>
   > .
   >
   





[GitHub] [trafficserver] traeak edited a comment on issue #7179: 8.1.x - 10.0.x slice.so and range plugin uses all memory on the server, but this is not a leak

Posted by GitBox <gi...@apache.org>.
traeak edited a comment on issue #7179:
URL: https://github.com/apache/trafficserver/issues/7179#issuecomment-692153248


   If things are working correctly, with debug logging turned on you should see a lot of messages like these:
   
   ```
   [Sep 14 15:58:19.186] [ET_NET 18] DIAG: (slice) [client.cc: 153] handle_client_resp(): 0x7f6d340c8880 handle_client_resp: throttling 1447760
   [Sep 14 15:58:19.186] [ET_NET 18] DIAG: (slice) [client.cc: 153] handle_client_resp(): 0x7f6d340c8880 handle_client_resp: throttling 1414992
   [Sep 14 15:58:19.187] [ET_NET 18] DIAG: (slice) [client.cc: 153] handle_client_resp(): 0x7f6d340c8880 handle_client_resp: throttling 1382224
   [Sep 14 15:58:19.187] [ET_NET 18] DIAG: (slice) [client.cc: 153] handle_client_resp(): 0x7f6d340c8880 handle_client_resp: throttling 1349672
   [Sep 14 15:58:21.226] [ET_NET 18] DIAG: (slice) [client.cc: 153] handle_client_resp(): 0x7f6d340c8880 handle_client_resp: throttling 1316904
   [Sep 14 15:58:21.226] [ET_NET 18] DIAG: (slice) [client.cc: 153] handle_client_resp(): 0x7f6d340c8880 handle_client_resp: throttling 1284136
   [Sep 14 15:58:21.226] [ET_NET 18] DIAG: (slice) [client.cc: 153] handle_client_resp(): 0x7f6d340c8880 handle_client_resp: throttling 1251368
   [Sep 14 15:58:21.226] [ET_NET 18] DIAG: (slice) [client.cc: 153] handle_client_resp(): 0x7f6d340c8880 handle_client_resp: throttling 1218600
   [Sep 14 15:58:21.226] [ET_NET 18] DIAG: (slice) [client.cc: 153] handle_client_resp(): 0x7f6d340c8880 handle_client_resp: throttling 1185832
   ```
   This was running with ats master.  I'm testing at the moment; it looks like throttling might not be working right in ats81.





[GitHub] [trafficserver] cheluskin edited a comment on issue #7179: 8.1.x - 10.0.x slice.so and range plugin uses all memory on the server, but this is not a leak

Posted by GitBox <gi...@apache.org>.
cheluskin edited a comment on issue #7179:
URL: https://github.com/apache/trafficserver/issues/7179#issuecomment-692075443


   Sorry, I don't have an email with \@apache.org. 
   I've thought about this problem for a long time. The throttling approach seems correct from a technical point of view, and using it does help, but judging by the logs it is not a very efficient solution. The ideal solution would be to transmit the next fragments only after the client has received the previous ones, keeping a small read-ahead margin, rather than trying to stop an avalanche of fragment creation after the fact. In the latter case we have significant overhead. 





[GitHub] [trafficserver] cheluskin commented on issue #7179: 8.1.x - 10.0.x slice.so and range plugin uses all memory on the server, but this is not a leak

Posted by GitBox <gi...@apache.org>.
cheluskin commented on issue #7179:
URL: https://github.com/apache/trafficserver/issues/7179#issuecomment-691692752


   Thanks for the help. I tried it, but that doesn't change anything in my case. One user opens a video file in Chrome via a direct link. The video is 50 minutes long, 670 megabytes, and the user watches only the first 40 seconds. By that point ATS had already fetched 464 megabytes of fragments, and it would have continued fetching more if I had not closed the browser. Log attached. If you have any ideas please let me know.
   [slice-40-seconds-one-user.log](https://github.com/apache/trafficserver/files/5214557/slice-40-seconds-one-user.log)
   





[GitHub] [trafficserver] traeak commented on issue #7179: 8.1.x - 10.0.x slice.so and range plugin uses all memory on the server, but this is not a leak

Posted by GitBox <gi...@apache.org>.
traeak commented on issue #7179:
URL: https://github.com/apache/trafficserver/issues/7179#issuecomment-692040576


   It might be easier to come onto the ASF slack page (https://the-asf.slack.com) to go over some of the behaviors.  I'll run some tests with the plain 8.1 branch with 7008 in the meantime.  





[GitHub] [trafficserver] cheluskin commented on issue #7179: 8.1.x - 10.0.x slice.so and range plugin uses all memory on the server, but this is not a leak

Posted by GitBox <gi...@apache.org>.
cheluskin commented on issue #7179:
URL: https://github.com/apache/trafficserver/issues/7179#issuecomment-691682968


   ```
   map    https://cdn.fps.mycdn.com/     http://myorigin.com/ \
   @plugin=slice.so @pparam=--throttle \
   @plugin=cache_range_requests.so
   ```
   
   [slice-debug.log](https://github.com/apache/trafficserver/files/5214352/slice-debug.log)
   





[GitHub] [trafficserver] cheluskin edited a comment on issue #7179: 8.1.x - 10.0.x slice.so and range plugin uses all memory on the server, but this is not a leak

Posted by GitBox <gi...@apache.org>.
cheluskin edited a comment on issue #7179:
URL: https://github.com/apache/trafficserver/issues/7179#issuecomment-691682968


   ```
   map    https://cdn.fps.mycdn.com/     http://myorigin.com/ \
   @plugin=slice.so @pparam=--throttle \
   @plugin=cache_range_requests.so
   ```
   [slice-debug.log](https://github.com/apache/trafficserver/files/5214434/slice-debug.log)
   







[GitHub] [trafficserver] traeak closed issue #7179: 8.1.x - 10.0.x slice.so and range plugin uses all memory on the server, but this is not a leak

Posted by GitBox <gi...@apache.org>.
traeak closed issue #7179:
URL: https://github.com/apache/trafficserver/issues/7179


   





[GitHub] [trafficserver] cheluskin edited a comment on issue #7179: 8.1.x - 10.0.x slice.so and range plugin uses all memory on the server, but this is not a leak

Posted by GitBox <gi...@apache.org>.
cheluskin edited a comment on issue #7179:
URL: https://github.com/apache/trafficserver/issues/7179#issuecomment-691682968


   ```
   map    https://cdn.fps.mycdn.com/     http://myorigin.com/ \
   @plugin=slice.so @pparam=--throttle \
   @plugin=cache_range_requests.so
   ```
   





[GitHub] [trafficserver] traeak commented on issue #7179: 8.1.x - 10.0.x slice.so and range plugin uses all memory on the server, but this is not a leak

Posted by GitBox <gi...@apache.org>.
traeak commented on issue #7179:
URL: https://github.com/apache/trafficserver/issues/7179#issuecomment-701378404


   I've found a bug in post-ats7 slice 416 handling (exposed by self healing), and also something wrong with throttling.  I'm working on a PR that makes throttling the only behavior; that simplifies the handoff code quite a bit.





[GitHub] [trafficserver] cheluskin commented on issue #7179: 8.1.x - 10.0.x slice.so and range plugin uses all memory on the server, but this is not a leak

Posted by GitBox <gi...@apache.org>.
cheluskin commented on issue #7179:
URL: https://github.com/apache/trafficserver/issues/7179#issuecomment-691285137


   I tried it, but memory still runs out. I also tried it on a machine with 240G; in that case 67G of memory was used with a 120G disk cache, and the high-end CPU was terribly loaded. There is also huge inbound traffic usage. That's why I think that for some unknown reason it fetches a lot of extra slices that clients never requested.





[GitHub] [trafficserver] traeak commented on issue #7179: 8.1.x - 10.0.x slice.so and range plugin uses all memory on the server, but this is not a leak

Posted by GitBox <gi...@apache.org>.
traeak commented on issue #7179:
URL: https://github.com/apache/trafficserver/issues/7179#issuecomment-691686382


   That would add @plugin=header_rewrite.so @pparam=empty.txt
   
   Not sure this will help, but it is something to try. I can do more direct
   testing later; I'm on my phone right now.
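   Combined with the remap rule posted earlier in the thread, this suggestion might look like the line below. This is only a sketch: empty.txt is assumed to be an empty header_rewrite ruleset placed in the configuration directory, and plugin order in the chain is significant.
   
   ```
   map    https://cdn.fps.mycdn.com/     http://myorigin.com/ \
   @plugin=header_rewrite.so @pparam=empty.txt \
   @plugin=slice.so @pparam=--throttle \
   @plugin=cache_range_requests.so
   ```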
   
   On Sun, Sep 13, 2020, 09:30 cheluskin <no...@github.com> wrote:
   
   > request is absolute standard chrome direct file get like this
   >
   > curl '
   > https://cdn.fps.cdn1.mycdn.com/contents/videos/17000/17868/17868_720p.mp4'
   > \ -H 'Connection: keep-alive' \ -H 'Pragma: no-cache' \ -H 'Cache-Control:
   > no-cache' \ -H 'Upgrade-Insecure-Requests: 1' \ -H 'User-Agent: Mozilla/5.0
   > (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/537.36 (KHTML, like Gecko)
   > Chrome/85.0.4183.102 Safari/537.36' \ -H 'Accept:
   > text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9'
   > \ -H 'Sec-Fetch-Site: none' \ -H 'Sec-Fetch-Mode: navigate' \ -H
   > 'Sec-Fetch-User: ?1' \ -H 'Sec-Fetch-Dest: document' \ -H 'Accept-Language:
   > ru-RU,ru;q=0.9,en-US;q=0.8,en;q=0.7' \ --compressed
   >
   > Unfortunately, I am not very friendly with C ++ and I cannot write a dummy
   > plugin for it. But, I tried to leave one range plugin. And it works well in
   > the same configuration. If the url was wrong one range plugin would not
   > work well probably?
   >
   > —
   > You are receiving this because you commented.
   > Reply to this email directly, view it on GitHub
   > <https://github.com/apache/trafficserver/issues/7179#issuecomment-691685972>,
   > or unsubscribe
   > <https://github.com/notifications/unsubscribe-auth/AFSNPDNWNIJYA7A2U3OZIQLSFTQRJANCNFSM4RHRINMQ>
   > .
   >
   





[GitHub] [trafficserver] cheluskin commented on issue #7179: 8.1.x - 10.0.x slice.so and range plugin uses all memory on the server, but this is not a leak

Posted by GitBox <gi...@apache.org>.
cheluskin commented on issue #7179:
URL: https://github.com/apache/trafficserver/issues/7179#issuecomment-691680389


   [debug.log](https://github.com/apache/trafficserver/files/5214285/debug.log)
   
   Guys, look what it does. I opened the file URL in Chrome for one second and immediately closed it. Why did slice.so split the whole file into chunks and put them in the cache? Does anyone understand this behavior? If 10 web clients open 10 files, they will start creating thousands of fragments until they crash the server.
   





[GitHub] [trafficserver] traeak edited a comment on issue #7179: 8.1.x - 10.0.x slice.so and range plugin uses all memory on the server, but this is not a leak

Posted by GitBox <gi...@apache.org>.
traeak edited a comment on issue #7179:
URL: https://github.com/apache/trafficserver/issues/7179#issuecomment-692153248


   If things are working correctly, with debug logging turned on you should see a lot of messages like these:
   
   ```
   [Sep 14 15:58:19.186] [ET_NET 18] DIAG: (slice) [client.cc: 153] handle_client_resp(): 0x7f6d340c8880 handle_client_resp: throttling 1447760
   [Sep 14 15:58:19.186] [ET_NET 18] DIAG: (slice) [client.cc: 153] handle_client_resp(): 0x7f6d340c8880 handle_client_resp: throttling 1414992
   [Sep 14 15:58:19.187] [ET_NET 18] DIAG: (slice) [client.cc: 153] handle_client_resp(): 0x7f6d340c8880 handle_client_resp: throttling 1382224
   [Sep 14 15:58:19.187] [ET_NET 18] DIAG: (slice) [client.cc: 153] handle_client_resp(): 0x7f6d340c8880 handle_client_resp: throttling 1349672
   [Sep 14 15:58:21.226] [ET_NET 18] DIAG: (slice) [client.cc: 153] handle_client_resp(): 0x7f6d340c8880 handle_client_resp: throttling 1316904
   [Sep 14 15:58:21.226] [ET_NET 18] DIAG: (slice) [client.cc: 153] handle_client_resp(): 0x7f6d340c8880 handle_client_resp: throttling 1284136
   [Sep 14 15:58:21.226] [ET_NET 18] DIAG: (slice) [client.cc: 153] handle_client_resp(): 0x7f6d340c8880 handle_client_resp: throttling 1251368
   [Sep 14 15:58:21.226] [ET_NET 18] DIAG: (slice) [client.cc: 153] handle_client_resp(): 0x7f6d340c8880 handle_client_resp: throttling 1218600
   [Sep 14 15:58:21.226] [ET_NET 18] DIAG: (slice) [client.cc: 153] handle_client_resp(): 0x7f6d340c8880 handle_client_resp: throttling 1185832
   ```
   This was running with ATS master.  I'm testing at the moment.
   


----------------------------------------------------------------



[GitHub] [trafficserver] cheluskin edited a comment on issue #7179: 8.1.x - 10.0.x slice.so and range plugin uses all memory on the server, but this is not a leak

Posted by GitBox <gi...@apache.org>.
cheluskin edited a comment on issue #7179:
URL: https://github.com/apache/trafficserver/issues/7179#issuecomment-692173634


   Indeed, after applying the throttling patch, memory consumption is lower and the server keeps running; my logs look like yours. But is it possible to do without throttling, so that ATS does not load all the segments in advance? nginx does this by default, but unfortunately I don't know the technical details.


----------------------------------------------------------------






[GitHub] [trafficserver] traeak edited a comment on issue #7179: 8.1.x - 10.0.x slice.so and range plugin uses all memory on the server, but this is not a leak

Posted by GitBox <gi...@apache.org>.
traeak edited a comment on issue #7179:
URL: https://github.com/apache/trafficserver/issues/7179#issuecomment-692153248


   If things are working correctly, with the logs turned on you should see a lot of these messages.
   
   ```
   [Sep 14 15:58:19.186] [ET_NET 18] DIAG: (slice) [client.cc: 153] handle_client_resp(): 0x7f6d340c8880 handle_client_resp: throttling 1447760
   [Sep 14 15:58:19.186] [ET_NET 18] DIAG: (slice) [client.cc: 153] handle_client_resp(): 0x7f6d340c8880 handle_client_resp: throttling 1414992
   [Sep 14 15:58:19.187] [ET_NET 18] DIAG: (slice) [client.cc: 153] handle_client_resp(): 0x7f6d340c8880 handle_client_resp: throttling 1382224
   [Sep 14 15:58:19.187] [ET_NET 18] DIAG: (slice) [client.cc: 153] handle_client_resp(): 0x7f6d340c8880 handle_client_resp: throttling 1349672
   [Sep 14 15:58:21.226] [ET_NET 18] DIAG: (slice) [client.cc: 153] handle_client_resp(): 0x7f6d340c8880 handle_client_resp: throttling 1316904
   [Sep 14 15:58:21.226] [ET_NET 18] DIAG: (slice) [client.cc: 153] handle_client_resp(): 0x7f6d340c8880 handle_client_resp: throttling 1284136
   [Sep 14 15:58:21.226] [ET_NET 18] DIAG: (slice) [client.cc: 153] handle_client_resp(): 0x7f6d340c8880 handle_client_resp: throttling 1251368
   [Sep 14 15:58:21.226] [ET_NET 18] DIAG: (slice) [client.cc: 153] handle_client_resp(): 0x7f6d340c8880 handle_client_resp: throttling 1218600
   [Sep 14 15:58:21.226] [ET_NET 18] DIAG: (slice) [client.cc: 153] handle_client_resp(): 0x7f6d340c8880 handle_client_resp: throttling 1185832
   ```
   This was running with ATS master; it also works with ATS 9.1.x.  Checking into ATS 8.1.x.


----------------------------------------------------------------



[GitHub] [trafficserver] traeak commented on issue #7179: 8.1.x - 10.0.x slice.so and range plugin uses all memory on the server, but this is not a leak

Posted by GitBox <gi...@apache.org>.
traeak commented on issue #7179:
URL: https://github.com/apache/trafficserver/issues/7179#issuecomment-691683820


   And a sample request?  Something in your configuration might be causing something unexpected.  You might want to try adding a dummy empty plugin in front, as there is trouble with the effective URL vs the pristine URL: the first remap plugin gets different values depending on the ATS version.  header_rewrite with an empty config file would do.  Just something to try.
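As a concrete (hypothetical) illustration of that suggestion, a remap.config line could put a no-op header_rewrite in the first plugin slot. This assumes header_rewrite.so is installed, empty.conf is just an empty file, and uses placeholder hostnames and the usual slice.so + cache_range_requests.so pairing:

```
map https://cdn.example.com/ https://origin.example.com/ @plugin=header_rewrite.so @pparam=empty.conf @plugin=slice.so @plugin=cache_range_requests.so
```

The header_rewrite instance does nothing on its own; it only moves slice.so out of the first-plugin position where the effective vs pristine URL difference shows up.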


----------------------------------------------------------------



[GitHub] [trafficserver] traeak commented on issue #7179: 8.1.x - 10.0.x slice.so and range plugin uses all memory on the server, but this is not a leak

Posted by GitBox <gi...@apache.org>.
traeak commented on issue #7179:
URL: https://github.com/apache/trafficserver/issues/7179#issuecomment-692186676


   By default, with an uncached asset, ATS actually does throttle internally.  However, when ATS discovers that the data is already cached in RAM it decides to "optimize": it ignores throttling and stacks all those in-order memory blocks up in the send-to-client queue.  This is what I found during testing and why I added the throttling.
   
   We've been using the slice plugin in production for more than a year on servers with 128 GB+ RAM without throttling turned on.  However, our servers push a lot of traffic with hundreds of remap rules, so the slices that are cached in RAM are quickly evicted from the RAM cache and don't stack up like what you are seeing.  I added the throttling later, after observing the RAM cache behavior during synthetic testing.
   
   I'm actually surprised that the slice throttling isn't working well enough.  I believe ATS deals internally with 32 KB chunks, throttling at that level.  The slice plugin throttles at the slice block size (1 MB by default).  Perhaps with enough assets pushing through, that 1 MB vs 32 KB might add up.
   
   It's not difficult to make that throttle size configurable so that it could be set to 32 KB or more.  I'd have to look at more of the TSIOBuffer handling in the rest of the buffer handoff stack to see what could be done there.  Maybe a more correct solution would be to change ATS (try to configure it, or add an option) so that it doesn't do the smart optimization and stack up all those sequential RAM cache hit blocks (TCP_MEM_HIT).
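As a toy model (my own sketch, not the plugin's actual code) of why throttling at the slice block size bounds memory: a new slice is only requested once the backlog queued toward the client drops below one block, so at most about two blocks are ever buffered per client, no matter how fast the RAM cache can supply data.

```python
BLOCK = 1024 * 1024   # slice block size (1 MB default in the plugin)
CHUNK = 32 * 1024     # ATS-internal IO chunk size (assumed)

def peak_buffered(total_blocks: int) -> int:
    """Simulate one transaction; return the peak bytes buffered for the client."""
    pending = 0   # bytes fetched but not yet written out to the client
    fetched = 0   # slice blocks requested so far
    peak = 0
    while fetched < total_blocks or pending > 0:
        # The throttle: hold off the next slice request while a full
        # block is still waiting in the send-to-client queue.
        if fetched < total_blocks and pending < BLOCK:
            pending += BLOCK
            fetched += 1
        peak = max(peak, pending)
        pending -= min(CHUNK, pending)   # client consumes one chunk
    return peak

# Regardless of asset size, the backlog never exceeds ~2 slice blocks.
print(peak_buffered(1000) <= 2 * BLOCK)  # → True
```

Without the throttle check, `pending` would grow by one block per RAM cache hit and only shrink at the client's download speed, which is the stack-up behavior described above.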


----------------------------------------------------------------






[GitHub] [trafficserver] traeak closed issue #7179: 8.1.x - 10.0.x slice.so and range plugin uses all memory on the server, but this is not a leak

Posted by GitBox <gi...@apache.org>.
traeak closed issue #7179:
URL: https://github.com/apache/trafficserver/issues/7179


   


----------------------------------------------------------------



[GitHub] [trafficserver] traeak edited a comment on issue #7179: 8.1.x - 10.0.x slice.so and range plugin uses all memory on the server, but this is not a leak

Posted by GitBox <gi...@apache.org>.
traeak edited a comment on issue #7179:
URL: https://github.com/apache/trafficserver/issues/7179#issuecomment-692186676


   By default, with an uncached asset, ATS actually does throttle internally.  However, when ATS discovers that the data is already cached in RAM it decides to "optimize": it ignores throttling and stacks all those in-order memory blocks up in the send-to-client queue.  This is what I found during testing and why I added the throttling.
   
   We've been using the slice plugin in production for more than a year on servers with 128 GB+ RAM without throttling turned on.  However, our servers push a lot of traffic with hundreds of remap rules, so the slices that are cached in RAM are quickly evicted from the RAM cache and don't stack up like what you are seeing.  I added the throttling later, after observing the RAM cache behavior during synthetic testing.
   
   I'm actually surprised that the slice throttling isn't working well enough.  I believe ATS deals internally with 32 KB chunks, throttling at that level.  The slice plugin throttles at the slice block size (1 MB by default).  Perhaps with enough assets pushing through, that 1 MB vs 32 KB might add up.
   
   _It's not difficult to make that throttle size configurable so that it could be set to 32 KB or more.  I'd have to look at more of the TSIOBuffer handling in the rest of the buffer handoff stack to see what could be done there.  Maybe a more correct solution would be to change ATS (try to configure it, or add an option) so that it doesn't do the smart optimization and stack up all those sequential RAM cache hit blocks (TCP_MEM_HIT)._  <-- This has no effect; the way the cache works, throttling only works at the segment level.
   
   If throttling is working correctly the segments won't all load in advance; generally one, and perhaps two, segments ahead will be loaded (I could add a tweak to make the two-segment case rarer, but the gain would be minimal).
   
   I've never seen it load all the segments in advance.  There might be some other settings at play here.


----------------------------------------------------------------



[GitHub] [trafficserver] cheluskin commented on issue #7179: 8.1.x - 10.0.x slice.so and range plugin uses all memory on the server, but this is not a leak

Posted by GitBox <gi...@apache.org>.
cheluskin commented on issue #7179:
URL: https://github.com/apache/trafficserver/issues/7179#issuecomment-692173634


   Indeed, after applying the throttling patch, memory consumption is lower and the server keeps running. But is it possible to do without throttling, so that ATS does not load all the segments in advance? nginx does this by default, but unfortunately I don't know the technical details.


----------------------------------------------------------------






[GitHub] [trafficserver] cheluskin commented on issue #7179: 8.1.x - 10.0.x slice.so and range plugin uses all memory on the server, but this is not a leak

Posted by GitBox <gi...@apache.org>.
cheluskin commented on issue #7179:
URL: https://github.com/apache/trafficserver/issues/7179#issuecomment-691285137


   I tried it, but memory still runs out. I also tried it on a 240 GB machine; in that case 67 GB of memory was used with a 120 GB disk cache, and the high-end CPU was terribly loaded. There was also huge incoming traffic usage. That's why I think that, for some unknown reason, it fetches a lot of extra slices even though clients do not request them.


----------------------------------------------------------------



[GitHub] [trafficserver] traeak commented on issue #7179: 8.1.x - 10.0.x slice.so and range plugin uses all memory on the server, but this is not a leak

Posted by GitBox <gi...@apache.org>.
traeak commented on issue #7179:
URL: https://github.com/apache/trafficserver/issues/7179#issuecomment-691174708


   Can you try the --throttle option?  That's supposed to keep the in-memory cache blocks from stacking up.


----------------------------------------------------------------



[GitHub] [trafficserver] traeak commented on issue #7179: 8.1.x - 10.0.x slice.so and range plugin uses all memory on the server, but this is not a leak

Posted by GitBox <gi...@apache.org>.
traeak commented on issue #7179:
URL: https://github.com/apache/trafficserver/issues/7179#issuecomment-692153248


   If things are working correctly, with the logs turned on you should see a lot of these messages.
   
   ```
   [Sep 14 15:58:19.186] [ET_NET 18] DIAG: (slice) [client.cc: 153] handle_client_resp(): 0x7f6d340c8880 handle_client_resp: throttling 1447760
   [Sep 14 15:58:19.186] [ET_NET 18] DIAG: (slice) [client.cc: 153] handle_client_resp(): 0x7f6d340c8880 handle_client_resp: throttling 1414992
   [Sep 14 15:58:19.187] [ET_NET 18] DIAG: (slice) [client.cc: 153] handle_client_resp(): 0x7f6d340c8880 handle_client_resp: throttling 1382224
   [Sep 14 15:58:19.187] [ET_NET 18] DIAG: (slice) [client.cc: 153] handle_client_resp(): 0x7f6d340c8880 handle_client_resp: throttling 1349672
   [Sep 14 15:58:21.226] [ET_NET 18] DIAG: (slice) [client.cc: 153] handle_client_resp(): 0x7f6d340c8880 handle_client_resp: throttling 1316904
   [Sep 14 15:58:21.226] [ET_NET 18] DIAG: (slice) [client.cc: 153] handle_client_resp(): 0x7f6d340c8880 handle_client_resp: throttling 1284136
   [Sep 14 15:58:21.226] [ET_NET 18] DIAG: (slice) [client.cc: 153] handle_client_resp(): 0x7f6d340c8880 handle_client_resp: throttling 1251368
   [Sep 14 15:58:21.226] [ET_NET 18] DIAG: (slice) [client.cc: 153] handle_client_resp(): 0x7f6d340c8880 handle_client_resp: throttling 1218600
   [Sep 14 15:58:21.226] [ET_NET 18] DIAG: (slice) [client.cc: 153] handle_client_resp(): 0x7f6d340c8880 handle_client_resp: throttling 1185832
   ```
   
   This was running with ats master.  I'm testing at the moment.
   


----------------------------------------------------------------



[GitHub] [trafficserver] cheluskin commented on issue #7179: 8.1.x - 10.0.x slice.so and range plugin uses all memory on the server, but this is not a leak

Posted by GitBox <gi...@apache.org>.
cheluskin commented on issue #7179:
URL: https://github.com/apache/trafficserver/issues/7179#issuecomment-691685972


   The request is an absolutely standard Chrome direct-file GET, like this:
   
   `curl 'https://cdn.fps.cdn1.mycdn.com/contents/videos/17000/17868/17868_720p.mp4' \
     -H 'Connection: keep-alive' \
     -H 'Pragma: no-cache' \
     -H 'Cache-Control: no-cache' \
     -H 'Upgrade-Insecure-Requests: 1' \
     -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.102 Safari/537.36' \
     -H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9' \
     -H 'Sec-Fetch-Site: none' \
     -H 'Sec-Fetch-Mode: navigate' \
     -H 'Sec-Fetch-User: ?1' \
     -H 'Sec-Fetch-Dest: document' \
     -H 'Accept-Language: ru-RU,ru;q=0.9,en-US;q=0.8,en;q=0.7' \
     --compressed`
   
   Unfortunately I'm not very comfortable with C++ and cannot write a dummy plugin. But I tried leaving only the range plugin, and it works well in the same configuration. If the URL were wrong, the range plugin alone probably wouldn't work well either, would it?


----------------------------------------------------------------



[GitHub] [trafficserver] traeak edited a comment on issue #7179: 8.1.x - 10.0.x slice.so and range plugin uses all memory on the server, but this is not a leak

Posted by GitBox <gi...@apache.org>.
traeak edited a comment on issue #7179:
URL: https://github.com/apache/trafficserver/issues/7179#issuecomment-692186676


   By default, with an uncached asset, ATS actually does throttle internally.  However, when ATS discovers that the data is already cached in RAM it decides to "optimize": it ignores throttling and stacks all those in-order memory blocks up in the send-to-client queue.  This is what I found during testing and why I added the throttling.
   
   We've been using the slice plugin in production for more than a year on servers with 128 GB+ RAM without throttling turned on.  However, our servers push a lot of traffic with hundreds of remap rules, so the slices that are cached in RAM are quickly evicted from the RAM cache and don't stack up like what you are seeing.  I added the throttling later, after observing the RAM cache behavior during synthetic testing.
   
   I'm actually surprised that the slice throttling isn't working well enough.  I believe ATS deals internally with 32 KB chunks, throttling at that level.  The slice plugin throttles at the slice block size (1 MB by default).  Perhaps with enough assets pushing through, that 1 MB vs 32 KB might add up.
   
   _It's not difficult to make that throttle size configurable so that it could be set to 32 KB or more.  I'd have to look at more of the TSIOBuffer handling in the rest of the buffer handoff stack to see what could be done there.  Maybe a more correct solution would be to change ATS (try to configure it, or add an option) so that it doesn't do the smart optimization and stack up all those sequential RAM cache hit blocks (TCP_MEM_HIT)._  <-- This has no effect; the way the cache works, throttling only works at the segment level.
   
   If throttling is working correctly the segments won't all load in advance; generally one, and perhaps two, segments ahead will be loaded (I could add a tweak to make the two-segment case rarer, but the gain would be minimal).


----------------------------------------------------------------



[GitHub] [trafficserver] cheluskin commented on issue #7179: 8.1.x - 10.0.x slice.so and range plugin uses all memory on the server, but this is not a leak

Posted by GitBox <gi...@apache.org>.
cheluskin commented on issue #7179:
URL: https://github.com/apache/trafficserver/issues/7179#issuecomment-691885817


   #7008 partially fixes this problem, but slice still fetches a lot of excess traffic.


----------------------------------------------------------------



[GitHub] [trafficserver] cheluskin edited a comment on issue #7179: 8.1.x - 10.0.x slice.so and range plugin uses all memory on the server, but this is not a leak

Posted by GitBox <gi...@apache.org>.
cheluskin edited a comment on issue #7179:
URL: https://github.com/apache/trafficserver/issues/7179#issuecomment-692075443


   Sorry, I don't have an email with @apache.org.
   I've thought about this problem for a long time. The approach seems technically correct, and using throttling helps. But judging by the logs it is not a very efficient solution. The ideal solution would be to transmit the next fragments only after the client has received the previous ones, with a small margin, rather than trying to stop an avalanche-like creation of fragments. In the latter case we have significant overhead.
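The demand-driven scheme described here could be sketched as a generator that produces the next Range request only when the previous slice has been consumed (a hypothetical helper with 1 MiB slices assumed; nothing like this exists in the plugin today):

```python
def range_headers(content_length: int, block: int = 1024 * 1024):
    """Yield one 'Range: bytes=a-b' header per slice, lazily and on demand."""
    for start in range(0, content_length, block):
        end = min(start + block, content_length) - 1
        yield f"Range: bytes={start}-{end}"

# The consumer pulls slices one at a time; closing the generator early
# means an aborted client costs at most the slice currently in flight.
slices = range_headers(10 * 1024 * 1024)
print(next(slices))   # → Range: bytes=0-1048575
slices.close()        # client disconnected: no further ranges are produced
```

The key property is that upstream fetches are driven by client consumption rather than scheduled in advance, so there is no avalanche of fragments to throttle in the first place.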


----------------------------------------------------------------