Posted to commits@trafficserver.apache.org by bc...@apache.org on 2020/02/04 22:59:43 UTC

[trafficserver] branch 8.0.x updated (0cdd936 -> 3f3b54f)

This is an automated email from the ASF dual-hosted git repository.

bcall pushed a change to branch 8.0.x
in repository https://gitbox.apache.org/repos/asf/trafficserver.git.


    from 0cdd936  Set wrap after checking all the parents
     new 766f8df  Cleanup trailing whitespaces
     new 3f3b54f  Run dos2unix on all files in tree

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 CHANGELOG-8.0.0                                    |  10 +-
 CHANGELOG-8.0.2                                    |   2 +-
 doc/admin-guide/plugins/cachekey.en.rst            |   2 +-
 doc/admin-guide/plugins/lua.en.rst                 |   3 +-
 doc/appendices/command-line/traffic_layout.en.rst  |   6 +-
 .../internal-libraries/ArgParser.en.rst            |   4 +-
 doc/ext/traffic-server.py                          |   2 +-
 include/tscore/MT_hashtable.h                      | 882 ++++++++++-----------
 rc/trafficserver.in                                |   4 +-
 tests/README.md                                    |   4 +-
 tests/gold_tests/autest-site/microserver.test.ext  |   2 +-
 .../gold_tests/autest-site/traffic_replay.test.ext |   4 +-
 .../data/www.customplugin204.test_get.txt          |   4 +-
 .../data/www.customtemplate204.test_get.txt        |   4 +-
 .../body_factory/data/www.default204.test_get.txt  |   4 +-
 .../body_factory/data/www.default304.test_get.txt  |   4 +-
 .../body_factory/data/www.example.test_get_200.txt |   6 +-
 .../body_factory/data/www.example.test_get_304.txt |   8 +-
 .../body_factory/data/www.example.test_head.txt    |   6 +-
 .../data/www.example.test_head_200.txt             |   6 +-
 .../headers/data/www.passthrough.test_get.txt      |   4 +-
 .../headers/data/www.redirect0.test_get.txt        |   4 +-
 .../headers/data/www.redirect301.test_get.txt      |   4 +-
 .../headers/data/www.redirect302.test_get.txt      |   4 +-
 .../headers/data/www.redirect307.test_get.txt      |   4 +-
 .../headers/data/www.redirect308.test_get.txt      |   4 +-
 .../headers/general-connection-failure-502.gold    |  14 +-
 tests/gold_tests/pluginTest/url_sig/url_sig.gold   |  30 +-
 .../gold_tests/pluginTest/xdebug/x_remap/out.gold  |  58 +-
 tests/gold_tests/redirect/gold/redirect.gold       |   6 +-
 tests/tools/README.md                              |  18 +-
 tools/traffic_via.pl                               |  32 +-
 32 files changed, 574 insertions(+), 575 deletions(-)


[trafficserver] 01/02: Cleanup trailing whitespaces

Posted by bc...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

bcall pushed a commit to branch 8.0.x
in repository https://gitbox.apache.org/repos/asf/trafficserver.git

commit 766f8dfc0dbfdb925badd1c9141ff7e62f7ac74b
Author: Masaori Koshiba <ma...@apache.org>
AuthorDate: Tue Jan 7 09:08:24 2020 +0900

    Cleanup trailing whitespaces
---
 CHANGELOG-8.0.0                                    | 10 +++----
 CHANGELOG-8.0.2                                    |  2 +-
 doc/admin-guide/plugins/cachekey.en.rst            |  2 +-
 doc/admin-guide/plugins/lua.en.rst                 |  3 +-
 doc/appendices/command-line/traffic_layout.en.rst  |  6 ++--
 .../internal-libraries/ArgParser.en.rst            |  4 +--
 doc/ext/traffic-server.py                          |  2 +-
 rc/trafficserver.in                                |  4 +--
 tests/README.md                                    |  4 +--
 tests/gold_tests/autest-site/microserver.test.ext  |  2 +-
 .../gold_tests/autest-site/traffic_replay.test.ext |  4 +--
 tests/tools/README.md                              | 18 ++++++------
 tools/traffic_via.pl                               | 32 +++++++++++-----------
 13 files changed, 46 insertions(+), 47 deletions(-)
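
For context, the cleanup above is purely mechanical: every modified hunk in this commit only drops spaces or tabs at the end of a line. The following is a minimal Python sketch of that kind of pass; it is illustrative only and is not the tool that produced this commit (binary files are not detected, and paths such as .git are simply pruned).

    #!/usr/bin/env python3
    # Illustrative sketch: strip trailing spaces/tabs from files under a tree.
    # Not the tool that produced the commit above; it only shows the kind of
    # mechanical edit involved. No binary-file detection is attempted.
    import os
    import sys


    def clean_line(line: bytes) -> bytes:
        # Preserve a trailing CR; CRLF endings are handled separately (see the
        # dos2unix commit that follows in this push).
        if line.endswith(b"\r"):
            return line[:-1].rstrip(b" \t") + b"\r"
        return line.rstrip(b" \t")


    def strip_trailing_whitespace(path: str) -> bool:
        with open(path, "rb") as f:
            data = f.read()
        cleaned = b"\n".join(clean_line(line) for line in data.split(b"\n"))
        if cleaned != data:
            with open(path, "wb") as f:
                f.write(cleaned)
            return True
        return False


    if __name__ == "__main__":
        root = sys.argv[1] if len(sys.argv) > 1 else "."
        for dirpath, dirnames, filenames in os.walk(root):
            dirnames[:] = [d for d in dirnames if d != ".git"]  # skip git metadata
            for name in filenames:
                if strip_trailing_whitespace(os.path.join(dirpath, name)):
                    print("cleaned", os.path.join(dirpath, name))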

diff --git a/CHANGELOG-8.0.0 b/CHANGELOG-8.0.0
index 5631aa0..f3d6656 100644
--- a/CHANGELOG-8.0.0
+++ b/CHANGELOG-8.0.0
@@ -369,7 +369,7 @@ Changes with Apache Traffic Server 8.0.0
   #2190 - The content-type of TRACE response should be message/http.
   #2194 - Better initialization in Thread.cc.
   #2195 - Fix race condition in thread startup.
-  #2196 - Adding DNS over TCP support 
+  #2196 - Adding DNS over TCP support
   #2201 - Add experimental plugin to initiate H2 Server push for preload links.
   #2202 - H2 test with traffic -replay
   #2209 - Rework SSL Handshake Hooks and add tls_hooks tests.
@@ -451,7 +451,7 @@ Changes with Apache Traffic Server 8.0.0
   #2432 - Format python code with autopep8
   #2434 - Coverity 1379933:  Control flow issues  (DEADCODE)
   #2435 - Resign server.pem for autests to 10 years
-  #2437 - null transform test 
+  #2437 - null transform test
   #2439 - Coverity CID 1380022: FORWARD_NULL
   #2440 - Optimize: Add startIO & stopIO for NetHandler
   #2441 - Bug fix for event thread names
@@ -537,7 +537,7 @@ Changes with Apache Traffic Server 8.0.0
   #2638 - Turn off exception warning for gcc 7
   #2639 - Fixed issue with clean/distclean
   #2643 - fix the missing lock in TSVConnFdCreate api
-  #2644 - remove dup setting with NO_FD 
+  #2644 - remove dup setting with NO_FD
   #2647 - Added arguments for port number and ip in microdns, microserver extension.
   #2648 - This removes the FILE_WRITE mechanism from the core
   #2649 - Move http 408 response logic into transaction
@@ -558,7 +558,7 @@ Changes with Apache Traffic Server 8.0.0
   #2690 - Fixed chunked_encoding gold file
   #2691 - Kill unused .c file which contains code in a language invented by a Danish guy.
   #2694 - Implement zero-copy within UDPNetProcessorInternal::udp_read_from_net
-  #2696 - Script load speedup for ts_lua remap plugin 
+  #2696 - Script load speedup for ts_lua remap plugin
   #2697 - List packages for building in Ubuntu
   #2698 - CID 1226158: Uninitialized members
   #2714 - Restructured traffic_layout and modified runroot command line
@@ -906,7 +906,7 @@ Changes with Apache Traffic Server 8.0.0
   #3526 - Ran clang-tidy with readability-braces-around-statements
   #3527 - Cleans up some pylint issues in the python code
   #3529 - Correct parameter for certificate verification
-  #3530 - Add time option to traffic_ctl host down.  
+  #3530 - Add time option to traffic_ctl host down.
   #3531 - Doc: Unify ":unit:" to ":units".
   #3532 - Updated to new version of clang-format
   #3533 - Doc: various fixes
diff --git a/CHANGELOG-8.0.2 b/CHANGELOG-8.0.2
index 9fff3e2..1e15928 100644
--- a/CHANGELOG-8.0.2
+++ b/CHANGELOG-8.0.2
@@ -2,7 +2,7 @@ Changes with Apache Traffic Server 8.0.2
   #3601 - Add TLSv1.3 cipher suites for OpenSSL-1.1.1
   #3927 - cppapi :InterceptPlugin fix
   #3939 - Make sure the index stays positive
-  #4217 - Fix a regression in the traffic_ctl host status subcommand. 
+  #4217 - Fix a regression in the traffic_ctl host status subcommand.
   #4369 - Doc: Fix doc build to work with Sphinx 1.8.
   #4541 - Doc minor fixes.
   #4700 - sslheaders experimental plugin:  fix doc typo, improve container use.
diff --git a/doc/admin-guide/plugins/cachekey.en.rst b/doc/admin-guide/plugins/cachekey.en.rst
index 0b631d5..5d8b6a0 100644
--- a/doc/admin-guide/plugins/cachekey.en.rst
+++ b/doc/admin-guide/plugins/cachekey.en.rst
@@ -582,7 +582,7 @@ Cacheurl plugin to cachekey plugin migration
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 The plugin `cachekey` was not meant to replace the cacheurl plugin in terms of having exactly the same cache key strings generated. It just allows the operator to exctract elements from the HTTP URI in the same way the `cacheurl` does (through a regular expression, please see `<capture_definition>` above).
 
-The following examples demonstrate different ways to achieve `cacheurl` compatibility on a cache key string level in order to avoid invalidation of the cache. 
+The following examples demonstrate different ways to achieve `cacheurl` compatibility on a cache key string level in order to avoid invalidation of the cache.
 
 The operator could use `--capture-path-uri`, `--capture-path`, `--capture-prefix-uri`, `--capture-prefix` to capture elements from the URI, path and authority elements.
 
diff --git a/doc/admin-guide/plugins/lua.en.rst b/doc/admin-guide/plugins/lua.en.rst
index 7bc17a9..8369fbd 100644
--- a/doc/admin-guide/plugins/lua.en.rst
+++ b/doc/admin-guide/plugins/lua.en.rst
@@ -2253,7 +2253,7 @@ ts.http.resp_transform.set_upstream_watermark_bytes
 
 **context:** transform handler
 
-**description**: This function can be used to set the watermark bytes of the upstream transform buffer. 
+**description**: This function can be used to set the watermark bytes of the upstream transform buffer.
 
 Setting the watermark bytes above 32kb may improve the performance of the transform handler.
 
@@ -3766,4 +3766,3 @@ More docs
 * https://github.com/portl4t/ts-lua
 
 `TOP <#ts-lua-plugin>`_
-
diff --git a/doc/appendices/command-line/traffic_layout.en.rst b/doc/appendices/command-line/traffic_layout.en.rst
index f9dbe13..f660cad 100644
--- a/doc/appendices/command-line/traffic_layout.en.rst
+++ b/doc/appendices/command-line/traffic_layout.en.rst
@@ -50,7 +50,7 @@ First we need to create a runroot. It can be created simply by calling command `
 
     traffic_layout init --path /path/to/runroot
 
-A runroot will be created in ``/path/to/runroot``, available for other programs to use. 
+A runroot will be created in ``/path/to/runroot``, available for other programs to use.
 If the path is not specified, the current working directory will be used.
 
 To run traffic_manager, for example, using the runroot, there are several ways:
@@ -70,10 +70,10 @@ Subcommands
 
 init
 ----
-Use the current working directory or the specific path to create runroot. 
+Use the current working directory or the specific path to create runroot.
 The path can be absolute or relative.
 
-workflow: 
+workflow:
     #. Create a sandbox directory for programs to run under.
     #. Copy and symlink build time directories and files to the sandbox, allowing users to modify freely.
     #. Emit a YAML file that defines layout structure for other programs to use (relative path).
diff --git a/doc/developer-guide/internal-libraries/ArgParser.en.rst b/doc/developer-guide/internal-libraries/ArgParser.en.rst
index b01ac02..9594e7e 100644
--- a/doc/developer-guide/internal-libraries/ArgParser.en.rst
+++ b/doc/developer-guide/internal-libraries/ArgParser.en.rst
@@ -58,7 +58,7 @@ The parser can be created simply by calling:
 
    ts::ArgParser parser;
 
-or initialize with the following arguments: 
+or initialize with the following arguments:
 *name, help description, environment variable, argument number expected, function*
 
 .. code-block:: cpp
@@ -343,7 +343,7 @@ This program will have such functionality:
         init_command.add_command("subinit", "sub initialize");
 
         parser.add_command("remove", "remove things").add_option("--path", "-p", "specify the path", "HOME", 1);
-        
+
         ts::Arguments parsed_data = parser.parse(argv);
         parsed_data.invoke();
         ...
diff --git a/doc/ext/traffic-server.py b/doc/ext/traffic-server.py
index ce1e1e4..24d9ef6 100644
--- a/doc/ext/traffic-server.py
+++ b/doc/ext/traffic-server.py
@@ -409,7 +409,7 @@ def setup(app):
     app.add_crossref_type('configfile', 'file',
                           objname='Configuration file',
                           indextemplate='pair: %s; Configuration files')
-    
+
     # Very ugly, but as of Sphinx 1.8 it must be done. There is an `override` option to add_crossref_type
     # but it only applies to the directive, not the role (`file` in this case). If this isn't cleared
     # explicitly the build will fail out due to the conflict. In this case, since the role action is the
diff --git a/rc/trafficserver.in b/rc/trafficserver.in
index b5036d9..2d4e648 100644
--- a/rc/trafficserver.in
+++ b/rc/trafficserver.in
@@ -101,7 +101,7 @@ TS_ROOT=${TS_ROOT:-$TS_PREFIX}
 # ####################################
 # run root is not used by default
 # set this value if using a custom layout structure
-# TS_RUNROOT="" 
+# TS_RUNROOT=""
 
 # TS_BASE is offset inside the file system from where the layout starts
 # For standard installations TS_BASE will be empty
@@ -292,7 +292,7 @@ rc_start_hook()
     return $?
 }
 
-# Make sure the NOFILES limit is set high in case this process is not 
+# Make sure the NOFILES limit is set high in case this process is not
 # started from a logged in prompt
 ulimit -n 500000
 
diff --git a/tests/README.md b/tests/README.md
index 8d07029..93f3fe4 100644
--- a/tests/README.md
+++ b/tests/README.md
@@ -6,8 +6,8 @@ This directory contains different tests for Apache Trafficserver. It is recommen
 ## Layout
 The current layout is:
 
-**gold_tests/** - contains all the TSQA v4 based tests that run on the Reusable Gold Testing System (AuTest)  
-**tools/** - contains programs used to help with testing.  
+**gold_tests/** - contains all the TSQA v4 based tests that run on the Reusable Gold Testing System (AuTest)
+**tools/** - contains programs used to help with testing.
 **include/** - contains headers used for unit testing.
 
 ## Scripts
diff --git a/tests/gold_tests/autest-site/microserver.test.ext b/tests/gold_tests/autest-site/microserver.test.ext
index bef027f..e8d4949 100644
--- a/tests/gold_tests/autest-site/microserver.test.ext
+++ b/tests/gold_tests/autest-site/microserver.test.ext
@@ -177,7 +177,7 @@ def MakeOriginServer(obj, name, port=None, s_port=None, ip='INADDR_LOOPBACK', de
         command += " --cert {0}".format(cert)
         command += " --s_port {0}".format(s_port)
 
-    # this might break if user specifies both both and ssl 
+    # this might break if user specifies both both and ssl
     if not ssl: # in both or HTTP only mode
         if not port:
             port = get_port(p, "Port")
diff --git a/tests/gold_tests/autest-site/traffic_replay.test.ext b/tests/gold_tests/autest-site/traffic_replay.test.ext
index 340fae9..0779dd5 100644
--- a/tests/gold_tests/autest-site/traffic_replay.test.ext
+++ b/tests/gold_tests/autest-site/traffic_replay.test.ext
@@ -16,7 +16,7 @@
 #  See the License for the specific language governing permissions and
 #  limitations under the License.
 
-# default 'mixed' for connection type since it doesn't hurt 
+# default 'mixed' for connection type since it doesn't hurt
 def Replay(obj, name, replay_dir, key=None, cert=None, conn_type='mixed', options={}):
     # ATS setup - one line because we leave records and remap config to user
     ts = obj.MakeATSProcess("ts", select_ports=False) # select ports can be disabled once we add ssl port selection in extension
@@ -61,7 +61,7 @@ def Replay(obj, name, replay_dir, key=None, cert=None, conn_type='mixed', option
 
     if not cert:
         cert = os.path.join(obj.Variables["AtsTestToolsDir"], "microserver", "ssl", "server.crt")
-    
+
     command = 'traffic-replay --log_dir {0} --type {1} --verify --host {2} --port {3} --s_port {4} '.format(data_dir, conn_type, hostIP, ts.Variables.port, ts.Variables.ssl_port)
 
     if key:
diff --git a/tests/tools/README.md b/tests/tools/README.md
index 4e7e208..bb3e34c 100644
--- a/tests/tools/README.md
+++ b/tests/tools/README.md
@@ -7,21 +7,21 @@ Note these Tools require python 3.4 or better.
 
 Replay client to replay session logs.
 
-Usage: 
+Usage:
 python3.5 traffic-replay/ -type <nossl|ssl|h2|random> -log_dir /path/to/log -v
 
-Session Log format (in JSON): 
+Session Log format (in JSON):
 
- {"version": "0.1", 
+ {"version": "0.1",
   "txns": [
-        {"request": {"headers": "POST ……\r\n\r\n", "timestamp": "..", "body": ".."}, 
+        {"request": {"headers": "POST ……\r\n\r\n", "timestamp": "..", "body": ".."},
         "response": {"headers": "HTTP/1.1..\r\n\r\n", "timestamp": "..", "body": ".."},
-         "uuid": "1"}, 
-        {"request": {"headers": "POST ..….\r\n\r\n", "timestamp": "..", "body": ".."}, 
-        "response": {"headers": "HTTP/1.1..\r\nr\n", "timestamp": "..", "body": ".."}, 
+        "uuid": "1"},
+        {"request": {"headers": "POST ..….\r\n\r\n", "timestamp": "..", "body": ".."},
+        "response": {"headers": "HTTP/1.1..\r\nr\n", "timestamp": "..", "body": ".."},
         "uuid": "2"}
-  ], 
-  "timestamp": "....", 
+  ],
+  "timestamp": "....",
   "encoding": "...."}
   Configuration: The configuration required to run traffic-replay can be specified in traffic-replay/Config.py
 
diff --git a/tools/traffic_via.pl b/tools/traffic_via.pl
index 6eee2a6..7dcf39a 100755
--- a/tools/traffic_via.pl
+++ b/tools/traffic_via.pl
@@ -25,9 +25,9 @@
 # 1. Pass Via Header with -s option \n";
 #    traffic_via [-s viaheader]";
 #           or
-# 2. Pipe curl output 
+# 2. Pipe curl output
 #    curl -v -H "X-Debug: Via" http://ats_server:port 2>&1| ./traffic_via.pl
-# 
+#
 
 use strict;
 use warnings;
@@ -41,7 +41,7 @@ my $help;
 my @proxy_header_array = (
     {
         "Request headers received from client:",
-        { 
+        {
             'I' => "If Modified Since (IMS)",
             'C' => "cookie",
             'E' => "error in request",
@@ -81,7 +81,7 @@ my @proxy_header_array = (
     },
     {
         "Proxy operation result:",
-        { 
+        {
                     'R' => "origin server revalidated",
                     ' ' => "unknown?",
                     'S' => "served",
@@ -148,7 +148,7 @@ my @proxy_header_array = (
                     'S' => "connection opened successfully",
                     'F' => "connection open failed",
         },
-    
+
    },
    {
         "Origin server connection status:",
@@ -181,7 +181,7 @@ if (@ARGV == 0) {
         #Pattern matching for Via
         if ($element =~ /Via:(.*)\[(.*)\]/) {
             #Search and grep via header
-            $via_string = $2;    
+            $via_string = $2;
             chomp($via_string);
             print "Via Header is [$via_string]";
             decode_via_header($via_string);
@@ -191,7 +191,7 @@ if (@ARGV == 0) {
     usage() if (!GetOptions('s=s' => \$via_header,
                 'help|?' => \$help) or
                 defined $help);
-    
+
     if (defined $via_header) {
         #if passed through commandline dashed argument
         print "Via Header is [$via_header]";
@@ -207,10 +207,10 @@ sub decode_via_header {
     my $newHeader;
 
     #Check via header syntax
-    if ($header =~ /([a-zA-Z: ]+)/) {   
+    if ($header =~ /([a-zA-Z: ]+)/) {
         #Get via header length
         $hdrLength = length($header);
-        
+
         # Valid Via header length is 24 or 6.
         # When Via header length is 24, it will have both proxy request header result and operational results.
         if ($hdrLength == 24) {
@@ -228,8 +228,8 @@ sub decode_via_header {
         }
         convert_header_to_array($newHeader);
     }
-    
-    
+
+
 }
 
 sub convert_header_to_array {
@@ -239,7 +239,7 @@ sub convert_header_to_array {
     while ($viaHeader =~ /(.)/g) {
             #Only capital letters indicate flags
             if ($1 !~ m/[a-z]+/) {
-                push(@ResultArray, $1); 
+                push(@ResultArray, $1);
             }
     }
     print "\nVia Header details: \n";
@@ -256,18 +256,18 @@ sub get_via_header_flags {
     my @flagKeys;
     my %flags;
     my @keys;
-    
+
     my @array = @$arrayName;
-            
+
     %flagValues = %{$array[$inputIndex]};
     @flagKeys = keys (%flagValues);
-    
+
     foreach my $keyEntry ( @flagKeys )  {
         printf ("%-55s", $keyEntry);
         %flags = %{$flagValues{$keyEntry}};
         @keys = keys (%flags);
         foreach my $key ( @keys )  {
-            if ($key =~ /$flag/) { 
+            if ($key =~ /$flag/) {
                  #print $flags{$key};
                  printf("%s",$flags{$key});
                  print "\n";


[trafficserver] 02/02: Run dos2unix on all files in tree

Posted by bc...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

bcall pushed a commit to branch 8.0.x
in repository https://gitbox.apache.org/repos/asf/trafficserver.git

commit 3f3b54fe9be3dfedf4185b181602487aef27fd4e
Author: Randall Meyer <rr...@apache.org>
AuthorDate: Wed Nov 20 16:32:18 2019 +0800

    Run dos2unix on all files in tree
    
    Intentionally ignoring files under lib/yamlcpp (since we just maintain a
    copy of their distro)
    
    (cherry picked from commit 994a2f04be6c28ffe7207c08480ad944690a0411)
    
    Additional files on 8.0.x to backport
            include/tscore/MT_hashtable.h
            tests/gold_tests/pluginTest/xdebug/x_remap/out.gold
    
    Conflicts:
    	tests/gold_tests/pluginTest/regex_remap/gold/regex_remap_crash.gold
    	tests/gold_tests/pluginTest/regex_remap/gold/regex_remap_smoke.gold
    	tests/gold_tests/pluginTest/regex_revalidate/gold/regex_reval-hit.gold
    	tests/gold_tests/pluginTest/regex_revalidate/gold/regex_reval-miss.gold
    	tests/gold_tests/pluginTest/regex_revalidate/gold/regex_reval-stale.gold
    	tests/gold_tests/pluginTest/slice/gold/slice_200.stdout.gold
    	tests/gold_tests/pluginTest/slice/gold/slice_206.stdout.gold
    	tests/gold_tests/pluginTest/slice/gold/slice_first.stdout.gold
    	tests/gold_tests/pluginTest/slice/gold/slice_last.stderr.gold
    	tests/gold_tests/pluginTest/slice/gold/slice_last.stdout.gold
    	tests/gold_tests/pluginTest/slice/gold/slice_mid.stderr.gold
    	tests/gold_tests/pluginTest/slice/gold/slice_mid.stdout.gold
    	tests/gold_tests/pluginTest/slice/gold_error/crr.stdout.gold
    	tests/gold_tests/pluginTest/slice/gold_error/etag.stdout.gold
    	tests/gold_tests/pluginTest/slice/gold_error/lm.stdout.gold
    	tests/gold_tests/pluginTest/slice/gold_error/non206.stdout.gold
---
 include/tscore/MT_hashtable.h                      | 882 ++++++++++-----------
 .../data/www.customplugin204.test_get.txt          |   4 +-
 .../data/www.customtemplate204.test_get.txt        |   4 +-
 .../body_factory/data/www.default204.test_get.txt  |   4 +-
 .../body_factory/data/www.default304.test_get.txt  |   4 +-
 .../body_factory/data/www.example.test_get_200.txt |   6 +-
 .../body_factory/data/www.example.test_get_304.txt |   8 +-
 .../body_factory/data/www.example.test_head.txt    |   6 +-
 .../data/www.example.test_head_200.txt             |   6 +-
 .../headers/data/www.passthrough.test_get.txt      |   4 +-
 .../headers/data/www.redirect0.test_get.txt        |   4 +-
 .../headers/data/www.redirect301.test_get.txt      |   4 +-
 .../headers/data/www.redirect302.test_get.txt      |   4 +-
 .../headers/data/www.redirect307.test_get.txt      |   4 +-
 .../headers/data/www.redirect308.test_get.txt      |   4 +-
 .../headers/general-connection-failure-502.gold    |  14 +-
 tests/gold_tests/pluginTest/url_sig/url_sig.gold   |  30 +-
 .../gold_tests/pluginTest/xdebug/x_remap/out.gold  |  58 +-
 tests/gold_tests/redirect/gold/redirect.gold       |   6 +-
 19 files changed, 528 insertions(+), 528 deletions(-)
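
For context, dos2unix rewrites CRLF ("\r\n") line endings to plain LF ("\n"), which is why most of the hunks below differ only in their invisible line terminators. The following is a rough Python equivalent of such a pass, skipping lib/yamlcpp as the commit message notes; it is illustrative only and is not the actual command that was run.

    #!/usr/bin/env python3
    # Illustrative sketch: rewrite CRLF line endings to LF across a tree while
    # skipping .git and lib/yamlcpp, roughly what "run dos2unix on all files in
    # tree" amounts to. Not the actual command the committer ran.
    import os
    import sys


    def to_unix(path: str) -> bool:
        with open(path, "rb") as f:
            data = f.read()
        converted = data.replace(b"\r\n", b"\n")
        if converted != data:
            with open(path, "wb") as f:
                f.write(converted)
            return True
        return False


    if __name__ == "__main__":
        root = os.path.normpath(sys.argv[1] if len(sys.argv) > 1 else ".")
        skip = {os.path.join(root, ".git"), os.path.join(root, "lib", "yamlcpp")}
        for dirpath, dirnames, filenames in os.walk(root):
            # Prune skipped directories so os.walk never descends into them.
            dirnames[:] = [d for d in dirnames
                           if os.path.join(dirpath, d) not in skip]
            for name in filenames:
                if to_unix(os.path.join(dirpath, name)):
                    print("converted", os.path.join(dirpath, name))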

diff --git a/include/tscore/MT_hashtable.h b/include/tscore/MT_hashtable.h
index bfbae79..2c24225 100644
--- a/include/tscore/MT_hashtable.h
+++ b/include/tscore/MT_hashtable.h
@@ -1,441 +1,441 @@
-/** @file
-
-  A brief file description
-
-  @section license License
-
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
- */
-
-/****************************************************************************
-
-  MT_hashtable.h
-
-  Multithread Safe Hash table implementation
-
-
- ****************************************************************************/
-#pragma once
-
-#define MT_HASHTABLE_PARTITION_BITS 6
-#define MT_HASHTABLE_PARTITIONS (1 << MT_HASHTABLE_PARTITION_BITS)
-#define MT_HASHTABLE_PARTITION_MASK (MT_HASHTABLE_PARTITIONS - 1)
-#define MT_HASHTABLE_MAX_CHAIN_AVG_LEN 4
-template <class key_t, class data_t> struct HashTableEntry {
-  key_t key;
-  data_t data;
-  HashTableEntry *next;
-
-  static HashTableEntry *
-  alloc()
-  {
-    return (HashTableEntry *)ats_malloc(sizeof(HashTableEntry));
-  }
-
-  static void
-  free(HashTableEntry *entry)
-  {
-    ats_free(entry);
-  }
-};
-
-/*
-struct MT_ListEntry{
-  MT_ListEntry():next(NULL),prev(NULL){}
-  MT_ListEntry* next;
-  MT_ListEntry* prev;
-};
-
-#define INIT_CHAIN_HEAD(h) {(h)->next = (h)->prev = (h);}
-#define APPEND_TO_CHAIN(h, p) {(p)->next = (h)->next; (h)->next->prev = (p); (p)->prev = (h); (h)->next = (p);}
-#define REMOVE_FROM_CHAIN(p) {(p)->next->prev = (p)->prev; (p)->prev->next = (p)->next; (p)->prev = (p)->next = NULL;}
-#define GET_OBJ_PTR(p, type, offset) ((type*)((char*)(p) - offset))
-*/
-
-template <class key_t, class data_t> class HashTableIteratorState
-{
-public:
-  HashTableIteratorState() : cur_buck(-1), ppcur(NULL) {}
-  int cur_buck;
-  HashTableEntry<key_t, data_t> **ppcur;
-};
-
-template <class key_t, class data_t> class IMTHashTable
-{
-public:
-  IMTHashTable(int size, bool (*gc_func)(data_t) = NULL, void (*pre_gc_func)(void) = nullptr)
-  {
-    m_gc_func     = gc_func;
-    m_pre_gc_func = pre_gc_func;
-    bucket_num    = size;
-    cur_size      = 0;
-    buckets       = new HashTableEntry<key_t, data_t> *[bucket_num];
-    memset(buckets, 0, bucket_num * sizeof(HashTableEntry<key_t, data_t> *));
-  }
-  ~IMTHashTable() { reset(); }
-  int
-  getBucketNum()
-  {
-    return bucket_num;
-  }
-  int
-  getCurSize()
-  {
-    return cur_size;
-  }
-
-  int
-  bucket_id(key_t key, int a_bucket_num)
-  {
-    return (int)(((key >> MT_HASHTABLE_PARTITION_BITS) ^ key) % a_bucket_num);
-  }
-
-  int
-  bucket_id(key_t key)
-  {
-    return bucket_id(key, bucket_num);
-  }
-
-  void
-  reset()
-  {
-    HashTableEntry<key_t, data_t> *tmp;
-    for (int i = 0; i < bucket_num; i++) {
-      tmp = buckets[i];
-      while (tmp) {
-        buckets[i] = tmp->next;
-        HashTableEntry<key_t, data_t>::free(tmp);
-        tmp = buckets[i];
-      }
-    }
-    delete[] buckets;
-    buckets = NULL;
-  }
-
-  data_t insert_entry(key_t key, data_t data);
-  data_t remove_entry(key_t key);
-  data_t lookup_entry(key_t key);
-
-  data_t first_entry(int bucket_id, HashTableIteratorState<key_t, data_t> *s);
-  static data_t next_entry(HashTableIteratorState<key_t, data_t> *s);
-  static data_t cur_entry(HashTableIteratorState<key_t, data_t> *s);
-  data_t remove_entry(HashTableIteratorState<key_t, data_t> *s);
-
-  void
-  GC(void)
-  {
-    if (m_gc_func == NULL) {
-      return;
-    }
-    if (m_pre_gc_func) {
-      m_pre_gc_func();
-    }
-    for (int i = 0; i < bucket_num; i++) {
-      HashTableEntry<key_t, data_t> *cur  = buckets[i];
-      HashTableEntry<key_t, data_t> *prev = NULL;
-      HashTableEntry<key_t, data_t> *next = NULL;
-      while (cur != NULL) {
-        next = cur->next;
-        if (m_gc_func(cur->data)) {
-          if (prev != NULL) {
-            prev->next = next;
-          } else {
-            buckets[i] = next;
-          }
-          ats_free(cur);
-          cur_size--;
-        } else {
-          prev = cur;
-        }
-        cur = next;
-      }
-    }
-  }
-
-  void
-  resize(int size)
-  {
-    int new_bucket_num                          = size;
-    HashTableEntry<key_t, data_t> **new_buckets = new HashTableEntry<key_t, data_t> *[new_bucket_num];
-    memset(new_buckets, 0, new_bucket_num * sizeof(HashTableEntry<key_t, data_t> *));
-
-    for (int i = 0; i < bucket_num; i++) {
-      HashTableEntry<key_t, data_t> *cur  = buckets[i];
-      HashTableEntry<key_t, data_t> *next = NULL;
-      while (cur != NULL) {
-        next                = cur->next;
-        int new_id          = bucket_id(cur->key, new_bucket_num);
-        cur->next           = new_buckets[new_id];
-        new_buckets[new_id] = cur;
-        cur                 = next;
-      }
-      buckets[i] = NULL;
-    }
-    delete[] buckets;
-    buckets    = new_buckets;
-    bucket_num = new_bucket_num;
-  }
-
-private:
-  HashTableEntry<key_t, data_t> **buckets;
-  int cur_size;
-  int bucket_num;
-  bool (*m_gc_func)(data_t);
-  void (*m_pre_gc_func)(void);
-
-private:
-  IMTHashTable();
-  IMTHashTable(IMTHashTable &);
-};
-
-/*
- * we can use ClassAllocator here if the malloc performance becomes a problem
- */
-
-template <class key_t, class data_t>
-inline data_t
-IMTHashTable<key_t, data_t>::insert_entry(key_t key, data_t data)
-{
-  int id                             = bucket_id(key);
-  HashTableEntry<key_t, data_t> *cur = buckets[id];
-  while (cur != NULL && cur->key != key) {
-    cur = cur->next;
-  }
-  if (cur != NULL) {
-    if (data == cur->data) {
-      return (data_t)0;
-    } else {
-      data_t tmp = cur->data;
-      cur->data  = data;
-      // potential memory leak, need to check the return value by the caller
-      return tmp;
-    }
-  }
-
-  HashTableEntry<key_t, data_t> *newEntry = HashTableEntry<key_t, data_t>::alloc();
-  newEntry->key                           = key;
-  newEntry->data                          = data;
-  newEntry->next                          = buckets[id];
-  buckets[id]                             = newEntry;
-  cur_size++;
-  if (cur_size / bucket_num > MT_HASHTABLE_MAX_CHAIN_AVG_LEN) {
-    GC();
-    if (cur_size / bucket_num > MT_HASHTABLE_MAX_CHAIN_AVG_LEN) {
-      resize(bucket_num * 2);
-    }
-  }
-  return (data_t)0;
-}
-
-template <class key_t, class data_t>
-inline data_t
-IMTHashTable<key_t, data_t>::remove_entry(key_t key)
-{
-  int id                              = bucket_id(key);
-  data_t ret                          = (data_t)0;
-  HashTableEntry<key_t, data_t> *cur  = buckets[id];
-  HashTableEntry<key_t, data_t> *prev = NULL;
-  while (cur != NULL && cur->key != key) {
-    prev = cur;
-    cur  = cur->next;
-  }
-  if (cur != NULL) {
-    if (prev != NULL) {
-      prev->next = cur->next;
-    } else {
-      buckets[id] = cur->next;
-    }
-    ret = cur->data;
-    HashTableEntry<key_t, data_t>::free(cur);
-    cur_size--;
-  }
-
-  return ret;
-}
-
-template <class key_t, class data_t>
-inline data_t
-IMTHashTable<key_t, data_t>::lookup_entry(key_t key)
-{
-  int id                             = bucket_id(key);
-  data_t ret                         = (data_t)0;
-  HashTableEntry<key_t, data_t> *cur = buckets[id];
-  while (cur != NULL && cur->key != key) {
-    cur = cur->next;
-  }
-  if (cur != NULL) {
-    ret = cur->data;
-  }
-  return ret;
-}
-
-template <class key_t, class data_t>
-inline data_t
-IMTHashTable<key_t, data_t>::first_entry(int bucket_id, HashTableIteratorState<key_t, data_t> *s)
-{
-  s->cur_buck = bucket_id;
-  s->ppcur    = &(buckets[bucket_id]);
-  if (*(s->ppcur) != NULL) {
-    return (*(s->ppcur))->data;
-  }
-  return (data_t)0;
-}
-
-template <class key_t, class data_t>
-inline data_t
-IMTHashTable<key_t, data_t>::next_entry(HashTableIteratorState<key_t, data_t> *s)
-{
-  if ((*(s->ppcur)) != NULL) {
-    s->ppcur = &((*(s->ppcur))->next);
-    if (*(s->ppcur) != NULL) {
-      return (*(s->ppcur))->data;
-    }
-  }
-  return (data_t)0;
-}
-
-template <class key_t, class data_t>
-inline data_t
-IMTHashTable<key_t, data_t>::cur_entry(HashTableIteratorState<key_t, data_t> *s)
-{
-  if (*(s->ppcur) == NULL) {
-    return (data_t)0;
-  }
-  return (*(s->ppcur))->data;
-}
-
-template <class key_t, class data_t>
-inline data_t
-IMTHashTable<key_t, data_t>::remove_entry(HashTableIteratorState<key_t, data_t> *s)
-{
-  data_t data                           = (data_t)0;
-  HashTableEntry<key_t, data_t> *pEntry = *(s->ppcur);
-  if (pEntry != NULL) {
-    data          = pEntry->data;
-    (*(s->ppcur)) = pEntry->next;
-    HashTableEntry<key_t, data_t>::free(pEntry);
-    cur_size--;
-  }
-  return data;
-}
-
-template <class key_t, class data_t> class MTHashTable
-{
-public:
-  MTHashTable(int size, bool (*gc_func)(data_t) = NULL, void (*pre_gc_func)(void) = nullptr)
-  {
-    for (int i = 0; i < MT_HASHTABLE_PARTITIONS; i++) {
-      locks[i]      = new_ProxyMutex();
-      hashTables[i] = new IMTHashTable<key_t, data_t>(size, gc_func, pre_gc_func);
-      // INIT_CHAIN_HEAD(&chain_heads[i]);
-      // last_GC_time[i] = 0;
-    }
-    //    cur_items = 0;
-  }
-  ~MTHashTable()
-  {
-    for (int i = 0; i < MT_HASHTABLE_PARTITIONS; i++) {
-      locks[i] = NULL;
-      delete hashTables[i];
-    }
-  }
-
-  ProxyMutex *
-  lock_for_key(key_t key)
-  {
-    return locks[part_num(key)].get();
-  }
-
-  int
-  getSize()
-  {
-    return MT_HASHTABLE_PARTITIONS;
-  }
-  int
-  part_num(key_t key)
-  {
-    return (int)(key & MT_HASHTABLE_PARTITION_MASK);
-  }
-  data_t
-  insert_entry(key_t key, data_t data)
-  {
-    // ink_atomic_increment(&cur_items, 1);
-    return hashTables[part_num(key)]->insert_entry(key, data);
-  }
-  data_t
-  remove_entry(key_t key)
-  {
-    // ink_atomic_increment(&cur_items, -1);
-    return hashTables[part_num(key)]->remove_entry(key);
-  }
-  data_t
-  lookup_entry(key_t key)
-  {
-    return hashTables[part_num(key)]->lookup_entry(key);
-  }
-
-  data_t
-  first_entry(int part_id, HashTableIteratorState<key_t, data_t> *s)
-  {
-    data_t ret = (data_t)0;
-    for (int i = 0; i < hashTables[part_id]->getBucketNum(); i++) {
-      ret = hashTables[part_id]->first_entry(i, s);
-      if (ret != (data_t)0) {
-        return ret;
-      }
-    }
-    return (data_t)0;
-  }
-
-  data_t
-  cur_entry(int part_id, HashTableIteratorState<key_t, data_t> *s)
-  {
-    data_t data = IMTHashTable<key_t, data_t>::cur_entry(s);
-    if (!data) {
-      data = next_entry(part_id, s);
-    }
-    return data;
-  };
-  data_t
-  next_entry(int part_id, HashTableIteratorState<key_t, data_t> *s)
-  {
-    data_t ret = IMTHashTable<key_t, data_t>::next_entry(s);
-    if (ret != (data_t)0) {
-      return ret;
-    }
-    for (int i = s->cur_buck + 1; i < hashTables[part_id]->getBucketNum(); i++) {
-      ret = hashTables[part_id]->first_entry(i, s);
-      if (ret != (data_t)0) {
-        return ret;
-      }
-    }
-    return (data_t)0;
-  }
-  data_t
-  remove_entry(int part_id, HashTableIteratorState<key_t, data_t> *s)
-  {
-    // ink_atomic_increment(&cur_items, -1);
-    return hashTables[part_id]->remove_entry(s);
-  }
-
-private:
-  IMTHashTable<key_t, data_t> *hashTables[MT_HASHTABLE_PARTITIONS];
-  Ptr<ProxyMutex> locks[MT_HASHTABLE_PARTITIONS];
-  // MT_ListEntry chain_heads[MT_HASHTABLE_PARTITIONS];
-  // int last_GC_time[MT_HASHTABLE_PARTITIONS];
-  // int32_t cur_items;
-};
+/** @file
+
+  A brief file description
+
+  @section license License
+
+  Licensed to the Apache Software Foundation (ASF) under one
+  or more contributor license agreements.  See the NOTICE file
+  distributed with this work for additional information
+  regarding copyright ownership.  The ASF licenses this file
+  to you under the Apache License, Version 2.0 (the
+  "License"); you may not use this file except in compliance
+  with the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+ */
+
+/****************************************************************************
+
+  MT_hashtable.h
+
+  Multithread Safe Hash table implementation
+
+
+ ****************************************************************************/
+#pragma once
+
+#define MT_HASHTABLE_PARTITION_BITS 6
+#define MT_HASHTABLE_PARTITIONS (1 << MT_HASHTABLE_PARTITION_BITS)
+#define MT_HASHTABLE_PARTITION_MASK (MT_HASHTABLE_PARTITIONS - 1)
+#define MT_HASHTABLE_MAX_CHAIN_AVG_LEN 4
+template <class key_t, class data_t> struct HashTableEntry {
+  key_t key;
+  data_t data;
+  HashTableEntry *next;
+
+  static HashTableEntry *
+  alloc()
+  {
+    return (HashTableEntry *)ats_malloc(sizeof(HashTableEntry));
+  }
+
+  static void
+  free(HashTableEntry *entry)
+  {
+    ats_free(entry);
+  }
+};
+
+/*
+struct MT_ListEntry{
+  MT_ListEntry():next(NULL),prev(NULL){}
+  MT_ListEntry* next;
+  MT_ListEntry* prev;
+};
+
+#define INIT_CHAIN_HEAD(h) {(h)->next = (h)->prev = (h);}
+#define APPEND_TO_CHAIN(h, p) {(p)->next = (h)->next; (h)->next->prev = (p); (p)->prev = (h); (h)->next = (p);}
+#define REMOVE_FROM_CHAIN(p) {(p)->next->prev = (p)->prev; (p)->prev->next = (p)->next; (p)->prev = (p)->next = NULL;}
+#define GET_OBJ_PTR(p, type, offset) ((type*)((char*)(p) - offset))
+*/
+
+template <class key_t, class data_t> class HashTableIteratorState
+{
+public:
+  HashTableIteratorState() : cur_buck(-1), ppcur(NULL) {}
+  int cur_buck;
+  HashTableEntry<key_t, data_t> **ppcur;
+};
+
+template <class key_t, class data_t> class IMTHashTable
+{
+public:
+  IMTHashTable(int size, bool (*gc_func)(data_t) = NULL, void (*pre_gc_func)(void) = nullptr)
+  {
+    m_gc_func     = gc_func;
+    m_pre_gc_func = pre_gc_func;
+    bucket_num    = size;
+    cur_size      = 0;
+    buckets       = new HashTableEntry<key_t, data_t> *[bucket_num];
+    memset(buckets, 0, bucket_num * sizeof(HashTableEntry<key_t, data_t> *));
+  }
+  ~IMTHashTable() { reset(); }
+  int
+  getBucketNum()
+  {
+    return bucket_num;
+  }
+  int
+  getCurSize()
+  {
+    return cur_size;
+  }
+
+  int
+  bucket_id(key_t key, int a_bucket_num)
+  {
+    return (int)(((key >> MT_HASHTABLE_PARTITION_BITS) ^ key) % a_bucket_num);
+  }
+
+  int
+  bucket_id(key_t key)
+  {
+    return bucket_id(key, bucket_num);
+  }
+
+  void
+  reset()
+  {
+    HashTableEntry<key_t, data_t> *tmp;
+    for (int i = 0; i < bucket_num; i++) {
+      tmp = buckets[i];
+      while (tmp) {
+        buckets[i] = tmp->next;
+        HashTableEntry<key_t, data_t>::free(tmp);
+        tmp = buckets[i];
+      }
+    }
+    delete[] buckets;
+    buckets = NULL;
+  }
+
+  data_t insert_entry(key_t key, data_t data);
+  data_t remove_entry(key_t key);
+  data_t lookup_entry(key_t key);
+
+  data_t first_entry(int bucket_id, HashTableIteratorState<key_t, data_t> *s);
+  static data_t next_entry(HashTableIteratorState<key_t, data_t> *s);
+  static data_t cur_entry(HashTableIteratorState<key_t, data_t> *s);
+  data_t remove_entry(HashTableIteratorState<key_t, data_t> *s);
+
+  void
+  GC(void)
+  {
+    if (m_gc_func == NULL) {
+      return;
+    }
+    if (m_pre_gc_func) {
+      m_pre_gc_func();
+    }
+    for (int i = 0; i < bucket_num; i++) {
+      HashTableEntry<key_t, data_t> *cur  = buckets[i];
+      HashTableEntry<key_t, data_t> *prev = NULL;
+      HashTableEntry<key_t, data_t> *next = NULL;
+      while (cur != NULL) {
+        next = cur->next;
+        if (m_gc_func(cur->data)) {
+          if (prev != NULL) {
+            prev->next = next;
+          } else {
+            buckets[i] = next;
+          }
+          ats_free(cur);
+          cur_size--;
+        } else {
+          prev = cur;
+        }
+        cur = next;
+      }
+    }
+  }
+
+  void
+  resize(int size)
+  {
+    int new_bucket_num                          = size;
+    HashTableEntry<key_t, data_t> **new_buckets = new HashTableEntry<key_t, data_t> *[new_bucket_num];
+    memset(new_buckets, 0, new_bucket_num * sizeof(HashTableEntry<key_t, data_t> *));
+
+    for (int i = 0; i < bucket_num; i++) {
+      HashTableEntry<key_t, data_t> *cur  = buckets[i];
+      HashTableEntry<key_t, data_t> *next = NULL;
+      while (cur != NULL) {
+        next                = cur->next;
+        int new_id          = bucket_id(cur->key, new_bucket_num);
+        cur->next           = new_buckets[new_id];
+        new_buckets[new_id] = cur;
+        cur                 = next;
+      }
+      buckets[i] = NULL;
+    }
+    delete[] buckets;
+    buckets    = new_buckets;
+    bucket_num = new_bucket_num;
+  }
+
+private:
+  HashTableEntry<key_t, data_t> **buckets;
+  int cur_size;
+  int bucket_num;
+  bool (*m_gc_func)(data_t);
+  void (*m_pre_gc_func)(void);
+
+private:
+  IMTHashTable();
+  IMTHashTable(IMTHashTable &);
+};
+
+/*
+ * we can use ClassAllocator here if the malloc performance becomes a problem
+ */
+
+template <class key_t, class data_t>
+inline data_t
+IMTHashTable<key_t, data_t>::insert_entry(key_t key, data_t data)
+{
+  int id                             = bucket_id(key);
+  HashTableEntry<key_t, data_t> *cur = buckets[id];
+  while (cur != NULL && cur->key != key) {
+    cur = cur->next;
+  }
+  if (cur != NULL) {
+    if (data == cur->data) {
+      return (data_t)0;
+    } else {
+      data_t tmp = cur->data;
+      cur->data  = data;
+      // potential memory leak, need to check the return value by the caller
+      return tmp;
+    }
+  }
+
+  HashTableEntry<key_t, data_t> *newEntry = HashTableEntry<key_t, data_t>::alloc();
+  newEntry->key                           = key;
+  newEntry->data                          = data;
+  newEntry->next                          = buckets[id];
+  buckets[id]                             = newEntry;
+  cur_size++;
+  if (cur_size / bucket_num > MT_HASHTABLE_MAX_CHAIN_AVG_LEN) {
+    GC();
+    if (cur_size / bucket_num > MT_HASHTABLE_MAX_CHAIN_AVG_LEN) {
+      resize(bucket_num * 2);
+    }
+  }
+  return (data_t)0;
+}
+
+template <class key_t, class data_t>
+inline data_t
+IMTHashTable<key_t, data_t>::remove_entry(key_t key)
+{
+  int id                              = bucket_id(key);
+  data_t ret                          = (data_t)0;
+  HashTableEntry<key_t, data_t> *cur  = buckets[id];
+  HashTableEntry<key_t, data_t> *prev = NULL;
+  while (cur != NULL && cur->key != key) {
+    prev = cur;
+    cur  = cur->next;
+  }
+  if (cur != NULL) {
+    if (prev != NULL) {
+      prev->next = cur->next;
+    } else {
+      buckets[id] = cur->next;
+    }
+    ret = cur->data;
+    HashTableEntry<key_t, data_t>::free(cur);
+    cur_size--;
+  }
+
+  return ret;
+}
+
+template <class key_t, class data_t>
+inline data_t
+IMTHashTable<key_t, data_t>::lookup_entry(key_t key)
+{
+  int id                             = bucket_id(key);
+  data_t ret                         = (data_t)0;
+  HashTableEntry<key_t, data_t> *cur = buckets[id];
+  while (cur != NULL && cur->key != key) {
+    cur = cur->next;
+  }
+  if (cur != NULL) {
+    ret = cur->data;
+  }
+  return ret;
+}
+
+template <class key_t, class data_t>
+inline data_t
+IMTHashTable<key_t, data_t>::first_entry(int bucket_id, HashTableIteratorState<key_t, data_t> *s)
+{
+  s->cur_buck = bucket_id;
+  s->ppcur    = &(buckets[bucket_id]);
+  if (*(s->ppcur) != NULL) {
+    return (*(s->ppcur))->data;
+  }
+  return (data_t)0;
+}
+
+template <class key_t, class data_t>
+inline data_t
+IMTHashTable<key_t, data_t>::next_entry(HashTableIteratorState<key_t, data_t> *s)
+{
+  if ((*(s->ppcur)) != NULL) {
+    s->ppcur = &((*(s->ppcur))->next);
+    if (*(s->ppcur) != NULL) {
+      return (*(s->ppcur))->data;
+    }
+  }
+  return (data_t)0;
+}
+
+template <class key_t, class data_t>
+inline data_t
+IMTHashTable<key_t, data_t>::cur_entry(HashTableIteratorState<key_t, data_t> *s)
+{
+  if (*(s->ppcur) == NULL) {
+    return (data_t)0;
+  }
+  return (*(s->ppcur))->data;
+}
+
+template <class key_t, class data_t>
+inline data_t
+IMTHashTable<key_t, data_t>::remove_entry(HashTableIteratorState<key_t, data_t> *s)
+{
+  data_t data                           = (data_t)0;
+  HashTableEntry<key_t, data_t> *pEntry = *(s->ppcur);
+  if (pEntry != NULL) {
+    data          = pEntry->data;
+    (*(s->ppcur)) = pEntry->next;
+    HashTableEntry<key_t, data_t>::free(pEntry);
+    cur_size--;
+  }
+  return data;
+}
+
+template <class key_t, class data_t> class MTHashTable
+{
+public:
+  MTHashTable(int size, bool (*gc_func)(data_t) = NULL, void (*pre_gc_func)(void) = nullptr)
+  {
+    for (int i = 0; i < MT_HASHTABLE_PARTITIONS; i++) {
+      locks[i]      = new_ProxyMutex();
+      hashTables[i] = new IMTHashTable<key_t, data_t>(size, gc_func, pre_gc_func);
+      // INIT_CHAIN_HEAD(&chain_heads[i]);
+      // last_GC_time[i] = 0;
+    }
+    //    cur_items = 0;
+  }
+  ~MTHashTable()
+  {
+    for (int i = 0; i < MT_HASHTABLE_PARTITIONS; i++) {
+      locks[i] = NULL;
+      delete hashTables[i];
+    }
+  }
+
+  ProxyMutex *
+  lock_for_key(key_t key)
+  {
+    return locks[part_num(key)].get();
+  }
+
+  int
+  getSize()
+  {
+    return MT_HASHTABLE_PARTITIONS;
+  }
+  int
+  part_num(key_t key)
+  {
+    return (int)(key & MT_HASHTABLE_PARTITION_MASK);
+  }
+  data_t
+  insert_entry(key_t key, data_t data)
+  {
+    // ink_atomic_increment(&cur_items, 1);
+    return hashTables[part_num(key)]->insert_entry(key, data);
+  }
+  data_t
+  remove_entry(key_t key)
+  {
+    // ink_atomic_increment(&cur_items, -1);
+    return hashTables[part_num(key)]->remove_entry(key);
+  }
+  data_t
+  lookup_entry(key_t key)
+  {
+    return hashTables[part_num(key)]->lookup_entry(key);
+  }
+
+  data_t
+  first_entry(int part_id, HashTableIteratorState<key_t, data_t> *s)
+  {
+    data_t ret = (data_t)0;
+    for (int i = 0; i < hashTables[part_id]->getBucketNum(); i++) {
+      ret = hashTables[part_id]->first_entry(i, s);
+      if (ret != (data_t)0) {
+        return ret;
+      }
+    }
+    return (data_t)0;
+  }
+
+  data_t
+  cur_entry(int part_id, HashTableIteratorState<key_t, data_t> *s)
+  {
+    data_t data = IMTHashTable<key_t, data_t>::cur_entry(s);
+    if (!data) {
+      data = next_entry(part_id, s);
+    }
+    return data;
+  };
+  data_t
+  next_entry(int part_id, HashTableIteratorState<key_t, data_t> *s)
+  {
+    data_t ret = IMTHashTable<key_t, data_t>::next_entry(s);
+    if (ret != (data_t)0) {
+      return ret;
+    }
+    for (int i = s->cur_buck + 1; i < hashTables[part_id]->getBucketNum(); i++) {
+      ret = hashTables[part_id]->first_entry(i, s);
+      if (ret != (data_t)0) {
+        return ret;
+      }
+    }
+    return (data_t)0;
+  }
+  data_t
+  remove_entry(int part_id, HashTableIteratorState<key_t, data_t> *s)
+  {
+    // ink_atomic_increment(&cur_items, -1);
+    return hashTables[part_id]->remove_entry(s);
+  }
+
+private:
+  IMTHashTable<key_t, data_t> *hashTables[MT_HASHTABLE_PARTITIONS];
+  Ptr<ProxyMutex> locks[MT_HASHTABLE_PARTITIONS];
+  // MT_ListEntry chain_heads[MT_HASHTABLE_PARTITIONS];
+  // int last_GC_time[MT_HASHTABLE_PARTITIONS];
+  // int32_t cur_items;
+};
diff --git a/tests/gold_tests/body_factory/data/www.customplugin204.test_get.txt b/tests/gold_tests/body_factory/data/www.customplugin204.test_get.txt
index be603f9..4cf018a 100644
--- a/tests/gold_tests/body_factory/data/www.customplugin204.test_get.txt
+++ b/tests/gold_tests/body_factory/data/www.customplugin204.test_get.txt
@@ -1,2 +1,2 @@
-GET HTTP://www.customplugin204.test/ HTTP/1.1
-
+GET HTTP://www.customplugin204.test/ HTTP/1.1
+
diff --git a/tests/gold_tests/body_factory/data/www.customtemplate204.test_get.txt b/tests/gold_tests/body_factory/data/www.customtemplate204.test_get.txt
index 395d798..f16fbc7 100644
--- a/tests/gold_tests/body_factory/data/www.customtemplate204.test_get.txt
+++ b/tests/gold_tests/body_factory/data/www.customtemplate204.test_get.txt
@@ -1,2 +1,2 @@
-GET HTTP://www.customtemplate204.test/ HTTP/1.1
-
+GET HTTP://www.customtemplate204.test/ HTTP/1.1
+
diff --git a/tests/gold_tests/body_factory/data/www.default204.test_get.txt b/tests/gold_tests/body_factory/data/www.default204.test_get.txt
index e77408a..1a421dd 100644
--- a/tests/gold_tests/body_factory/data/www.default204.test_get.txt
+++ b/tests/gold_tests/body_factory/data/www.default204.test_get.txt
@@ -1,2 +1,2 @@
-GET HTTP://www.default204.test/ HTTP/1.1
-
+GET HTTP://www.default204.test/ HTTP/1.1
+
diff --git a/tests/gold_tests/body_factory/data/www.default304.test_get.txt b/tests/gold_tests/body_factory/data/www.default304.test_get.txt
index c9064fa..d251435 100644
--- a/tests/gold_tests/body_factory/data/www.default304.test_get.txt
+++ b/tests/gold_tests/body_factory/data/www.default304.test_get.txt
@@ -1,2 +1,2 @@
-GET HTTP://www.default304.test/ HTTP/1.1
-
+GET HTTP://www.default304.test/ HTTP/1.1
+
diff --git a/tests/gold_tests/body_factory/data/www.example.test_get_200.txt b/tests/gold_tests/body_factory/data/www.example.test_get_200.txt
index c53681f..5994603 100644
--- a/tests/gold_tests/body_factory/data/www.example.test_get_200.txt
+++ b/tests/gold_tests/body_factory/data/www.example.test_get_200.txt
@@ -1,3 +1,3 @@
-GET /get200 HTTP/1.1
-Host: www.example.test
-
+GET /get200 HTTP/1.1
+Host: www.example.test
+
diff --git a/tests/gold_tests/body_factory/data/www.example.test_get_304.txt b/tests/gold_tests/body_factory/data/www.example.test_get_304.txt
index 8d0aecf..03a8d59 100644
--- a/tests/gold_tests/body_factory/data/www.example.test_get_304.txt
+++ b/tests/gold_tests/body_factory/data/www.example.test_get_304.txt
@@ -1,4 +1,4 @@
-GET /get304 HTTP/1.1
-Host: www.example.test
-If-Modified-Since: Thu, 1 Jan 1970 00:00:00 GMT
-
+GET /get304 HTTP/1.1
+Host: www.example.test
+If-Modified-Since: Thu, 1 Jan 1970 00:00:00 GMT
+
diff --git a/tests/gold_tests/body_factory/data/www.example.test_head.txt b/tests/gold_tests/body_factory/data/www.example.test_head.txt
index c5a5c97..514c107 100644
--- a/tests/gold_tests/body_factory/data/www.example.test_head.txt
+++ b/tests/gold_tests/body_factory/data/www.example.test_head.txt
@@ -1,3 +1,3 @@
-HEAD http://www.example.test/ HTTP/1.1
-Host: www.example.test
-
+HEAD http://www.example.test/ HTTP/1.1
+Host: www.example.test
+
diff --git a/tests/gold_tests/body_factory/data/www.example.test_head_200.txt b/tests/gold_tests/body_factory/data/www.example.test_head_200.txt
index 6d214e1..fc0f1d5 100644
--- a/tests/gold_tests/body_factory/data/www.example.test_head_200.txt
+++ b/tests/gold_tests/body_factory/data/www.example.test_head_200.txt
@@ -1,3 +1,3 @@
-HEAD /head200 HTTP/1.1
-Host: www.example.test
-
+HEAD /head200 HTTP/1.1
+Host: www.example.test
+
diff --git a/tests/gold_tests/headers/data/www.passthrough.test_get.txt b/tests/gold_tests/headers/data/www.passthrough.test_get.txt
index 0cba742..ffe69c2 100644
--- a/tests/gold_tests/headers/data/www.passthrough.test_get.txt
+++ b/tests/gold_tests/headers/data/www.passthrough.test_get.txt
@@ -1,2 +1,2 @@
-GET http://www.passthrough.test/ HTTP/1.1
-
+GET http://www.passthrough.test/ HTTP/1.1
+
diff --git a/tests/gold_tests/headers/data/www.redirect0.test_get.txt b/tests/gold_tests/headers/data/www.redirect0.test_get.txt
index 40fce98..3d3d219 100644
--- a/tests/gold_tests/headers/data/www.redirect0.test_get.txt
+++ b/tests/gold_tests/headers/data/www.redirect0.test_get.txt
@@ -1,2 +1,2 @@
-GET http://www.redirect0.test/ HTTP/1.1
-
+GET http://www.redirect0.test/ HTTP/1.1
+
diff --git a/tests/gold_tests/headers/data/www.redirect301.test_get.txt b/tests/gold_tests/headers/data/www.redirect301.test_get.txt
index 4835267..e51c353 100644
--- a/tests/gold_tests/headers/data/www.redirect301.test_get.txt
+++ b/tests/gold_tests/headers/data/www.redirect301.test_get.txt
@@ -1,2 +1,2 @@
-GET http://www.redirect301.test/ HTTP/1.1
-
+GET http://www.redirect301.test/ HTTP/1.1
+
diff --git a/tests/gold_tests/headers/data/www.redirect302.test_get.txt b/tests/gold_tests/headers/data/www.redirect302.test_get.txt
index 6aae266..35f7750 100644
--- a/tests/gold_tests/headers/data/www.redirect302.test_get.txt
+++ b/tests/gold_tests/headers/data/www.redirect302.test_get.txt
@@ -1,2 +1,2 @@
-GET http://www.redirect302.test/ HTTP/1.1
-
+GET http://www.redirect302.test/ HTTP/1.1
+
diff --git a/tests/gold_tests/headers/data/www.redirect307.test_get.txt b/tests/gold_tests/headers/data/www.redirect307.test_get.txt
index b37c8ae..49bb1bc 100644
--- a/tests/gold_tests/headers/data/www.redirect307.test_get.txt
+++ b/tests/gold_tests/headers/data/www.redirect307.test_get.txt
@@ -1,2 +1,2 @@
-GET http://www.redirect307.test/ HTTP/1.1
-
+GET http://www.redirect307.test/ HTTP/1.1
+
diff --git a/tests/gold_tests/headers/data/www.redirect308.test_get.txt b/tests/gold_tests/headers/data/www.redirect308.test_get.txt
index 05bcbf8..097e95e 100644
--- a/tests/gold_tests/headers/data/www.redirect308.test_get.txt
+++ b/tests/gold_tests/headers/data/www.redirect308.test_get.txt
@@ -1,2 +1,2 @@
-GET http://www.redirect308.test/ HTTP/1.1
-
+GET http://www.redirect308.test/ HTTP/1.1
+
diff --git a/tests/gold_tests/headers/general-connection-failure-502.gold b/tests/gold_tests/headers/general-connection-failure-502.gold
index 4749e20..a5e2732 100644
--- a/tests/gold_tests/headers/general-connection-failure-502.gold
+++ b/tests/gold_tests/headers/general-connection-failure-502.gold
@@ -1,10 +1,10 @@
-HTTP/1.1 502 connect failed
-Connection: keep-alive
-Cache-Control: no-store
-Content-Type: text/html
-Content-Language: en
-Content-Length: 247
-
+HTTP/1.1 502 connect failed
+Connection: keep-alive
+Cache-Control: no-store
+Content-Type: text/html
+Content-Language: en
+Content-Length: 247
+
 <HTML>
 <HEAD>
 <TITLE>Could Not Connect</TITLE>
diff --git a/tests/gold_tests/pluginTest/url_sig/url_sig.gold b/tests/gold_tests/pluginTest/url_sig/url_sig.gold
index 256898b..12043d8 100644
--- a/tests/gold_tests/pluginTest/url_sig/url_sig.gold
+++ b/tests/gold_tests/pluginTest/url_sig/url_sig.gold
@@ -1,15 +1,15 @@
-< HTTP/1.1 403 Forbidden
-< HTTP/1.1 403 Forbidden
-< HTTP/1.1 403 Forbidden
-< HTTP/1.1 403 Forbidden
-< HTTP/1.1 403 Forbidden
-< HTTP/1.1 403 Forbidden
-< HTTP/1.1 403 Forbidden
-< HTTP/1.1 403 Forbidden
-< HTTP/1.1 403 Forbidden
-< HTTP/1.1 200 OK
-< HTTP/1.1 200 OK
-< HTTP/1.1 200 OK
-< HTTP/1.1 200 OK
-< HTTP/1.1 200 OK
-< HTTP/1.1 200 OK
+< HTTP/1.1 403 Forbidden
+< HTTP/1.1 403 Forbidden
+< HTTP/1.1 403 Forbidden
+< HTTP/1.1 403 Forbidden
+< HTTP/1.1 403 Forbidden
+< HTTP/1.1 403 Forbidden
+< HTTP/1.1 403 Forbidden
+< HTTP/1.1 403 Forbidden
+< HTTP/1.1 403 Forbidden
+< HTTP/1.1 200 OK
+< HTTP/1.1 200 OK
+< HTTP/1.1 200 OK
+< HTTP/1.1 200 OK
+< HTTP/1.1 200 OK
+< HTTP/1.1 200 OK
diff --git a/tests/gold_tests/pluginTest/xdebug/x_remap/out.gold b/tests/gold_tests/pluginTest/xdebug/x_remap/out.gold
index cb8fd49..057b5ce 100644
--- a/tests/gold_tests/pluginTest/xdebug/x_remap/out.gold
+++ b/tests/gold_tests/pluginTest/xdebug/x_remap/out.gold
@@ -1,13 +1,13 @@
-HTTP/1.1 502 Cannot find server.
+HTTP/1.1 502 Cannot find server.
 Date:``
-Connection: keep-alive
+Connection: keep-alive
 Server:``
-Cache-Control: no-store
-Content-Type: text/html
-Content-Language: en
-X-Remap: from=Not-Found, to=Not-Found
-Content-Length: 391
-
+Cache-Control: no-store
+Content-Type: text/html
+Content-Language: en
+X-Remap: from=Not-Found, to=Not-Found
+Content-Length: 391
+
 <HTML>
 <HEAD>
 <TITLE>Unknown Host</TITLE>
@@ -26,36 +26,36 @@ name and try again.
 <HR>
 </BODY>
 ======
-HTTP/1.1 200 OK
+HTTP/1.1 200 OK
 Date:``
 Age:``
-Transfer-Encoding: chunked
-Connection: keep-alive
+Transfer-Encoding: chunked
+Connection: keep-alive
 Server:``
-X-Remap: from=http://one/, to=http://127.0.0.1:SERVER_PORT/
-
-0
-
+X-Remap: from=http://one/, to=http://127.0.0.1:SERVER_PORT/
+
+0
+
 ======
-HTTP/1.1 200 OK
+HTTP/1.1 200 OK
 Date:``
 Age:``
-Transfer-Encoding: chunked
-Connection: keep-alive
+Transfer-Encoding: chunked
+Connection: keep-alive
 Server:``
-X-Remap: from=http://two/, to=http://127.0.0.1:SERVER_PORT/
-
-0
-
+X-Remap: from=http://two/, to=http://127.0.0.1:SERVER_PORT/
+
+0
+
 ======
-HTTP/1.1 200 OK
+HTTP/1.1 200 OK
 Date:``
 Age:``
-Transfer-Encoding: chunked
-Connection: keep-alive
+Transfer-Encoding: chunked
+Connection: keep-alive
 Server:``
-X-Remap: from=http://three[0-9]+/, to=http://127.0.0.1:SERVER_PORT/
-
-0
-
+X-Remap: from=http://three[0-9]+/, to=http://127.0.0.1:SERVER_PORT/
+
+0
+
 ======
diff --git a/tests/gold_tests/redirect/gold/redirect.gold b/tests/gold_tests/redirect/gold/redirect.gold
index 9500bf9..3082e74 100644
--- a/tests/gold_tests/redirect/gold/redirect.gold
+++ b/tests/gold_tests/redirect/gold/redirect.gold
@@ -1,5 +1,5 @@
-HTTP/1.1 204 No Content
+HTTP/1.1 204 No Content
 ``
-Age: ``
-Connection: keep-alive
+Age: ``
+Connection: keep-alive