Posted to issues@camel.apache.org by "ASF GitHub Bot (JIRA)" <ji...@apache.org> on 2019/05/14 13:05:00 UTC

[jira] [Work logged] (CAMEL-13521) Add reverse proxy option in camel-netty4-http

     [ https://issues.apache.org/jira/browse/CAMEL-13521?focusedWorklogId=241698&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-241698 ]

ASF GitHub Bot logged work on CAMEL-13521:
------------------------------------------

                Author: ASF GitHub Bot
            Created on: 14/May/19 13:04
            Start Date: 14/May/19 13:04
    Worklog Time Spent: 10m 
      Work Description: zregvart commented on pull request #2911: CAMEL-13521: Add reverse proxy option in camel-netty4-http
URL: https://github.com/apache/camel/pull/2911
 
 
   This adds support for reverse proxy functionality to the `camel-netty4-http` component.
   
   While testing this I can see that the 99th-percentile latency is ~50 ms on my machine; I would love to know whether that can be optimized further. Transfer rate is definitely something worth looking into as well.
   
   In my tests I used a route like:
   
   ```java
   from("netty-http:proxy://0.0.0.0:8080")
       .toD("netty-http:"
           + "${headers." + Exchange.HTTP_SCHEME + "}://"
           + "${headers." + Exchange.HTTP_HOST + "}:"
           + "${headers." + Exchange.HTTP_PORT + "}")
   ```
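    
   For completeness, here's roughly how that route could be run standalone; a minimal sketch assuming plain `camel-core` wiring (the class name and `main` method are mine, not part of the PR):
    
    ```java
    // A minimal standalone sketch of running the proxy route above.
    // The class name and main() wiring are illustrative, not part of the PR.
    import org.apache.camel.Exchange;
    import org.apache.camel.builder.RouteBuilder;
    import org.apache.camel.impl.DefaultCamelContext;
    
    public class NettyHttpProxyExample {
        public static void main(String[] args) throws Exception {
            DefaultCamelContext context = new DefaultCamelContext();
            context.addRoutes(new RouteBuilder() {
                @Override
                public void configure() {
                    from("netty-http:proxy://0.0.0.0:8080")
                        .toD("netty-http:"
                            + "${headers." + Exchange.HTTP_SCHEME + "}://"
                            + "${headers." + Exchange.HTTP_HOST + "}:"
                            + "${headers." + Exchange.HTTP_PORT + "}");
                }
            });
            context.start();
            Thread.sleep(Long.MAX_VALUE); // keep the proxy running until killed
        }
    }
    ```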
   
   I also noticed some string/lambda allocations in the pipeline that we might want to take a look at.
   
   After some warmup, here's a test I ran with 1000 requests at a concurrency of 100:
   ```shell
   $ ab -c 100 -n 1000 -k -X localhost:8080 http://localhost:8000/hello
   This is ApacheBench, Version 2.3 <$Revision: 1843412 $>
   Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
   Licensed to The Apache Software Foundation, http://www.apache.org/
   
   Benchmarking localhost [through localhost:8080] (be patient)
   Completed 100 requests
   Completed 200 requests
   Completed 300 requests
   Completed 400 requests
   Completed 500 requests
   Completed 600 requests
   Completed 700 requests
   Completed 800 requests
   Completed 900 requests
   Completed 1000 requests
   Finished 1000 requests
   
   
   Server Software:        nginx/1.15.12
   Server Hostname:        localhost
   Server Port:            8000
   
   Document Path:          /hello
   Document Length:        13 bytes
   
   Concurrency Level:      100
   Time taken for tests:   0.534 seconds
   Complete requests:      1000
   Failed requests:        0
   Keep-Alive requests:    995
   Total transferred:      263975 bytes
   HTML transferred:       13000 bytes
   Requests per second:    1872.85 [#/sec] (mean)
   Time per request:       53.395 [ms] (mean)
   Time per request:       0.534 [ms] (mean, across all concurrent requests)
   Transfer rate:          482.80 [Kbytes/sec] received
   
   Connection Times (ms)
                 min  mean[+/-sd] median   max
   Connect:        0    1   3.1      0      15
   Processing:     1   50  26.0     46     145
   Waiting:        1   50  26.1     45     145
   Total:          1   51  26.2     47     149
   
   Percentage of the requests served within a certain time (ms)
     50%     47
     66%     59
     75%     66
     80%     71
     90%     86
     95%    100
     98%    114
     99%    123
    100%    149 (longest request)
   ```
   
   Versus going directly to the origin:
   ```shell
   $ ab -c 100 -n 1000 -k http://localhost:8000/hello
   This is ApacheBench, Version 2.3 <$Revision: 1843412 $>
   Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
   Licensed to The Apache Software Foundation, http://www.apache.org/
   
   Benchmarking localhost (be patient)
   Completed 100 requests
   Completed 200 requests
   Completed 300 requests
   Completed 400 requests
   Completed 500 requests
   Completed 600 requests
   Completed 700 requests
   Completed 800 requests
   Completed 900 requests
   Completed 1000 requests
   Finished 1000 requests
   
   
   Server Software:        nginx/1.15.12
   Server Hostname:        localhost
   Server Port:            8000
   
   Document Path:          /hello
   Document Length:        13 bytes
   
   Concurrency Level:      100
   Time taken for tests:   0.082 seconds
   Complete requests:      1000
   Failed requests:        0
   Keep-Alive requests:    1000
   Total transferred:      264000 bytes
   HTML transferred:       13000 bytes
   Requests per second:    12125.62 [#/sec] (mean)
   Time per request:       8.247 [ms] (mean)
   Time per request:       0.082 [ms] (mean, across all concurrent requests)
   Transfer rate:          3126.14 [Kbytes/sec] received
   
   Connection Times (ms)
                 min  mean[+/-sd] median   max
   Connect:        0    1   2.5      0      12
   Processing:     1    6  14.3      2      67
   Waiting:        1    6  14.2      2      67
   Total:          1    7  16.0      2      76
   
   Percentage of the requests served within a certain time (ms)
     50%      2
     66%      2
     75%      3
     80%      3
     90%     13
     95%     64
     98%     70
     99%     71
    100%     76 (longest request)
   
   ```
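    
   Side by side, that's ~1,873 req/s through the proxy versus ~12,126 req/s going direct (roughly 15% of the direct throughput), with the mean time per request going from ~8.2 ms to ~53.4 ms.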
   
   Submitted for review.
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


Issue Time Tracking
-------------------

            Worklog Id:     (was: 241698)
            Time Spent: 10m
    Remaining Estimate: 0h

> Add reverse proxy option in camel-netty4-http
> ---------------------------------------------
>
>                 Key: CAMEL-13521
>                 URL: https://issues.apache.org/jira/browse/CAMEL-13521
>             Project: Camel
>          Issue Type: New Feature
>          Components: camel-netty4-http
>            Reporter: Zoran Regvart
>            Assignee: Zoran Regvart
>            Priority: Major
>             Fix For: 3.0.0-M3
>
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> I think it would make sense to add support for reverse proxy operation in camel-netty4-http.
> With it, Camel can act as an HTTP proxy and perform some transformation/mediation/routing, making it easy to include in an architecture without much change to the client or the service.
> Perhaps adding a new protocol scheme {{proxy}} would be a good start, so the endpoint URI would look something like {{netty-http:proxy://0.0.0.0}}.
> I can work on a pull request to showcase this feature.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)