Posted to dev@thrift.apache.org by "Aron Sogor (JIRA)" <ji...@apache.org> on 2009/12/29 21:53:29 UTC
[jira] Created: (THRIFT-669) Use Http chunk encoding to do full duplex transfer in a single post
Use Http chunk encoding to do full duplex transfer in a single post
-------------------------------------------------------------------
Key: THRIFT-669
URL: https://issues.apache.org/jira/browse/THRIFT-669
Project: Thrift
Issue Type: Bug
Affects Versions: 0.2
Reporter: Aron Sogor
Attachments: TFullDuplexHttpClient.java
Instead of each method call being a separate POST, use a chunk-encoded request. If you look at the traffic in Wireshark, the payload is often much smaller than the HTTP header. With chunked encoding, the per-method overhead of the HTTP header is gone. In a simple test that fetches the time as an i32, switching from HTTP POST to chunked encoding cut latency from 100+ ms to ~40 ms per request, because the servlet container no longer had to process a "new request" for every call.
Moreover, I think that with Jetty and continuations, these long-running connections could scale and perform much better than the current HttpClient.
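The saving comes from the chunk framing itself: each call is prefixed only by a hex length and CRLF instead of a full HTTP header. A minimal sketch of that framing (frameChunk is a hypothetical helper for illustration, not code from the attached TFullDuplexHttpClient.java):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

class ChunkFraming {
    // Frame one method call's serialized payload as a single HTTP/1.1 chunk:
    // hex length, CRLF, payload bytes, CRLF. This few-byte prefix replaces
    // the full request header a separate POST would carry.
    static byte[] frameChunk(byte[] payload) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(Integer.toHexString(payload.length).getBytes(StandardCharsets.US_ASCII));
        out.write("\r\n".getBytes(StandardCharsets.US_ASCII));
        out.write(payload);
        out.write("\r\n".getBytes(StandardCharsets.US_ASCII));
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] chunk = frameChunk("time".getBytes(StandardCharsets.US_ASCII));
        // Show the frame with CRLFs made visible as '|'
        System.out.println(new String(chunk, StandardCharsets.US_ASCII).replace("\r\n", "|"));
    }
}
```

For a 4-byte payload the entire framing overhead is five bytes, versus a few hundred bytes of headers for a standalone POST.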
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
[jira] Commented: (THRIFT-669) Use Http chunk encoding to do full duplex transfer in a single post
Posted by "Aron Sogor (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/THRIFT-669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12795165#action_12795165 ]
Aron Sogor commented on THRIFT-669:
-----------------------------------
It is a single request, but with no buffering. HTTP chunked encoding allows the request and the response to each be sent in chunks, so each method call is one request chunk and each response is one response chunk.
In pseudo:
<HTTP REQUEST HEADER>
<chunk1 request>
<HTTP RESPONSE HEADER>
<chunk1 response>
<chunk2 request>
<chunk 2 response>
Here is a Wireshark capture (no color) of calling gettime in a loop:
CONNECT /ds/ HTTP/1.1
Host: localhost:8080
User-Agent: BattleNet
Transfer-Encoding: chunked
content-type: application/x-thrift
Accept: */*
11
........time.....HTTP/1.1 200 OK
Content-Type: application/x-thrift
Transfer-Encoding: chunked
Server: Jetty(7.0.1.v20091125)
18
........time............
11
........time.....
18
........time............
11
........time.....
<and many more>
This is different from the current HTTP client, where each method call creates a new HTTP request/response header pair.
Check with Wireshark :)
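The interleaved stream above can be consumed with a plain chunk reader: read the hex size line, read that many payload bytes, consume the trailing CRLF, and stop at the zero-length terminal chunk. A minimal sketch (readChunk is a hypothetical helper, not code from the attached client):

```java
import java.io.ByteArrayInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

class ChunkReader {
    // Read one HTTP/1.1 chunk from the stream; returns null on the
    // zero-length terminal chunk that ends the chunked body.
    static byte[] readChunk(InputStream in) throws IOException {
        StringBuilder sizeLine = new StringBuilder();
        int c;
        while ((c = in.read()) != -1 && c != '\n') {
            if (c != '\r') sizeLine.append((char) c);
        }
        int size = Integer.parseInt(sizeLine.toString().trim(), 16);
        if (size == 0) return null; // terminal chunk
        byte[] data = new byte[size];
        int off = 0;
        while (off < size) {
            int n = in.read(data, off, size - off);
            if (n == -1) throw new EOFException("stream ended mid-chunk");
            off += n;
        }
        in.read(); // consume trailing CR
        in.read(); // consume trailing LF
        return data;
    }

    public static void main(String[] args) throws IOException {
        // Simulate a captured chunked body: one 4-byte chunk, then terminator.
        InputStream in = new ByteArrayInputStream(
                "4\r\ntime\r\n0\r\n\r\n".getBytes(StandardCharsets.US_ASCII));
        byte[] chunk;
        while ((chunk = readChunk(in)) != null) {
            System.out.println(new String(chunk, StandardCharsets.US_ASCII));
        }
    }
}
```

In the real capture each chunk's payload is a serialized Thrift message (the "........time....." bytes), but the framing logic is the same.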
[jira] Commented: (THRIFT-669) Use Http chunk encoding to do full duplex transfer in a single post
Posted by "Aron Sogor (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/THRIFT-669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12836501#action_12836501 ]
Aron Sogor commented on THRIFT-669:
-----------------------------------
I wrote up some more explanation: http://hungariannotation.blogspot.com/2010/02/no-rpc-for-async-io-case-for-flash-as3.html
I also sent a patch for Java and AS3 in https://issues.apache.org/jira/browse/THRIFT-518.
[jira] Updated: (THRIFT-669) Use Http chunk encoding to do full duplex transfer in a single post
Posted by "Aron Sogor (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/THRIFT-669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Aron Sogor updated THRIFT-669:
------------------------------
Attachment: TFullDuplexHttpClient.java
Chunk-encoding client
[jira] Commented: (THRIFT-669) Use Http chunk encoding to do full duplex transfer in a single post
Posted by "Bryan Duxbury (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/THRIFT-669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12795152#action_12795152 ]
Bryan Duxbury commented on THRIFT-669:
--------------------------------------
Does this buffer more than one method call into a single request?
[jira] Commented: (THRIFT-669) Use Http chunk encoding to do full duplex transfer in a single post
Posted by "Bryan Duxbury (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/THRIFT-669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12905523#action_12905523 ]
Bryan Duxbury commented on THRIFT-669:
--------------------------------------
I'd kind of like to commit this patch, but there are a few oddities I think we might want to address.
In read(), there's an else case that prints some anonymous-looking variable to stdout. Is this intentional, or debugging?
In write(), the user agent is set to "BattleNet", which seems misleading. Maybe "TFullDuplexHttpClient" instead?
It would be nice to move the constructor closer to the top of the file.
open() should probably throw a TTransportException when it hits a problem instead of just printing the stack trace.
The file is indented with tabs. We use spaces.
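The open() point could look something like the sketch below. Hedged heavily: TTransportException here is a local stand-in for org.apache.thrift.transport.TTransportException so the example compiles without the Thrift jar, and open(boolean) is a hypothetical simplification of the real method, not the attached code.

```java
import java.io.IOException;

class OpenSketch {
    // Local stand-in for org.apache.thrift.transport.TTransportException,
    // so this sketch is self-contained.
    static class TTransportException extends Exception {
        TTransportException(Throwable cause) { super(cause); }
    }

    // Hypothetical open(): wrap and propagate I/O failures rather than
    // swallowing them with printStackTrace(), so callers can react.
    static void open(boolean failConnect) throws TTransportException {
        try {
            if (failConnect) throw new IOException("connection refused");
            // ... connect the socket, send the chunked request header ...
        } catch (IOException e) {
            throw new TTransportException(e);
        }
    }

    public static void main(String[] args) {
        try {
            open(true);
        } catch (TTransportException e) {
            System.out.println("caught: " + e.getCause().getMessage());
        }
    }
}
```

The caller now sees the failure as a transport-level exception it can retry or report, instead of a silently half-opened client.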