Posted to issues@ozone.apache.org by "runzhiwang (Jira)" <ji...@apache.org> on 2020/04/17 03:36:00 UTC

[jira] [Updated] (HDDS-3223) Read a big object cost 2 times more than write it by s3g

     [ https://issues.apache.org/jira/browse/HDDS-3223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

runzhiwang updated HDDS-3223:
-----------------------------
    Description:     (was: Through the s3 gateway, writing a 187 MB file takes 5 seconds, but reading it back takes 17 seconds. Both the write and the read split the 187 MB file into 24 parts, so each direction issues 24 POST/GET requests. I found that s3g handles the first 10 GET requests in parallel but the remaining 14 GET requests sequentially. I used {code:bash}tcpdump -i eth0 -s 0 -A 'tcp dst port 9878 and tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x47455420' -w read.cap{code} to capture the GET requests to the s3 gateway, as the first image shows: the first 10 GET requests arrive between 3.54 s and 3.56 s, while the next 14 GET requests are spread from 4.41 s to 12.23 s. I also captured the PUT requests to the s3 gateway; as the second image shows, all 24 PUT requests arrive between 0.63 s and 3.48 s, which is why writing is faster than reading. I suspect the cause lies in aws-cli and will continue to investigate.
 !screenshot-3.png!
 !screenshot-5.png! )
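
One way to probe the aws-cli hypothesis in the removed description: aws-cli's default S3 transfer settings are reportedly an 8 MB multipart chunk size and 10 concurrent requests, which would split a 187 MB object into 24 ranged GETs with only 10 in flight at a time, matching the capture. Below is a minimal sketch, assuming those defaults; the bucket/key names and local path are placeholders, not taken from this report.

{code:bash}
# Assumed aws-cli defaults: multipart_chunksize=8MB (187 MB => ~24 parts)
# and max_concurrent_requests=10 (only 10 ranged GETs in flight at once).
# Raise the concurrency so all 24 parts can be fetched in parallel.
aws configure set default.s3.multipart_chunksize 8MB
aws configure set default.s3.max_concurrent_requests 24

# Re-time the download through the Ozone s3 gateway (default port 9878).
# bucket1/bigfile and /tmp/bigfile are placeholder names.
time aws s3 cp s3://bucket1/bigfile /tmp/bigfile \
    --endpoint-url http://localhost:9878
{code}

If the read time drops close to the write time once the concurrency is raised, that would point to client-side scheduling in aws-cli rather than s3g itself.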

> Read a big object cost 2 times more than write it by s3g
> --------------------------------------------------------
>
>                 Key: HDDS-3223
>                 URL: https://issues.apache.org/jira/browse/HDDS-3223
>             Project: Hadoop Distributed Data Store
>          Issue Type: Improvement
>            Reporter: runzhiwang
>            Assignee: runzhiwang
>            Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: ozone-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: ozone-issues-help@hadoop.apache.org