Posted to issues@ozone.apache.org by GitBox <gi...@apache.org> on 2022/04/10 13:58:19 UTC

[GitHub] [ozone] cchenax commented on pull request #3280: HDDS-6503 EC: Add ec write channel

cchenax commented on PR #3280:
URL: https://github.com/apache/ozone/pull/3280#issuecomment-1094280579

   > What is the performance problem you have observed with the current approach, and why is it bad if EC reads and writes use the RATIS or STANDALONE ports? I don't think we want a specific port for EC reads and writes. They should be able to go over the same existing ports. Reading and writing EC blocks works the same way as for STANDALONE blocks at the moment.
   
   
   
   > I have a couple of problems here as well... Let me explain.
   > 
   > In the gRPC client we use async requests for reads, and for EC we do so for writes as well. (See the shouldBlockAndWaitAsyncReply method in the XceiverClientGrpc class and in the ECXceiverClientGrpc class on the EC branch, and the usage of that method.)
   > 
   > What this effectively means is that for EC all the requests are async on the client side, so requests issued by the same client hit the server in parallel.
   > 
   > Let's see the server side... We have a netty server inside XceiverServerGrpc which uses a configured number of threads to process requests, with GrpcXceiverService configured as the service endpoint. Inside netty, the worker threads can receive requests in parallel, and request processing starts inside the netty worker thread. Even though we use stream-based gRPC communication, for every request on the client side we open a request stream to the server, send just one message and wait for it to complete, so the server processes every request in parallel within the netty worker threads.
   > 
   > All in all, this means that for EC we send the requests in parallel (and synchronize what needs to be synchronized at a higher level), while on the server side we also process the requests in parallel.
   > 
   > For me to believe that adding one more port on the server side, and processing read and write requests in a separate server-side netty instance, gives better performance, I need hard proof, either benchmarks or very well formed reasoning. Based on what I know so far, I would not just go and add one more port and one more server-side service implementation for that.
   > 
   > There is one more hassle with adding a new port name: please see the problems specific to extending the port list inside the DN in the [HDDS-4731](https://issues.apache.org/jira/browse/HDDS-4731) JIRA ticket; that problem has to be handled in this PR before the proposed changes are merged.
   
   OK, thank you very much. I will take a look at that patch about adding a port.
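
   To illustrate the client-side behaviour described in the quote above, here is a minimal, self-contained sketch (not Ozone code; all class and method names are hypothetical) of the pattern: every request is dispatched asynchronously, and the caller blocks on the reply only when the request type requires it, which is roughly the decision the shouldBlockAndWaitAsyncReply check makes per request.

   ```java
   import java.util.ArrayList;
   import java.util.List;
   import java.util.concurrent.CompletableFuture;
   import java.util.concurrent.ExecutorService;
   import java.util.concurrent.Executors;

   /**
    * Hypothetical sketch of "send everything async, block only when needed".
    * None of these names come from Ozone.
    */
   public class EcClientSketch {

     private final ExecutorService ioPool = Executors.newFixedThreadPool(4);

     /** Pretend this opens a per-request stream, sends one message and completes. */
     CompletableFuture<String> sendAsync(String request) {
       return CompletableFuture.supplyAsync(() -> "reply-to-" + request, ioPool);
     }

     /** Stand-in for the per-request "block and wait" decision. */
     boolean shouldBlockAndWait(String request) {
       return request.startsWith("read");   // e.g. block only when the result is needed right away
     }

     List<CompletableFuture<String>> submitAll(List<String> requests) {
       List<CompletableFuture<String>> pending = new ArrayList<>();
       for (String req : requests) {
         CompletableFuture<String> reply = sendAsync(req);   // always async on the wire
         if (shouldBlockAndWait(req)) {
           reply.join();                                      // caller synchronizes here when required
         }
         pending.add(reply);   // otherwise synchronization happens at a higher level
       }
       return pending;
     }

     public static void main(String[] args) {
       EcClientSketch client = new EcClientSketch();
       client.submitAll(List.of("read-chunk-1", "write-chunk-1", "write-chunk-2"))
           .forEach(f -> System.out.println(f.join()));
       client.ioPool.shutdown();
     }
   }
   ```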

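   Similarly, a rough sketch (again with purely hypothetical names) of the server-side behaviour described in the quote: a fixed pool of worker threads stands in for the configured netty event-loop threads, each request stream delivers a single message, and processing happens directly on the worker thread that received it, so concurrent client requests are handled in parallel up to the configured thread count.

   ```java
   import java.util.List;
   import java.util.concurrent.ExecutorService;
   import java.util.concurrent.Executors;

   /**
    * Hypothetical sketch of the server side: a fixed pool of worker threads
    * (standing in for the configured netty event-loop threads) with inline
    * processing of each message on the thread that received it.
    */
   public class EcServerSketch {

     // Stand-in for the configured number of netty worker threads.
     private final ExecutorService workers = Executors.newFixedThreadPool(10);

     /** Each client request arrives as a single message on its own stream. */
     void onRequest(String request) {
       workers.execute(() -> {
         // Processing happens here, on the worker thread that took the message
         // off the wire; there is no hand-off to another thread pool.
         System.out.println(Thread.currentThread().getName() + " processed " + request);
       });
     }

     public static void main(String[] args) {
       EcServerSketch server = new EcServerSketch();
       // Simulate several client requests arriving in parallel.
       List.of("write-1", "write-2", "read-1", "write-3").forEach(server::onRequest);
       server.workers.shutdown();
     }
   }
   ```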

-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@ozone.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org

