Posted to dev@uniffle.apache.org by GitBox <gi...@apache.org> on 2022/07/27 08:09:44 UTC

[GitHub] [incubator-uniffle] colinmjj commented on issue #89: [Improvement] Add a load policy based on disk performance

colinmjj commented on issue #89:
URL: https://github.com/apache/incubator-uniffle/issues/89#issuecomment-1196400738

   @smallzhongfeng The workload of a Shuffle Server depends on many factors, e.g., memory, disk I/O, network I/O, etc. To simplify the assignment strategy, memory was chosen as the most important metric, because any problem in a shuffle server ultimately shows up as increased memory usage. In your case, if there is a problem with disk I/O, data won't be flushed as expected, and more and more data will accumulate in memory.
   Uniffle follows a kind of producer-consumer model with memory acting as the cache, so I think we can evaluate the workload from memory usage and make assignments accordingly.
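   To illustrate the idea, here is a minimal sketch (not actual Uniffle code; the `ServerNode` record and field names are hypothetical) of a memory-based assignment: servers with the most available memory are preferred, since disk or network problems surface as memory pressure in the cache.

   ```java
   import java.util.Comparator;
   import java.util.List;
   import java.util.stream.Collectors;

   // Hypothetical sketch of a memory-based assignment strategy.
   // Not Uniffle's real implementation; names are illustrative only.
   public class MemoryBasedAssignment {

       // Illustrative server record; a slow disk keeps data cached in
       // memory, so low available memory signals an overloaded server.
       record ServerNode(String id, long usedMemory, long totalMemory) {
           long availableMemory() { return totalMemory - usedMemory; }
       }

       // Pick the top-k servers with the most available memory.
       static List<ServerNode> assign(List<ServerNode> nodes, int k) {
           return nodes.stream()
                   .sorted(Comparator.comparingLong(ServerNode::availableMemory).reversed())
                   .limit(k)
                   .collect(Collectors.toList());
       }

       public static void main(String[] args) {
           List<ServerNode> nodes = List.of(
                   new ServerNode("a", 800, 1000),  // 200 free: flushes backed up
                   new ServerNode("b", 100, 1000),  // 900 free
                   new ServerNode("c", 400, 1000)); // 600 free
           assign(nodes, 2).forEach(n -> System.out.println(n.id()));
       }
   }
   ```

   The point is that this single metric indirectly captures disk and network health, without needing per-disk performance probes.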


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscribe@uniffle.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org