Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2020/07/06 16:38:23 UTC

[GitHub] [spark] tgravescs commented on pull request #28412: [SPARK-31608][CORE][WEBUI] Add a new type of KVStore to make loading UI faster

tgravescs commented on pull request #28412:
URL: https://github.com/apache/spark/pull/28412#issuecomment-654344387


   Thanks for getting the numbers. In general I agree; I would rather overestimate the memory usage, since GC pressure and OOMs are bad. In practice, if people find the store uses far less memory for their use cases, they can set spark.history.store.hybridStore.maxMemoryUsage larger than the real usage to fit more apps.
   
   One question I do have: how much is memory usage affected by the number of tasks? I imagine tasks could account for a lot of the usage here with 1000 tasks. I'm curious, for instance, about the case where someone runs a lot of very small applications. I guess it could go the other way as well, if the cluster mainly runs huge jobs with 50000 tasks each.
   
   Without more stats, I would think 10x for Zstd and 4x for other codecs seems pretty safe.
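   The safety-factor idea above could be sketched as follows. This is an illustrative sketch only, not the actual HybridStore code: the function names, the codec check, and the exact multipliers (10x for Zstd, 4x otherwise, per the discussion) are assumptions for this example.

   ```python
   from typing import Optional

   # Assumed expansion factors from the discussion above: parsed event data
   # is estimated to take ~10x the on-disk size for Zstd-compressed logs,
   # ~4x for other codecs. These are deliberately pessimistic so the store
   # fails safe (skips in-memory loading) rather than risking GC/OOM.
   ZSTD_MULTIPLIER = 10
   DEFAULT_MULTIPLIER = 4

   def estimated_memory_usage(log_size_bytes: int, codec: Optional[str]) -> int:
       """Overestimate the in-memory footprint of an event log."""
       multiplier = ZSTD_MULTIPLIER if codec == "zstd" else DEFAULT_MULTIPLIER
       return log_size_bytes * multiplier

   def fits_in_hybrid_store(log_size_bytes: int, codec: Optional[str],
                            current_usage: int, max_memory_usage: int) -> bool:
       """Admit an app into the in-memory store only if the pessimistic
       estimate still fits under the configured cap (the value of
       spark.history.store.hybridStore.maxMemoryUsage)."""
       estimate = estimated_memory_usage(log_size_bytes, codec)
       return current_usage + estimate <= max_memory_usage
   ```

   Under this scheme, raising maxMemoryUsage is exactly the knob a user would turn if the multipliers prove too conservative for their workload.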

