Posted to reviews@spark.apache.org by "HeartSaVioR (via GitHub)" <gi...@apache.org> on 2023/08/21 22:05:13 UTC

[GitHub] [spark] HeartSaVioR commented on pull request #42567: [SPARK-44878][SS] Disable strict limit for RocksDB write manager to avoid insertion exception on cache full

HeartSaVioR commented on PR #42567:
URL: https://github.com/apache/spark/pull/42567#issuecomment-1687107008

   I'd still like to understand how this differs from not capping the memory at all. Does capping still prevent RocksDB from using memory excessively, or is there effectively no difference between capping with a soft limit and not capping at all?
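
   For reference, my mental model of the difference, as a minimal sketch against the RocksJava API (the sizes are made-up and this is not the exact code in this PR):

   ```scala
   import org.rocksdb.{LRUCache, WriteBufferManager}

   val capBytes = 512L * 1024 * 1024 // illustrative per-executor cap

   // strictCapacityLimit = true: an insert that would push usage past the cap
   // fails, which is the "Insert failed due to LRU cache being full" path.
   val strictCache = new LRUCache(capBytes, -1, /* strictCapacityLimit = */ true)

   // strictCapacityLimit = false: the cap only drives eviction; inserts always
   // succeed, so usage can overshoot the cap and the process may OOM instead.
   val softCache = new LRUCache(capBytes, -1, /* strictCapacityLimit = */ false)

   // Memtable memory is charged against the same cache, so memtables and the
   // block cache share one budget under either mode.
   val writeBufferManager = new WriteBufferManager(capBytes / 2, softCache)
   ```

   If the soft cache still evicts aggressively around the cap, that's meaningfully different from no cap; if not, it degenerates to the uncapped behavior.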
   
   Also, there is another aspect to consider: an OOM kills the executor, which affects all queries running on it - stateful, stateless, and batch - whereas this error only affects stateful queries. If people set the limit on RocksDB memory usage with that trade-off in mind, soft limiting would break their intention, although they may still need to restart the cluster, or at least the executors, to apply a new RocksDB memory limit. It looks very tricky for users to adjust when the error happens...
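
   For context on the user-facing knobs (assuming the bounded-memory configs added in SPARK-43311; the value is illustrative):

   ```scala
   // Cap shared across all RocksDB state store instances on an executor.
   spark.conf.set("spark.sql.streaming.stateStore.rocksdb.boundedMemoryUsage", "true")
   spark.conf.set("spark.sql.streaming.stateStore.rocksdb.maxMemoryUsageMB", "2048")
   ```

   Users who size maxMemoryUsageMB to protect co-located stateless/batch workloads would lose that guarantee under a soft limit, and changing it effectively requires an executor restart.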
   
   Ideally we would need to rebalance the state when the memory limit is hit, but that is probably not happening in the short term.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org