Posted to issues@spark.apache.org by "Jungtaek Lim (Jira)" <ji...@apache.org> on 2023/07/04 01:24:00 UTC

[jira] [Commented] (SPARK-43311) RocksDB state store provider memory management enhancements

    [ https://issues.apache.org/jira/browse/SPARK-43311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17739761#comment-17739761 ] 

Jungtaek Lim commented on SPARK-43311:
--------------------------------------

[~ianmanning] 

Sorry, the Spark community has been avoiding bringing new features/improvements into bugfix versions. We are about to start the release phase for Spark 3.5.0, so stay tuned!

> RocksDB state store provider memory management enhancements
> -----------------------------------------------------------
>
>                 Key: SPARK-43311
>                 URL: https://issues.apache.org/jira/browse/SPARK-43311
>             Project: Spark
>          Issue Type: Improvement
>          Components: Structured Streaming
>    Affects Versions: 3.4.0
>            Reporter: Anish Shrigondekar
>            Assignee: Anish Shrigondekar
>            Priority: Major
>             Fix For: 3.5.0
>
>
> Today, when RocksDB is used as the state store provider, memory usage while writing via writeBatch is not capped. A related issue is that the state store coordinator can create multiple RocksDB instances on a single node without enforcing a global limit on native memory usage. Due to these issues, we could run into OOMs and task failures.
>  
> We are looking to improve this behavior through a series of changes such as:
>  * remove writeBatch and use native RocksDB operations
>  * use WriteBufferManager to enforce a global limit for all instances on a single node, accounting for memtable + filter/index block usage as part of the block cache (see the sketch below)
>
> With these changes we will avoid OOM issues around RocksDB native memory usage.
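>
> As an illustrative sketch (not the actual Spark patch), the RocksDB Java API lets multiple instances share one WriteBufferManager backed by a common block cache; the sizes and database path below are hypothetical:
> {code:java}
> import org.rocksdb.*;
>
> public class SharedWriteBufferSketch {
>   public static void main(String[] args) throws RocksDBException {
>     RocksDB.loadLibrary();
>
>     // Hypothetical budget: one shared cache for all RocksDB instances
>     // on the node; half of it is reserved for write buffers (memtables).
>     long totalBytes = 512L * 1024 * 1024;
>     LRUCache blockCache = new LRUCache(totalBytes);
>     WriteBufferManager wbm = new WriteBufferManager(totalBytes / 2, blockCache);
>
>     BlockBasedTableConfig tableConfig = new BlockBasedTableConfig()
>         .setBlockCache(blockCache)
>         // Charge filter/index blocks to the block cache so they count
>         // toward the shared limit as well.
>         .setCacheIndexAndFilterBlocks(true);
>
>     try (Options options = new Options()
>              .setCreateIfMissing(true)
>              .setWriteBufferManager(wbm)
>              .setTableFormatConfig(tableConfig);
>          RocksDB db = RocksDB.open(options, "/tmp/wbm-sketch")) {
>       db.put("k".getBytes(), "v".getBytes());
>     }
>   }
> }
> {code}
> Every instance opened against the shared manager draws from the same budget, so memtable growth in one store triggers flushes instead of unbounded native allocation.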



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org