Posted to issues@ignite.apache.org by "Ivan Bessonov (Jira)" <ji...@apache.org> on 2024/01/30 08:46:00 UTC
[jira] [Resolved] (IGNITE-20067) Optimize "StorageUpdateHandler#handleUpdateAll"
[ https://issues.apache.org/jira/browse/IGNITE-20067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Ivan Bessonov resolved IGNITE-20067.
------------------------------------
Fix Version/s: 3.0.0-beta2
Reviewer: Ivan Bessonov
Resolution: Fixed
> Optimize "StorageUpdateHandler#handleUpdateAll"
> -----------------------------------------------
>
> Key: IGNITE-20067
> URL: https://issues.apache.org/jira/browse/IGNITE-20067
> Project: Ignite
> Issue Type: Improvement
> Reporter: Ivan Bessonov
> Assignee: Philipp Shergalis
> Priority: Major
> Labels: ignite-3
> Fix For: 3.0.0-beta2
>
> Time Spent: 1h 40m
> Remaining Estimate: 0h
>
> In the current implementation, the size of a single batch inside "runConsistently" is unpredictable, because the collection of rows comes directly from the message.
> Generally speaking, it's a good idea to make the scope of a single "runConsistently" smaller - it would lead to faster work in all storage engines:
> * for rocksdb, write batches would become smaller;
> * for page memory, spikes on checkpoint would become smaller.
> There are two criteria that we could use:
> * number of rows stored;
> * cumulative number of inserted bytes.
> Raft uses the same kind of approximation when batching log records, for example. This should not affect data consistency, because updateAll itself is idempotent by nature.
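The two criteria above (row count and cumulative bytes) can be sketched as a simple splitter that chops the incoming updateAll row collection into smaller batches, each of which would then be applied in its own "runConsistently" closure. This is only an illustrative sketch, not the actual Ignite 3 code; the class and method names (BatchSplitter, split, maxRows, maxBytes) are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the batching criteria described in the issue:
// close a batch once either the row-count cap or the cumulative-byte cap
// would be exceeded. Names are illustrative, not the Ignite 3 API.
public class BatchSplitter {
    public static List<List<byte[]>> split(List<byte[]> rows, int maxRows, long maxBytes) {
        List<List<byte[]>> batches = new ArrayList<>();
        List<byte[]> current = new ArrayList<>();
        long currentBytes = 0;

        for (byte[] row : rows) {
            // Close the current batch if it is full by rows, or if adding
            // this row would push it over the byte budget.
            if (!current.isEmpty()
                    && (current.size() >= maxRows || currentBytes + row.length > maxBytes)) {
                batches.add(current);
                current = new ArrayList<>();
                currentBytes = 0;
            }
            current.add(row);
            currentBytes += row.length;
        }
        if (!current.isEmpty()) {
            batches.add(current);
        }
        return batches;
    }
}
```

Because updateAll is idempotent, a crash between two such batches is safe: on retry, the rows from already-applied batches are simply written again with the same result.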
--
This message was sent by Atlassian Jira
(v8.20.10#820010)