Posted to issues@ignite.apache.org by "Kirill Tkalenko (Jira)" <ji...@apache.org> on 2021/09/01 09:02:00 UTC
[jira] [Comment Edited] (IGNITE-13558) GridCacheProcessor should implement better parallelization when restoring partition states on startup
[ https://issues.apache.org/jira/browse/IGNITE-13558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17407983#comment-17407983 ]
Kirill Tkalenko edited comment on IGNITE-13558 at 9/1/21, 9:01 AM:
-------------------------------------------------------------------
Results of the run on my laptop (groups: 59, total partitions: 105 790):
||PR||Run 1||Run 2||Run 3||
|9243|17 501 ms|20 812 ms|19 157 ms|
|9334|15 054 ms|18 500 ms|17 930 ms|
I propose focusing on the worker-per-partition approach.
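The worker-per-partition idea can be sketched as follows: instead of handing a whole cache group to one striped-pool thread, each partition becomes an independent task, so one oversized group no longer serializes the restore. This is a minimal, hedged sketch; the `Partition` record and `restorePartitionState` method are illustrative placeholders, not Ignite's actual API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class PerPartitionRestoreSketch {
    // Illustrative stand-in for a cache-group partition.
    record Partition(int grpId, int partId) {}

    static final AtomicInteger restored = new AtomicInteger();

    // Placeholder for reading a partition's persisted state; in Ignite this
    // would inspect the partition meta page, counters, etc.
    static void restorePartitionState(Partition p) {
        restored.incrementAndGet();
    }

    public static void main(String[] args) {
        ExecutorService pool =
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

        // Two groups of very different sizes: with group-per-thread, group 1
        // would dominate one thread; with a task per partition, all threads
        // stay busy until the backlog drains.
        List<Partition> parts = new ArrayList<>();
        for (int p = 0; p < 1024; p++) parts.add(new Partition(1, p));
        for (int p = 0; p < 16; p++) parts.add(new Partition(2, p));

        CompletableFuture<?>[] futs = parts.stream()
            .map(p -> CompletableFuture.runAsync(() -> restorePartitionState(p), pool))
            .toArray(CompletableFuture[]::new);

        CompletableFuture.allOf(futs).join();
        pool.shutdown();

        System.out.println("restored=" + restored.get()); // prints "restored=1040"
    }
}
```

The per-partition granularity is what the measured PRs differ on; the pool size and task submission style above are assumptions for illustration only.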
> GridCacheProcessor should implement better parallelization when restoring partition states on startup
> -----------------------------------------------------------------------------------------------------
>
> Key: IGNITE-13558
> URL: https://issues.apache.org/jira/browse/IGNITE-13558
> Project: Ignite
> Issue Type: Improvement
> Components: persistence
> Reporter: Sergey Chugunov
> Assignee: Denis Chudov
> Priority: Major
> Time Spent: 3h
> Remaining Estimate: 0h
>
> The GridCacheProcessor#restorePartitionStates method tries to use the striped pool to restore partition states in parallel, but the level of parallelization goes down only to one cache group per thread.
> This is not enough and does not utilize resources effectively when one cache group is much bigger than the others.
> We need to parallelize the restore process down to individual partitions to get the most out of the available resources and speed up node startup.
--
This message was sent by Atlassian Jira
(v8.3.4#803005)