Posted to issues@ignite.apache.org by "Semen Boikov (JIRA)" <ji...@apache.org> on 2017/04/06 10:45:41 UTC

[jira] [Resolved] (IGNITE-4661) Optimizations: optimize PagesList.removeDataPage

     [ https://issues.apache.org/jira/browse/IGNITE-4661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Semen Boikov resolved IGNITE-4661.
----------------------------------
    Resolution: Won't Fix

Tried various approaches with a hash map, but did not get performance improvements; closing the issue.

> Optimizations: optimize PagesList.removeDataPage
> ------------------------------------------------
>
>                 Key: IGNITE-4661
>                 URL: https://issues.apache.org/jira/browse/IGNITE-4661
>             Project: Ignite
>          Issue Type: Task
>          Components: cache
>            Reporter: Semen Boikov
>            Assignee: Igor Seliverstov
>             Fix For: 2.0
>
>         Attachments: Pagemem_benchmark_results.xlsx
>
>
> Optimization for the new PageMemory approach (IGNITE-3477, branch ignite-3477).
> Currently PagesList.removeDataPage requires a linear search by page ID; we need to check whether it makes sense to change the structure of a PagesList element from a list to a hash table.
> Here are links to the proposed hash table algorithm:
> http://codecapsule.com/2013/11/11/robin-hood-hashing
> http://codecapsule.com/2013/11/17/robin-hood-hashing-backward-shift-deletion/
> Note: with the hash table approach, 'take' from PagesList will require a linear search, so we will also need some heuristic to make it more efficient.
> For more details see:
> IgniteCacheOffheapManagerImpl.update -> FreeListImpl.insertDataRow, 
> IgniteCacheOffheapManagerImpl.update -> FreeListImpl.removeDataRowByLink.
> To check the result of the optimization, IgnitePutRandomValueSizeBenchmark can be used.
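
For readers unfamiliar with the Robin Hood scheme linked in the description, below is a minimal, standalone Java sketch of the idea: an open-addressing set of page IDs using Robin Hood hashing with backward-shift deletion, so a removal becomes an average O(1) probe instead of a linear scan, while taking an arbitrary page degrades to a scan, exactly as the note above points out. This is not the ignite-3477 PagesList code; the class name, the hash mix, and the assumption that 0 is never a valid page ID are illustrative choices only.

    /** Illustrative sketch only; not Ignite's PagesList implementation. */
    final class RobinHoodPageSet {
        private static final long EMPTY = 0L;  // assumption: 0 is never a valid page ID

        private long[] slots = new long[16];   // power-of-two capacity
        private int size;

        private int home(long pageId) {
            long h = pageId * 0x9E3779B97F4A7C15L;  // simple mix; not Ignite's hash
            return (int)(h ^ (h >>> 32)) & (slots.length - 1);
        }

        /** Distance of the entry sitting in 'slot' from its home bucket (its probe length). */
        private int dist(int slot, long pageId) {
            return (slot - home(pageId) + slots.length) & (slots.length - 1);
        }

        void add(long pageId) {
            if (size * 2 >= slots.length)
                resize();

            long cur = pageId;
            int slot = home(cur), d = 0;

            // Robin Hood insert: a "rich" resident (short probe length) yields its slot
            // to a "poor" incoming key (longer probe length), keeping probe sequences short.
            while (slots[slot] != EMPTY) {
                int residentDist = dist(slot, slots[slot]);
                if (residentDist < d) {
                    long tmp = slots[slot]; slots[slot] = cur; cur = tmp;
                    d = residentDist;
                }
                slot = (slot + 1) & (slots.length - 1);
                d++;
            }
            slots[slot] = cur;
            size++;
        }

        /** removeDataPage analogue: average O(1) probe instead of a linear scan. */
        boolean remove(long pageId) {
            int slot = home(pageId), d = 0;

            while (slots[slot] != EMPTY && dist(slot, slots[slot]) >= d) {
                if (slots[slot] == pageId) {
                    // Backward-shift deletion: pull the following entries back one slot
                    // until an empty slot or an entry already in its home bucket,
                    // so no tombstones are needed.
                    int next = (slot + 1) & (slots.length - 1);
                    while (slots[next] != EMPTY && dist(next, slots[next]) > 0) {
                        slots[slot] = slots[next];
                        slot = next;
                        next = (next + 1) & (slots.length - 1);
                    }
                    slots[slot] = EMPTY;
                    size--;
                    return true;
                }
                slot = (slot + 1) & (slots.length - 1);
                d++;
            }
            return false;
        }

        /** 'take' analogue: with a hash layout, grabbing an arbitrary page is a scan. */
        long takeAny() {
            for (int i = 0; i < slots.length; i++) {
                if (slots[i] != EMPTY) {
                    long pageId = slots[i];
                    remove(pageId);
                    return pageId;
                }
            }
            return EMPTY;
        }

        private void resize() {
            long[] old = slots;
            slots = new long[old.length * 2];
            size = 0;
            for (long id : old)
                if (id != EMPTY)
                    add(id);
        }
    }

Backward-shift deletion avoids tombstones, which matters for a structure whose entries are removed as often as they are added; the trade-off, visible in takeAny above, is that picking an arbitrary page is no longer a constant-time head removal as it is with a list.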


