Posted to issues@ignite.apache.org by "Stanilovsky Evgeny (JIRA)" <ji...@apache.org> on 2018/07/20 11:37:00 UTC
[jira] [Updated] (IGNITE-8892) Iterating over large dataset via ScanQuery can fail with OOME.
[ https://issues.apache.org/jira/browse/IGNITE-8892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Stanilovsky Evgeny updated IGNITE-8892:
---------------------------------------
Attachment: list-master.jfr
list-8892.jfr
> Iterating over large dataset via ScanQuery can fail with OOME.
> ---------------------------------------------------------------
>
> Key: IGNITE-8892
> URL: https://issues.apache.org/jira/browse/IGNITE-8892
> Project: Ignite
> Issue Type: Bug
> Components: cache
> Reporter: Andrew Mashenkov
> Assignee: Andrew Mashenkov
> Priority: Critical
> Labels: OutOfMemoryError
> Fix For: 2.7
>
> Attachments: ScanQueryOOM.java, list-8892.jfr, list-master.jfr
>
>
> It seems that iterating over a query iterator (at least ScanQuery, but others may be affected as well) on a client node causes a memory leak.
> The use case is quite simple.
> Start a server and a client, put a large amount of data into a cache, then iterate over all entries via ScanQuery.
> It looks like the JVM crashes with an OOM because the GridCacheDistributedQueryFuture.allCol map accumulates too many entries.
> I put 15M entries into the cache, and the client failed with an OOM after iterating over about 10M entries.
> In the heap dump I observed about 10M entries retained via GridCacheDistributedQueryFuture.
> We have to check whether the collection is cleared correctly and whether it is really necessary to collect all entries.
> Please find the reproducer attached (ScanQueryOOM.java).
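The scenario described above can be sketched against the public Ignite API roughly as follows. This is a hedged sketch, not the attached ScanQueryOOM.java reproducer: the cache name, entry count, and value size are illustrative, and a real reproducer would normally run the server in a separate JVM.

```java
import javax.cache.Cache;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;
import org.apache.ignite.configuration.IgniteConfiguration;

public class ScanQueryOomSketch {
    public static void main(String[] args) {
        // Server node (in a real reproducer this runs in a separate JVM).
        Ignite server = Ignition.start(
            new IgniteConfiguration().setIgniteInstanceName("server"));

        // Client node in the same JVM, under a different instance name.
        Ignite client = Ignition.start(
            new IgniteConfiguration().setIgniteInstanceName("client").setClientMode(true));

        IgniteCache<Integer, byte[]> cache = client.getOrCreateCache("test");

        // Load a large dataset (the report above used 15M entries).
        for (int i = 0; i < 15_000_000; i++)
            cache.put(i, new byte[64]);

        // Iterate over all entries via ScanQuery. On affected versions the
        // client heap fills up during this loop even though each entry is
        // only read once, because the query future retains processed results.
        long cnt = 0;
        try (QueryCursor<Cache.Entry<Integer, byte[]>> cur =
                 cache.query(new ScanQuery<Integer, byte[]>())) {
            for (Cache.Entry<Integer, byte[]> e : cur)
                cnt++;
        }

        System.out.println("Iterated entries: " + cnt);
    }
}
```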
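The suspected failure mode, a per-query collection that grows while iterating and is never cleared, can be mimicked with a self-contained sketch. Class and field names here are illustrative and only echo Ignite's allCol; this models the accumulation pattern, not Ignite's actual internals.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

/**
 * Toy model of the suspected leak: a query cursor that keeps every consumed
 * entry in an internal collection instead of dropping results once they have
 * been handed to the caller. The "allCol" name mirrors the field mentioned
 * in the report; the rest is hypothetical.
 */
public class LeakyCursorSketch implements Iterator<Integer> {
    final List<Integer> allCol = new ArrayList<>(); // never cleared -> O(n) heap growth
    private int next;
    private final int total;

    public LeakyCursorSketch(int total) { this.total = total; }

    @Override public boolean hasNext() { return next < total; }

    @Override public Integer next() {
        Integer entry = next++;
        allCol.add(entry); // entry stays reachable even after being consumed
        return entry;
    }

    public static void main(String[] args) {
        LeakyCursorSketch cur = new LeakyCursorSketch(1_000_000);
        long cnt = 0;
        while (cur.hasNext()) { cur.next(); cnt++; }
        // Every consumed entry is still reachable via allCol; with real cache
        // entries this is what would eventually exhaust the client heap.
        System.out.println("iterated=" + cnt + " retained=" + cur.allCol.size());
    }
}
```

With real key/value objects instead of boxed integers, retaining every consumed entry this way makes an OutOfMemoryError on a large scan only a matter of dataset size.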
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)