Posted to dev@lucene.apache.org by "Shalin Shekhar Mangar (JIRA)" <ji...@apache.org> on 2014/05/31 07:33:01 UTC
[jira] [Commented] (SOLR-5808) collections?action=SPLITSHARD running out of heap space due to large segments
[ https://issues.apache.org/jira/browse/SOLR-5808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14014530#comment-14014530 ]
Shalin Shekhar Mangar commented on SOLR-5808:
---------------------------------------------
I just ran into this as well. A large segment on a 500M doc index (12GB heap) took down the node. I'll investigate and try to reduce the memory requirements.
> collections?action=SPLITSHARD running out of heap space due to large segments
> -----------------------------------------------------------------------------
>
> Key: SOLR-5808
> URL: https://issues.apache.org/jira/browse/SOLR-5808
> Project: Solr
> Issue Type: Bug
> Components: update
> Affects Versions: 4.7
> Reporter: Will Butler
> Assignee: Shalin Shekhar Mangar
> Labels: outofmemory, shard, split
>
> This issue is related to [https://issues.apache.org/jira/browse/SOLR-5214]. Although memory issues due to merging have been resolved, we still run out of memory when splitting a shard containing a large segment (created by optimizing). The Lucene MultiPassIndexSplitter is able to split the index without error.
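For reference, the standalone splitter mentioned above ships in the lucene-misc jar and can be run from the command line. A rough sketch of the invocation (jar versions and directory paths below are placeholders, not taken from this report):

```shell
# Split a large index into 2 parts using Lucene's MultiPassIndexSplitter.
# Adjust the classpath jars and paths to match your installation.
java -cp lucene-core-4.7.0.jar:lucene-misc-4.7.0.jar \
  org.apache.lucene.index.MultiPassIndexSplitter \
  -out /path/to/split-output -num 2 -seq /path/to/source-index
```

Each output part is a complete index containing a subset of the source documents; the optional -seq flag assigns contiguous document ranges to each part instead of round-robin distribution.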
--
This message was sent by Atlassian JIRA
(v6.2#6252)
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org