Posted to solr-user@lucene.apache.org by Rodrigo Oliveira <ad...@gmail.com> on 2019/07/26 13:16:47 UTC

[SOLR] - Best Practices/node down

Hi,

Can anyone help me?

I have a Solr cluster with ZooKeeper (5 nodes, 48 GB RAM each,
Xms: 28 GB, Xmx: 32 GB). The big problem is my environment, because I am
in the process of migrating from MySQL to Solr.
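For reference, the heap is configured in solr.in.sh roughly like this (the file location and variable values here are from memory and depend on how Solr was installed):

```shell
# bin/solr.in.sh (or /etc/default/solr.in.sh, depending on the install)
SOLR_JAVA_MEM="-Xms28g -Xmx32g"
```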

In this case, I've had just 10% of my migration completed, and the problem
occurred.

The log shows the following messages.

*LOG1*

2399323.AUTOCREATED/lang/stopwords_sv.txt
2019-07-24 22:37:00.620 INFO
 (OverseerThreadFactory-13-thread-75-processing-n:54.54.54.152:8983_solr) [
  ] o.a.s.c.c.ZkConfigManager Copying zk node
/configs/_default/lang/stopwords_fi.txt to
/configs/list_2399323.AUTOCREATED/lang/stopwords_fi.txt
2019-07-24 22:37:00.681 INFO  (qtp1157726741-24363) [
x:list_2356456_shard2_replica_n4] o.a.s.h.a.CoreAdminOperation core create
command
qt=/admin/cores&coreNodeName=core_node7&collection.configName=list_2356456.AUTOCREATED&newCollection=true&name=list_2356456_shard2_replica_n4&action=CREATE&numShards=2&collection=list_2356456&shard=shard2&wt=javabin&version=2&replicaType=NRT
2019-07-24 22:37:00.760 INFO
 (OverseerThreadFactory-13-thread-75-processing-n:54.54.54.152:8983_solr) [
  ] o.a.s.c.c.ZkConfigManager Copying zk node
/configs/_default/lang/hyphenations_ga.txt to
/configs/list_2399323.AUTOCREATED/lang/hyphenations_ga.txt
2019-07-24 22:37:00.878 INFO  (qtp1157726741-24771) [c:list_2389896
s:shard1 r:core_node6 x:list_2389896_shard1_replica_n2]
o.a.s.c.ZkController Persisted config data to node
/configs/list_2389896.AUTOCREATED/managed-schema
2019-07-24 22:37:00.878 INFO  (zkCallback-5-thread-59) [   ]
o.a.s.s.ZkIndexSchemaReader A schema change: WatchedEvent
state:SyncConnected type:NodeDataChanged
path:/configs/list_2389896.AUTOCREATED/managed-schema, has occurred -
updating schema from ZooKeeper ...
2019-07-24 22:37:00.923 INFO
 (OverseerThreadFactory-13-thread-75-processing-n:54.54.54.152:8983_solr) [
  ] o.a.s.c.c.ZkConfigManager Copying zk node
/configs/_default/lang/stopwords_gl.txt to
/configs/list_2399323.AUTOCREATED/lang/stopwords_gl.txt
2019-07-24 22:37:00.923 INFO  (Thread-1476) [   ] o.a.s.c.SolrCore config
update listener called for core list_2389896_shard1_replica_n2
2019-07-24 22:37:01.033 INFO  (Thread-1476) [   ] o.a.s.c.SolrCore core
reload list_2389896_shard1_replica_n2
2019-07-24 22:37:01.068 INFO
 (OverseerThreadFactory-13-thread-75-processing-n:54.54.54.152:8983_solr) [
  ] o.a.s.c.c.ZkConfigManager Copying zk node /configs/_default/params.json
to /configs/list_2399323.AUTOCREATED/params.json
2019-07-24 22:37:01.134 INFO  (Thread-1471) [   ] o.a.s.c.CoreContainer
Reloading SolrCore 'list_1217965_shard2_replica_n6' using configuration
from collection list_1217965
2019-07-24 22:37:01.150 INFO  (Thread-1471) [c:list_1217965 s:shard2
r:core_node8 x:list_1217965_shard2_replica_n6] o.a.s.m.r.SolrJmxReporter
JMX monitoring for 'solr.core.list_1217965.shard2.replica_n6' (registry
'solr.core.list_1217965.shard2.replica_n6') enabled at server:
com.sun.jmx.mbeanserver.JmxMBeanServer@62bd765
2019-07-24 22:37:01.150 INFO  (Thread-1471) [c:list_1217965 s:shard2
r:core_node8 x:list_1217965_shard2_replica_n6] o.a.s.c.SolrCore
[[list_1217965_shard2_replica_n6] ] Opening new SolrCore at
[/solr/server/solr/list_1217965_shard2_replica_n6],
dataDir=[/solr/server/solr/list_1217965_shard2_replica_n6/data/]
2019-07-24 22:37:01.152 INFO  (Thread-1471) [c:list_1217965 s:shard2
r:core_node8 x:list_1217965_shard2_replica_n6] o.a.s.r.XSLTResponseWriter
xsltCacheLifetimeSeconds=5

*LOG2*

Running OOM killer script for process 21252 for Solr on port 8983
Killed process 21252

I got it: Out Of Memory.

Is there any workaround for this case? Any suggestions? Any tips? 10% of the
migration done and Out Of Memory. Why is Solr consuming all the memory?

Regards,

Re: [SOLR] - Best Practices/node down

Posted by Erick Erickson <er...@gmail.com>.
Also, you say "In this case, I've had just 10% of my migration completed". Exactly how are you migrating the data? And how much data are you moving? In particular, what are your commit settings?
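
As a general pattern, batching your updates and letting Solr schedule commits with commitWithin (instead of committing explicitly after every batch) keeps memory pressure down during bulk indexing. A rough sketch in Python; the URL, collection name, batch size, and commitWithin interval below are placeholders to adapt to your setup:

```python
import json
from itertools import islice
from urllib import request

def chunked(docs, size):
    """Yield successive batches of at most `size` docs."""
    it = iter(docs)
    while batch := list(islice(it, size)):
        yield batch

def index_batches(docs, solr_url="http://localhost:8983/solr/mycollection/update",
                  batch_size=500, commit_within_ms=10000):
    """POST docs to Solr in batches; commitWithin lets Solr schedule
    the commit instead of forcing one per batch."""
    for batch in chunked(docs, batch_size):
        req = request.Request(
            f"{solr_url}?commitWithin={commit_within_ms}",
            data=json.dumps(batch).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with request.urlopen(req) as resp:
            resp.read()
```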

Best,
Erick

> On Jul 26, 2019, at 9:16 AM, Rodrigo Oliveira <ad...@gmail.com> wrote:
> 
> In this case, I've had just 10% of my migration completed


Re: [SOLR] - Best Practices/node down

Posted by Rodrigo Oliveira <ad...@gmail.com>.
Hi,

Sorry, my explanation was incomplete.

My primary database is MongoDB; I am using Solr only for search.

Thank you for the tips and tricks. It's my first time using Solr.

Regards



On Fri, Jul 26, 2019 at 11:36, Shawn Heisey <ap...@elyograg.org>
wrote:

> On 7/26/2019 7:16 AM, Rodrigo Oliveira wrote:
> > I have a Solr cluster with ZooKeeper (5 nodes, 48 GB RAM each,
> > Xms: 28 GB, Xmx: 32 GB). The big problem is my environment, because I am
> > in the process of migrating from MySQL to Solr.
>
> Solr is not intended as a primary data store.  There are things related
> to primary data store usage that MySQL can do which Solr either can't do
> at all or has a difficult time doing.  Databases and search engines are
> each optimized for entirely different tasks.
>
> It is reasonable to have your data in both a database and a search
> engine ... but to *switch* from a database to Solr sounds like a really
> bad idea.
>
> Don't get me wrong... I'm one of the biggest fans of Solr you'll come
> across... but I am aware of its limitations as well as its strengths.
>
> If the amount of data involved is small, using Solr as a primary data
> store might prove to be worthwhile ... but if I examine everything you
> have said, it doesn't sound like the amount of data is small.
>
> > Running OOM killer script for process 21252 for Solr on port 8983
> > Killed process 21252
> >
> > I got it: Out Of Memory.
> >
> > Is there any workaround for this case? Any suggestions? Any tips? 10% of
> > the migration done and Out Of Memory. Why is Solr consuming all the memory?
>
> This log is generated by the OOM killer script.  It does not output any
> indication about WHY the error occurred.  It simply indicates when the
> error occurred and what it did in response -- which is to terminate Solr.
>
> There are several possible reasons for Java's OOME.  Only a couple of
> those actually involve running out of memory.  It might not be memory at
> all.  But to find out, you will need to find the actual
> OutOfMemoryError in solr.log or one of the rotated versions
> (assuming it got logged at all), which will indicate the root issue.
>
> There are precisely two solutions for OOME, and frequently only one of
> them is actually possible:  Increase the resource that ran out, or
> figure out how to change the configuration so the program requires less
> of that resource.  As already mentioned, you will need to figure out
> which resource was depleted.
>
> If you can't find the actual exception, analyzing the GC log that Solr
> writes might help determine whether the depleted resource was heap memory.
>
> Thanks,
> Shawn
>

Re: [SOLR] - Best Practices/node down

Posted by Shawn Heisey <ap...@elyograg.org>.
On 7/26/2019 7:16 AM, Rodrigo Oliveira wrote:
> I have a Solr cluster with ZooKeeper (5 nodes, 48 GB RAM each,
> Xms: 28 GB, Xmx: 32 GB). The big problem is my environment, because I am
> in the process of migrating from MySQL to Solr.

Solr is not intended as a primary data store.  There are things related 
to primary data store usage that MySQL can do which Solr either can't do 
at all or has a difficult time doing.  Databases and search engines are 
each optimized for entirely different tasks.

It is reasonable to have your data in both a database and a search 
engine ... but to *switch* from a database to Solr sounds like a really 
bad idea.

Don't get me wrong... I'm one of the biggest fans of Solr you'll come 
across... but I am aware of its limitations as well as its strengths.

If the amount of data involved is small, using Solr as a primary data 
store might prove to be worthwhile ... but if I examine everything you 
have said, it doesn't sound like the amount of data is small.

> Running OOM killer script for process 21252 for Solr on port 8983
> Killed process 21252
> 
> I got it: Out Of Memory.
>
> Is there any workaround for this case? Any suggestions? Any tips? 10% of the
> migration done and Out Of Memory. Why is Solr consuming all the memory?

This log is generated by the OOM killer script.  It does not output any 
indication about WHY the error occurred.  It simply indicates when the 
error occurred and what it did in response -- which is to terminate Solr.

There are several possible reasons for Java's OOME.  Only a couple of 
those actually involve running out of memory.  It might not be memory at 
all.  But to find out, you will need to find the actual 
OutOfMemoryError in solr.log or one of the rotated versions 
(assuming it got logged at all), which will indicate the root issue.
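
A quick way to hunt for it (the log directory below assumes a fairly standard install; adjust it for yours):

```shell
# find_oom DIR: scan solr.log and any rotated copies in DIR for the
# OOM root cause, printing the last few matching lines.
find_oom() {
  grep -h "OutOfMemoryError" "$1"/solr.log* 2>/dev/null | tail -n 5
}

# e.g.: find_oom /var/solr/logs
```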

There are precisely two solutions for OOME, and frequently only one of 
them is actually possible:  Increase the resource that ran out, or 
figure out how to change the configuration so the program requires less 
of that resource.  As already mentioned, you will need to figure out 
which resource was depleted.

If you can't find the actual exception, analyzing the GC log that Solr 
writes might help determine whether the depleted resource was heap memory.
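
As a rough sketch of what to look for: the telling lines are Full GC pauses where the heap stays nearly full even after collection, which is the classic signature of genuine heap exhaustion. Something like this, assuming the unified JVM GC log format (the regex may need adjusting for your Java version's log format):

```python
import re

# Matches heap-transition figures in unified GC log lines such as:
#   [...][gc] GC(123) Pause Full (Allocation Failure) 27123M->26988M(28672M) 1234.5ms
FULL_GC = re.compile(r"Pause Full.*?(\d+)M->(\d+)M\((\d+)M\)")

def full_gc_stats(lines):
    """Return (count, worst_after_mb, heap_mb) across Full GC pauses.

    worst_after_mb close to heap_mb means Full GCs are reclaiming
    almost nothing -- the heap really is exhausted.
    """
    count, worst_after, heap = 0, 0, 0
    for line in lines:
        m = FULL_GC.search(line)
        if m:
            count += 1
            worst_after = max(worst_after, int(m.group(2)))
            heap = int(m.group(3))
    return count, worst_after, heap
```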

Thanks,
Shawn