Posted to solr-user@lucene.apache.org by Narayana B <na...@gmail.com> on 2016/08/18 15:51:18 UTC

Solr 6.1.0, zookeeper 3.4.8, Solrj and SolrCloud

Hi SolrTeam,

I see session expiry errors and my Solr indexing fails.

Please help me here; my infra details are shared below.

I have a total of 3 compute nodes:
pcam-stg-app-02, pcam-stg-app-03, pcam-stg-app-04

1) The 3 nodes are running ZooKeeper instances zoo1, zoo2, and zoo3

/apps/scm-core/zookeeper/zkData/zkData1/myid  value 1
/apps/scm-core/zookeeper/zkData/zkData2/myid  value 2
/apps/scm-core/zookeeper/zkData/zkData3/myid  value 3

My zoo1.cfg setup:

tickTime=2000
initLimit=5
syncLimit=2
dataDir=/apps/scm-core/zookeeper/zkData/zkData1
clientPort=2181
server.1=pcam-stg-app-01:2888:3888
server.2=pcam-stg-app-02:2888:3888
server.3=pcam-stg-app-03:2888:3888
server.4=pcam-stg-app-04:2888:3888
dataLogDir=/apps/scm-core/zookeeper/zkLogData/zkLogData1
# Default 64M, changed to 128M, represented in KiloBytes
preAllocSize=131072
# Default : 100000
snapCount=100
globalOutstandingLimit=1000
maxClientCnxns=100
autopurge.snapRetainCount=3
autopurge.purgeInterval=23
minSessionTimeout=40000
maxSessionTimeout=300000

[zk: pcam-stg-app-02:2181(CONNECTED) 0] ls /
[zookeeper, solr]
[zk: pcam-stg-app-02:2181(CONNECTED) 1] ls /solr
[configs, overseer, aliases.json, live_nodes, collections, overseer_elect,
security.json, clusterstate.json]



2) 2 nodes are running SolrCloud, with two Solr instances each:
    pcam-stg-app-03: Solr ports 8983 and 8984
    pcam-stg-app-04: Solr ports 8983 and 8984


Config upload to ZooKeeper:

server/scripts/cloud-scripts/zkcli.sh -zkhost
pcam-stg-app-02:2181,pcam-stg-app-03:2181,pcam-stg-app-04:2181/solr \
-cmd upconfig -confname scdata -confdir
/apps/scm-core/solr/solr-6.1.0/server/solr/configsets/data_driven_schema_configs/conf

Collection creation URL:

http://pcam-stg-app-03:8983/solr/admin/collections?action=CREATE&name=scdata_test&numShards=2&replicationFactor=2&maxShardsPerNode=2&createNodeSet=pcam-stg-app-03:8983_solr,pcam-stg-app-03:8984_solr,pcam-stg-app-04:8983_solr,pcam-stg-app-04:8984_solr&collection.configName=scdata
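For reference, the same collection can also be created from SolrJ instead of the HTTP API. A rough sketch, assuming the setter-style CollectionAdminRequest.Create available in SolrJ 6.1 (exact setter names may differ slightly between minor versions; the values mirror the URL above):

import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;
import org.apache.solr.client.solrj.response.CollectionAdminResponse;

public class CreateScdataTestCollection {
    public static void main(String[] args) throws Exception {
        String zkHosts =
            "pcam-stg-app-02:2181,pcam-stg-app-03:2181,pcam-stg-app-04:2181/solr";
        try (CloudSolrClient client = new CloudSolrClient.Builder().withZkHost(zkHosts).build()) {
            // Same parameters as the CREATE URL: 2 shards, replicationFactor 2,
            // maxShardsPerNode 2, config set "scdata".
            // The createNodeSet parameter from the URL is omitted in this sketch.
            CollectionAdminRequest.Create create = new CollectionAdminRequest.Create();
            create.setCollectionName("scdata_test");
            create.setConfigName("scdata");
            create.setNumShards(2);
            create.setReplicationFactor(2);
            create.setMaxShardsPerNode(2);
            CollectionAdminResponse rsp = create.process(client);
            System.out.println("CREATE status: " + rsp.getStatus());
        }
    }
}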

SolrJ client:

String zkHosts =
    "pcam-stg-app-02:2181,pcam-stg-app-03:2181,pcam-stg-app-04:2181/solr";
CloudSolrClient solrClient =
    new CloudSolrClient.Builder().withZkHost(zkHosts).build();
solrClient.setDefaultCollection("scdata_test");
solrClient.setParallelUpdates(true);

List<SolrInputDocument> cpnSpendSavingsList = new ArrayList<>();
// the list is populated here (setters called on each entry)

solrClient.addBeans(cpnSpendSavingsList);
solrClient.commit();




SessionExpired error for the collection

Why does this SessionExpired error come when I start a bulk insert/update to Solr?


org.apache.solr.common.SolrException: Could not load collection from ZK: scdata_test
        at org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:1047)
        at org.apache.solr.common.cloud.ZkStateReader$LazyCollectionRef.get(ZkStateReader.java:610)
        at org.apache.solr.common.cloud.ClusterState.getCollectionOrNull(ClusterState.java:211)
        at org.apache.solr.common.cloud.ClusterState.hasCollection(ClusterState.java:113)
        at org.apache.solr.client.solrj.impl.CloudSolrClient.getCollectionNames(CloudSolrClient.java:1239)
        at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:961)
        at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:934)
        at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
        at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:106)
        at org.apache.solr.client.solrj.SolrClient.addBeans(SolrClient.java:357)
        at org.apache.solr.client.solrj.SolrClient.addBeans(SolrClient.java:329)
        at com.cisco.pcam.spark.stream.HiveDataProcessStream.main(HiveDataProcessStream.java:165)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:664)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:169)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:192)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:111)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /collections/scdata_test/state.json

Re: Solr 6.1.0, zookeeper 3.4.8, Solrj and SolrCloud

Posted by danny teichthal <da...@gmail.com>.
Hi,
Not sure if it is related, but it could be. I see that you do this:

CloudSolrClient solrClient =
    new CloudSolrClient.Builder().withZkHost(zkHosts).build();

Are you creating a new client on each update?
If yes, pay attention that the SolrClient should be a singleton, created once and reused (see the sketch below).

Regarding the session timeout, what value did you set for zkClientTimeout?
The maxSessionTimeout parameter controls this timeout on the ZooKeeper side;
zkClientTimeout controls your client-side timeout.
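To illustrate both points, here is a minimal sketch, assuming SolrJ 6.1 and the CloudSolrClient setters for the ZooKeeper timeouts; the holder class name and the timeout values are just examples, not something taken from your code:

import org.apache.solr.client.solrj.impl.CloudSolrClient;

// Hypothetical holder that builds one shared CloudSolrClient and reuses it
// for every update, instead of creating a new client per batch.
public final class SolrClientHolder {
    private static final String ZK_HOSTS =
        "pcam-stg-app-02:2181,pcam-stg-app-03:2181,pcam-stg-app-04:2181/solr";

    private static final CloudSolrClient CLIENT = buildClient();

    private static CloudSolrClient buildClient() {
        CloudSolrClient client = new CloudSolrClient.Builder().withZkHost(ZK_HOSTS).build();
        client.setDefaultCollection("scdata_test");
        // Client-side ZooKeeper session timeout in ms (example value).
        // ZooKeeper clamps the negotiated session timeout into the
        // [minSessionTimeout, maxSessionTimeout] range from zoo.cfg,
        // i.e. 40000-300000 ms in your configuration.
        client.setZkClientTimeout(60000);
        // Time allowed for the initial ZooKeeper connection (example value).
        client.setZkConnectTimeout(15000);
        return client;
    }

    private SolrClientHolder() {}

    public static CloudSolrClient get() {
        return CLIENT;
    }
}

Every indexing path would then call SolrClientHolder.get() instead of building a new CloudSolrClient, and the client would be closed only once, at application shutdown.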


Session expiry can also be affected by:
1. Garbage collection pauses on the Solr node or on ZooKeeper.
2. Slow disk I/O.
3. Network latency.

You should check these metrics on your system at the time you got this
expiry to see whether any of them is related.
If your zkClientTimeout is set to a small value in addition to one of the
factors above, you could get many of these exceptions.





