Posted to solr-user@lucene.apache.org by Darrell Burgan <Da...@infor.com> on 2014/02/03 17:48:20 UTC

SolrCloud multiple data center support

Hello, we are using Solr in a SolrCloud configuration, with two Solr instances running with three Zookeepers in a single data center. We presently have a single search index with about 35 million entries in it, about 60GB disk space on each of the two Solr servers (120GB total). I would expect our usage of Solr to grow to include other search indexes, and likely larger data volumes.

I'm writing because we're needing to grow beyond a single data center, with two (potentially incompatible) goals:


1.       We need to be able to have a hot disaster recovery site, in a completely separate data center, that has a near-realtime replica of the search index.


2.       We'd like to have the option to have multiple active/active data centers that each see and update the same search index, distributed across data centers.

The options I'm aware of from reading archives:


a.       Simply set up the remote Solr instances as active parts of the same SolrCloud cluster. This will  essentially involve us standing up multiple Zookeepers in the second data center, and multiple Solr instances, and they will all keep each other in sync magically. This will also solve both of our goals. However, I'm concerned about performance and whether SolrCloud is smart enough to route local search queries only to local Solr servers ... ? Also, how does such a cluster tolerate and recover from network partitions?


b.      The remote Solr instances form their own completely unrelated SolrCloud cluster. I have to invent some kind of replication logic of my own to sync data between them. This replication would have to be bidirectional to satisfy both of our goals. I strongly dislike this option since the application really should not concern itself with data distribution. But I'll do it if I must.

So my questions are:


-          Can anyone give me any guidance as to option a? Anyone using this in a real production setting? Words of wisdom? Does it work?


-          Are there any other options that I'm not considering?


-          What is Solr's answer to such configurations (we can't be alone in needing one)? Any big enhancements coming on the Solr road map to deal with this?

Thanks!
Darrell Burgan



Darrell Burgan | Chief Architect, PeopleAnswers
office: 214 445 2172 | mobile: 214 564 4450 | fax: 972 692 5386 | darrell.burgan@infor.com<ma...@infor.com> | http://www.infor.com



RE: SolrCloud multiple data center support

Posted by Darrell Burgan <Da...@infor.com>.
Let's say I am primarily interested in ensuring there is a DR copy of the search index replicated to the remote data center, that I do not want the Solr instances in the remote data center to be part of the SolrCloud cluster, and that I am willing to accept some downtime in bringing up a Solr cluster in the remote data center if we have to use it. Can I use the old HTTP-based replication from a remote slave against one of the SolrCloud servers to accomplish that?

Primary Data Center
	3 x Zookeeper
	2 x Solr (clustered via SolrCloud)
	1 x collection
	1 x shard

Remote Data Center
	1 x Solr (configured as standalone replication slave against one of the primary data center Solr servers)

Would this work to at least get the data to the remote data center in a reliable way?
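
For concreteness, I'm picturing the remote slave's solrconfig.xml containing something along these lines (the master URL is just a placeholder for one of the primary data center's SolrCloud nodes):

<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <!-- poll the core on one of the SolrCloud nodes in the primary data center -->
    <str name="masterUrl">http://solr1.primary.example.com:8983/solr/collection1</str>
    <str name="pollInterval">00:00:60</str>
  </lst>
</requestHandler>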

Thanks,
Darrell


-----Original Message-----
From: Shawn Heisey [mailto:solr@elyograg.org] 
Sent: Wednesday, February 05, 2014 12:39 AM
To: solr-user@lucene.apache.org
Subject: Re: SolrCloud multiple data center support

On 2/4/2014 10:14 PM, Darrell Burgan wrote:
> Interesting about the Zookeeper quorum problem. What if we were to run three Zookeepers in our primary data center and four in the backup data center. If we failed over, we wouldn't have a quorum, but we could kill one of the Zookeepers to restore a quorum, couldn't we? If we did extend the SolrCloud cluster into a second data center, wouldn't queries against the cluster be routed to the second data center sometimes? 

If you have seven zookeeper servers in your ensemble, at least four of them must be operational to have quorum.  With N instances, int(N/2)+1 of them need to be running.  In order to restore quorum when a data center outage takes out half your quorum, you would need to reconfigure each surviving instance in the cluster so that it had fewer servers in it, then restart all the ZK instances.  I have no idea what would happen when the down data center is restored, but to get it working right, you'd have to reconfigure and restart again.

Zookeeper simply isn't designed to deal with data center failure in a two-center scenario.  You can have a workable solution if you have at least three data centers and you assume that you won't ever have a situation where more than one goes down.  I don't know that you can make that assumption, of course.

If you have replicas for one collection in two data centers, SolrCloud will direct queries to all of the replicas, meaning that some of them will have high latency.  There is currently no logic to specify or prefer "local" replicas.

Right now the only viable solution with two data centers is independent SolrCloud installs that are kept up to date independently.

I've never looked at Flume.  My indexing program will update multiple independent copies of the index.  All my servers are in the same location, but it would theoretically work with multiple locations too.

Thanks,
Shawn


RE: SolrCloud multiple data center support

Posted by Darrell Burgan <Da...@infor.com>.
Here's what we've decided to do. Updates and deletes to our collections will no longer be applied directly to SolrCloud via Solrj. Instead, they will become messages on a topic that flows through a RabbitMQ exchange, where an agent in each data center subscribes to the topic with a queue specific to its data center. We will run each agent as a separate webapp inside the same Tomcat instance that hosts Solr itself, on each of our Solr servers. As messages come in, the agent receives them and uses Solrj to apply them directly to SolrCloud.

The key is RabbitMQ's ability to send the same message to multiple queues that subscribe to the same topic. If each data center sets up a single queue that subscribes to the correct topic, both data centers will receive all the update and delete messages, and will update their indexes accordingly. The net is we have two completely separate SolrCloud clusters, with 2 Solr servers and 3 Zookeepers each, which are all kept up to date in almost lock step.
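
To make the shape of the agent concrete, here is a rough sketch of the consumer side (the exchange, queue, host, collection, and field names are all made up, and error handling is pared down):

import java.io.IOException;

import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DefaultConsumer;
import com.rabbitmq.client.Envelope;

import org.apache.solr.client.solrj.impl.CloudSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class IndexUpdateAgent {
    public static void main(String[] args) throws Exception {
        // Each data center's agent talks to its own local SolrCloud via its local ZK ensemble.
        final CloudSolrServer solr = new CloudSolrServer("zk1:2181,zk2:2181,zk3:2181");
        solr.setDefaultCollection("people");

        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("rabbitmq.example.com");
        Channel channel = factory.newConnection().createChannel();

        // One topic exchange shared by all data centers; each DC binds its own durable
        // queue, so every DC gets its own copy of every update/delete message.
        channel.exchangeDeclare("search.updates", "topic", true);
        channel.queueDeclare("search.updates.dc1", true, false, false, null);
        channel.queueBind("search.updates.dc1", "search.updates", "people.#");

        channel.basicConsume("search.updates.dc1", false, new DefaultConsumer(channel) {
            @Override
            public void handleDelivery(String tag, Envelope env, AMQP.BasicProperties props,
                                       byte[] body) throws IOException {
                try {
                    // Pretend the body is "id|name"; a real agent would carry a richer payload.
                    String[] fields = new String(body, "UTF-8").split("\\|");
                    SolrInputDocument doc = new SolrInputDocument();
                    doc.addField("id", fields[0]);
                    doc.addField("name", fields[1]);
                    solr.add(doc);   // commits are left to autoCommit/commitWithin on the Solr side
                    getChannel().basicAck(env.getDeliveryTag(), false);
                } catch (Exception e) {
                    // Requeue on failure so the local index eventually catches up.
                    getChannel().basicNack(env.getDeliveryTag(), false, true);
                }
            }
        });
    }
}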

We're planning on using this capability both to provide for a hot disaster recovery backup in a remote data center, as well as to provide distributed active/active search indexes across many data centers. As long as every update/delete message goes into the same federated RabbitMQ exchange and queue, all data centers will receive the update/delete messages and keep their indexes up to date independently.

We're also talking to the folks at DataStax about their commercial product, which seems to layer Solr atop the Cassandra distributed data store. This might provide an even more elegant solution than what we're doing. But that is a bit further down the road.

Thanks for the help,
Darrell Burgan



-----Original Message-----
From: Darrell Burgan 
Sent: Wednesday, February 05, 2014 6:48 PM
To: solr-user@lucene.apache.org
Subject: RE: SolrCloud multiple data center support

Let's say I am primarily interested in ensuring there is a DR copy of the search index replicated to the remote data center, that I do not want the Solr instances in the remote data center to be part of the SolrCloud cluster, and that I am willing to accept some downtime in bringing up a Solr cluster in the remote data center if we have to use it. Can I use the old HTTP-based replication from a remote slave against one of the SolrCloud servers to accomplish that?

Primary Data Center
	3 x Zookeeper
	2 x Solr (clustered via SolrCloud)
	1 x collection
	1 x shard

Remote Data Center
	1 x Solr (configured as standalone replication slave against one of the primary data center Solr servers)

Would this work to at least get the data to the remote data center in a reliable way?

Thanks,
Darrell


Re: SolrCloud multiple data center support

Posted by Shawn Heisey <so...@elyograg.org>.
On 2/4/2014 10:14 PM, Darrell Burgan wrote:
> Interesting about the Zookeeper quorum problem. What if we were to run three Zookeepers in our primary data center and four in the backup data center. If we failed over, we wouldn't have a quorum, but we could kill one of the Zookeepers to restore a quorum, couldn't we? If we did extend the SolrCloud cluster into a second data center, wouldn't queries against the cluster be routed to the second data center sometimes? 

If you have seven zookeeper servers in your ensemble, at least four of
them must be operational to have quorum.  With N instances, int(N/2)+1
of them need to be running.  In order to restore quorum when a data
center outage takes out half your quorum, you would need to reconfigure
each surviving instance in the cluster so that it had fewer servers in
it, then restart all the ZK instances.  I have no idea what would happen
when the down data center is restored, but to get it working right,
you'd have to reconfigure and restart again.

Zookeeper simply isn't designed to deal with data center failure in a
two-center scenario.  You can have a workable solution if you have at
least three data centers and you assume that you won't ever have a
situation where more than one goes down.  I don't know that you can make
that assumption, of course.

If you have replicas for one collection in two data centers, SolrCloud
will direct queries to all of the replicas, meaning that some of them
will have high latency.  There is currently no logic to specify or
prefer "local" replicas.

Right now the only viable solution with two data centers is independent
SolrCloud installs that are kept up to date independently.

I've never looked at Flume.  My indexing program will update multiple
independent copies of the index.  All my servers are in the same
location, but it would theoretically work with multiple locations too.
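
Conceptually it is nothing more than a loop over independent SolrJ clients,
something like this sketch (URLs and field names are made up):

import java.util.Arrays;
import java.util.List;

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class DualWriteSketch {
    public static void main(String[] args) {
        // One client per independent install; each location is updated on its own.
        List<SolrServer> copies = Arrays.<SolrServer>asList(
                new HttpSolrServer("http://solr-dc1.example.com:8983/solr/collection1"),
                new HttpSolrServer("http://solr-dc2.example.com:8983/solr/collection1"));

        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "42");
        doc.addField("name", "example");

        for (SolrServer copy : copies) {
            try {
                copy.add(doc);  // a failure against one copy does not block the others
            } catch (Exception e) {
                // real code would queue the update for retry against the copy that failed
                e.printStackTrace();
            }
        }
    }
}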

Thanks,
Shawn


RE: SolrCloud multiple data center support

Posted by Darrell Burgan <Da...@infor.com>.
Interesting about the Zookeeper quorum problem. What if we were to run three Zookeepers in our primary data center and four in the backup data center. If we failed over, we wouldn't have a quorum, but we could kill one of the Zookeepers to restore a quorum, couldn't we? If we did extend the SolrCloud cluster into a second data center, wouldn't queries against the cluster be routed to the second data center sometimes? 

Unfortunately we do generally need near real time, as our search index is under constant update, although we could tolerate updates being delayed for a while. We feed the search index based upon the contents of a queue. But we definitely cannot bring Solr down to re-establish the SolrCloud cluster.

I will look into Flume and see what it offers us as well.

Thanks for the input!

Darrell


-----Original Message-----
From: Daniel Collins [mailto:danwcollins@gmail.com] 
Sent: Monday, February 03, 2014 4:16 PM
To: solr-user@lucene.apache.org
Subject: Re: SolrCloud multiple data center support

Option a) doesn't really work out of the box, *if you need NRT support*.
 The main reason (for us at least) is the ZK ensemble and maintaining quorum. If you have a single ensemble, say 3 ZKs in 1 DC and 2 in another, then if you lose DC 2, you lose 2 ZKs and the rest are fine.  But if you lose the main DC that has 3 ZKs, you lose quorum.  Searches will be ok, but if you are an NRT-setup, your updates will all stall until you get another ZK started (and reload the whole Solr Cloud to give them the ID of that new ZK).

For us, availability is more important than consistency, so we currently have 2 independent setups, 1 ZK ensemble and Solr Cloud per DC.  We already had an indexing system that serviced DCs so we didn't need something like Flume.  We also have external systems that handle routing to some extent, so we can route "locally" to each Cloud, and not have to worry about cross-DC traffic.

One solution to that is to have a 3rd DC with a few instances in it, say another 2 ZKs. That would take your total ensemble to 7, and you can lose 3 whilst still maintaining quorum.  Since ZK is relatively light-weight, that 3rd "Data Centre" doesn't have to be as robust, or contain Solr replicas; it's just a place to house 1 or 2 machines for holding ZKs.  We will probably migrate to this kind of setup soon as it ticks more of our boxes.

One other option, in ZK trunk (but not yet in a release), is the ability to dynamically reconfigure ZK ensembles (https://issues.apache.org/jira/browse/ZOOKEEPER-107).  That would give the ability to create new ZK instances in the event of a DC failure, and reconfigure the Solr Cloud without having to reload everything. That would help to some extent.

If you don't need NRT, then the solution is somewhat easier, as you don't have to worry as much about ZK quorum, a single ZK ensemble across DCs might be sufficient for you in that case.

Re: SolrCloud multiple data center support

Posted by Daniel Collins <da...@gmail.com>.
https://issues.apache.org/jira/browse/ZOOKEEPER-107 may be implemented,
but it isn't in a release as yet :)

It's slated for 3.5.0, but 3.4.0 came out in November 2011 and there has
been no minor release since then.  The 3.4.x line is only getting critical
fixes now, so any new functionality will have to wait for 3.5.x, and I
don't think there are any estimates on when that might come out...
 Until it is released officially, Solr can't really depend on it.



On 23 June 2014 15:36, Arcadius Ahouansou <ar...@menelic.com> wrote:

> On 3 February 2014 22:16, Daniel Collins <da...@gmail.com> wrote:
>
> >
> > One other option is in ZK trunk (but not yet in a release) is the ability
> > to dynamically reconfigure ZK ensembles (
> > https://issues.apache.org/jira/browse/ZOOKEEPER-107).  That would give
> the
> > ability to create new ZK instances in the event of a DC failure, and
> > reconfigure the Solr Cloud without having to reload everything. That
> would
> > help to some extent.
> >
>
>
> ZOOKEEPER-107 has now been implemented.
> I checked the Solr Jira and it seems there is nothing for multi-data-center
> support.
>
> Do we need to create a ticket or is there already one?
>
> Thanks.
>
> Arcadius.
>

Re: SolrCloud multiple data center support

Posted by Arcadius Ahouansou <ar...@menelic.com>.
I have just created https://issues.apache.org/jira/browse/SOLR-6205
I hope the description makes sense.

Thanks.

Arcadius.



On 23 June 2014 18:49, Mark Miller <ma...@gmail.com> wrote:

> We have been waiting for that issue to be finished before thinking too
> hard about how it can improve things. There have been a couple ideas (I’ve
> mostly wanted it for improving the internal zk mode situation), but no
> JIRAs yet that I know of.
> --
> Mark Miller
> about.me/markrmiller
>
> On June 23, 2014 at 10:37:27 AM, Arcadius Ahouansou (arcadius@menelic.com)
> wrote:
>
> On 3 February 2014 22:16, Daniel Collins <da...@gmail.com> wrote:
>
> >
> > One other option is in ZK trunk (but not yet in a release) is the ability
> > to dynamically reconfigure ZK ensembles (
> > https://issues.apache.org/jira/browse/ZOOKEEPER-107). That would give
> the
> > ability to create new ZK instances in the event of a DC failure, and
> > reconfigure the Solr Cloud without having to reload everything. That
> would
> > help to some extent.
> >
>
>
> ZOOKEEPER-107 has now been implemented.
> I checked the Solr Jira and it seems there is nothing for multi-data-center
> support.
>
> Do we need to create a ticket or is there already one?
>
> Thanks.
>
> Arcadius.
>



-- 
Arcadius Ahouansou
Menelic Ltd | Information is Power
M: 07908761999
W: www.menelic.com
---

Re: SolrCloud multiple data center support

Posted by Mark Miller <ma...@gmail.com>.
We have been waiting for that issue to be finished before thinking too hard about how it can improve things. There have been a couple ideas (I’ve mostly wanted it for improving the internal zk mode situation), but no JIRAs yet that I know of.
-- 
Mark Miller
about.me/markrmiller

On June 23, 2014 at 10:37:27 AM, Arcadius Ahouansou (arcadius@menelic.com) wrote:

On 3 February 2014 22:16, Daniel Collins <da...@gmail.com> wrote:  

>  
> One other option is in ZK trunk (but not yet in a release) is the ability  
> to dynamically reconfigure ZK ensembles (  
> https://issues.apache.org/jira/browse/ZOOKEEPER-107). That would give the  
> ability to create new ZK instances in the event of a DC failure, and  
> reconfigure the Solr Cloud without having to reload everything. That would  
> help to some extent.  
>  


ZOOKEEPER-107 has now been implemented.  
I checked the Solr Jira and it seems there is nothing for multi-data-center  
support.  

Do we need to create a ticket or is there already one?  

Thanks.  

Arcadius.  

Re: SolrCloud multiple data center support

Posted by Arcadius Ahouansou <ar...@menelic.com>.
On 3 February 2014 22:16, Daniel Collins <da...@gmail.com> wrote:

>
> One other option is in ZK trunk (but not yet in a release) is the ability
> to dynamically reconfigure ZK ensembles (
> https://issues.apache.org/jira/browse/ZOOKEEPER-107).  That would give the
> ability to create new ZK instances in the event of a DC failure, and
> reconfigure the Solr Cloud without having to reload everything. That would
> help to some extent.
>


ZOOKEEPER-107 has now been implemented.
I checked the Solr Jira and it seems there is nothing for multi-data-center
support.

Do we need to create a ticket or is there already one?

Thanks.

Arcadius.

Re: SolrCloud multiple data center support

Posted by Daniel Collins <da...@gmail.com>.
Option a) doesn't really work out of the box, *if you need NRT support*.
 The main reason (for us at least) is the ZK ensemble and maintaining
quorum. If you have a single ensemble, say 3 ZKs in 1 DC and 2 in another,
then if you lose DC 2, you lose 2 ZKs and the rest are fine.  But if you
lose the main DC that has 3 ZKs, you lose quorum.  Searches will be ok, but
if you are an NRT-setup, your updates will all stall until you get another
ZK started (and reload the whole Solr Cloud to give them the ID of that new
ZK).

For us, availability is more important than consistency, so we currently
have 2 independent setups, 1 ZK ensemble and Solr Cloud per DC.  We already
had an indexing system that serviced DCs so we didn't need something like
Flume.  We also have external systems that handle routing to some extent,
so we can route "locally" to each Cloud, and not have to worry about
cross-DC traffic.

One solution to that is to have a 3rd DC with a few instances in it, say
another 2 ZKs. That would take your total ensemble to 7, and you can lose 3
whilst still maintaining quorum.  Since ZK is relatively light-weight, that
3rd "Data Centre" doesn't have to be as robust, or contain Solr replicas;
it's just a place to house 1 or 2 machines for holding ZKs.  We will
probably migrate to this kind of setup soon as it ticks more of our boxes.
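
For illustration, the server list in zoo.cfg for that kind of 3/2/2 split
would look something like this (hostnames are made up):

server.1=zk1.dc1.example.com:2888:3888
server.2=zk2.dc1.example.com:2888:3888
server.3=zk3.dc1.example.com:2888:3888
server.4=zk1.dc2.example.com:2888:3888
server.5=zk2.dc2.example.com:2888:3888
server.6=zk1.dc3.example.com:2888:3888
server.7=zk2.dc3.example.com:2888:3888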

One other option, in ZK trunk (but not yet in a release), is the ability
to dynamically reconfigure ZK ensembles (
https://issues.apache.org/jira/browse/ZOOKEEPER-107).  That would give the
ability to create new ZK instances in the event of a DC failure, and
reconfigure the Solr Cloud without having to reload everything. That would
help to some extent.

If you don't need NRT, then the solution is somewhat easier, as you don't
have to worry as much about ZK quorum, a single ZK ensemble across DCs
might be sufficient for you in that case.


On 3 February 2014 17:44, Mark Miller <ma...@gmail.com> wrote:

> SolrCloud has not tackled multi data center yet.
>
> I don't think a or b are very good options yet.
>
> Honestly, I think the best current bet is to use something like Apache
> Flume to send data to both data centers - it will handle retries and
> keeping things in sync and splitting the stream. Doesn't satisfy all use
> cases though.
>
> At some point, multi data center support will happen.
>
> I can't remember where ZooKeeper's support for it is at, but with that and
> some logic to favor nodes in your data center, that might be a viable route.
>
> - Mark
>
> http://about.me/markrmiller
>
> On Feb 3, 2014, at 11:48 AM, Darrell Burgan <Da...@infor.com>
> wrote:
>
> > Hello, we are using Solr in a SolrCloud configuration, with two Solr
> instances running with three Zookeepers in a single data center. We
> presently have a single search index with about 35 million entries in it,
> about 60GB disk space on each of the two Solr servers (120GB total). I
> would expect our usage of Solr to grow to include other search indexes, and
> likely larger data volumes.
> >
> > I'm writing because we're needing to grow beyond a single data center,
> with two (potentially incompatible) goals:
> >
> > 1.       We need to be able to have a hot disaster recovery site, in a
> completely separate data center, that has a near-realtime replica of the
> search index.
> >
> > 2.       We'd like to have the option to have multiple active/active
> data centers that each see and update the same search index, distributed
> across data centers.
> >
> > The options I'm aware of from reading archives:
> >
> > a.       Simply set up the remote Solr instances as active parts of the
> same SolrCloud cluster. This will  essentially involve us standing up
> multiple Zookeepers in the second data center, and multiple Solr instances,
> and they will all keep each other in sync magically. This will also solve
> both of our goals. However, I'm concerned about performance and whether
> SolrCloud is smart enough to route local search queries only to local Solr
> servers ... ? Also, how does such a cluster tolerate and recover from network
> partitions?
> >
> > b.      The remote Solr instances form their own completely unrelated
> SolrCloud cluster. I have to invent some kind of replication logic of my
> own to sync data between them. This replication would have to be
> bidirectional to satisfy both of our goals. I strongly dislike this option
> since the application really should not concern itself with data
> distribution. But I'll do it if I must.
> >
> > So my questions are:
> >
> > -          Can anyone give me any guidance as to option a? Anyone using
> this in a real production setting? Words of wisdom? Does it work?
> >
> > -          Are there any other options that I'm not considering?
> >
> > -          What is Solr's answer to such configurations (we can't be
> alone in needing one)? Any big enhancements coming on the Solr road map to
> deal with this?
> >
> > Thanks!
> > Darrell Burgan
> >
> >
> >
> > Darrell Burgan | Chief Architect, PeopleAnswers
> > office: 214 445 2172 | mobile: 214 564 4450 | fax: 972 692 5386 |
> darrell.burgan@infor.com | http://www.infor.com
> >
>
>

RE: SolrCloud multiple data center support

Posted by Darrell Burgan <Da...@infor.com>.
Thanks - I was unaware of Flume and will investigate it. It looks like it has specific features for replicating Solr data? Have you or has anyone on the list used it for this purpose?
Thanks again,
Darrell


-----Original Message-----
From: Mark Miller [mailto:markrmiller@gmail.com] 
Sent: Monday, February 03, 2014 11:44 AM
To: solr-user
Subject: Re: SolrCloud multiple data center support

SolrCloud has not tackled multi data center yet.

I don't think a or b are very good options yet.

Honestly, I think the best current bet is to use something like Apache Flume to send data to both data centers - it will handle retries and keeping things in sync and splitting the stream. Doesn't satisfy all use cases though.

At some point, multi data center support will happen.

I can't remember where ZooKeeper's support for it is at, but with that and some logic to favor nodes in your data center, that might be a viable route.

- Mark

Re: SolrCloud multiple data center support

Posted by Mark Miller <ma...@gmail.com>.
SolrCloud has not tackled multi data center yet.

I don’t think a or b are very good options yet.

Honestly, I think the best current bet is to use something like Apache Flume to send data to both data centers - it will handle retries and keeping things in sync and splitting the stream. Doesn’t satisfy all use cases though.
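
As a rough illustration of the stream-splitting idea (agent, host, and port names are made up), a single Flume agent can replicate one source into two file channels, each drained by a sink that forwards to a different data center:

agent.sources = src
agent.channels = ch1 ch2
agent.sinks = sink1 sink2

agent.sources.src.type = avro
agent.sources.src.bind = 0.0.0.0
agent.sources.src.port = 4141
agent.sources.src.channels = ch1 ch2
agent.sources.src.selector.type = replicating

agent.channels.ch1.type = file
agent.channels.ch2.type = file

agent.sinks.sink1.type = avro
agent.sinks.sink1.hostname = flume.dc1.example.com
agent.sinks.sink1.port = 4141
agent.sinks.sink1.channel = ch1

agent.sinks.sink2.type = avro
agent.sinks.sink2.hostname = flume.dc2.example.com
agent.sinks.sink2.port = 4141
agent.sinks.sink2.channel = ch2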

At some point, multi data center support will happen.

I can’t remember where ZooKeeper’s support for it is at, but with that and some logic to favor nodes in your data center, that might be a viable route.

- Mark

http://about.me/markrmiller

On Feb 3, 2014, at 11:48 AM, Darrell Burgan <Da...@infor.com> wrote:

> Hello, we are using Solr in a SolrCloud configuration, with two Solr instances running with three Zookeepers in a single data center. We presently have a single search index with about 35 million entries in it, about 60GB disk space on each of the two Solr servers (120GB total). I would expect our usage of Solr to grow to include other search indexes, and likely larger data volumes.
>  
> I’m writing because we’re needing to grow beyond a single data center, with two (potentially incompatible) goals:
>  
> 1.       We need to be able to have a hot disaster recovery site, in a completely separate data center, that has a near-realtime replica of the search index.
> 
> 2.       We’d like to have the option to have multiple active/active data centers that each see and update the same search index, distributed across data centers.
>  
> The options I’m aware of from reading archives:
>  
> a.       Simply set up the remote Solr instances as active parts of the same SolrCloud cluster. This will  essentially involve us standing up multiple Zookeepers in the second data center, and multiple Solr instances, and they will all keep each other in sync magically. This will also solve both of our goals. However, I’m concerned about performance and whether SolrCloud is smart enough to route local search queries only to local Solr servers … ? Also, how does such a cluster tolerate and recover from network partitions?
> 
> b.      The remote Solr instances form their own completely unrelated SolrCloud cluster. I have to invent some kind of replication logic of my own to sync data between them. This replication would have to be bidirectional to satisfy both of our goals. I strongly dislike this option since the application really should not concern itself with data distribution. But I’ll do it if I must.
>  
> So my questions are:
>  
> -          Can anyone give me any guidance as to option a? Anyone using this in a real production setting? Words of wisdom? Does it work?
> 
> -          Are there any other options that I’m not considering?
> 
> -          What is Solr’s answer to such configurations (we can’t be alone in needing one)? Any big enhancements coming on the Solr road map to deal with this?
>  
> Thanks!
> Darrell Burgan
>  
>  
> 
> Darrell Burgan | Chief Architect, PeopleAnswers
> office: 214 445 2172 | mobile: 214 564 4450 | fax: 972 692 5386 | darrell.burgan@infor.com | http://www.infor.com
>