Posted to commits@lucene.apache.org by ho...@apache.org on 2021/01/20 22:58:56 UTC

[lucene-solr-operator] branch gh-pages created (now 6c77114)

This is an automated email from the ASF dual-hosted git repository.

houston pushed a change to branch gh-pages
in repository https://gitbox.apache.org/repos/asf/lucene-solr-operator.git.


      at 6c77114  Change location of downloads to GH Pages, and move charts dir

This branch includes the following new commits:

     new 1719210  Starting with documentation and examples from main.
     new 239e58a  Make changes for using github pages for documentation.
     new 6c77114  Change location of downloads to GH Pages, and move charts dir

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.



[lucene-solr-operator] 03/03: Change location of downloads to GH Pages, and move charts dir

Posted by ho...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

houston pushed a commit to branch gh-pages
in repository https://gitbox.apache.org/repos/asf/lucene-solr-operator.git

commit 6c7711400cf946f3b6970ca987e03696d5c41d02
Author: Houston Putman <ho...@apache.org>
AuthorDate: Wed Jan 20 17:58:29 2021 -0500

    Change location of downloads to GH Pages, and move charts dir
---
 {docs/charts => charts}/index.yaml | 0
 docs/local_tutorial.md             | 2 +-
 docs/running-the-operator.md       | 2 +-
 3 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/charts/index.yaml b/charts/index.yaml
similarity index 100%
rename from docs/charts/index.yaml
rename to charts/index.yaml
diff --git a/docs/local_tutorial.md b/docs/local_tutorial.md
index b1c3dd8..972211c 100644
--- a/docs/local_tutorial.md
+++ b/docs/local_tutorial.md
@@ -62,7 +62,7 @@ Before installing the Solr Operator, we need to install the [Zookeeper Operator]
 Eventually this will be a dependency on the helm chart, but for now we can run an easy `kubectl apply`.
 
 ```bash
-kubectl apply -f https://raw.githubusercontent.com/apache/lucene-solr-operator/main/example/dependencies/zk_operator.yaml
+kubectl apply -f https://apache.github.io/lucene-solr-operator/example/dependencies/zk_operator.yaml
 ```
 
 Now add the Solr Operator Helm repository. (You should only need to do this once)
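As a hedged sketch of the repo-add step the tutorial refers to: the chart URL below is an assumption based on this commit's move of the charts directory to GitHub Pages, so verify it against the project documentation before use.

```bash
# Assumed chart repo URL after this commit's move to GH Pages (verify before use)
helm repo add solr-operator https://apache.github.io/lucene-solr-operator/charts
helm repo update
```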
diff --git a/docs/running-the-operator.md b/docs/running-the-operator.md
index 56a4f54..a724ab4 100644
--- a/docs/running-the-operator.md
+++ b/docs/running-the-operator.md
@@ -7,7 +7,7 @@ This is because the Solr Operator, in most instances, relies on the Zookeeper Op
 Eventually this will be a dependency on the helm chart, but for now we can run an easy `kubectl apply`.
 
 ```bash
-kubectl apply -f https://raw.githubusercontent.com/apache/lucene-solr-operator/main/example/dependencies/zk_operator.yaml
+kubectl apply -f https://apache.github.io/lucene-solr-operator/example/dependencies/zk_operator.yaml
 ```
 
 ## Using the Solr Operator Helm Chart
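As a sketch of what the Helm chart section goes on to describe (the release name, chart name, and repo alias here are illustrative, not taken from this commit):

```bash
# Install the operator from a previously added "solr-operator" repo; names are illustrative
helm install solr-operator solr-operator/solr-operator

# Confirm the operator deployment came up (the deployment name may differ per chart version)
kubectl get deployments
```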


[lucene-solr-operator] 01/03: Starting with documentation and examples from main.

Posted by ho...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

houston pushed a commit to branch gh-pages
in repository https://gitbox.apache.org/repos/asf/lucene-solr-operator.git

commit 17192106021cc6654a485fb6ddaa3ad5f14222e4
Author: Houston Putman <ho...@apache.org>
AuthorDate: Wed Jan 20 17:40:17 2021 -0500

    Starting with documentation and examples from main.
---
 .gitignore                                     |  28 +++
 LICENSE                                        | 202 +++++++++++++++++++++
 NOTICE.txt                                     |  28 +++
 README.md                                      | 141 ++++++++++++++
 docs/README.md                                 |  14 ++
 docs/charts/index.yaml                         | 106 +++++++++++
 docs/development.md                            | 136 ++++++++++++++
 docs/local_tutorial.md                         | 242 +++++++++++++++++++++++++
 docs/release-instructions.md                   |  57 ++++++
 docs/running-the-operator.md                   |  73 ++++++++
 docs/solr-backup/README.md                     |  15 ++
 docs/solr-cloud/README.md                      | 103 +++++++++++
 docs/solr-cloud/dependencies.md                |  48 +++++
 docs/solr-cloud/managed-updates.md             |  56 ++++++
 docs/solr-cloud/solr-cloud-crd.md              | 130 +++++++++++++
 docs/solr-collection-alias/README.md           |  20 ++
 docs/solr-collection/README.md                 |  58 ++++++
 docs/solr-prometheus-exporter/README.md        |  52 ++++++
 example/dependencies/zk_operator.yaml          | 134 ++++++++++++++
 example/test_solrbackup.yaml                   |  14 ++
 example/test_solrcloud.yaml                    |  48 +++++
 example/test_solrcloud_addressability.yaml     |  36 ++++
 example/test_solrcloud_private_repo.yaml       |  10 +
 example/test_solrcloud_toleration_example.yaml |  53 ++++++
 example/test_solrcollection.yaml               |  43 +++++
 example/test_solrcollection_alias.yaml         |   9 +
 example/test_solrprometheusexporter.yaml       |  13 ++
 27 files changed, 1869 insertions(+)

diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..281f2c6
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,28 @@
+
+# Binaries for programs and plugins
+*.exe
+*.exe~
+*.dll
+*.so
+*.dylib
+bin
+release-artifacts
+
+# Test binary, build with `go test -c`
+*.test
+
+# Output of the go coverage tool, specifically when used with LiteIDE
+*.out
+
+# Kubernetes Generated files - skip generated files, except for vendored files
+
+!vendor/**/zz_generated.*
+
+# editor and IDE paraphernalia
+.idea
+*.swp
+*.swo
+*~
+
+# Remove the kustomize file for the manager, as it contains local information
+config/manager/kustomization.yaml
diff --git a/LICENSE b/LICENSE
new file mode 100644
index 0000000..78d810f
--- /dev/null
+++ b/LICENSE
@@ -0,0 +1,202 @@
+
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
+
+      To apply the Apache License to your work, attach the following
+      boilerplate notice, with the fields enclosed by brackets "[]"
+      replaced with your own identifying information. (Don't include
+      the brackets!)  The text should be enclosed in the appropriate
+      comment syntax for the file format. We also recommend that a
+      file or class name and description of purpose be included on the
+      same "printed page" as the copyright notice for easier
+      identification within third-party archives.
+
+   Copyright 2017 Bloomberg Finance L.P.
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
\ No newline at end of file
diff --git a/NOTICE.txt b/NOTICE.txt
new file mode 100644
index 0000000..9dde201
--- /dev/null
+++ b/NOTICE.txt
@@ -0,0 +1,28 @@
+==============================================================
+ Apache Solr Operator
+ Copyright 2006-2021 The Apache Software Foundation
+==============================================================
+
+This product includes software developed at
+The Apache Software Foundation (http://www.apache.org/).
+
+Includes software from other Apache Software Foundation projects.
+
+This includes code provided by Bloomberg Finance LP as a Software Grant
+to The ASF.
+=========================================================================
+==  Solr Operator Bloomberg Notice                                     ==
+=========================================================================
+Copyright 2019 Bloomberg Finance LP.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..664c064
--- /dev/null
+++ b/README.md
@@ -0,0 +1,141 @@
+# Solr Operator
+[![Latest Version](https://img.shields.io/github/tag/apache/lucene-solr-operator)](https://github.com/apache/lucene-solr-operator/releases)
+[![License](https://img.shields.io/badge/LICENSE-Apache2.0-ff69b4.svg)](http://www.apache.org/licenses/LICENSE-2.0.html)
+[![Go Report Card](https://goreportcard.com/badge/github.com/apache/lucene-solr-operator)](https://goreportcard.com/report/github.com/apache/lucene-solr-operator)
+[![Commit since last release](https://img.shields.io/github/commits-since/apache/lucene-solr-operator/latest.svg)](https://github.com/apache/lucene-solr-operator/commits/main)
+[![Docker Pulls](https://img.shields.io/docker/pulls/bloomberg/solr-operator)](https://hub.docker.com/r/bloomberg/solr-operator/)
+[![Slack](https://img.shields.io/badge/slack-join_chat-white.svg?logo=slack&style=social)](https://kubernetes.slack.com/messages/solr-operator)
+[![Mailing List]]
+
+The __Solr Operator__ manages Apache Solr Clouds within Kubernetes. It is built on top of the [Kube Builder](https://github.com/kubernetes-sigs/kubebuilder) framework.
+
+The project is currently in beta (`v1beta1`), and while we do not anticipate changing the API in backwards-incompatible ways, there is no such guarantee yet.
+
+If you run into issues using the Solr Operator, please:
+- Reference the [version compatibility and upgrade/deprecation notes](#version-compatibility--upgrade-notes) provided below
+- Create a Github Issue in this repo, describing your problem with as much detail as possible
+- Reach out on our Slack channel!
+
+Join us on the [#solr-operator](https://kubernetes.slack.com/messages/solr-operator) channel in the official Kubernetes slack workspace.
+
+## Menu
+
+- [Documentation](#documentation)
+- [Version Compatibility and Upgrade Notes](#version-compatibility--upgrade-notes)
+- [Contributions](#contributions)
+- [License](#license)
+- [Code of Conduct](#code-of-conduct)
+- [Security Vulnerability Reporting](#security-vulnerability-reporting)
+
+## Documentation
+
+Please visit the following pages for documentation on using and developing the Solr Operator:
+
+- [Local Tutorial](docs/local_tutorial.md)
+- [Running the Solr Operator](docs/running-the-operator.md)
+- Available Solr Resources
+    - [Solr Clouds](docs/solr-cloud)
+    - [Solr Collections](docs/solr-collection)
+    - [Solr Backups](docs/solr-backup)
+    - [Solr Metrics](docs/solr-prometheus-exporter)
+    - [Solr Collection Aliases](docs/solr-collection-alias)
+- [Development](docs/development.md)
+
+## Version Compatibility & Upgrade Notes
+
+#### v0.2.7
+- Due to the addition of possible sidecar/initContainers for SolrClouds, the version of CRDs used had to be upgraded to `apiextensions.k8s.io/v1`.
+  
+  **This means that Kubernetes support is now limited to 1.16+.**
+  If you are unable to use a newer version of Kubernetes, please install the `v0.2.6` version of the Solr Operator for use with Kubernetes 1.15 and below.
+
+- The location of backup-restore volume mounts in Solr containers has changed from `/var/solr/solr-backup-restore` to `/var/solr/data/backup-restore`.
This change was made to ensure that there were no issues using the backup API with Solr 8.6+, which restricts the locations that backup data can be saved to and read from.
This change should be transparent if you are merely using the SolrBackup CRD.
All file permission issues with SolrBackups should now be addressed.
+
+- The default `PodManagementPolicy` for StatefulSets has been changed to `Parallel` from `OrderedReady`.
+This change will not affect existing StatefulSets, as `PodManagementPolicy` cannot be updated.
+In order to continue using `OrderedReady` on new SolrClouds, please use the following setting:  
+`SolrCloud.spec.customSolrKubeOptions.statefulSetOptions.podManagementPolicy`
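A minimal sketch of opting back into `OrderedReady` via that setting (the `apiVersion` group and the surrounding fields are assumptions; consult the installed CRD):

```yaml
apiVersion: solr.bloomberg.com/v1beta1  # assumed group/version; check your installed CRD
kind: SolrCloud
metadata:
  name: example
spec:
  replicas: 3
  customSolrKubeOptions:
    statefulSetOptions:
      podManagementPolicy: OrderedReady  # restore the pre-v0.2.7 default for this cloud
```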
+
+- The `SolrCloud` and `SolrPrometheusExporter` services' portNames have changed to `"solr-client"` and `"solr-metrics"` from `"ext-solr-client"` and `"ext-solr-metrics"`, respectively.
+This is due to a bug in Kubernetes where `portName` and `targetPort` must match for services.
+
+- Support for `etcd`/`zetcd` deployments has been removed.  
+  The section for a Zookeeper cluster Spec `SolrCloud.spec.zookeeperRef.provided.zookeeper` has been **DEPRECATED**.
+  The same fields (except for the deprecated `persistentVolumeClaimSpec` option) are now available under `SolrCloud.spec.zookeeperRef.provided`.
+
+- Data Storage options have been expanded, and moved from their old locations.
+  - `SolrCloud.spec.dataPvcSpec` has been **DEPRECATED**.  
+    Please use the following instead: `SolrCloud.spec.dataStorage.persistent.pvcTemplate.spec=<spec>`  
+  - `SolrCloud.spec.backupRestoreVolume` has been **DEPRECATED**.  
+    Please use the following instead: `SolrCloud.spec.dataStorage.backupRestoreOptions.Volume=<volume-source>`
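Mapped into a manifest, the two relocated options might look like this sketch (the volume source, claim name, and sizes are illustrative, and the exact field casing should be verified against the CRD):

```yaml
spec:
  dataStorage:
    persistent:
      pvcTemplate:
        spec:                          # replaces the deprecated SolrCloud.spec.dataPvcSpec
          resources:
            requests:
              storage: 5Gi
    backupRestoreOptions:
      volume:                          # replaces the deprecated SolrCloud.spec.backupRestoreVolume
        persistentVolumeClaim:
          claimName: solr-backup-pvc   # illustrative claim name
```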
+
+#### v0.2.6
+- The solr-operator argument `--ingressBaseDomain` has been **DEPRECATED**.
+In order to set the external baseDomain of your clouds, please begin to use `SolrCloud.spec.solrAddressability.external.domainName` instead.
+You will also need to set `SolrCloud.spec.solrAddressability.external.method` to `Ingress`.
+The `--ingressBaseDomain` argument is backwards compatible, and all existing SolrCloud objects will be auto-updated once your operator is upgraded to `v0.2.6`.
+The argument will be removed in a future version (`v0.3.0`).
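The replacement for `--ingressBaseDomain` could be sketched as follows (the domain name is illustrative):

```yaml
spec:
  solrAddressability:
    external:
      method: Ingress          # required when exposing clouds via an ingress
      domainName: example.com  # illustrative; formerly the --ingressBaseDomain value
```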
+
+#### v0.2.4
+- The default supported version of the Zookeeper Operator has been upgraded to `v0.2.6`.  
If you are using the provided Zookeeper option for your SolrClouds, then you will want to upgrade your Zookeeper Operator version, as well as the version and image of the Zookeeper that you are running.
You can find examples of the Zookeeper Operator, as well as SolrClouds that use provided Zookeepers, in the [examples](/example) directory.  
Please refer to the [Zookeeper Operator release notes](https://github.com/pravega/zookeeper-operator/releases) before upgrading.
+
+#### v0.2.3
+- If you do not use an ingress with the Solr Operator, the Solr Hostname and Port will change when upgrading to this version. This is to fix an outstanding bug. Because of the headless service port change, you will likely see an outage for inter-node communication until all pods have been restarted.
+
+#### v0.2.2
+- `SolrCloud.spec.solrPodPolicy` has been **DEPRECATED** in favor of the `SolrCloud.spec.customSolrKubeOptions.podOptions` option.  
+This option is backwards compatible, but will be removed in a future version (`v0.3.0`).
+
+- `SolrPrometheusExporter.spec.solrPodPolicy` has been **DEPRECATED** in favor of the `SolrPrometheusExporter.spec.customKubeOptions.podOptions` option.  
+This option is backwards compatible, but will be removed in a future version (`v0.3.0`).
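As a sketch, pod-level settings that previously lived under `solrPodPolicy` move under `podOptions` (the specific fields and values shown are illustrative):

```yaml
spec:
  customSolrKubeOptions:
    podOptions:
      resources:               # illustrative pod-level setting
        requests:
          cpu: "1"
          memory: 2Gi
```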
+
+#### v0.2.1
+- The zkConnectionString used for provided Zookeepers changed from using the string provided in the `ZkCluster.Status`, which used an IP, to using the service name. This will cause a rolling restart of any Solr clouds using the provided Zookeeper option, but there will be no data loss.
+
+#### v0.2.0
+- Uses Go modules (`gomod`) instead of `dep` for dependency management
+- `SolrCloud.spec.zookeeperRef.provided.zookeeper.persistentVolumeClaimSpec` has been **DEPRECATED** in favor of the `SolrCloud.zookeeperRef.provided.zookeeper.persistence` option.  
+This option is backwards compatible, but will be removed in a future version (`v0.3.0`).
+- An upgrade to the ZKOperator version `0.2.4` is required.
+
+#### v0.1.1
+- `SolrCloud.Spec.persistentVolumeClaim` was renamed to `SolrCloud.Spec.dataPvcSpec`
+
+### Compatibility with Kubernetes Versions
+
+#### Fully Compatible - v1.16+
+
+If you require compatibility with previous versions, please install version `v0.2.6` of the Solr Operator.
+
+## Contributions
+
+We :heart: contributions.
+
+Have you had a good experience with the **Solr Operator**? Why not share some love and contribute code, or just let us know about any issues you had with it?
+
+We welcome issue reports [here](../../issues); be sure to choose the proper issue template for your issue, so that we can be sure you're providing the necessary information.
+
+## License
+
+Please read the [LICENSE](LICENSE) file here.
+
+## Code of Conduct
+
+This space applies the ASF [Code of Conduct](https://www.apache.org/foundation/policies/conduct).
If you have any concerns about the Code, or behavior that you have experienced in the project, please
contact us at private@lucene.apache.org.
+
+## Security Vulnerability Reporting
+
+If you believe you have identified a security vulnerability in this project, please send an email to the ASF security
team at security@apache.org, detailing the suspected issue and any methods you've found to reproduce it. More details
can be found [here](https://www.apache.org/security/).
+
+Please do NOT open an issue in the GitHub repository, as we'd prefer to keep vulnerability reports private until
+we've had an opportunity to review and address them.
diff --git a/docs/README.md b/docs/README.md
new file mode 100644
index 0000000..0664b3f
--- /dev/null
+++ b/docs/README.md
@@ -0,0 +1,14 @@
+# Documentation
+
+Please visit the following pages for documentation on using and developing the Solr Operator:
+
+- [Local Tutorial](local_tutorial.md)
+- [Running the Solr Operator](running-the-operator.md)
+- Available Solr Resources
+    - [Solr Clouds](solr-cloud)
+    - [Solr Collections](solr-collection)
+    - [Solr Backups](solr-backup)
+    - [Solr Metrics](solr-prometheus-exporter)
+    - [Solr Collection Aliases](solr-collection-alias)
+- [Development](development.md)
+- [TODO: Architecture Overview](architecture-overview.md)
\ No newline at end of file
diff --git a/docs/charts/index.yaml b/docs/charts/index.yaml
new file mode 100644
index 0000000..d68a3fd
--- /dev/null
+++ b/docs/charts/index.yaml
@@ -0,0 +1,106 @@
+apiVersion: v1
+entries:
+  solr-operator:
+  - apiVersion: v1
+    appVersion: v0.2.8
+    created: "2021-01-11T15:21:23.033109-05:00"
+    description: The Solr Operator enables easy management of Solr resources within
+      Kubernetes.
+    digest: 0aafa7d978f376e368ef4d609e917ba8cd6dc087535e2777479a1b87cd65d9c9
+    home: https://github.com/apache/lucene-solr-operator
+    icon: https://lucene.apache.org/theme/images/solr/identity/Solr_Logo_on_white.png
+    keywords:
+    - solr
+    - apache
+    - search
+    - lucene
+    - operator
+    kubeVersion: '>= 1.16.0-0'
+    maintainers:
+    - email: houston@apache.org
+      name: Houston Putman
+    - email: bsankaranara@bloomberg.net
+      name: Balaji Sankaranarayanan
+    name: solr-operator
+    sources:
+    - https://github.com/apache/lucene-solr-operator
+    urls:
+    - https://github.com/apache/lucene-solr-operator/releases/download/v0.2.8/solr-operator-0.2.8.tgz
+    version: 0.2.8
+  - apiVersion: v1
+    appVersion: v0.2.7
+    created: "2020-12-14T17:55:28.546801-05:00"
+    description: The Solr Operator enables easy management of Solr resources within
+      Kubernetes.
+    digest: 9627073abdbd7c3ac9a0616f0582e78c6889eea9963b6ef5b38aa1e3370fd433
+    home: https://github.com/apache/lucene-solr-operator
+    icon: https://lucene.apache.org/theme/images/solr/identity/Solr_Logo_on_white.png
+    keywords:
+    - solr
+    - apache
+    - search
+    - lucene
+    - operator
+    kubeVersion: '>= 1.13.0-0'
+    maintainers:
+    - email: houston@apache.org
+      name: Houston Putman
+    - email: bsankaranara@bloomberg.net
+      name: Balaji Sankaranarayanan
+    name: solr-operator
+    sources:
+    - https://github.com/apache/lucene-solr-operator
+    urls:
+    - https://github.com/apache/lucene-solr-operator/releases/download/v0.2.7/solr-operator-0.2.7.tgz
+    version: 0.2.7
+  - apiVersion: v1
+    appVersion: v0.2.6
+    created: "2020-08-10T15:25:32.770735-04:00"
+    description: The Solr Operator enables easy management of Solr resources within
+      Kubernetes.
+    digest: 15ea4636403cdd7d6a565fc599c67b908f4293c0bff8d1a85dad5bc7c29311df
+    home: https://github.com/apache/lucene-solr-operator
+    icon: https://lucene.apache.org/theme/images/solr/identity/Solr_Logo_on_white.png
+    keywords:
+    - solr
+    - apache
+    - search
+    - lucene
+    - operator
+    kubeVersion: '>= 1.13.0-0'
+    maintainers:
+    - email: houston@apache.org
+      name: Houston Putman
+    - email: bsankaranara@bloomberg.net
+      name: Balaji Sankaranarayanan
+    name: solr-operator
+    sources:
+    - https://github.com/apache/lucene-solr-operator
+    urls:
+    - https://github.com/apache/lucene-solr-operator/releases/download/v0.2.6/solr-operator-0.2.6.tgz
+    version: 0.2.6
+  - apiVersion: v1
+    appVersion: v0.2.5
+    created: "2020-05-20T12:28:07.211507-04:00"
+    description: The Solr Operator enables easy management of Solr resources within
+      Kubernetes.
+    digest: 8ccc461fbc1ccd6c149fc34b40f049155f41131c03d291bca1f469cddb0c09dd
+    home: https://github.com/apache/lucene-solr-operator
+    icon: https://lucene.apache.org/theme/images/solr/identity/Solr_Logo_on_white.png
+    keywords:
+    - solr
+    - apache
+    - search
+    - lucene
+    - operator
+    kubeVersion: '>= 1.13.0-0'
+    maintainers:
+    - email: houston@apache.org
+      name: Houston Putman
+    name: solr-operator
+    sources:
+    - https://github.com/apache/lucene-solr-operator
+    urls:
+    - https://github.com/apache/lucene-solr-operator/releases/download/v0.2.5/solr-operator-0.2.5.tgz
+    version: 0.2.5
+generated: "2021-01-11T15:21:23.028609-05:00"
diff --git a/docs/development.md b/docs/development.md
new file mode 100644
index 0000000..3bf51c5
--- /dev/null
+++ b/docs/development.md
@@ -0,0 +1,136 @@
+# Developing the Solr Operator
+
+This page details the steps for developing the Solr Operator, and all necessary steps to follow before creating a PR to the repo.
+
+ - [Setup](#setup)
+    - [Setup Docker for Mac with K8S](#setup-docker-for-mac-with-k8s-with-an-ingress-controller)
+    - [Install the necessary Dependencies](#install-the-necessary-dependencies)
+ - [Build the Solr CRDs](#build-the-solr-crds)
+ - [Build and Run the Solr Operator](#build-and-run-local-versions)
+    - [Build the Solr Operator](#building-the-solr-operator)
+    - [Running the Solr Operator](#running-the-solr-operator)
+ - [Testing](#testing)
+ - [Steps to take before creating a PR](#before-you-create-a-pr)
+ 
+## Setup
+
+### Setup Docker for Mac with K8S with an Ingress Controller
+
+Please follow the instructions from the [local tutorial](local_tutorial.md#setup-docker-for-mac-with-k8s).
+
+### Install the necessary dependencies
+
+Install the Zookeeper Operator, which this operator depends on by default.
+Each is optional, as described in the [Zookeeper Reference](solr-cloud/solr-cloud-crd.md#zookeeper-reference) section in the CRD docs.
+
+```bash
+$ kubectl apply -f example/dependencies
+```
+
+Install necessary dependencies for building and deploying the operator.
+```bash
+$ export PATH="$PATH:$GOPATH/bin" # You likely want to add this line to your ~/.bashrc or ~/.bash_aliases
+$ ./hack/install_dependencies.sh
+```
+
+Beware that you must be running an updated version of `controller-gen`. To update to a compatible version, run:
+
+```bash
+$ go get sigs.k8s.io/controller-tools/cmd/controller-gen@v0.2.2
+```
+
+## Build the Solr CRDs
+
+If you have changed anything in the [APIs directory](/api/v1beta1), you will need to run the following command to regenerate all Solr CRDs.
+
+```bash
+$ make manifests
+```
+
+To apply these CRDs to your Kubernetes cluster, simply run the following:
+
+```bash
+$ make install
+```
+
+## Build and Run local versions
+
+It is very useful to build and run your local version of the operator to test functionality.
+
+### Building the Solr Operator
+
+#### Building a Go binary
+
+Building the Go binary files is quite straightforward:
+
+```bash
+$ go build
+```
+
+This is useful for testing that your code builds correctly, as well as using the `make run` command detailed below.
+
+#### Building the Docker image
+
+To build and push a test operator image with a custom Docker namespace, run:
+
+```bash
+$ NAMESPACE=your-namespace/ make docker-build docker-push
+```
+
+You can test the vendor docker container by running
+
+```bash
+$ NAMESPACE=your-namespace/ make docker-vendor-build docker-vendor-push
+```
+
+You can control the namespace and version for your solr-operator docker image via the ENV variables:
+- `NAMESPACE`, defaults to `bloomberg/`. **This must end with a forward slash.** This can also include the docker repository information for private repos.
+- `NAME`, defaults to `solr-operator`.
+- `VERSION`, defaults to the git HEAD tag. (e.g. `v0.2.5-1-g06f4e2a`).  
+You can check what version you are using by running `make version`.
+
+The image will be created under the tag `$(NAMESPACE)$(NAME):$(VERSION)` as well as `$(NAMESPACE)$(NAME):latest`.
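As a quick sketch of how those variables combine into the final tag (the values below are hypothetical):

```shell
# Hypothetical values; NAMESPACE must end with a forward slash
NAMESPACE=your-namespace/
NAME=solr-operator
VERSION=v0.2.5-1-g06f4e2a
# The image tag the make targets will produce
echo "${NAMESPACE}${NAME}:${VERSION}"
# → your-namespace/solr-operator:v0.2.5-1-g06f4e2a
```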
+
+
+### Running the Solr Operator
+
+There are a few options for running the Solr Operator version you are developing.
+
+- You can deploy the Solr Operator by using our provided [Helm Chart](/helm/solr-operator/README.md).
+You will need to [build a docker image](#building-the-docker-image) for your version of the operator.
+Then update the values for the helm chart to use the version that you have built.
+- There are two useful `make` commands provided to help with running development versions of the operator:
+    - `make run` - This command will start the solr-operator process locally (not within kubernetes).
+    This does not require building a docker image.
+    - `make deploy` - This command will apply the docker image with your local version to your kubernetes cluster.
+    This requires [building a docker image](#building-the-docker-image).
+    
+**Warning**: If you are running kubernetes locally and do not want to push your image to docker hub or a private repository, you will need to set the `imagePullPolicy: Never` on your Solr Operator Deployment.
+That way Kubernetes does not try to pull your image from whatever repo it is listed under (or docker hub by default).
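For example, a minimal sketch of the relevant container fields in the operator's Deployment (the image name and namespace here are hypothetical):

```yaml
# Fragment of the Solr Operator Deployment pod template (hypothetical image name)
spec:
  containers:
    - name: solr-operator
      image: your-namespace/solr-operator:latest
      # Never pull; use the image already present on the local node
      imagePullPolicy: Never
```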
+
+## Testing
+
+If you are creating new functionality for the operator, please include that functionality in an existing test or a new test before creating a PR.
+Most tests can be found in the [controller](/controllers) directory, with names that end in `_test.go`.
+
+PRs will automatically run the unit tests, and will block merging if the tests fail.
+
+You can run these tests locally via the following make command:
+
+```bash
+$ make test
+```
+
+## Before you create a PR
+
+The CRD should be updated anytime you update the API.
+
+```bash
+$ make manifests
+```
+
+
+Make sure that you have updated the go.mod file:
+
+```bash
+$ make mod-tidy
+```
diff --git a/docs/local_tutorial.md b/docs/local_tutorial.md
new file mode 100644
index 0000000..b1c3dd8
--- /dev/null
+++ b/docs/local_tutorial.md
@@ -0,0 +1,242 @@
+# Solr on Kubernetes on local Mac
+
+This tutorial shows how to set up Solr under Kubernetes on your local Mac. The plan is as follows:
+
+ 1. [Setup Kubernetes and Dependencies](#setup-kubernetes-and-dependencies)
+    1. [Setup Docker for Mac with K8S](#setup-docker-for-mac-with-k8s)
+    2. [Install an Ingress Controller to reach the cluster on localhost](#install-an-ingress-controller)
+ 2. [Install the Solr Operator](#install-the-solr-operator)
+ 3. [Start an example Solr Cloud cluster](#start-an-example-solr-cloud-cluster)
+ 4. [Create a collection and index some documents](#create-a-collection-and-index-some-documents)
+ 5. [Scale from 3 to 5 nodes](#scale-from-3-to-5-nodes)
+ 6. [Upgrade to newer Solr version](#upgrade-to-newer-version)
+ 7. [Install Kubernetes Dashboard (optional)](#install-kubernetes-dashboard-optional)
+ 8. [Delete the solrCloud cluster named 'example'](#delete-the-solrcloud-cluster-named-example)
+
+## Setup Kubernetes and Dependencies
+
+### Setup Docker for Mac with K8s
+
+```bash
+# Install Homebrew, if you don't have it already
+/bin/bash -c "$(curl -fsSL \
+	https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
+
+# Install Docker Desktop for Mac (use edge version to get latest k8s)
+brew cask install docker-edge
+
+# Enable Kubernetes in Docker Settings, or run the command below:
+sed -i -e 's/"kubernetesEnabled" : false/"kubernetesEnabled" : true/g' \
+    ~/Library/Group\ Containers/group.com.docker/settings.json
+
+# Start Docker for mac from Finder, or run the command below
+open /Applications/Docker.app
+
+# Install Helm, which we'll use to install the operator, and 'watch'
+brew install helm watch
+```
+
+### Install an Ingress Controller
+
+Kubernetes services are by default only accessible from within the k8s cluster. To make them addressable from our laptop, we'll add an ingress controller.
+
+```bash
+# Install the nginx ingress controller
+kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud/deploy.yaml
+
+# Inspect that the ingress controller is running by visiting the Kubernetes dashboard 
+# and selecting namespace `ingress-nginx`, or running this command:
+kubectl get all --namespace ingress-nginx
+
+# Edit your /etc/hosts file (`sudo vi /etc/hosts`) and replace the 127.0.0.1 line with:
+127.0.0.1	localhost default-example-solrcloud.ing.local.domain ing.local.domain default-example-solrcloud-0.ing.local.domain default-example-solrcloud-1.ing.local.domain default-example-solrcloud-2.ing.local.domain dinghy-ping.localhost
+```
+
+Once we have installed Solr in our k8s cluster, this will allow us to address the nodes locally.
+
+## Install the Solr Operator
+
+Now that we have the prerequisites set up, let us install the Solr Operator, which will let us easily manage a large Solr cluster:
+
+Before installing the Solr Operator, we need to install the [Zookeeper Operator](https://github.com/pravega/zookeeper-operator).
+Eventually this will be a dependency on the helm chart, but for now we can run an easy `kubectl apply`.
+
+```bash
+kubectl apply -f https://raw.githubusercontent.com/apache/lucene-solr-operator/main/example/dependencies/zk_operator.yaml
+```
+
+Now add the Solr Operator Helm repository. (You should only need to do this once)
+
+```bash
+$ helm repo add solr-operator https://apache.github.io/lucene-solr-operator/charts
+```
+
+Next, install the Solr Operator chart. Note this is using Helm v3, in order to use Helm v2 please consult the [Helm Chart documentation](https://hub.helm.sh/charts/solr-operator/solr-operator).
+
+```bash
+# Install the operator (specifying ingressBaseDomain to match our ingressController)
+$ helm install solr-operator solr-operator/solr-operator --set-string ingressBaseDomain=ing.local.domain
+```
+
+After installing, you can check to see what lives in the cluster to make sure that the Solr and ZooKeeper operators have started correctly.
+```
+$ kubectl get all
+
+NAME                                       READY   STATUS             RESTARTS   AGE
+pod/solr-operator-8449d4d96f-cmf8p         1/1     Running            0          47h
+pod/zk-operator-674676769c-gd4jr           1/1     Running            0          49d
+
+NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
+deployment.apps/solr-operator              1/1     1            1           49d
+deployment.apps/zk-operator                1/1     1            1           49d
+
+NAME                                       DESIRED   CURRENT   READY   AGE
+replicaset.apps/solr-operator-8449d4d96f   1         1         1       2d1h
+replicaset.apps/zk-operator-674676769c     1         1         1       49d
+```
+
+After inspecting the status of your Kube cluster, you should see a deployment for the Solr Operator as well as the Zookeeper Operator.
+
+## Start an example Solr Cloud cluster
+
+To start a Solr Cloud cluster, we will create a yaml that will tell the Solr Operator what version of Solr Cloud to run, and how many nodes, with how much memory etc.
+
+```bash
+# Create a spec for a 3-node v8.3 cluster with 300m of heap each:
+cat <<EOF > solrCloud-example.yaml
+apiVersion: solr.bloomberg.com/v1beta1
+kind: SolrCloud
+metadata:
+  name: example
+spec:
+  replicas: 3
+  solrImage:
+    tag: "8.3"
+  solrJavaMem: "-Xms300m -Xmx300m"
+EOF
+
+# Install Solr from that spec
+kubectl apply -f solrCloud-example.yaml
+
+# The solr-operator has created a new resource type 'solrclouds' which we can query
+# Check the status live as the deploy happens
+kubectl get solrclouds -w
+
+# Open a web browser to see a solr node:
+# Note that this is the service level, so will round-robin between the nodes
+open "http://default-example-solrcloud.ing.local.domain/solr/#/~cloud?view=nodes"
+```
+
+## Create a collection and index some documents
+
+We'll use the Operator's built-in collection creation option.
+
+```bash
+# Create the spec
+cat <<EOF > collection.yaml
+apiVersion: solr.bloomberg.com/v1beta1
+kind: SolrCollection
+metadata:
+  name: mycoll
+spec:
+  solrCloud: example
+  collection: mycoll
+  autoAddReplicas: true
+  routerName: compositeId
+  numShards: 1
+  replicationFactor: 3
+  maxShardsPerNode: 2
+  collectionConfigName: "_default"
+EOF
+
+# Execute the command and check in Admin UI that it succeeds
+kubectl apply -f collection.yaml
+
+# Check in Admin UI that collection is created
+open "http://default-example-solrcloud.ing.local.domain/solr/#/~cloud?view=graph"
+```
+
+Now index some documents into the empty collection.
+```bash
+curl -XPOST -H "Content-Type: application/json" \
+    -d '[{"id": 1}, {"id": 2}, {"id": 3}, {"id": 4}, {"id": 5}, {"id": 6}, {"id": 7}, {"id": 8}]' \
+    "http://default-example-solrcloud.ing.local.domain/solr/mycoll/update/"
+```
+
+## Scale from 3 to 5 nodes
+
+Suppose we wish to add more capacity. Scaling the cluster is a breeze.
+
+```
+# Issue the scale command
+kubectl scale --replicas=5 solrcloud/example
+```
+
+After issuing the scale command, start hitting the "Refresh" button in the Admin UI.
+You will see how the new Solr nodes are added.
+You can also watch the status via the `kubectl get solrclouds` command:
+
+```bash
+kubectl get solrclouds -w
+
+# Hit Control-C when done
+```
+
+## Upgrade to newer version
+
+Suppose we wish to upgrade to a newer Solr version:
+
+```
+# Take note of the current version, which is 8.3.1
+curl -s http://default-example-solrcloud.ing.local.domain/solr/admin/info/system | grep solr-i
+
+# Update the solrCloud configuration with the new version, keeping 5 nodes
+cat <<EOF > solrCloud-example.yaml
+apiVersion: solr.bloomberg.com/v1beta1
+kind: SolrCloud
+metadata:
+  name: example
+spec:
+  replicas: 5
+  solrImage:
+    tag: "8.7"
+  solrJavaMem: "-Xms300m -Xmx300m"
+EOF
+
+# Apply the new config
+# Click the "Show all details" button in Admin UI and start hitting the "Refresh" button
+# See how the operator upgrades one pod at a time. The Solr version is in the 'node' column
+# You can also watch the status with the 'kubectl get solrclouds' command
+kubectl apply -f solrCloud-example.yaml
+kubectl get solrclouds -w
+
+# Hit Control-C when done
+```
+
+## Install Kubernetes Dashboard (optional)
+
+Kubernetes Dashboard is a web interface that gives a better overview of your k8s cluster than command-line commands alone. This step is optional; you don't need it if you're comfortable with the CLI.
+
+```
+# Install the Dashboard
+kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.4/aio/deploy/recommended.yaml
+
+# You need to authenticate with the dashboard. Get a token:
+kubectl -n kubernetes-dashboard describe secret \
+    $(kubectl -n kubernetes-dashboard get secret | grep default-token | awk '{print $1}') \
+    | grep "token:" | awk '{print $2}'
+
+# Start a kube-proxy in the background (it will listen on localhost:8001)
+kubectl proxy &
+
+# Open a browser to the dashboard (note, this is one long URL)
+open "http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/overview?namespace=default"
+
+# Select 'Token' in the UI and paste the token from last step (starting with 'ey...')
+```
+
+## Delete the solrCloud cluster named 'example'
+
+```
+kubectl delete solrcloud example
+```
\ No newline at end of file
diff --git a/docs/release-instructions.md b/docs/release-instructions.md
new file mode 100644
index 0000000..5cfeaec
--- /dev/null
+++ b/docs/release-instructions.md
@@ -0,0 +1,57 @@
+# Releasing a New Version of the Solr Operator
+
+This page details the steps for releasing new versions of the Solr Operator.
+
+- [Versioning](#versioning)
+  - [Backwards Compatibility](#backwards-compatibility)
+- [Create the Upgrade Commit](#create-the-upgrade-commit)
+- [Create a release PR and merge into `master`](#create-a-release-pr-and-merge-into-master)
+- [Tag and publish the release](#tag-and-publish-the-release)
+ 
+### Versioning
+
+The Solr Operator follows the Kubernetes versioning convention, which is:
+
+`v<Major>.<Minor>.<Patch>`
+
+For example `v0.2.5` or `v1.3.4`.
+Certain systems, such as Helm, expect versions that do not start with `v`.
+However, the tooling has been created to automatically make these changes when necessary, so always include the `v` prefix when following these instructions.
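As an illustration, stripping the `v` prefix for systems like Helm is a simple shell expansion (a sketch, not the actual release tooling):

```shell
VERSION=v0.2.5
# Helm chart versions omit the leading 'v'
echo "${VERSION#v}"
# → 0.2.5
```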
+
+#### Backwards Compatibility
+
+All patch versions of the same major & minor version should be backwards compatible.
+Non-backwards compatible changes will be allowed while the Solr Operator is still in a beta state.
+ 
+### Create the upgrade commit
+
+The last commit of a release version of the Solr Operator should be made via the following command.
+
+```bash
+$ VERSION=<version> make release
+```
+
+This will do the following steps:
+
+1. Set the variables of the Helm chart to be the new version.
+1. Build the CRDs and copy them into the Helm chart.
+1. Package up the helm charts and index them in `docs/charts/index.yaml`.
+1. Create all artifacts that should be included in the Github Release, and place them in the `/release-artifacts` directory.
+1. Commits all necessary changes for the release.
+
+### Create a release PR and merge into `master`
+
+Now you need to merge the release commit into master.
+You can push it to your fork and create a PR against the `master` branch.
+If the Travis tests pass, "Squash and Merge" it into master.
+
+### Tag and publish the release
+
+In order to create a release, you can do it entirely through the Github UI.
+Go to the releases tab, and click "Draft a new Release".
+
+Follow the formatting of previous releases, showing the highlights of changes in that version, including links to relevant PRs.
+
+Before publishing, make sure to attach all of the artifacts from the `release-artifacts` directory that were made when running the `make release` command earlier in the guide.
+
+Once you publish the release, Travis should re-run and deploy the docker containers to docker hub.
\ No newline at end of file
diff --git a/docs/running-the-operator.md b/docs/running-the-operator.md
new file mode 100644
index 0000000..56a4f54
--- /dev/null
+++ b/docs/running-the-operator.md
@@ -0,0 +1,73 @@
+# Running the Solr Operator
+
+### Installing the Zookeeper Operator
+
+Before installing the Solr Operator, we need to install the [Zookeeper Operator](https://github.com/pravega/zookeeper-operator).
+This is because the Solr Operator, in most instances, relies on the Zookeeper Operator to create the Zookeeper clusters that Solr coordinates through.
+Eventually this will be a dependency on the helm chart, but for now we can run an easy `kubectl apply`.
+
+```bash
+kubectl apply -f https://raw.githubusercontent.com/apache/lucene-solr-operator/main/example/dependencies/zk_operator.yaml
+```
+
+## Using the Solr Operator Helm Chart
+
+The easiest way to run the Solr Operator is via the [provided Helm Chart](https://hub.helm.sh/charts/solr-operator/solr-operator).
+
+The helm chart provides abstractions over the Input Arguments described below, and should work with any official images in docker hub.
+
+### How to install via Helm
+
+The first step is to add the Solr Operator helm repository.
+
+```bash
+$ helm repo add solr-operator https://apache.github.io/lucene-solr-operator/charts
+```
+
+
+Next, install the Solr Operator chart. Note this is using Helm v3, in order to use Helm v2 please consult the [Helm Chart documentation](https://hub.helm.sh/charts/solr-operator/solr-operator).
+
+```bash
+$ helm install solr-operator solr-operator/solr-operator
+```
+
+After installing, you can check to see what lives in the cluster to make sure that the Solr and ZooKeeper operators have started correctly.
+```
+$ kubectl get all
+
+NAME                                       READY   STATUS             RESTARTS   AGE
+pod/solr-operator-8449d4d96f-cmf8p         1/1     Running            0          47h
+pod/zk-operator-674676769c-gd4jr           1/1     Running            0          49d
+
+NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
+deployment.apps/solr-operator              1/1     1            1           49d
+deployment.apps/zk-operator                1/1     1            1           49d
+
+NAME                                       DESIRED   CURRENT   READY   AGE
+replicaset.apps/solr-operator-8449d4d96f   1         1         1       2d1h
+replicaset.apps/zk-operator-674676769c     1         1         1       49d
+```
+
+After inspecting the status of your Kube cluster, you should see a deployment for the Solr Operator as well as the Zookeeper Operator.
+
+## Solr Operator Docker Images
+
+Two Docker images are published to [DockerHub](https://hub.docker.com/r/bloomberg/solr-operator), both based off of the same base image.
+
+- [Builder Image](build/Dockerfile.build) - Downloads gomod dependencies, builds operator executable (This is not published, only used to build the following images)
+- [Slim Image](build/Dockerfile.slim) - Contains only the operator executable, with the operator as the entry point
+- [Vendor Image](build/Dockerfile.vendor) - Contains the operator executable as well as all dependencies (at `/solr-operator-vendor-sources`)
+
+In order to run the Solr Operator, you will only need the Slim Image.
+
+## Solr Operator Input Args
+
+* **-zookeeper-operator** Whether or not to use the Zookeeper Operator to create dependent Zookeepers.
+                          Required to use the `ProvidedZookeeper.Zookeeper` option within the Spec.
+                          If _true_, then a Zookeeper Operator must be running for the cluster.
+                          ( _true_ | _false_ , defaults to _false_)
+* **-ingress-base-domain** If you desire to make solr externally addressable via ingresses, a base ingress domain is required.
+                        Solr Clouds will be created with ingress rules at `*.(ingress-base-domain)`.
+                        ( _optional_ , e.g. `ing.base.domain` )
+                        
+    
\ No newline at end of file
diff --git a/docs/solr-backup/README.md b/docs/solr-backup/README.md
new file mode 100644
index 0000000..3cd6a29
--- /dev/null
+++ b/docs/solr-backup/README.md
@@ -0,0 +1,15 @@
+# Solr Backups
+
+Solr backups require the following:
+- A solr cloud running in kubernetes to backup
+- The list of collections to backup
+- A shared volume reference that can be written to from many clouds
+    - This could be an NFS volume, a persistent volume claim (that has `ReadWriteMany` access), etc.
+    - The same volume can be used for many solr clouds in the same namespace, as the data stored within the volume is namespaced.
+- A way to persist the data. The currently supported persistence methods are:
+    - A volume reference (this does not have to be `ReadWriteMany`)
+    - An S3 endpoint.
+    
+Backups will be tarred before they are persisted.
+
+There is no current way to restore these backups, but that is in the roadmap to implement.
diff --git a/docs/solr-cloud/README.md b/docs/solr-cloud/README.md
new file mode 100644
index 0000000..582f260
--- /dev/null
+++ b/docs/solr-cloud/README.md
@@ -0,0 +1,103 @@
+# Solr Clouds
+
+The Solr Operator supports creating and managing Solr Clouds.
+
+To find how to configure the SolrCloud best for your use case, please refer to the [documentation on available SolrCloud CRD options](solr-cloud-crd.md).
+
+This page outlines how to create, update and delete a SolrCloud in Kubernetes.
+
+- [Creation](#creating-an-example-solrcloud)
+- [Scaling](#scaling-a-solrcloud)
+- [Deletion](#deleting-the-example-solrcloud)
+- [Solr Images](#solr-images)
+    - [Official Images](#official-solr-images)
+    - [Custom Images](#build-your-own-private-solr-images)
+
+## Creating an example SolrCloud
+
+Make sure that the solr-operator and a zookeeper-operator are running.
+
+Create an example Solr cloud, with the following configuration.
+
+```bash
+$ cat example/test_solrcloud.yaml
+
+apiVersion: solr.bloomberg.com/v1beta1
+kind: SolrCloud
+metadata:
+  name: example
+spec:
+  replicas: 4
+  solrImage:
+    tag: 8.1.1
+```
+
+Apply it to your Kubernetes cluster.
+
+```bash
+$ kubectl apply -f example/test_solrcloud.yaml
+$ kubectl get solrclouds
+
+NAME      VERSION   DESIREDNODES   NODES   READYNODES   AGE
+example   8.1.1     4              2       1            2m
+
+$ kubectl get solrclouds
+
+NAME      VERSION   DESIREDNODES   NODES   READYNODES   AGE
+example   8.1.1     4              4       4            8m
+```
+
+What actually gets created when you start a Solr Cloud though?
+Refer to the [dependencies outline](dependencies.md) to see what dependent Kubernetes resources are created in order to run a Solr Cloud.
+
+## Scaling a SolrCloud
+
+The SolrCloud CRD supports the Kubernetes `scale` operation to increase or decrease the number of Solr Nodes running within the cloud.
+
+```
+# Issue the scale command
+kubectl scale --replicas=5 solrcloud/example
+```
+
+After issuing the scale command, start hitting the "Refresh" button in the Admin UI.
+You will see how the new Solr nodes are added.
+You can also watch the status via the `kubectl get solrclouds` command:
+
+```bash
+watch -dc kubectl get solrclouds
+
+# Hit Control-C when done
+```
+
+### Deleting the example SolrCloud
+
+Delete the example SolrCloud
+
+```bash
+$ kubectl delete solrcloud example
+```
+  
+## Solr Images
+
+### Official Solr Images
+
+The solr-operator will work with any of the [official Solr images](https://hub.docker.com/_/solr) currently available.
+
+### Build Your Own Private Solr Images
+
+The solr-operator supports private Docker repo access for Solr images you may want to store in a private Docker repo. It is recommended to source your image from the official Solr images. 
+
+Using a private image requires that you have a K8s secret preconfigured with appropriate access to the image (type: `kubernetes.io/dockerconfigjson`).
+
+```
+apiVersion: solr.bloomberg.com/v1beta1
+kind: SolrCloud
+metadata:
+  name: example-private-repo-solr-image
+spec:
+  replicas: 3
+  solrImage:
+    repository: myprivate-repo.jfrog.io/solr
+    tag: 8.2.0
+    imagePullSecret: "k8s-docker-registry-secret"
+```
\ No newline at end of file
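Such a secret can be created with `kubectl`; the registry server and credentials below are placeholders:

```shell
# Creates a secret of type kubernetes.io/dockerconfigjson
kubectl create secret docker-registry k8s-docker-registry-secret \
  --docker-server=myprivate-repo.jfrog.io \
  --docker-username=<your-username> \
  --docker-password=<your-password>
```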
diff --git a/docs/solr-cloud/dependencies.md b/docs/solr-cloud/dependencies.md
new file mode 100644
index 0000000..66d9499
--- /dev/null
+++ b/docs/solr-cloud/dependencies.md
@@ -0,0 +1,48 @@
+## Dependent Kubernetes Resources
+
+What actually gets created when the Solr Cloud is spun up?
+
+```bash
+$ kubectl get all
+
+NAME                                       READY   STATUS             RESTARTS   AGE
+pod/example-solrcloud-0                    1/1     Running            7          47h
+pod/example-solrcloud-1                    1/1     Running            6          47h
+pod/example-solrcloud-2                    1/1     Running            0          47h
+pod/example-solrcloud-3                    1/1     Running            6          47h
+pod/example-solrcloud-zk-0                 1/1     Running            0          49d
+pod/example-solrcloud-zk-1                 1/1     Running            0          49d
+pod/example-solrcloud-zk-2                 1/1     Running            0          49d
+pod/example-solrcloud-zk-3                 1/1     Running            0          49d
+pod/example-solrcloud-zk-4                 1/1     Running            0          49d
+pod/solr-operator-8449d4d96f-cmf8p         1/1     Running            0          47h
+pod/zk-operator-674676769c-gd4jr           1/1     Running            0          49d
+
+NAME                                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)               AGE
+service/example-solrcloud-0                ClusterIP   ##.###.###.##    <none>        80/TCP                47h
+service/example-solrcloud-1                ClusterIP   ##.###.##.#      <none>        80/TCP                47h
+service/example-solrcloud-2                ClusterIP   ##.###.###.##    <none>        80/TCP                47h
+service/example-solrcloud-3                ClusterIP   ##.###.##.###    <none>        80/TCP                47h
+service/example-solrcloud-common           ClusterIP   ##.###.###.###   <none>        80/TCP                47h
+service/example-solrcloud-headless         ClusterIP   None             <none>        8983/TCP              47h
+service/example-solrcloud-zk-client        ClusterIP   ##.###.###.###   <none>        21210/TCP             49d
+service/example-solrcloud-zk-headless      ClusterIP   None             <none>        22210/TCP,23210/TCP   49d
+
+NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
+deployment.apps/solr-operator              1/1     1            1           49d
+deployment.apps/zk-operator                1/1     1            1           49d
+
+NAME                                       DESIRED   CURRENT   READY   AGE
+replicaset.apps/solr-operator-8449d4d96f   1         1         1       2d1h
+replicaset.apps/zk-operator-674676769c     1         1         1       49d
+
+NAME                                       READY   AGE
+statefulset.apps/example-solrcloud         4/4     47h
+statefulset.apps/example-solrcloud-zk      5/5     49d
+
+NAME                                          HOSTS                                                                                       PORTS   AGE
+ingress.extensions/example-solrcloud-common   default-example-solrcloud.test.domain,default-example-solrcloud-0.test.domain + 3 more...   80      2d2h
+
+NAME                                       VERSION   DESIREDNODES   NODES   READYNODES   AGE
+solrcloud.solr.bloomberg.com/example       8.1.1     4              4       4            47h
+```
\ No newline at end of file
diff --git a/docs/solr-cloud/managed-updates.md b/docs/solr-cloud/managed-updates.md
new file mode 100644
index 0000000..e7b6f63
--- /dev/null
+++ b/docs/solr-cloud/managed-updates.md
@@ -0,0 +1,56 @@
+# Managed SolrCloud Rolling Updates
+
+Solr Clouds are complex distributed systems, and thus require a more delicate and informed approach to rolling updates.
+
+If the [`Managed` update strategy](solr-cloud-crd.md#update-strategy) is specified in the Solr Cloud CRD, then the Solr Operator will take control over deleting SolrCloud pods when they need to be updated.
+
+The operator will find all pods that have not been updated yet and choose the next set of pods to delete for an update, given the following workflow.
+
+## Pod Update Workflow
+
+The logic goes as follows:
+
+1. Find the pods that are out-of-date
+1. Update all out-of-date pods that do not have a started Solr container.
+    - This allows for updating a pod that cannot start, even if other pods are not available.
+    - This step does not respect the `maxPodsUnavailable` option, because these pods have not even started the Solr process.
+1. Retrieve the cluster state of the SolrCloud if there are any `ready` pods.
+    - If no pods are ready, then there is no endpoint to retrieve the cluster state from.
+1. Sort the pods in order of safety for being restarted. [Sorting order reference](#pod-update-sorting-order)
+1. Iterate through the sorted pods, greedily choosing which pods to update. [Selection logic reference](#pod-update-selection-logic)
+    - The maximum number of pods that can be updated are determined by starting with `maxPodsUnavailable`,
+    then subtracting the number of updated pods that are unavailable as well as the number of not-yet-started, out-of-date pods that were updated in a previous step.
+    This check makes sure that any pods taken down during this step do not violate the `maxPodsUnavailable` constraint.
+    
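As a worked example of that budget calculation (all counts hypothetical): with `maxPodsUnavailable` of 2, one updated-but-unavailable pod, and no not-yet-started pods deleted in the earlier step, only one more pod may be taken down this pass:

```shell
max_pods_unavailable=2   # from the CRD spec (hypothetical value)
updated_unavailable=1    # updated pods that are currently unavailable
not_started_updated=0    # out-of-date pods deleted earlier without a started Solr container
budget=$((max_pods_unavailable - updated_unavailable - not_started_updated))
echo "$budget"
# → 1
```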
+
+### Pod Update Sorting Order
+
+The pods are sorted by the following criteria, in the given order.
+If any two pods are equal on a criterion, then the next criterion (in the following order) is used to sort them.
+
+In this context, the pods sorted highest are the first chosen to be updated, and the pods sorted lowest are selected last.
+
+1. If the pod is the overseer, it will be sorted lowest.
+1. If the pod is not represented in the clusterState, it will be sorted highest.
+    - A pod is not in the clusterstate if it does not host any replicas and is not the overseer.
+1. Number of leader replicas hosted in the pod, sorted low -> high
+1. Number of active or recovering replicas hosted in the pod, sorted low -> high
+1. Number of total replicas hosted in the pod, sorted low -> high
+1. If the pod is not a liveNode, then it will be sorted lower.
+1. Any pods that are equal on the above criteria will be sorted lexicographically.
+
+### Pod Update Selection Logic
+
+Loop over the sorted pods, until the number of pods selected to be updated has reached the maximum.
+This maximum is calculated by taking the given, or default, [`maxPodsUnavailable`](solr-cloud-crd.md#update-strategy) and subtracting the number of updated pods that are unavailable or have yet to be re-created.
+   - If the pod is the overseer, then all other pods must be updated and available.
+   Otherwise, the overseer pod cannot be updated.
+   - If the pod contains no replicas, the pod is chosen to be updated.  
+   **WARNING**: If you use Solr worker nodes for streaming expressions, you will likely want to set [`maxPodsUnavailable`](solr-cloud-crd.md#update-strategy) to a value you are comfortable with.
+   - If the Solr Node of the pod is not **`live`**, the pod is chosen to be updated.
+   - If all replicas in the pod are in a **`down`** or **`recovery_failed`** state, the pod is chosen to be updated.
+   - If taking down the replicas hosted in the pod would not violate the given [`maxShardReplicasUnavailable`](solr-cloud-crd.md#update-strategy), then the pod can be updated.
+   Once a pod with replicas has been chosen to be updated, the replicas hosted in that pod are then considered unavailable for the rest of the selection logic.
+        - Some replicas in the shard may already be in a non-active state, or may reside on Solr Nodes that are not "live".
+        The `maxShardReplicasUnavailable` calculation will take these replicas into account, as a starting point.
+        - If a pod contains non-active replicas, and the pod is chosen to be updated, then the pods that are already non-active will not be double counted for the `maxShardReplicasUnavailable` calculation.
\ No newline at end of file
diff --git a/docs/solr-cloud/solr-cloud-crd.md b/docs/solr-cloud/solr-cloud-crd.md
new file mode 100644
index 0000000..3f21116
--- /dev/null
+++ b/docs/solr-cloud/solr-cloud-crd.md
@@ -0,0 +1,130 @@
+# The SolrCloud CRD
+
+The SolrCloud CRD allows users to spin up a Solr cloud in a very configurable way.
+Those configuration options are laid out on this page.
+
+## Data Storage
+
+The SolrCloud CRD gives the option for users to use either
+persistent storage, through [PVCs](https://kubernetes.io/docs/concepts/storage/persistent-volumes/),
+or ephemeral storage, through [emptyDir volumes](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir),
+to store Solr data.
+Ephemeral and persistent storage cannot be used together; if both are provided, the `persistent` options take precedence.
+If neither is provided, ephemeral storage will be used by default.
+
+These options can be found in `SolrCloud.spec.dataStorage`
+
+- **`persistent`**
+  - **`reclaimPolicy`** - Either `Retain`, the default, or `Delete`.
+    This determines the lifecycle of PVCs once the SolrCloud is deleted, or once the SolrCloud is scaled down and the pods that the PVCs map to no longer exist.
+    `Retain` is used by default, matching the default Kubernetes policy, so that PVCs are preserved in case pods or StatefulSets are deleted accidentally.
+    
+    Note: If reclaimPolicy is set to `Delete`, PVCs will not be deleted if pods are merely deleted. They will only be deleted once the `SolrCloud.spec.replicas` is scaled down or deleted.
+  - **`pvcTemplate`** - The template of the PVC to use for the solr data PVCs. By default the name will be "data".
+    Only the `pvcTemplate.spec` field is required, metadata is optional.
+    
+    Note: This template cannot be changed unless the SolrCloud is deleted and recreated.
+    This is a [limitation of StatefulSets and PVCs in Kubernetes](https://github.com/kubernetes/enhancements/issues/661).
+- **`ephemeral`**
+  - **`emptyDir`** - An [`emptyDir` volume source](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) that describes the desired emptyDir volume to use in each SolrCloud pod to store data.
+  This field is optional; if it is not provided, an empty `emptyDir` volume source will be used.
+    
+- **`backupRestoreOptions`** (Required for integration with [`SolrBackups`](../solr-backup/README.md))
+  - **`volume`** - This is a [volume source](https://kubernetes.io/docs/concepts/storage/volumes/) that supports `ReadWriteMany` access.
+  This is critical because this single volume will be loaded into all pods at the same path.
+  - **`directory`** - A custom directory to store backup/restore data, within the volume described above.
+  This is optional, and defaults to the name of the SolrCloud.
+  Only use this option when you require restoring the same backup to multiple SolrClouds.
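+
+As a sketch, a `dataStorage` configuration using persistent storage with a backup/restore volume might look like the following (the claim name and storage size here are hypothetical, and mirror the examples provided with this repository):
+
+```yaml
+spec:
+  dataStorage:
+    persistent:
+      reclaimPolicy: Delete
+      pvcTemplate:
+        spec:
+          resources:
+            requests:
+              storage: "5Gi"
+    backupRestoreOptions:
+      volume:
+        persistentVolumeClaim:
+          claimName: "pvc-test"  # must support ReadWriteMany access
+```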
+
+## Update Strategy
+
+The SolrCloud CRD provides users the ability to define how Solr pods are updated, through the following options:
+
+Under `SolrCloud.Spec.updateStrategy`:
+
+- **`method`** - The method in which Solr pods should be updated. Enum options are as follows:
+  - `Managed` - (Default) The Solr Operator will take control over deleting pods for updates. This process is [documented here](managed-updates.md).
+  - `StatefulSet` - Use the default StatefulSet rolling update logic, one pod at a time waiting for all pods to be "ready".
+  - `Manual` - Neither the StatefulSet nor the Solr Operator will delete pods in need of an update; the user takes responsibility for this.
+- **`managed`** - Options for rolling updates managed by the Solr Operator.
+  - **`maxPodsUnavailable`** - (Defaults to `"25%"`) The number of Solr pods in a Solr Cloud that are allowed to be unavailable during the rolling restart.
+  More pods may become unavailable during the restart; however, the Solr Operator will not kill pods if the limit has already been reached.  
+  - **`maxShardReplicasUnavailable`** - (Defaults to `1`) The number of replicas for each shard allowed to be unavailable during the restart.
+  
+**Note:** Both `maxPodsUnavailable` and `maxShardReplicasUnavailable` are intOrString fields. So either an int or string can be provided for the field.
+- **int** - The parameter is treated as an absolute value, unless the value is <= 0 which is interpreted as unlimited.
+- **string** - Only percentage string values (`"0%"` - `"100%"`) are accepted, all other values will be ignored.
+  - **`maxPodsUnavailable`** - The `maximumPodsUnavailable` is calculated as the percentage of the total pods configured for that Solr Cloud.
+  - **`maxShardReplicasUnavailable`** - The `maxShardReplicasUnavailable` is calculated independently for each shard, as the percentage of the number of replicas for that shard.
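+
+As a sketch, a `Managed` update strategy that allows up to a third of pods, and at most one replica per shard, to be unavailable at once could be configured as follows (the values here are illustrative):
+
+```yaml
+spec:
+  updateStrategy:
+    method: Managed
+    managed:
+      maxPodsUnavailable: "33%"      # percentage of the total pods in the cloud
+      maxShardReplicasUnavailable: 1 # absolute count, computed per shard
+```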
+
+## Addressability
+
+The SolrCloud CRD provides users the ability to define how it is addressed, through the following options:
+
+Under `SolrCloud.Spec.solrAddressability`:
+
+- **`podPort`** - The port on which the pod is listening. This is also the port that the Solr Jetty service will listen on. (Defaults to `8983`)
+- **`commonServicePort`** - The port on which the common service is exposed. (Defaults to `80`)
+- **`kubeDomain`** - Specifies an override of the default Kubernetes cluster domain name, `cluster.local`. This option should only be used if the Kubernetes cluster has been set up with a custom domain name.
+- **`external`** - Expose the cloud externally, outside of the kubernetes cluster in which it is running.
+  - **`method`** - (Required) The method by which your cloud will be exposed externally.
+  Currently available options are [`Ingress`](https://kubernetes.io/docs/concepts/services-networking/ingress/) and [`ExternalDNS`](https://github.com/kubernetes-sigs/external-dns).
+  The goal is to support more methods in the future, such as LoadBalanced Services.
+  - **`domainName`** - (Required) The primary domain name to open your cloud endpoints on. If `useExternalAddress` is set to `true`, then this is the domain that will be used in Solr Node names.
+  - **`additionalDomainNames`** - You can choose to listen on additional domains for each endpoint; however, Solr will not register itself under these names.
+  - **`useExternalAddress`** - Use the external address to advertise the SolrNode. If a domain name is required for the chosen external `method`, then the one provided in `domainName` will be used.
+  - **`hideCommon`** - Do not externally expose the common service (one endpoint for all solr nodes).
+  - **`hideNodes`** - Do not externally expose each node. (This cannot be set to `true` if the cloud is running across multiple kubernetes clusters)
+  - **`nodePortOverride`** - Make the Node Service(s) override the podPort. This is only available for the `Ingress` external method. If `hideNodes` is set to `true`, then this option is ignored. If provided, this port will be used to advertise the Solr Node. \
+  If `method: Ingress` and `hideNodes: false`, then this value defaults to `80` since that is the default port that ingress controllers listen on.
+
+**Note:** Unless both `external.method=Ingress` and `external.hideNodes=false`, a headless service will be used to make each Solr Node in the statefulSet addressable.
+If both of those criteria are met, then an individual ClusterIP Service will be created for each Solr Node/Pod.
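+
+For example, a cloud exposed externally over an `Ingress` might use the following addressability options (the domain names here are hypothetical, following the examples provided with this repository):
+
+```yaml
+spec:
+  solrAddressability:
+    podPort: 8983
+    commonServicePort: 80
+    external:
+      method: Ingress
+      useExternalAddress: false
+      domainName: "kube.example.com"
+      nodePortOverride: 80
+```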
+
+## Zookeeper Reference
+
+Solr Clouds require an Apache Zookeeper ensemble to connect to.
+
+The Solr Operator provides a few options:
+
+- Connecting to an already running zookeeper ensemble via [connection strings](#zk-connection-info)
+- [Spinning up a provided](#provided-instance) Zookeeper Ensemble in the same namespace via the [Zookeeper Operator](https://github.com/pravega/zookeeper-operator)
+
+#### Chroot
+
+Both options below come with options to specify a `chroot`, or a ZNode path for Solr to use as its base "directory" in Zookeeper.
+Before the operator creates or updates a StatefulSet with a given `chroot`, it will first ensure that the given ZNode path exists, and if it does not, the operator will create all necessary ZNodes in the path.
+If no chroot is given, a default of `/` will be used, which doesn't require the existence check previously mentioned.
+If a chroot is provided without a prefix of `/`, the operator will add the prefix, as it is required by Zookeeper.
+
+### ZK Connection Info
+
+This is an external/internal connection string as well as an optional chroot for an already running Zookeeper ensemble.
+If you provide an external connection string, you do not _have_ to provide an internal one as well.
+
+#### ACLs
+
+The Solr Operator allows for users to specify ZK ACL references in their Solr Cloud CRDs.
+The user must specify the name of a secret that resides in the same namespace as the cloud, which contains an ACL username value and an ACL password value.
+This ACL must have admin permissions for the [chroot](#chroot) given.
+
+The ACL information can be provided through an ADMIN acl and a READ ONLY acl.  
+- Admin: `SolrCloud.spec.zookeeperRef.connectionInfo.acl`
+- Read Only: `SolrCloud.spec.zookeeperRef.connectionInfo.readOnlyAcl`
+
+All ACL fields are **required** if an ACL is used.
+
+- **`secret`** - The name of the secret, in the same namespace as the SolrCloud, that contains the admin ACL username and password.
+- **`usernameKey`** - The name of the key in the provided secret that stores the admin ACL username.
+- **`passwordKey`** - The name of the key in the provided secret that stores the admin ACL password.
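+
+A sketch of referencing an admin ACL secret alongside explicit connection info (the secret name, key names, and connection string here are hypothetical):
+
+```yaml
+spec:
+  zookeeperRef:
+    connectionInfo:
+      internalConnectionString: "zk-0.zk-headless:2181,zk-1.zk-headless:2181"
+      chroot: "/solr"
+      acl:
+        secret: "zk-admin-acl"   # secret in the same namespace as the SolrCloud
+        usernameKey: "username"
+        passwordKey: "password"
+```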
+
+### Provided Instance
+
+If you do not require the Solr cloud to run across Kubernetes clusters, and do not want to manage your own Zookeeper ensemble,
+the solr-operator can manage Zookeeper ensemble(s) for you.
+
+Using the [zookeeper-operator](https://github.com/pravega/zookeeper-operator), a new Zookeeper ensemble can be spun up for 
+each solrCloud that has this option specified.
+
+The startup parameter `zookeeper-operator` must be provided on startup of the solr-operator for this parameter to be available.
\ No newline at end of file
diff --git a/docs/solr-collection-alias/README.md b/docs/solr-collection-alias/README.md
new file mode 100644
index 0000000..708cddd
--- /dev/null
+++ b/docs/solr-collection-alias/README.md
@@ -0,0 +1,20 @@
+# Solr Collection Alias
+
+The solr-operator supports the full lifecycle of standard aliases. Here is an example pointing an alias at 2 collections:
+
+```yaml
+apiVersion: solr.bloomberg.com/v1beta1
+kind: SolrCollectionAlias
+metadata:
+  name: collection-alias-example
+spec:
+  solrCloud: example
+  aliasType: standard
+  collections:
+    - example-collection-1
+    - example-collection-2
+```
+
+Aliases can be useful when migrating from one collection to another without having to update application endpoint configurations.
+
+Routed aliases are not presently supported.
\ No newline at end of file
diff --git a/docs/solr-collection/README.md b/docs/solr-collection/README.md
new file mode 100644
index 0000000..5207191
--- /dev/null
+++ b/docs/solr-collection/README.md
@@ -0,0 +1,58 @@
+# Solr Collections
+
+Solr-operator can manage the creation, deletion and modification of Solr collections. 
+
+Collection creation requires a Solr Cloud to apply against. Presently, SolrCollection supports both implicit and compositeId router types, with some of the basic configuration options including `autoAddReplicas`. 
+
+Create an example set of collections against the "example" solr cloud:
+
+```bash
+$ cat example/test_solrcollection.yaml
+
+apiVersion: solr.bloomberg.com/v1beta1
+kind: SolrCollection
+metadata:
+  name: example-collection-1
+spec:
+  solrCloud: example
+  collection: example-collection
+  routerName: compositeId
+  autoAddReplicas: false
+  numShards: 2
+  replicationFactor: 1
+  maxShardsPerNode: 1
+  collectionConfigName: "_default"
+---
+apiVersion: solr.bloomberg.com/v1beta1
+kind: SolrCollection
+metadata:
+  name: example-collection-2-compositeid-autoadd
+spec:
+  solrCloud: example
+  collection: example-collection-2
+  routerName: compositeId
+  autoAddReplicas: true
+  numShards: 2
+  replicationFactor: 1
+  maxShardsPerNode: 1
+  collectionConfigName: "_default"
+---
+apiVersion: solr.bloomberg.com/v1beta1
+kind: SolrCollection
+metadata:
+  name: example-collection-3-implicit
+spec:
+  solrCloud: example
+  collection: example-collection-3-implicit
+  routerName: implicit
+  autoAddReplicas: true
+  numShards: 2
+  replicationFactor: 1
+  maxShardsPerNode: 1
+  shards: "fooshard1,fooshard2"
+  collectionConfigName: "_default"
+```
+
+```bash
+$ kubectl apply -f example/test_solrcollection.yaml
+```
\ No newline at end of file
diff --git a/docs/solr-prometheus-exporter/README.md b/docs/solr-prometheus-exporter/README.md
new file mode 100644
index 0000000..89ff83e
--- /dev/null
+++ b/docs/solr-prometheus-exporter/README.md
@@ -0,0 +1,52 @@
+# Solr Prometheus Exporter
+
+Solr metrics can be collected from Solr clouds and standalone Solr instances, whether they reside inside or outside the Kubernetes cluster.
+To use the Prometheus exporter, the easiest approach is to provide a reference to a Solr instance. That can be any of the following:
+- The name and namespace of the Solr Cloud CRD
+- The Zookeeper connection information of the Solr Cloud
+- The address of the standalone Solr instance
+
+You can also provide a custom Prometheus Exporter config, Solr version, and exporter options as described in the
+[Solr ref-guide](https://lucene.apache.org/solr/guide/monitoring-solr-with-prometheus-and-grafana.html#command-line-parameters).
+
+Note that a few of the official Solr docker images do not include the Prometheus Exporter.
+Versions `6.6` - `7.x` and `8.2` - `master` should have the exporter available. 
+
+## Finding the Solr Cluster to monitor
+
+The Prometheus Exporter supports metrics for both standalone Solr and Solr Cloud.
+
+### Cloud
+
+You have two options for the prometheus exporter to find the zookeeper connection information that your Solr Cloud uses.
+
+- Provide the name of a `SolrCloud` object in the same Kubernetes cluster, and optional namespace.
+The Solr Operator will keep the ZK Connection info up to date from the SolrCloud object.  
+This name can be provided at: `SolrPrometheusExporter.spec.solrRef.cloud.name`
+- Provide explicit Zookeeper Connection info for the prometheus exporter to use.  
+  This info can be provided at: `SolrPrometheusExporter.spec.solrRef.cloud.zkConnectionInfo`, with keys `internalConnectionString` and `chroot`
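+
+A sketch of the second option, providing explicit connection info (the hostnames here are hypothetical; note that the sample manifests in this repository spell the reference field `solrReference`):
+
+```yaml
+spec:
+  solrReference:
+    cloud:
+      zkConnectionInfo:
+        internalConnectionString: "zk-0.zk-headless:2181,zk-1.zk-headless:2181"
+        chroot: "/solr"
+```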
+
+#### ACLs
+
+The Prometheus Exporter can be set up to use ZK ACLs when connecting to Zookeeper.
+
+If the prometheus exporter has been provided the name of a solr cloud, through `cloud.name`, then the solr operator will load up the ZK ACL Secret information found in the [SolrCloud spec](../solr-cloud/solr-cloud-crd.md#acls).
+In order for the prometheus exporter to have visibility into these secrets, it must be deployed to the same namespace as the referenced SolrCloud, or the exact same secrets must exist in both namespaces.
+
+If explicit Zookeeper connection information has been provided, through `cloud.zkConnectionInfo`, then ACL information must be provided in the same section.
+The ACL information can be provided through an ADMIN acl and a READ ONLY acl.  
+- Admin: `SolrPrometheusExporter.spec.solrRef.cloud.zkConnectionInfo.acl`
+- Read Only: `SolrPrometheusExporter.spec.solrRef.cloud.zkConnectionInfo.readOnlyAcl`
+
+All ACL fields are **required** if an ACL is used.
+
+- **`secret`** - The name of the secret, in the same namespace as the SolrCloud, that contains the admin ACL username and password.
+- **`usernameKey`** - The name of the key in the provided secret that stores the admin ACL username.
+- **`passwordKey`** - The name of the key in the provided secret that stores the admin ACL password.
+
+### Standalone
+
+The Prometheus Exporter can be set up to scrape a standalone Solr instance.
+In order to use this functionality, use the following spec field:
+
+`SolrPrometheusExporter.spec.solrRef.standalone.address`
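+
+For instance (the address here is hypothetical, and the reference field is spelled `solrReference` in the sample manifests provided with this repository):
+
+```yaml
+spec:
+  solrReference:
+    standalone:
+      address: "solr-standalone.example.com:8983"
+```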
\ No newline at end of file
diff --git a/example/dependencies/zk_operator.yaml b/example/dependencies/zk_operator.yaml
new file mode 100644
index 0000000..92e030d
--- /dev/null
+++ b/example/dependencies/zk_operator.yaml
@@ -0,0 +1,134 @@
+apiVersion: apiextensions.k8s.io/v1beta1
+kind: CustomResourceDefinition
+metadata:
+  name: zookeeperclusters.zookeeper.pravega.io
+spec:
+  group: zookeeper.pravega.io
+  names:
+    kind: ZookeeperCluster
+    listKind: ZookeeperClusterList
+    plural: zookeeperclusters
+    singular: zookeepercluster
+    shortNames:
+      - zk
+  additionalPrinterColumns:
+    - name: Replicas
+      type: integer
+      description: The number of ZooKeeper servers in the ensemble
+      JSONPath: .status.replicas
+    - name: Ready Replicas
+      type: integer
+      description: The number of ZooKeeper servers in the ensemble that are in a Ready state
+      JSONPath: .status.readyReplicas
+    - name: Internal Endpoint
+      type: string
+      description: Client endpoint internal to cluster network
+      JSONPath: .status.internalClientEndpoint
+    - name: External Endpoint
+      type: string
+      description: Client endpoint external to cluster network via LoadBalancer
+      JSONPath: .status.externalClientEndpoint
+    - name: Age
+      type: date
+      JSONPath: .metadata.creationTimestamp
+  scope: Namespaced
+  version: v1beta1
+  subresources:
+    status: {}
+
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: zk-operator
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      name: zk-operator
+  template:
+    metadata:
+      labels:
+        name: zk-operator
+    spec:
+      serviceAccountName: zookeeper-operator
+      containers:
+        - name: zk-operator
+          image: pravega/zookeeper-operator:0.2.6
+          ports:
+          - containerPort: 60000
+            name: metrics
+          imagePullPolicy: Always
+          command:
+          - zookeeper-operator
+          env:
+          - name: WATCH_NAMESPACE
+            value: ""
+          - name: POD_NAME
+            valueFrom:
+              fieldRef:
+                fieldPath: metadata.name
+          - name: OPERATOR_NAME
+            value: "zk-operator"
+
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: zookeeper-operator
+
+---
+
+kind: ClusterRole
+apiVersion: rbac.authorization.k8s.io/v1beta1
+metadata:
+  name: zookeeper-operator
+rules:
+- apiGroups:
+  - zookeeper.pravega.io
+  resources:
+  - "*"
+  verbs:
+  - "*"
+- apiGroups:
+  - ""
+  resources:
+  - pods
+  - services
+  - endpoints
+  - persistentvolumeclaims
+  - events
+  - configmaps
+  - secrets
+  verbs:
+  - "*"
+- apiGroups:
+  - apps
+  resources:
+  - deployments
+  - daemonsets
+  - replicasets
+  - statefulsets
+  verbs:
+  - "*"
+- apiGroups:
+  - policy
+  resources:
+  - poddisruptionbudgets
+  verbs:
+  - "*"
+
+---
+
+kind: ClusterRoleBinding
+apiVersion: rbac.authorization.k8s.io/v1beta1
+metadata:
+  name: zookeeper-operator-cluster-role-binding
+subjects:
+- kind: ServiceAccount
+  name: zookeeper-operator
+  namespace: default
+roleRef:
+  kind: ClusterRole
+  name: zookeeper-operator
+  apiGroup: rbac.authorization.k8s.io
diff --git a/example/test_solrbackup.yaml b/example/test_solrbackup.yaml
new file mode 100644
index 0000000..534eef8
--- /dev/null
+++ b/example/test_solrbackup.yaml
@@ -0,0 +1,14 @@
+apiVersion: solr.bloomberg.com/v1beta1
+kind: SolrBackup
+metadata:
+  name: solrbackup-test
+  namespace: default
+spec:
+  persistence:
+    volume:
+      source:
+        persistentVolumeClaim:
+          claimName: "pvc-test"
+  solrCloud: example
+  collections:
+    - example
diff --git a/example/test_solrcloud.yaml b/example/test_solrcloud.yaml
new file mode 100644
index 0000000..a034422
--- /dev/null
+++ b/example/test_solrcloud.yaml
@@ -0,0 +1,48 @@
+apiVersion: solr.bloomberg.com/v1beta1
+kind: SolrCloud
+metadata:
+  name: example
+spec:
+  dataStorage:
+    persistent:
+      reclaimPolicy: Delete
+      pvcTemplate:
+        spec:
+          resources:
+            requests:
+              storage: "5Gi"
+    backupRestoreOptions:
+      volume:
+        persistentVolumeClaim:
+          claimName: "pvc-test"
+  replicas: 3
+  solrImage:
+    tag: 8.2.0
+  solrJavaMem: "-Xms1g -Xmx3g"
+  customSolrKubeOptions:
+    podOptions:
+      resources:
+        limits:
+          memory: "1G"
+        requests:
+          cpu: "65m"
+          memory: "156Mi"
+  zookeeperRef:
+    provided:
+      chroot: "/this/will/be/auto/created"
+      persistence:
+        spec:
+          storageClassName: "hostpath"
+          resources:
+            requests:
+              storage: "5Gi"
+      replicas: 1
+      zookeeperPodPolicy:
+        resources:
+          limits:
+            memory: "1G"
+          requests:
+            cpu: "65m"
+            memory: "156Mi"
+  solrOpts: "-Dsolr.autoSoftCommit.maxTime=10000"
+  solrGCTune: "-XX:SurvivorRatio=4 -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8"
diff --git a/example/test_solrcloud_addressability.yaml b/example/test_solrcloud_addressability.yaml
new file mode 100644
index 0000000..6abf94f
--- /dev/null
+++ b/example/test_solrcloud_addressability.yaml
@@ -0,0 +1,36 @@
+apiVersion: solr.bloomberg.com/v1beta1
+kind: SolrCloud
+metadata:
+  name: ingress-cloud
+spec:
+  replicas: 3
+  solrImage:
+    tag: 8.2.0
+  solrAddressability:
+    podPort: 10000
+    commonServicePort: 80
+    external:
+      method: Ingress
+      useExternalAddress: false
+      domainName: "kube.example.com"
+      additionalDomainNames:
+        - "another.kube.example.com"
+        - "another.kube.other.com"
+      nodePortOverride: 80
+
+---
+apiVersion: solr.bloomberg.com/v1beta1
+kind: SolrCloud
+metadata:
+  name: external-dns-cloud
+spec:
+  replicas: 3
+  solrImage:
+    tag: 8.2.0
+  solrAddressability:
+    podPort: 10000
+    commonServicePort: 80
+    external:
+      method: ExternalDNS
+      useExternalAddress: true
+      domainName: "kube.example.com"
diff --git a/example/test_solrcloud_private_repo.yaml b/example/test_solrcloud_private_repo.yaml
new file mode 100644
index 0000000..e04d1d1
--- /dev/null
+++ b/example/test_solrcloud_private_repo.yaml
@@ -0,0 +1,10 @@
+apiVersion: solr.bloomberg.com/v1beta1
+kind: SolrCloud
+metadata:
+  name: example-private-repo-solr-image
+spec:
+  replicas: 3
+  solrImage:
+    repository: myprivate-repo.jfrog.io/solr
+    tag: 8.2.0
+    imagePullSecret: "k8s-docker-registry-secret"
diff --git a/example/test_solrcloud_toleration_example.yaml b/example/test_solrcloud_toleration_example.yaml
new file mode 100644
index 0000000..431084b
--- /dev/null
+++ b/example/test_solrcloud_toleration_example.yaml
@@ -0,0 +1,53 @@
+apiVersion: solr.bloomberg.com/v1beta1
+kind: SolrCloud
+metadata:
+  name: example-with-tolerations
+spec:
+  dataPvcSpec:
+    resources:
+      requests:
+        storage: "5Gi"
+  replicas: 1
+  solrImage:
+    tag: 8.2.0
+  solrJavaMem: "-Xms1g -Xmx3g"
+  customSolrKubeOptions:
+    podOptions:
+      nodeSelector:
+        beta.kubernetes.io/os: linux
+        beta.kubernetes.io/arch: amd64
+      tolerations:
+        - effect: NoSchedule
+          key: node-restriction.kubernetes.io/workloads
+          operator: Equal
+          value: solrclouds
+      resources:
+        limits:
+          memory: "1G"
+        requests:
+          cpu: "65m"
+          memory: "156Mi"
+  zookeeperRef:
+    provided:
+      persistence:
+        spec:
+          storageClassName: "hostpath"
+          resources:
+            requests:
+              storage: "5Gi"
+      replicas: 1
+      zookeeperPodPolicy:
+        nodeSelector:
+          beta.kubernetes.io/os: linux
+          beta.kubernetes.io/arch: amd64
+        tolerations:
+          - effect: NoSchedule
+            key: node-restriction.kubernetes.io/workloads
+            operator: Equal
+            value: zookeeper
+        resources:
+          limits:
+            memory: "1G"
+          requests:
+            cpu: "65m"
+            memory: "156Mi"
diff --git a/example/test_solrcollection.yaml b/example/test_solrcollection.yaml
new file mode 100644
index 0000000..414a606
--- /dev/null
+++ b/example/test_solrcollection.yaml
@@ -0,0 +1,43 @@
+apiVersion: solr.bloomberg.com/v1beta1
+kind: SolrCollection
+metadata:
+  name: example-collection-1
+spec:
+  solrCloud: example
+  collection: example-collection
+  routerName: compositeId
+  autoAddReplicas: false
+  numShards: 2
+  replicationFactor: 1
+  maxShardsPerNode: 1
+  collectionConfigName: "_default"
+---
+apiVersion: solr.bloomberg.com/v1beta1
+kind: SolrCollection
+metadata:
+  name: example-collection-2-compositeid-autoadd
+spec:
+  solrCloud: example
+  collection: example-collection-2
+  routerName: compositeId
+  autoAddReplicas: true
+  numShards: 2
+  replicationFactor: 1
+  maxShardsPerNode: 1
+  collectionConfigName: "_default"
+---
+apiVersion: solr.bloomberg.com/v1beta1
+kind: SolrCollection
+metadata:
+  name: example-collection-3-implicit
+spec:
+  solrCloud: example
+  collection: example-collection-3-implicit
+  routerName: implicit
+  routerField: 'car'
+  autoAddReplicas: true
+  numShards: 2
+  replicationFactor: 1
+  maxShardsPerNode: 1
+  shards: "fooshard1,fooshard2"
+  collectionConfigName: "_default"
\ No newline at end of file
diff --git a/example/test_solrcollection_alias.yaml b/example/test_solrcollection_alias.yaml
new file mode 100644
index 0000000..32012ee
--- /dev/null
+++ b/example/test_solrcollection_alias.yaml
@@ -0,0 +1,9 @@
+apiVersion: solr.bloomberg.com/v1beta1
+kind: SolrCollectionAlias
+metadata:
+  name: collection-alias-example
+spec:
+  solrCloud: example
+  aliasType: standard
+  collections:
+    - example-collection-1
diff --git a/example/test_solrprometheusexporter.yaml b/example/test_solrprometheusexporter.yaml
new file mode 100644
index 0000000..b937d2f
--- /dev/null
+++ b/example/test_solrprometheusexporter.yaml
@@ -0,0 +1,13 @@
+apiVersion: solr.bloomberg.com/v1beta1
+kind: SolrPrometheusExporter
+metadata:
+  labels:
+    controller-tools.k8s.io: "1.0"
+  name: solrprometheusexporter-sample
+spec:
+  solrReference:
+    cloud:
+      name: "example"
+  numThreads: 4
+  image:
+    tag: 8.2.0


[lucene-solr-operator] 02/03: Make changes for using github pages for documentation.

Posted by ho...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

houston pushed a commit to branch gh-pages
in repository https://gitbox.apache.org/repos/asf/lucene-solr-operator.git

commit 239e58ae5969f70ca2c1fea766730228d32505b2
Author: Houston Putman <ho...@apache.org>
AuthorDate: Wed Jan 20 17:45:58 2021 -0500

    Make changes for using github pages for documentation.
---
 README.md         | 20 ++++++++++++--------
 example/README.md | 17 +++++++++++++++++
 2 files changed, 29 insertions(+), 8 deletions(-)

diff --git a/README.md b/README.md
index 664c064..2fa0b55 100644
--- a/README.md
+++ b/README.md
@@ -31,15 +31,19 @@ Join us on the [#solr-operator](https://kubernetes.slack.com/messages/solr-opera
 
 Please visit the following pages for documentation on using and developing the Solr Operator:
 
-- [Local Tutorial](docs/local_tutorial.md)
-- [Running the Solr Operator](docs/running-the-operator.md)
+- [Local Tutorial](https://apache.github.io/lucene-solr-operator/docs/local_tutorial.md)
+- [Running the Solr Operator](https://apache.github.io/lucene-solr-operator/docs/running-the-operator.md)
 - Available Solr Resources
-    - [Solr Clouds](docs/solr-cloud)
-    - [Solr Collections](docs/solr-collection)
-    - [Solr Backups](docs/solr-backup)
-    - [Solr Metrics](docs/solr-prometheus-exporter)
-    - [Solr Collection Aliases](docs/solr-collection-alias)
-- [Development](docs/development.md)
+    - [Solr Clouds](https://apache.github.io/lucene-solr-operator/docs/solr-cloud)
+    - [Solr Collections](https://apache.github.io/lucene-solr-operator/docs/solr-collection)
+    - [Solr Backups](https://apache.github.io/lucene-solr-operator/docs/solr-backup)
+    - [Solr Metrics](https://apache.github.io/lucene-solr-operator/docs/solr-prometheus-exporter)
+    - [Solr Collection Aliases](https://apache.github.io/lucene-solr-operator/docs/solr-collection-alias)
+- [Development](https://apache.github.io/lucene-solr-operator/docs/development.md)
+
+### Examples
+
+Example uses of each CRD have been [provided](https://apache.github.io/lucene-solr-operator/examples).
 
 ## Version Compatibility & Upgrade Notes
 
diff --git a/example/README.md b/example/README.md
new file mode 100644
index 0000000..636f560
--- /dev/null
+++ b/example/README.md
@@ -0,0 +1,17 @@
+# Examples
+
+The following examples are provided to help explain the various options available in each of the Solr Operator CRDs:
+
+- Solr Cloud
+  - [All Encompassing](test_solrcloud.yaml)
+  - [Custom Addressability](test_solrcloud_addressability.yaml)
+  - [Private Repo Solr Image](test_solrcloud_private_repo.yaml)
+  - [Pod Tolerations](test_solrcloud_toleration_example.yaml)
+- Solr Prometheus Exporter
+  - [Basic](test_solrprometheusexporter.yaml)
+- Solr Backup
+  - [Basic](test_solrbackup.yaml)
+- Solr Collection
+  - [Basic](test_solrcollection.yaml)
+- Solr Collection Alias
+  - [Basic](test_solrcollection_alias.yaml)
\ No newline at end of file