Posted to commits@solr.apache.org by ho...@apache.org on 2022/08/18 02:16:16 UTC

[solr-operator] branch gh-pages updated: Revert previous commit

This is an automated email from the ASF dual-hosted git repository.

houston pushed a commit to branch gh-pages
in repository https://gitbox.apache.org/repos/asf/solr-operator.git


The following commit(s) were added to refs/heads/gh-pages by this push:
     new 57cb8dc  Revert previous commit
57cb8dc is described below

commit 57cb8dc2e9f8aec09221d5ea47b3b2ae9cff8b4f
Author: Houston Putman <ho...@apache.org>
AuthorDate: Thu Aug 18 11:16:02 2022 +0900

    Revert previous commit
---
 docs/docs/README.md                          |   29 -
 docs/docs/development.md                     |  142 ----
 docs/docs/local_tutorial.md                  |  252 ------
 docs/docs/release-instructions.md            |   44 -
 docs/docs/running-the-operator.md            |  181 ----
 docs/docs/solr-backup/README.md              |  365 ---------
 docs/docs/solr-cloud/README.md               |  129 ---
 docs/docs/solr-cloud/dependencies.md         |   65 --
 docs/docs/solr-cloud/managed-updates.md      |   91 ---
 docs/docs/solr-cloud/solr-cloud-crd.md       | 1134 --------------------------
 docs/docs/solr-prometheus-exporter/README.md |  320 --------
 docs/docs/upgrade-notes.md                   |  265 ------
 docs/docs/upgrading-to-apache.md             |  152 ----
 13 files changed, 3169 deletions(-)

diff --git a/docs/docs/README.md b/docs/docs/README.md
deleted file mode 100644
index 04580d7..0000000
--- a/docs/docs/README.md
+++ /dev/null
@@ -1,29 +0,0 @@
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
-
-        http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-# Documentation
-
-Please visit the following pages for documentation on using and developing the Solr Operator:
-
-- [Local Tutorial](local_tutorial.md)
-- [Upgrade Notes](upgrade-notes.md)
-- [Running the Solr Operator](running-the-operator.md)
-- Available Solr Resources
-    - [Solr Clouds](solr-cloud)
-    - [Solr Backups](solr-backup)
-    - [Solr Metrics](solr-prometheus-exporter)
-- [Development](development.md)
\ No newline at end of file
diff --git a/docs/docs/development.md b/docs/docs/development.md
deleted file mode 100644
index 59bc4a0..0000000
--- a/docs/docs/development.md
+++ /dev/null
@@ -1,142 +0,0 @@
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
-
-        http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-# Developing the Solr Operator
-
-This page details the steps for developing the Solr Operator, and all necessary steps to follow before creating a PR to the repo.
-
- - [Setup](#setup)
-    - [Setup Docker for Mac with K8S](#setup-docker-for-mac-with-k8s-with-an-ingress-controller)
-    - [Install the necessary Dependencies](#install-the-necessary-dependencies)
- - [Build the Solr CRDs](#build-the-solr-crds)
- - [Build and Run the Solr Operator](#build-and-run-local-versions)
-    - [Build the Solr Operator](#building-the-solr-operator)
-    - [Running the Solr Operator](#running-the-solr-operator)
- - [Steps to take before creating a PR](#before-you-create-a-pr)
- 
-## Setup
-
-### Setup Docker for Mac with K8S with an Ingress Controller
-
-Please follow the instructions from the [local tutorial](local_tutorial.md#setup-docker-for-mac-with-k8s).
-
-### Install the necessary dependencies
-
-Install the [Zookeeper Operator](https://github.com/pravega/zookeeper-operator), which this operator depends on by default.
-It is optional, however, as described in the [Zookeeper Reference](solr-cloud/solr-cloud-crd.md#zookeeper-reference) section in the CRD docs.
-
-```bash
-helm repo add pravega https://charts.pravega.io
-helm install zookeeper-operator pravega/zookeeper-operator --version 0.2.14
-```
-
-Install necessary dependencies for building and deploying the operator.
-```bash
-export PATH="$PATH:$GOPATH/bin" # You likely want to add this line to your ~/.bashrc or ~/.bash_aliases
-make install-dependencies
-```
-
-## Build the Solr CRDs
-
-If you have changed anything in the [APIs directory](/api/v1beta1), you will need to run the following command to regenerate all Solr CRDs.
-
-```bash
-make manifests
-```
-
-In order to apply these CRDs to your kube cluster, merely run the following:
-
-```bash
-make install
-```
-
-## Build and Run local versions
-
-It is very useful to build and run your local version of the operator to test functionality.
-
-### Building the Solr Operator
-
-#### Building a Go binary
-
-Building the Go binary files is quite straightforward:
-
-```bash
-make build
-```
-
-This is useful for testing that your code builds correctly, as well as using the `make run` command detailed below.
-
-#### Building the Docker image
-
-Building and releasing a test operator image with a custom Docker namespace.
-
-```bash
-REPOSITORY=your-repository make docker-build docker-push
-```
-
-You can control the repository and version for your solr-operator docker image via the ENV variables:
-- `REPOSITORY` - defaults to `apache`. This can also include the docker repository information for private repos.
-- `NAME` - defaults to `solr-operator`.
-- `TAG` - defaults to the full branch version (e.g. `v0.3.0-prerelease`). For github tags, this value will be the release version.
-You can check the version you are using by running `make version`, and the tag by running `make tag`.
-
-The image will be created under the tag `$(REPOSITORY)/$(NAME):$(TAG)` as well as `$(REPOSITORY)/$(NAME):latest`.
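-
-For example, a hypothetical invocation that overrides all three variables would produce the tags `my-registry.example.com/solr/solr-operator-dev:testing` and `:latest`:
-
-```bash
-REPOSITORY=my-registry.example.com/solr NAME=solr-operator-dev TAG=testing make docker-build docker-push
-```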
-
-
-### Running the Solr Operator
-
-There are a few options for running the Solr Operator version you are developing.
-
-- You can deploy the Solr Operator by using our provided [Helm Chart](/helm/solr-operator/README.md).
-You will need to [build a docker image](#building-the-docker-image) for your version of the operator.
-Then update the values for the helm chart to use the version that you have built.
-- There are two useful `make` commands provided to help with running development versions of the operator:
-    - `make run` - This command will start the solr-operator process locally (not within kubernetes).
-    This does not require building a docker image.
-    - `make deploy` - This command will apply the docker image with your local version to your kubernetes cluster.
-    This requires [building a docker image](#building-the-docker-image).
-    
-**Warning**: If you are running kubernetes locally and do not want to push your image to docker hub or a private repository, you will need to set the `imagePullPolicy: Never` on your Solr Operator Deployment.
-That way Kubernetes does not try to pull your image from whatever repo it is listed under (or docker hub by default).
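-
-If you deploy via the Helm chart, a minimal sketch of the image values for a locally-built image might look like the following (the value names, including `image.pullPolicy`, are assumptions; check the chart's values file for your version):
-
-```yaml
-image:
-  repository: your-repository/solr-operator
-  tag: v0.6.0-dev          # hypothetical locally-built tag
-  pullPolicy: Never        # never pull, use the local image
-```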
-
-## Testing
-
-If you are creating new functionality for the operator, please include that functionality in an existing test or a new test before creating a PR.
-Most tests can be found in the [controller](/controllers) directory, with names that end in `_test.go`.
-
-PRs will automatically run the unit tests, and will block merging if the tests fail.
-
-You can run these tests locally via the following make command:
-
-```bash
-make test
-```
-
-## Before you create a PR
-
-The github actions will auto-check that linting is successful on your PR.
-To make sure that the linting will succeed, run the following command before committing.
-
-```bash
-make prepare
-```
-
-Make sure that you have updated the go.mod file:
-
-```bash
-make mod-tidy
-```
diff --git a/docs/docs/local_tutorial.md b/docs/docs/local_tutorial.md
deleted file mode 100644
index 4846f38..0000000
--- a/docs/docs/local_tutorial.md
+++ /dev/null
@@ -1,252 +0,0 @@
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
-
-        http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-# Solr on Kubernetes on local Mac
-
-This tutorial shows how to set up Solr under Kubernetes on your local Mac. The plan is as follows:
-
- 1. [Setup Kubernetes and Dependencies](#setup-kubernetes-and-dependencies)
-    1. [Setup Docker for Mac with K8S](#setup-docker-for-mac-with-k8s)
-    2. [Install an Ingress Controller to reach the cluster on localhost](#install-an-ingress-controller)
- 2. [Install Solr Operator](#install-the-solr-operator)
- 3. [Start your Solr cluster](#start-an-example-solr-cloud-cluster)
- 4. [Create a collection and index some documents](#create-a-collection-and-index-some-documents)
- 5. [Scale from 3 to 5 nodes](#scale-from-3-to-5-nodes)
-    1. [Using the Horizontal Pod Autoscaler](#horizontal-pod-autoscaler-hpa)
- 6. [Upgrade to newer Solr version](#upgrade-to-newer-version)
- 7. [Install Kubernetes Dashboard (optional)](#install-kubernetes-dashboard-optional)
- 8. [Delete the solrCloud cluster named 'example'](#delete-the-solrcloud-cluster-named-example)
-
-## Setup Kubernetes and Dependencies
-
-### Setup Docker for Mac with K8s
-
-```bash
-# Install Homebrew, if you don't have it already
-/bin/bash -c "$(curl -fsSL \
-	https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
-
-# Install Docker Desktop for Mac (use edge version to get latest k8s)
-brew install --cask docker
-
-# Enable Kubernetes in Docker Settings, or run the command below:
-sed -i -e 's/"kubernetesEnabled" : false/"kubernetesEnabled" : true/g' \
-    ~/Library/Group\ Containers/group.com.docker/settings.json
-
-# Start Docker for mac from Finder, or run the command below
-open /Applications/Docker.app
-
-# Install Helm, which we'll use to install the operator, and 'watch'
-brew install helm watch
-```
-
-### Install an Ingress Controller
-
-Kubernetes services are by default only accessible from within the k8s cluster. To make them addressable from our laptop, we'll add an ingress controller.
-
-```bash
-# Install the nginx ingress controller
-kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud/deploy.yaml
-
-# Inspect that the ingress controller is running by visiting the Kubernetes dashboard 
-# and selecting namespace `ingress-nginx`, or running this command:
-kubectl get all --namespace ingress-nginx
-
-# Edit your /etc/hosts file (`sudo vi /etc/hosts`) and replace the 127.0.0.1 line with:
-127.0.0.1	localhost default-example-solrcloud.ing.local.domain ing.local.domain default-example-solrcloud-0.ing.local.domain default-example-solrcloud-1.ing.local.domain default-example-solrcloud-2.ing.local.domain dinghy-ping.localhost
-```
-
-Once we have installed Solr in our k8s cluster, this will allow us to address the nodes locally.
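-
-As a quick optional sanity check, you can confirm that one of the new hostnames resolves to your local machine:
-
-```bash
-# Should resolve to 127.0.0.1 via the /etc/hosts entry added above
-ping -c 1 default-example-solrcloud.ing.local.domain
-```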
-
-## Install the Solr Operator
-
-You can follow along here, or follow the instructions in the [Official Helm release](https://artifacthub.io/packages/helm/apache-solr/solr-operator).
-
-Now that we have the prerequisites set up, let us install the Solr Operator, which will let us easily manage a large Solr cluster.
-
-First, add the Solr Operator Helm repository. (You should only need to do this once.)
-
-```bash
-$ helm repo add apache-solr https://solr.apache.org/charts
-$ helm repo update
-```
-
-Next, install the Solr Operator chart. Note that this uses Helm v3; in order to use Helm v2, please consult the [Helm Chart documentation](https://hub.helm.sh/charts/solr-operator/solr-operator).
-This will install the [Zookeeper Operator](https://github.com/pravega/zookeeper-operator) by default.
-
-```bash
-# Install the Solr & Zookeeper CRDs
-$ kubectl create -f https://solr.apache.org/operator/downloads/crds/v0.6.0/all-with-dependencies.yaml
-# Install the Solr operator and Zookeeper Operator
-$ helm install solr-operator apache-solr/solr-operator --version 0.6.0
-```
-
-_Note that the Helm chart version does not contain a `v` prefix, which the downloads version does. The Helm chart version is the only part of the Solr Operator release that does not use the `v` prefix._
-
-
-After installing, you can check to see what lives in the cluster to make sure that the Solr and ZooKeeper operators have started correctly.
-```bash
-$ kubectl get all
-
-NAME                                                   READY   STATUS             RESTARTS   AGE
-pod/solr-operator-8449d4d96f-cmf8p                     1/1     Running            0          47h
-pod/solr-operator-zookeeper-operator-674676769c-gd4jr  1/1     Running            0          49d
-
-NAME                                              READY   UP-TO-DATE   AVAILABLE   AGE
-deployment.apps/solr-operator                     1/1     1            1           49d
-deployment.apps/solr-operator-zookeeper-operator  1/1     1            1           49d
-
-NAME                                                         DESIRED   CURRENT   READY   AGE
-replicaset.apps/solr-operator-8449d4d96f                     1         1         1       2d1h
-replicaset.apps/solr-operator-zookeeper-operator-674676769c  1         1         1       49d
-```
-
-After inspecting the status of your Kube cluster, you should see a deployment for the Solr Operator as well as the Zookeeper Operator.
-
-## Start an example Solr Cloud cluster
-
-To start a Solr Cloud cluster, we will use the Solr Helm chart to tell the Solr Operator what version of Solr Cloud to run, how many nodes to start, and how much memory to give each.
-
-```bash
-# Create a 3-node cluster v8.3 with 300m Heap each:
-helm install example-solr apache-solr/solr --version 0.6.0 \
-  --set image.tag=8.3 \
-  --set solrOptions.javaMemory="-Xms300m -Xmx300m" \
-  --set addressability.external.method=Ingress \
-  --set addressability.external.domainName="ing.local.domain" \
-  --set addressability.external.useExternalAddress="true" \
-  --set ingressOptions.ingressClassName="nginx"
-
-# The solr-operator has created a new resource type 'solrclouds' which we can query
-# Check the status live as the deploy happens
-kubectl get solrclouds -w
-
-# Open a web browser to see a solr node:
-# Note that this is the service level, so it will round-robin between the nodes
-open "http://default-example-solrcloud.ing.local.domain/solr/#/~cloud?view=nodes"
-```
-
-## Create a collection and index some documents
-
-Create a collection via the [Collections API](https://solr.apache.org/guide/8_8/collection-management.html#create).
-
-```bash
-# Execute the Collections API command
-curl "http://default-example-solrcloud.ing.local.domain/solr/admin/collections?action=CREATE&name=mycoll&numShards=1&replicationFactor=3&maxShardsPerNode=2&collection.configName=_default"
-
-# Check in Admin UI that collection is created
-open "http://default-example-solrcloud.ing.local.domain/solr/#/~cloud?view=graph"
-```
-
-Now index some documents into the empty collection.
-```bash
-curl -XPOST -H "Content-Type: application/json" \
-    -d '[{id: 1}, {id: 2}, {id: 3}, {id: 4}, {id: 5}, {id: 6}, {id: 7}, {id: 8}]' \
-    "http://default-example-solrcloud.ing.local.domain/solr/mycoll/update/"
-```
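-
-To quickly verify that the documents were indexed, you can run a simple query against the same collection:
-
-```bash
-# Query all documents in the collection
-curl "http://default-example-solrcloud.ing.local.domain/solr/mycoll/select?q=*:*"
-```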
-
-## Scale from 3 to 5 nodes
-
-So we wish to add more capacity. Scaling the cluster is a breeze.
-
-```bash
-# Issue the scale command
-kubectl scale --replicas=5 solrcloud/example
-```
-
-After issuing the scale command, start hitting the "Refresh" button in the Admin UI.
-You will see how the new Solr nodes are added.
-You can also watch the status via the `kubectl get solrclouds` command:
-
-```bash
-kubectl get solrclouds -w
-
-# Hit Control-C when done
-```
-
-### Horizontal Pod Autoscaler (HPA)
-
-The SolrCloud CRD is set up so that it is able to run with the HPA.
-Merely use the following when creating an HPA object:
-```yaml
-apiVersion: autoscaling/v2beta2
-kind: HorizontalPodAutoscaler
-metadata:
-  name: example-solr
-spec:
-  maxReplicas: 6
-  minReplicas: 3
-  scaleTargetRef:
-    apiVersion: solr.apache.org/v1beta1
-    kind: SolrCloud
-    name: example
-  metrics:
-    ....
- ```
-
-Make sure that you are not overwriting the `SolrCloud.Spec.replicas` field when doing `kubectl apply`,
-otherwise you will be undoing the autoscaler's work.
-By default, the helm chart does not set the `replicas` field, so it is safe to use with the HPA.
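-
-For example, you could save the HPA definition above to a file, apply it, and then watch both the autoscaler and the SolrCloud react (the file name is illustrative):
-
-```bash
-kubectl apply -f example-solr-hpa.yaml
-
-# Watch the HPA and the SolrCloud status
-kubectl get hpa,solrclouds -w
-```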
-
-## Upgrade to newer version
-
-So we wish to upgrade to a newer Solr version:
-
-```bash
-# Take note of the current version, which is 8.3.1
-curl -s http://default-example-solrcloud.ing.local.domain/solr/admin/info/system | grep solr-i
-
-# Update the solrCloud configuration with the new version, keeping all previous settings and the number of nodes set by the autoscaler.
-helm upgrade example-solr apache-solr/solr --version 0.6.0 \
-  --reuse-values \
-  --set image.tag=8.7
-
-# Click the "Show all details" button in the Admin UI and start hitting the "Refresh" button
-# See how the operator upgrades one pod at a time. Solr version is in the 'node' column
-# You can also watch the status with the 'kubectl get solrclouds' command
-kubectl get solrclouds -w
-
-# Hit Control-C when done
-```
-
-## Install Kubernetes Dashboard (optional)
-
-Kubernetes Dashboard is a web interface that gives a better overview of your k8s cluster than only running command-line commands. This step is optional; you don't need it if you're comfortable with the CLI.
-
-```bash
-# Install the Dashboard
-kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.4/aio/deploy/recommended.yaml
-
-# You need to authenticate with the dashboard. Get a token:
-kubectl -n kubernetes-dashboard describe secret \
-    $(kubectl -n kubernetes-dashboard get secret | grep default-token | awk '{print $1}') \
-    | grep "token:" | awk '{print $2}'
-
-# Start a kube-proxy in the background (it will listen on localhost:8001)
-kubectl proxy &
-
-# Open a browser to the dashboard (note, this is one long URL)
-open "http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/overview?namespace=default"
-
-# Select 'Token' in the UI and paste the token from last step (starting with 'ey...')
-```
-
-## Delete the solrCloud cluster named 'example'
-
-```bash
-kubectl delete solrcloud example
-```
diff --git a/docs/docs/release-instructions.md b/docs/docs/release-instructions.md
deleted file mode 100644
index 59c7161..0000000
--- a/docs/docs/release-instructions.md
+++ /dev/null
@@ -1,44 +0,0 @@
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
-
-        http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-# Releasing a New Version of the Solr Operator
-
-This page details the steps for releasing new versions of the Solr Operator.
-
-- [Versioning](#versioning)
-- [Use the Release Wizard](#release-wizard)
- 
-### Versioning
-
-The Solr Operator follows the Kubernetes versioning convention, which is:
-
-`v<Major>.<Minor>.<Patch>`
-
-For example `v0.2.5` or `v1.3.4`.
-Certain systems, such as Helm, expect versions that do not start with `v`.
-However, the tooling has been created to automatically make these changes when necessary, so always include the `v` prefix when following these instructions.
-
-### Release Wizard
-
-Run the release wizard from the root of the repo on the branch that you want to make the release from.
-
-```bash
-./hack/release/wizard/releaseWizard.py
-```
-
-Make sure to install all necessary programs and follow all steps.
-If there is any confusion, it is best to reach out on slack or the mailing lists before continuing.
\ No newline at end of file
diff --git a/docs/docs/running-the-operator.md b/docs/docs/running-the-operator.md
deleted file mode 100644
index 2b3780a..0000000
--- a/docs/docs/running-the-operator.md
+++ /dev/null
@@ -1,181 +0,0 @@
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
-
-        http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-# Running the Solr Operator
-
-## Using the Solr Operator Helm Chart
-
-The easiest way to run the Solr Operator is via the [provided Helm Chart](https://artifacthub.io/packages/helm/apache-solr/solr-operator).
-
-The helm chart provides abstractions over the Input Arguments described below, and should work with any official images in docker hub.
-
-### How to install via Helm
-
-The official documentation for installing the Solr Operator Helm chart can be found on [Artifact Hub](https://artifacthub.io/packages/helm/apache-solr/solr-operator).
-
-The first step is to add the Solr Operator helm repository.
-
-```bash
-$ helm repo add apache-solr https://solr.apache.org/charts
-$ helm repo update
-```
-
-Next, install the Solr Operator chart. Note that this uses Helm v3; for other Helm versions, use the official Helm chart documentation linked to above.
-This will install the [Zookeeper Operator](https://github.com/pravega/zookeeper-operator) by default.
-
-```bash
-$ kubectl create -f https://solr.apache.org/operator/downloads/crds/v0.6.0/all-with-dependencies.yaml
-$ helm install solr-operator apache-solr/solr-operator --version 0.6.0
-```
-
-_Note that the Helm chart version does not contain a `v` prefix, which the downloads version does. The Helm chart version is the only part of the Solr Operator release that does not use the `v` prefix._
-
-
-After installing, you can check to see what lives in the cluster to make sure that the Solr and ZooKeeper operators have started correctly.
-```bash
-$ kubectl get all
-
-NAME                                                   READY   STATUS             RESTARTS   AGE
-pod/solr-operator-8449d4d96f-cmf8p                     1/1     Running            0          47h
-pod/solr-operator-zookeeper-operator-674676769c-gd4jr  1/1     Running            0          49d
-
-NAME                                              READY   UP-TO-DATE   AVAILABLE   AGE
-deployment.apps/solr-operator                     1/1     1            1           49d
-deployment.apps/solr-operator-zookeeper-operator  1/1     1            1           49d
-
-NAME                                                         DESIRED   CURRENT   READY   AGE
-replicaset.apps/solr-operator-8449d4d96f                     1         1         1       2d1h
-replicaset.apps/solr-operator-zookeeper-operator-674676769c  1         1         1       49d
-```
-
-After inspecting the status of your Kube cluster, you should see a deployment for the Solr Operator as well as the Zookeeper Operator.
-
-### Managing CRDs
-
-Helm 3 automatically installs the Solr CRDs in the /crds directory, so no further action is needed when first installing the Operator.
-
-If you have solr operator installations in multiple namespaces that are managed separately, you will likely want to skip installing CRDs when installing the chart.
-This can be done with the `--skip-crds` helm option.
-
-```bash
-helm install solr-operator apache-solr/solr-operator --skip-crds --namespace solr
-```
-
-**Helm will not upgrade CRDs once they have been installed.
-Therefore, if you are upgrading from a previous Solr Operator version, be sure to install the most recent CRDs first.**
-
-You can update the released Solr CRDs with the following URL:
-```bash
-kubectl replace -f "https://solr.apache.org/operator/downloads/crds/<version>/<name>.yaml"
-```
-
-Examples:
-- `https://solr.apache.org/operator/downloads/crds/v0.3.0/all.yaml`  
-  Includes all Solr CRDs in the `v0.3.0` release
-- `https://solr.apache.org/operator/downloads/crds/v0.2.7/all-with-dependencies.yaml`  
-  Includes all Solr CRDs and dependency CRDs in the `v0.2.7` release
-- `https://solr.apache.org/operator/downloads/crds/v0.2.8/solrclouds.yaml`  
-  Just the SolrCloud CRD in the `v0.2.8` release
-
-#### The ZookeeperCluster CRD
-
-If you use the provided Zookeeper Cluster in the SolrCloud Spec, it is important to make sure you have the correct `ZookeeperCluster` CRD installed as well.
-
-The Zookeeper Operator Helm chart includes its CRDs when installing; however, the way the CRDs are managed can be considered risky.
-If we let the Zookeeper Operator Helm chart manage the Zookeeper CRDs, then users could see outages when [uninstalling the chart](#uninstalling-the-chart).
-Therefore, by default, we tell the Zookeeper Operator to not install the Zookeeper CRDs.
-You can override this, assuming the risks, by setting `zookeeper-operator.crd.create: true`.
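-
-For example, a minimal sketch of overriding this at install time (assuming the risks noted above):
-
-```bash
-helm install solr-operator apache-solr/solr-operator --set zookeeper-operator.crd.create=true
-```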
-
-For manual installation of the ZookeeperCluster CRD, you can find the file in the [zookeeper-operator repo](https://github.com/pravega/zookeeper-operator/blob/master/deploy/crds/zookeeper.pravega.io_zookeeperclusters_crd.yaml), for the correct release,
-or use the convenience download locations provided below.
-The Solr CRD releases have bundled the ZookeeperCluster CRD required in each version.
-
-```bash
-# Install all Solr CRDs as well as the dependency CRDS (ZookeeperCluster) for the given version of the Solr Operator
-kubectl create -f "https://solr.apache.org/operator/downloads/crds/<solr operator version>/all-with-dependencies.yaml"
-
-# Install just the ZookeeperCluster CRD used in the given version of the Solr Operator
-kubectl create -f "https://solr.apache.org/operator/downloads/crds/<solr operator version>/zookeeperclusters.yaml"
-```
-
-Examples:
-- `https://solr.apache.org/operator/downloads/crds/v0.3.0/all-with-dependencies.yaml`  
-  Includes all Solr CRDs and dependency CRDs, including `ZookeeperCluster`, in the `v0.3.0` Solr Operator release
-- `https://solr.apache.org/operator/downloads/crds/v0.2.8/zookeeperclusters.yaml`  
-  Just the ZookeeperCluster CRD required in the `v0.2.8` Solr Operator release
-
-## Solr Operator Docker Images
-
-The Solr Operator Docker image is published to Dockerhub at [apache/solr-operator](https://hub.docker.com/r/apache/solr-operator).
-The [Dockerfile](/build/Dockerfile) builds the operator from source and copies all necessary artifacts into a very limited image.
-The final image will only contain the solr-operator binary and necessary License information.
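-
-For instance, a released image can be pulled directly from Dockerhub (the tag shown is illustrative; use the release you need):
-
-```bash
-docker pull apache/solr-operator:v0.6.0
-```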
-
-## Solr Operator Input Args
-
-* **--zookeeper-operator** Whether or not to use the Zookeeper Operator to create dependency Zookeepers.
-  Required to use the `spec.zookeeperRef.provided` option.
-  If _true_, then a Zookeeper Operator must be running for the cluster.
-  (_true_ | _false_ , defaults to _false_)
-
-* **--watch-namespaces** Watch certain namespaces in the Kubernetes cluster.
-  If flag is omitted, or given an empty string, then the whole cluster will be watched.
-  If the operator should watch multiple namespaces, provide them all separated by commas.
-  (_string_ , defaults to _empty_)
-
-* **--leader-elect** Whether or not to use leader election for the Solr Operator.
-  If set to true, then only one operator pod will be functional for the namespaces given through `--watch-namespaces`.
-  If multiple namespaces are provided, leader election will use the first namespace sorted alphabetically.
-  (_true_ | _false_ , defaults to _true_)
-
-* **--metrics-bind-address** The address to bind the metrics servlet on.
-  If only a port is provided (e.g. `:8080`), then the metrics server will respond to requests with any Host header.
-  (defaults to _:8080_)
-
-* **--health-probe-bind-address** The address to bind the health probe servlet on.
-  If only a port is provided (e.g. `:8081`), then the health probe server will respond to requests with any Host header.
-  (defaults to _:8081_)
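-
-For illustration, here is a sketch of invoking the operator binary with these arguments set explicitly (the namespace names are hypothetical):
-
-```bash
-solr-operator \
-    --zookeeper-operator=true \
-    --watch-namespaces=solr-dev,solr-staging \
-    --leader-elect=true \
-    --metrics-bind-address=:8080 \
-    --health-probe-bind-address=:8081
-```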
-                        
-## Client Auth for mTLS-enabled Solr clusters
-
-For SolrCloud instances that run with mTLS enabled (see `spec.solrTLS.clientAuth`), the operator needs to supply a trusted certificate when making API calls to the Solr pods it is managing.
-
-This means that the client certificate used by the operator must be added to the truststore on all Solr pods.
-Alternatively, the certificate for the Certificate Authority (CA) that signed the client certificate can be trusted by adding the CA's certificate to the Solr truststore.
-In the latter case, any client certificates issued by the trusted CA will be accepted by Solr, so make sure this is appropriate for your environment.
-
-The client certificate used by the operator should be stored in a [TLS secret](https://kubernetes.io/docs/concepts/configuration/secret/#tls-secrets); you must create this secret before deploying the Solr operator.
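-
-For example, if you already have a certificate and key on disk, the TLS secret can be created like this (the secret name and file paths are illustrative):
-
-```bash
-kubectl create secret tls my-client-cert --cert=path/to/client.crt --key=path/to/client.key
-```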
-
-When deploying the operator, supply the client certificate using the `mTLS.clientCertSecret` Helm chart variable, such as:
-```
-  --set mTLS.clientCertSecret=my-client-cert \
-```
-The specified secret must exist in the same namespace where the operator is deployed.
-
-In addition, if the CA used to sign the server certificate used by Solr is not built into the operator's Docker image, 
-then you'll need to add the CA's certificate to the operator so its HTTP client will trust the server certificates during the TLS handshake.
-
-The CA certificate needs to be stored in Kubernetes secret in PEM format and provided via the following Helm chart variables:
-```
-  --set mTLS.caCertSecret=my-client-ca-cert \
-  --set mTLS.caCertSecretKey=ca-cert-pem
-```
-
-In most cases, you'll also want to configure the operator with `mTLS.insecureSkipVerify=true` (the default) as you'll want the operator to skip hostname verification for Solr pods.
-Setting `mTLS.insecureSkipVerify` to `false` means the operator will enforce hostname verification for the certificate provided by Solr pods.
-
-By default, the operator watches for updates to the mTLS client certificate (mounted from the `mTLS.clientCertSecret` secret) and then refreshes the HTTP client to use the updated certificate.
-To disable this behavior, configure the operator using: `--set mTLS.watchForUpdates=false`.
\ No newline at end of file
diff --git a/docs/docs/solr-backup/README.md b/docs/docs/solr-backup/README.md
deleted file mode 100644
index e771947..0000000
--- a/docs/docs/solr-backup/README.md
+++ /dev/null
@@ -1,365 +0,0 @@
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
-
-        http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-# Solr Backups
-
-The Solr Operator supports triggering the backup of arbitrary Solr collections.
-
-Triggering these backups involves setting configuration options on both the SolrCloud and SolrBackup CRDs.
-The SolrCloud instance is responsible for defining one or more backup "repositories" (metadata describing where and how the backup data should be stored).
-SolrBackup instances then trigger backups by referencing these repositories by name, listing the Solr collections to back up, and optionally scheduling recurring backups.
-
-For detailed information on how to best configure backups for your use case, please refer to the detailed schema information provided by `kubectl explain solrcloud.spec.backupRepositories` and its child elements, as well as `kubectl explain solrbackup`.
-
-This page outlines how to create and delete a Kubernetes SolrBackup.
-
-- [Creation](#creating-an-example-solrbackup)
-- [Recurring/Scheduled Backups](#recurring-backups)
-- [Deletion](#deleting-an-example-solrbackup)
-- [Repository Types](#supported-repository-types)
-  - [GCS](#gcs-backup-repositories)
-  - [S3](#s3-backup-repositories)
-  - [Volume](#volume-backup-repositories)
-
-## Creating an example SolrBackup
-
-A prerequisite for taking a backup is having something to take a backup _of_.
-SolrCloud creation generally is covered in more detail [here](../solr-cloud/README.md), so if you don't have one already, create a SolrCloud instance as per those instructions.
-
-Now that you have a Solr cluster to back up data from, you need a place to store the backup data.
-In this example, we'll create a Kubernetes persistent volume to mount on each Solr node.
-
-A volume for this purpose can be created as below:
-
-```yaml
-apiVersion: v1
-kind: PersistentVolumeClaim
-metadata:
-  name: collection-backup-pvc
-spec:
-  accessModes:
-  - ReadWriteMany
-  resources:
-    requests:
-      storage: 1Gi 
-  storageClassName: hostpath
-  volumeMode: Filesystem
-```
-
-Note that this PVC specifies `ReadWriteMany` access, which is required for Solr clusters with more than one node.
-Note also that it uses a `storageClassName` of `hostpath`.
-Not all Kubernetes clusters support this `storageClassName` value - you may need to choose a different `ReadWriteMany`-compatible storage class based on your Kubernetes version and cluster setup.
-
-Next, modify your existing SolrCloud instance by adding a backup repository definition that uses the recently created volume.
-To do this, run `kubectl edit solrcloud example`, adding the following YAML nested under the `spec` property:
-
-```yaml
-spec:
-  backupRepositories:
-    - name: "local-collection-backups-1"
-      volume:
-        source:
-          persistentVolumeClaim:
-            claimName: "collection-backup-pvc"
-```
-
-This defines a backup repository called "local-collection-backups-1", which is set up to store backup data on the volume we've just created.
-The operator will notice this change and create new Solr pods that have the 'collection-backup-pvc' volume mounted.
-
-Now that there's a backup repository available to use, a backup can be triggered anytime by creating a new SolrBackup instance.
-
-```yaml
-apiVersion: solr.apache.org/v1beta1
-kind: SolrBackup
-metadata:
-  name: local-backup
-  namespace: default
-spec:
-  repositoryName: "local-collection-backups-1"
-  solrCloud: example
-  collections:
-    - techproducts
-    - books
-```
-
-This will create a backup of both the 'techproducts' and 'books' collections, storing the data on the 'collection-backup-pvc' volume.
-The status of our triggered backup can be checked with the command below.
-
-```bash
-$ kubectl get solrbackups
-NAME   CLOUD     STARTED   FINISHED   SUCCESSFUL   NEXTBACKUP  AGE
-test   example   123m      true       false                     161m
-```
-
-## Recurring Backups
-_Since v0.5.0_
-
-The Solr Operator enables taking recurring backups at a set interval.
-Note that this feature requires a SolrCloud running Solr `8.9.0` or newer, because it relies on `Incremental` backups.
-
-By default, the Solr Operator will save a maximum of **5** backups at a time; however, users can override this using `SolrBackup.spec.recurrence.maxSaved`.
-When using `recurrence`, users must provide a Cron-style `schedule` for the interval at which backups should be taken.
-Please refer to the [GoLang cron-spec](https://pkg.go.dev/github.com/robfig/cron/v3?utm_source=godoc#hdr-CRON_Expression_Format) for more information on allowed syntax.
-
-```yaml
-apiVersion: solr.apache.org/v1beta1
-kind: SolrBackup
-metadata:
-  name: local-backup
-  namespace: default
-spec:
-  repositoryName: "local-collection-backups-1"
-  solrCloud: example
-  collections:
-    - techproducts
-    - books
-  recurrence: # Store one backup daily, and keep a week at a time.
-    schedule: "@daily"
-    maxSaved: 7
-```
-
-If using `kubectl`, the standard `get` command will return the time the backup was last started and when the next backup will occur.
-
-```bash
-$ kubectl get solrbackups
-NAME   CLOUD     STARTED   FINISHED   SUCCESSFUL   NEXTBACKUP             AGE
-test   example   123m      true       true         2021-11-09T00:00:00Z   161m
-```
-
-Much like when not taking a recurring backup, `SolrBackup.status` will contain the information from the latest, or currently running, backup.
-The results of previous backup attempts are stored under `SolrBackup.status.history` (sorted from most recent to oldest).
-
-You are able to **add or remove** `recurrence` to/from an existing `SolrBackup` object, no matter what stage that `SolrBackup` object is in.
-If you add recurrence, then a new backup will be scheduled based on the `startTimestamp` of the last backup.
-If you remove recurrence, then the `nextBackupTime` will be removed.
-However, if the recurrent backup is already underway, it will not be stopped.
-
-### Backup Scheduling
-
-Backups are scheduled based on the `startTimestamp` of the last backup.
-Therefore, if an interval schedule such as `@every 1h` is used, and a backup starts at `2021-11-09T03:10:00Z` and ends at `2021-11-09T05:30:00Z`, then the next backup will be scheduled for `2021-11-09T04:10:00Z`.
-If the interval is shorter than the time it takes to complete a backup, then the next backup will start directly after the previous backup completes (even though it is delayed from its given schedule).
-The following backup will then be scheduled based on the `startTimestamp` of the delayed backup.
-So there is a possibility of skew over time if backups take longer than the allotted schedule.
-
-If a guaranteed schedule is important, it is recommended to use intervals that are guaranteed to be longer than the time it takes to complete a backup.
-
-### Temporarily Disabling Recurring Backups
-
-It is also easy to temporarily disable recurring backups.
-Merely add `disabled: true` under the `recurrence` section of the `SolrBackup` resource.
-Set `disabled: false`, or simply remove the property, to re-enable backups.
-
-Since backups are scheduled based on the `startTimestamp` of the last backup, a new backup may start immediately after you re-enable the recurrence.
-
-```yaml
-apiVersion: solr.apache.org/v1beta1
-kind: SolrBackup
-metadata:
-  name: local-backup
-  namespace: default
-spec:
-  repositoryName: "local-collection-backups-1"
-  solrCloud: example
-  collections:
-    - techproducts
-    - books
-  recurrence: # Store one backup daily, and keep a week at a time.
-    schedule: "@daily"
-    maxSaved: 7
-    disabled: true
-```
-
-**Note: this will not stop any backups running at the time that `disabled: true` is set, it will only affect scheduling future backups.**
-
-## Deleting an example SolrBackup
-
-Once the operator completes a backup, the SolrBackup instance can be safely deleted.
-
-```bash
-$ kubectl delete solrbackup local-backup
-```
-
-Note that deleting SolrBackup instances doesn't delete the backed up data, which the operator views as already persisted and outside its control.
-In our example this data can still be found on the volume we created earlier:
-
-```bash
-$ kubectl exec example-solrcloud-0 -- ls -lh /var/solr/data/backup-restore/local-collection-backups-1/backups/
-total 8K
-drwxr-xr-x 3 solr solr 4.0K Sep 16 11:48 local-backup-books
-drwxr-xr-x 3 solr solr 4.0K Sep 16 11:48 local-backup-techproducts
-```
-
-Volume backup data, as in our example, can always be deleted using standard shell commands if desired:
-
-```bash
-kubectl exec example-solrcloud-0 -- rm -r /var/solr/data/backup-restore/local-collection-backups-1/backups/local-backup-books
-kubectl exec example-solrcloud-0 -- rm -r /var/solr/data/backup-restore/local-collection-backups-1/backups/local-backup-techproducts
-```
-
-## Supported Repository Types
-_Since v0.5.0_
-
-Note all repositories are defined in the `SolrCloud` specification.
-In order to use a repository in the `SolrBackup` CRD, it must be defined in the `SolrCloud` spec.
-All yaml examples below are `SolrCloud` resources, not `SolrBackup` resources.
-
-The Solr Operator currently supports three different backup repository types: Google Cloud Storage ("GCS"), AWS S3 ("S3"), and Volume ("local").
-The cloud backup solutions (GCS and S3) are strongly suggested, as they are cloud-native backup solutions; however, they require newer Solr versions.
-
-Multiple repositories can be defined under the `SolrCloud.spec.backupRepositories` field.
-Specify a unique name and the single repository type that you want to connect to.
-Repository-type-specific options are found under the object named for that repository type.
-Examples can be found under each repository-type section below.
-Feel free to mix and match multiple backup repository types to fit your use case (or multiple repositories of the same type):
-
-```yaml
-spec:
-  backupRepositories:
-    - name: "local-collection-backups-1"
-      volume:
-        ...
-    - name: "gcs-collection-backups-1"
-      gcs:
-        ...
-    - name: "s3-collection-backups-1"
-      s3:
-        ...
-    - name: "s3-collection-backups-2"
-      s3:
-        ...
-```
-
-### GCS Backup Repositories
-_Since v0.5.0_
-
-GCS Repositories store backup data remotely in Google Cloud Storage.
-This repository type is only supported in deployments that use a Solr version >= `8.9.0`.
-
-Each repository must specify the GCS bucket to store data in (the `bucket` property), and (usually) the name of a Kubernetes secret containing credentials for accessing GCS (the `gcsCredentialSecret` property).
-This secret must have a key `service-account-key.json` whose value is a JSON service account key, as described [here](https://cloud.google.com/iam/docs/creating-managing-service-account-keys).
-If you already have your service account key, this secret can be created using a command like the one below.
-
-```bash
-kubectl create secret generic <secretName> --from-file=service-account-key.json=<path-to-service-account-key>
-```
-
-In some rare cases (e.g. when deploying in GKE and relying on its [Workload Identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity) feature) explicit credentials are not required to talk to GCS.  In these cases, the `gcsCredentialSecret` property may be omitted.
-
-An example of a SolrCloud spec with only one backup repository, with type GCS:
-
-```yaml
-spec:
-  backupRepositories:
-    - name: "gcs-backups-1"
-      gcs:
-        bucket: "backup-bucket" # Required
-        gcsCredentialSecret: 
-          name: "secretName"
-          key: "service-account-key.json"
-        baseLocation: "/store/here" # Optional
-```
-
-
-### S3 Backup Repositories
-_Since v0.5.0_
-
-S3 Repositories store backup data remotely in AWS S3 (or a supported S3 compatible interface).
-This repository type is only supported in deployments that use a Solr version >= `8.10.0`.
-
-Each repository must specify an S3 bucket and region to store data in (the `bucket` and `region` properties).
-Users will want to setup credentials so that the SolrCloud can connect to the S3 bucket and region, more information can be found in the [credentials section](#s3-credentials).
-
-```yaml
-spec:
-  backupRepositories:
-    - name: "s3-backups-1"
-      s3:
-        region: "us-west-2" # Required
-        bucket: "backup-bucket" # Required
-        credentials: {} # Optional
-        proxyUrl: "https://proxy-url-for-s3:3242" # Optional
-        endpoint: "https://custom-s3-endpoint:3242" # Optional
-```
-
-Users can also optionally set a `proxyUrl` or `endpoint` for the S3Repository.
-More information on these settings can be found in the [Ref Guide](https://solr.apache.org/guide/8_10/making-and-restoring-backups.html#s3backuprepository).
-
-#### S3 Credentials
-
-The Solr `S3Repository` module uses the [default credential chain for AWS](https://docs.aws.amazon.com/sdk-for-java/v2/developer-guide/credentials.html#credentials-chain).
-All of the options below are designed to be utilized by this credential chain.
-
-There are a few options for giving a SolrCloud the credentials for connecting to S3.
-The two most straightforward ways can be used via the `spec.backupRepositories.s3.credentials` property.
-
-```yaml
-spec:
-  backupRepositories:
-    - name: "s3-backups-1"
-      s3:
-        region: "us-west-2"
-        bucket: "backup-bucket"
-        credentials:
-          accessKeyIdSecret: # Optional
-            name: aws-secrets
-            key: access-key-id
-          secretAccessKeySecret: # Optional
-            name: aws-secrets
-            key: secret-access-key
-          sessionTokenSecret: # Optional
-            name: aws-secrets
-            key: session-token
-          credentialsFileSecret: # Optional
-            name: aws-credentials
-            key: credentials
-```
-
-All options in the `credentials` property are optional, as users can pick and choose which ones to use.
-If you have all of your credentials setup in an [AWS Credentials File](https://docs.aws.amazon.com/sdkref/latest/guide/file-format.html#file-format-creds),
-then `credentialsFileSecret` will be the only property you need to set.
-However, if you don't have a credentials file, you will likely need to set at least the `accessKeyIdSecret` and `secretAccessKeySecret` properties.
-All of these options require the referenced Kubernetes secrets to already exist before creating the SolrCloud resource.
-_(If desired, all options can be combined. e.g. Use `accessKeyIdSecret` and `credentialsFileSecret` together. The ordering of the default credentials chain will determine which options are used.)_
-
-The options in the `credentials` section above merely set environment variables on the pod, or, in the case of `credentialsFileSecret`, use an environment variable and a volume mount.
-Users can decide not to use the `credentials` section of the s3 repository config, and instead set these environment variables themselves via `spec.customSolrKubeOptions.podOptions.env`.
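-
-A minimal sketch of that approach, using the standard AWS environment variable names and the `podOptions.env` path mentioned above (verify the exact field layout with `kubectl explain solrcloud.spec.customSolrKubeOptions.podOptions` for your operator version):
-
-```yaml
-spec:
-  customSolrKubeOptions:
-    podOptions:
-      env:
-        # Standard AWS SDK environment variables, sourced from a pre-existing secret
-        - name: AWS_ACCESS_KEY_ID
-          valueFrom:
-            secretKeyRef:
-              name: aws-secrets
-              key: access-key-id
-        - name: AWS_SECRET_ACCESS_KEY
-          valueFrom:
-            secretKeyRef:
-              name: aws-secrets
-              key: secret-access-key
-```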
-
-Lastly, if running in EKS, it is possible to add [IAM information to Kubernetes serviceAccounts](https://aws.amazon.com/blogs/opensource/introducing-fine-grained-iam-roles-service-accounts/).
-If this is done correctly, you will only need to specify the serviceAccount for the SolrCloud pods via `spec.customSolrKubeOptions.podOptions.serviceAccount`.
-
-_NOTE: Because the Solr S3 Repository is using system-wide settings for AWS credentials, you cannot specify different credentials for different S3 repositories.
-This may be addressed in future Solr versions, but for now use the same credentials for all s3 repos._
-
-### Volume Backup Repositories
-_Since v0.5.0_
-
-Volume repositories store backup data "locally" on a Kubernetes volume mounted to each Solr pod.
-An example of a SolrCloud spec with only one backup repository, with type Volume:
-
-```yaml
-spec:
-  backupRepositories:
-    - name: "local-collection-backups-1"
-      volume:
-        source: # Required
-          persistentVolumeClaim:
-            claimName: "collection-backup-pvc"
-        directory: "store/here" # Optional
-```
-
-**NOTE: All persistent volumes used with Volume Repositories must have `accessMode: ReadWriteMany` set, otherwise the backups will not succeed.**
diff --git a/docs/docs/solr-cloud/README.md b/docs/docs/solr-cloud/README.md
deleted file mode 100644
index 3b430b1..0000000
--- a/docs/docs/solr-cloud/README.md
+++ /dev/null
@@ -1,129 +0,0 @@
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
-
-        http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-# Solr Clouds
-
-The Solr Operator supports creating and managing Solr Clouds.
-
-To find how to configure the SolrCloud best for your use case, please refer to the [documentation on available SolrCloud CRD options](solr-cloud-crd.md).
-
-This page outlines how to create, update and delete a SolrCloud in Kubernetes.
-
-- [Creation](#creating-an-example-solrcloud)
-- [Scaling](#scaling-a-solrcloud)
-- [Deletion](#deleting-the-example-solrcloud)
-- [Solr Images](#solr-images)
-    - [Official Images](#official-solr-images)
-    - [Custom Images](#build-your-own-private-solr-images)
-
-## Creating an example SolrCloud
-
-Make sure that the Solr Operator and a Zookeeper Operator are running.
-
-Create an example Solr cloud, with the following configuration.
-
-```bash
-$ cat example/test_solrcloud.yaml
-
-apiVersion: solr.apache.org/v1beta1
-kind: SolrCloud
-metadata:
-  name: example
-spec:
-  replicas: 4
-  solrImage:
-    tag: 8.1.1
-```
-
-Apply it to your Kubernetes cluster.
-
-```bash
-$ kubectl apply -f example/test_solrcloud.yaml
-$ kubectl get solrclouds
-
-NAME      VERSION   DESIREDNODES   NODES   READYNODES   AGE
-example   8.1.1     4              2       1            2m
-
-$ kubectl get solrclouds
-
-NAME      VERSION   DESIREDNODES   NODES   READYNODES   AGE
-example   8.1.1     4              4       4            8m
-```
-
-What actually gets created when you start a Solr Cloud though?
-Refer to the [dependencies outline](dependencies.md) to see what dependent Kubernetes resources are created in order to run a Solr Cloud.
-
-## Scaling a SolrCloud
-
-The SolrCloud CRD supports the Kubernetes `scale` operation, to increase or decrease the number of Solr Nodes that are running within the cloud.
-
-```
-# Issue the scale command
-kubectl scale --replicas=5 solrcloud/example
-```
-
-After issuing the scale command, start hitting the "Refresh" button in the Admin UI.
-You will see how the new Solr nodes are added.
-You can also watch the status via the `kubectl get solrclouds` command:
-
-```bash
-watch -dc kubectl get solrclouds
-
-# Hit Control-C when done
-```
-
-### Deleting the example SolrCloud
-
-Delete the example SolrCloud
-
-```bash
-$ kubectl delete solrcloud example
-```
-  
-## Solr Images
-
-### Official Solr Images
-
-The Solr Operator is only guaranteed to work with [official Solr images](https://hub.docker.com/_/solr).
-However, as long as your custom image is built to be compatible with the official image, things should go smoothly.
-This is especially true starting with Solr 9, where the docker image creation is bundled within Solr.
-Run `./gradlew docker` in the Solr repository, and your custom Solr additions will be packaged into an officially compliant Solr Docker image.
-
-Please refer to the [Version Compatibility Matrix](../upgrade-notes.md#solr-versions) for more information on what Solr Versions are compatible with the Solr Operator.
-
-Also note that certain features available within the Solr Operator are only supported in newer Solr Versions.
-The version compatibility matrix shows the minimum Solr version supported for **most** options.
-Please refer to the Solr Reference guide to see what features are enabled for the Solr version you are running.
-
-### Build Your Own Private Solr Images
-
-The Solr Operator supports pulling Solr images from a private Docker repository. It is recommended to base your image on the official Solr images.
-
-Using a private image requires that you have a K8s secret (of type `kubernetes.io/dockerconfigjson`) preconfigured with appropriate access to the image.
-
-```
-apiVersion: solr.apache.org/v1beta1
-kind: SolrCloud
-metadata:
-  name: example-private-repo-solr-image
-spec:
-  replicas: 3
-  solrImage:
-    repository: myprivate-repo.jfrog.io/solr
-    tag: 8.2.0
-    imagePullSecret: "k8s-docker-registry-secret"
-```
\ No newline at end of file
diff --git a/docs/docs/solr-cloud/dependencies.md b/docs/docs/solr-cloud/dependencies.md
deleted file mode 100644
index c162048..0000000
--- a/docs/docs/solr-cloud/dependencies.md
+++ /dev/null
@@ -1,65 +0,0 @@
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
-
-        http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-## Dependent Kubernetes Resources
-
-What actually gets created when the Solr Cloud is spun up?
-
-```bash
-$ kubectl get all
-
-NAME                                       READY   STATUS             RESTARTS   AGE
-pod/example-solrcloud-0                    1/1     Running            7          47h
-pod/example-solrcloud-1                    1/1     Running            6          47h
-pod/example-solrcloud-2                    1/1     Running            0          47h
-pod/example-solrcloud-3                    1/1     Running            6          47h
-pod/example-solrcloud-zk-0                 1/1     Running            0          49d
-pod/example-solrcloud-zk-1                 1/1     Running            0          49d
-pod/example-solrcloud-zk-2                 1/1     Running            0          49d
-pod/example-solrcloud-zk-3                 1/1     Running            0          49d
-pod/example-solrcloud-zk-4                 1/1     Running            0          49d
-pod/solr-operator-8449d4d96f-cmf8p         1/1     Running            0          47h
-pod/zk-operator-674676769c-gd4jr           1/1     Running            0          49d
-
-NAME                                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)               AGE
-service/example-solrcloud-0                ClusterIP   ##.###.###.##    <none>        80/TCP                47h
-service/example-solrcloud-1                ClusterIP   ##.###.##.#      <none>        80/TCP                47h
-service/example-solrcloud-2                ClusterIP   ##.###.###.##    <none>        80/TCP                47h
-service/example-solrcloud-3                ClusterIP   ##.###.##.###    <none>        80/TCP                47h
-service/example-solrcloud-common           ClusterIP   ##.###.###.###   <none>        80/TCP                47h
-service/example-solrcloud-headless         ClusterIP   None             <none>        8983/TCP              47h
-service/example-solrcloud-zk-client        ClusterIP   ##.###.###.###   <none>        21210/TCP             49d
-service/example-solrcloud-zk-headless      ClusterIP   None             <none>        22210/TCP,23210/TCP   49d
-
-NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
-deployment.apps/solr-operator              1/1     1            1           49d
-deployment.apps/zk-operator                1/1     1            1           49d
-
-NAME                                       DESIRED   CURRENT   READY   AGE
-replicaset.apps/solr-operator-8449d4d96f   1         1         1       2d1h
-replicaset.apps/zk-operator-674676769c     1         1         1       49d
-
-NAME                                       READY   AGE
-statefulset.apps/example-solrcloud         4/4     47h
-statefulset.apps/example-solrcloud-zk      5/5     49d
-
-NAME                                          HOSTS                                                                                       PORTS   AGE
-ingress.extensions/example-solrcloud-common   default-example-solrcloud.test.domain,default-example-solrcloud-0.test.domain + 3 more...   80      2d2h
-
-NAME                                       VERSION   DESIREDNODES   NODES   READYNODES   AGE
-solrcloud.solr.apache.org/example       8.1.1     4              4       4            47h
-```
\ No newline at end of file
diff --git a/docs/docs/solr-cloud/managed-updates.md b/docs/docs/solr-cloud/managed-updates.md
deleted file mode 100644
index 86ec236..0000000
--- a/docs/docs/solr-cloud/managed-updates.md
+++ /dev/null
@@ -1,91 +0,0 @@
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
-
-        http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-# Managed SolrCloud Rolling Updates
-_Since v0.2.7_
-
-Solr Clouds are complex distributed systems, and thus require a more delicate and informed approach to rolling updates.
-
-If the [`Managed` update strategy](solr-cloud-crd.md#update-strategy) is specified in the Solr Cloud CRD, then the Solr Operator will take control over deleting SolrCloud pods when they need to be updated.
-
-The operator will find all pods that have not been updated yet and choose the next set of pods to delete for an update, given the following workflow.
-
-## Pod Update Workflow
-
-The logic goes as follows:
-
-1. Find the pods that are out-of-date
-1. Update all out-of-date pods that do not have a started Solr container.
-    - This allows for updating a pod that cannot start, even if other pods are not available.
-    - This step does not respect the `maxPodsUnavailable` option, because these pods have not even started the Solr process.
-1. Retrieve the cluster state of the SolrCloud if there are any `ready` pods.
-    - If no pods are ready, then there is no endpoint to retrieve the cluster state from.
-1. Sort the pods in order of safety for being restarted. [Sorting order reference](#pod-update-sorting-order)
-1. Iterate through the sorted pods, greedily choosing which pods to update. [Selection logic reference](#pod-update-selection-logic)
-    - The maximum number of pods that can be updated are determined by starting with `maxPodsUnavailable`,
-    then subtracting the number of updated pods that are unavailable as well as the number of not-yet-started, out-of-date pods that were updated in a previous step.
-    This check makes sure that any pods taken down during this step do not violate the `maxPodsUnavailable` constraint.
-    
-
-### Pod Update Sorting Order
-
-The pods are sorted by the following criteria, in the given order.
-If any two pods are equal on a criterion, then the next criterion (in the following order) is used to sort them.
-
-In this context the pods sorted highest are the first chosen to be updated, the pods sorted lowest will be selected last.
-
-1. If the pod is the overseer, it will be sorted lowest.
-1. If the pod is not represented in the clusterState, it will be sorted highest.
-    - A pod is not in the clusterstate if it does not host any replicas and is not the overseer.
-1. Number of leader replicas hosted in the pod, sorted low -> high
-1. Number of active or recovering replicas hosted in the pod, sorted low -> high
-1. Number of total replicas hosted in the pod, sorted low -> high
-1. If the pod is not a liveNode, then it will be sorted lower.
-1. Any pods that are equal on the above criteria will be sorted lexicographically.
-
-### Pod Update Selection Logic
-
-Loop over the sorted pods, until the number of pods selected to be updated has reached the maximum.
-This maximum is calculated by taking the given, or default, [`maxPodsUnavailable`](solr-cloud-crd.md#update-strategy) and subtracting the number of updated pods that are unavailable or have yet to be re-created.
-   - If the pod is the overseer, then all other pods must be updated and available.
-   Otherwise, the overseer pod cannot be updated.
-   - If the pod contains no replicas, the pod is chosen to be updated.  
-   **WARNING**: If you use Solr worker nodes for streaming expressions, you will likely want to set [`maxPodsUnavailable`](solr-cloud-crd.md#update-strategy) to a value you are comfortable with.
-   - If Solr Node of the pod is not **`live`**, the pod is chosen to be updated.
-   - If all replicas in the pod are in a **`down`** or **`recovery_failed`** state, the pod is chosen to be updated.
-   - If taking down the replicas hosted in the pod would not violate the given [`maxShardReplicasUnavailable`](solr-cloud-crd.md#update-strategy), then the pod can be updated.
-   Once a pod with replicas has been chosen to be updated, the replicas hosted in that pod are then considered unavailable for the rest of the selection logic.
-        - Some replicas in the shard may already be in a non-active state, or may reside on Solr Nodes that are not "live".
-        The `maxShardReplicasUnavailable` calculation will take these replicas into account, as a starting point.
-        - If a pod contains non-active replicas, and the pod is chosen to be updated, then the pods that are already non-active will not be double counted for the `maxShardReplicasUnavailable` calculation.
-
-## Triggering a Manual Rolling Restart
-
-Given these complex requirements, `kubectl rollout restart statefulset` will generally not work on a SolrCloud.
-
-One option to trigger a manual restart is to change one of the podOptions annotations. For example, you could set an annotation to the date and time of the manual restart, as shown below.
-
-
-```yaml
-apiVersion: solr.apache.org/v1beta1
-kind: SolrCloud
-spec:
-  customSolrKubeOptions:
-    podOptions:
-      annotations:
-        manualrestart: "2021-10-20T08:37:00Z"
-```
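-
-Such an annotation can also be applied with a one-off `kubectl patch` command along these lines; the SolrCloud name `example` and the timestamp are placeholders:
-```bash
-kubectl patch solrcloud example --type merge -p \
-  '{"spec":{"customSolrKubeOptions":{"podOptions":{"annotations":{"manualrestart":"2021-10-20T08:37:00Z"}}}}}'
-```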
diff --git a/docs/docs/solr-cloud/solr-cloud-crd.md b/docs/docs/solr-cloud/solr-cloud-crd.md
deleted file mode 100644
index 9255364..0000000
--- a/docs/docs/solr-cloud/solr-cloud-crd.md
+++ /dev/null
@@ -1,1134 +0,0 @@
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
-
-        http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-# The SolrCloud CRD
-
-The SolrCloud CRD allows users to spin up a Solr cloud in a very configurable way.
-Those configuration options are laid out on this page.
-
-## Solr Options
-
-The SolrCloud CRD gives users the ability to customize how Solr is run.
-
-Please note that the options described below are shown using the base SolrCloud resource, not the helm chart.
-Most options will have the same name and path, however there are differences such as `customSolrKubeOptions`.
-If using Helm, refer to the [Helm Chart documentation](https://artifacthub.io/packages/helm/apache-solr/solr#chart-values) to see the names for the options you are looking to use.
-This document should still be used to see how the SolrCloud options can be used.
-
-### Solr Modules and Additional Libraries
-_Since v0.5.0_
-
-Solr comes packaged with modules that can be loaded optionally, known as either Solr Modules or Solr Contrib Modules.
-By default they are not included in the classpath of Solr, so they have to be explicitly enabled.
-Use the **`SolrCloud.spec.solrModules`** property to add a list of module names, not paths, and they will automatically be enabled for the solrCloud.
-
-However, users might want to include custom code that is not an official Solr Module.
-In order to facilitate this, the **`SolrCloud.spec.additionalLibs`** property takes a list of paths to folders, containing jars to load in the classpath of the SolrCloud.
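-
-A minimal sketch of both properties is shown below; the module names and library path are purely illustrative:
-```yaml
-spec:
-  solrModules:
-    - analysis-extras
-    - ltr
-  additionalLibs:
-    - /opt/solr/custom-libs
-```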
-
-## Data Storage
-
-The SolrCloud CRD gives the option for users to use either
-persistent storage, through [PVCs](https://kubernetes.io/docs/concepts/storage/persistent-volumes/),
-or ephemeral storage, through [emptyDir volumes](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir),
-to store Solr data.
-Ephemeral and persistent storage cannot be used together; if both are provided, the `persistent` options take precedence.
-If neither is provided, ephemeral storage will be used by default.
-
-These options can be found in `SolrCloud.spec.dataStorage`
-
-- **`persistent`**
-  - **`reclaimPolicy`** -
-    _Since v0.2.7_ -
-    Either `Retain`, the default, or `Delete`.
-    This describes the lifecycle of PVCs that are deleted after the SolrCloud is deleted, or the SolrCloud is scaled down and the pods that the PVCs map to no longer exist.
-    `Retain` is used by default, as that is the default Kubernetes policy, to leave PVCs in case pods, or StatefulSets are deleted accidentally.
-    
-    Note: If reclaimPolicy is set to `Delete`, PVCs will not be deleted if pods are merely deleted. They will only be deleted once the `SolrCloud.spec.replicas` is scaled down or deleted.
-  - **`pvcTemplate`** - The template of the PVC to use for the solr data PVCs. By default the name will be "data".
-    Only the `pvcTemplate.spec` field is required, metadata is optional.
-    
-    Note: This template cannot be changed unless the SolrCloud is deleted and recreated.
-    This is a [limitation of StatefulSets and PVCs in Kubernetes](https://github.com/kubernetes/enhancements/issues/661).
-- **`ephemeral`**
-
-  There are two types of ephemeral volumes that can be specified.
-  Both are optional, and if none are specified then an empty `emptyDir` volume source is used.
-  If both are specified then the `hostPath` volume source will take precedence.
-  - **`emptyDir`** - An [`emptyDir` volume source](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) that describes the desired emptyDir volume to use in each SolrCloud pod to store data.
-  - **`hostPath`** - A [`hostPath` volume source](https://kubernetes.io/docs/concepts/storage/volumes/#hostpath) that describes the desired hostPath volume to use in each SolrCloud pod to store data.
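-
-A minimal sketch of persistent data storage is shown below; the reclaim policy and storage size are purely illustrative:
-```yaml
-spec:
-  dataStorage:
-    persistent:
-      reclaimPolicy: Delete
-      pvcTemplate:
-        spec:
-          resources:
-            requests:
-              storage: 20Gi
-```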
-
-## Update Strategy
-_Since v0.2.7_
-
-The SolrCloud CRD provides users the ability to define how Pod updates should be managed, through `SolrCloud.Spec.updateStrategy`.
-This provides the following options:
-
-Under `SolrCloud.Spec.updateStrategy`:
-
-- **`method`** - The method in which Solr pods should be updated. Enum options are as follows:
-  - `Managed` - (Default) The Solr Operator will take control over deleting pods for updates. This process is [documented here](managed-updates.md).
-  - `StatefulSet` - Use the default StatefulSet rolling update logic, one pod at a time waiting for all pods to be "ready".
-  - `Manual` - Neither the StatefulSet or the Solr Operator will delete pods in need of an update. The user will take responsibility over this.
-- **`managed`** - Options for rolling updates managed by the Solr Operator.
-  - **`maxPodsUnavailable`** - (Defaults to `"25%"`) The number of Solr pods in a Solr Cloud that are allowed to be unavailable during the rolling restart.
-  More pods may become unavailable during the restart; however, the Solr Operator will not kill pods if the limit has already been reached.  
-  - **`maxShardReplicasUnavailable`** - (Defaults to `1`) The number of replicas for each shard allowed to be unavailable during the restart.
-- **`restartSchedule`** - A [CRON](https://en.wikipedia.org/wiki/Cron) schedule for automatically restarting the Solr Cloud.
-  [Multiple CRON syntaxes](https://pkg.go.dev/github.com/robfig/cron/v3?utm_source=godoc#hdr-CRON_Expression_Format) are supported, such as intervals (e.g. `@every 10h`) or predefined schedules (e.g. `@yearly`, `@weekly`, etc.).
-
-**Note:** Both `maxPodsUnavailable` and `maxShardReplicasUnavailable` are intOrString fields. So either an int or string can be provided for the field.
-- **int** - The parameter is treated as an absolute value, unless the value is <= 0 which is interpreted as unlimited.
-- **string** - Only percentage string values (`"0%"` - `"100%"`) are accepted, all other values will be ignored.
-  - **`maxPodsUnavailable`** - The `maximumPodsUnavailable` is calculated as the percentage of the total pods configured for that Solr Cloud.
-  - **`maxShardReplicasUnavailable`** - The `maxShardReplicasUnavailable` is calculated independently for each shard, as the percentage of the number of replicas for that shard.
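-
-A minimal sketch of a `Managed` update strategy using these options is shown below; the specific values are purely illustrative:
-```yaml
-spec:
-  updateStrategy:
-    method: Managed
-    managed:
-      maxPodsUnavailable: 2
-      maxShardReplicasUnavailable: "34%"
-    restartSchedule: "@every 240h"
-```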
-
-## Addressability
-_Since v0.2.6_
-
-The SolrCloud CRD provides users the ability to define how it is addressed, through the following options:
-
-Under `SolrCloud.Spec.solrAddressability`:
-
-- **`podPort`** - The port on which the pod is listening. This is also the port that the Solr Jetty service will listen on. (Defaults to `8983`)
-- **`commonServicePort`** - The port on which the common service is exposed. (Defaults to `80`)
-- **`kubeDomain`** - Specifies an override of the default Kubernetes cluster domain name, `cluster.local`. This option should only be used if the Kubernetes cluster has been setup with a custom domain name.
-- **`external`** - Expose the cloud externally, outside of the kubernetes cluster in which it is running.
-  - **`method`** - (Required) The method by which your cloud will be exposed externally.
-  Currently available options are [`Ingress`](https://kubernetes.io/docs/concepts/services-networking/ingress/) and [`ExternalDNS`](https://github.com/kubernetes-sigs/external-dns).
-  The goal is to support more methods in the future, such as LoadBalanced Services.
-  - **`domainName`** - (Required) The primary domain name to open your cloud endpoints on. If `useExternalAddress` is set to `true`, then this is the domain that will be used in Solr Node names.
-  - **`additionalDomainNames`** - You can choose to listen on additional domains for each endpoint, however Solr will not register itself under these names.
-  - **`useExternalAddress`** - Use the external address to advertise the SolrNode. If a domain name is required for the chosen external `method`, then the one provided in `domainName` will be used. \
-    This cannot be set to `true` when **`hideNodes`** is set to `true` or **`ingressTLSTermination`** is used.
-  - **`hideCommon`** - Do not externally expose the common service (one endpoint for all solr nodes).
-  - **`hideNodes`** - Do not externally expose each node. (This cannot be set to `true` if the cloud is running across multiple kubernetes clusters)
-  - **`nodePortOverride`** - Make the Node Service(s) override the podPort. This is only available for the `Ingress` external method. If `hideNodes` is set to `true`, then this option is ignored. If provided, this port will be used to advertise the Solr Node. \
-    If `method: Ingress` and `hideNodes: false`, then this value defaults to `80` since that is the default port that ingress controllers listen on.
-  - **`ingressTLSTermination`** - Terminate TLS for the SolrCloud at the `Ingress`, if using the `Ingress` **method**. This will leave the inter-node communication within the cluster to use HTTP. \
-    This option may not be used with **`useExternalAddress`**. Only one sub-option can be provided.
-    - **`useDefaultTLSSecret`** - Use the default TLS Secret set by your Ingress controller, if your Ingress controller supports this feature. Cannot be used when `tlsSecret` is used. \
-      For example, using nginx: https://kubernetes.github.io/ingress-nginx/user-guide/tls/#default-ssl-certificate
-    - **`tlsSecret`** - Name of a Kubernetes TLS Secret to terminate TLS when using the `Ingress` method. Cannot be used when `useDefaultTLSSecret` is used.
-
-**Note:** Unless both `external.method=Ingress` and `external.hideNodes=false`, a headless service will be used to make each Solr Node in the statefulSet addressable.
-If both of those criteria are met, then an individual ClusterIP Service will be created for each Solr Node/Pod.
-
-If you are using an `Ingress` for external addressability, you can customize the created `Ingress` through `SolrCloud.spec.customSolrKubeOptions.ingressOptions`.
-Under this property, you can set custom `annotations`, `labels` and an `ingressClassName`.
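-
-A minimal sketch combining the addressability options above is shown below; the domain name is purely illustrative:
-```yaml
-spec:
-  solrAddressability:
-    podPort: 8983
-    commonServicePort: 80
-    external:
-      method: Ingress
-      domainName: k8s.solr.cloud
-      useExternalAddress: true
-```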
-
-## Backups
-
-Solr Backups are enabled via the Solr Operator.
-Please refer to the [SolrBackup documentation](../solr-backup) for more information on setting up a SolrCloud with backups enabled.
-
-## Zookeeper Reference
-
-Solr Clouds require an Apache Zookeeper to connect to.
-
-The Solr operator gives a few options.
-
-- Connecting to an already running zookeeper ensemble via [connection strings](#zk-connection-info)
-- [Spinning up a provided](#provided-instance) Zookeeper Ensemble in the same namespace via the [Zookeeper Operator](https://github.com/pravega/zookeeper-operator)
-
-These options are configured under `spec.zookeeperRef`
-
-#### Chroot
-
-Both options below allow you to specify a `chroot`, a ZNode path for Solr to use as its base "directory" in Zookeeper.
-Before the operator creates or updates a StatefulSet with a given `chroot`, it will first ensure that the given ZNode path exists and if it doesn't the operator will create all necessary ZNodes in the path.
-If no chroot is given, a default of `/` will be used, which doesn't require the existence check previously mentioned.
-If a chroot is provided without a prefix of `/`, the operator will add the prefix, as it is required by Zookeeper.
-
-### ZK Connection Info
-
-This is an external/internal connection string as well as an optional chroot to an already running Zookeeper ensemble.
-If you provide an external connection string, you do not _have_ to provide an internal one as well.
-
-Under `spec.zookeeperRef`:
-
-- **`connectionInfo`**
-  - **`externalConnectionString`** - The ZK connection string to the external Zookeeper cluster, e.g. `zoo1:2181`
-  - **`chroot`** - The chroot to use for the cluster
-
-External ZooKeeper clusters are often configured to use ZooKeeper features (e.g. securePort) which require corresponding configuration on the client side.
-To support these use-cases, users may provide arbitrary system properties under `spec.solrZkOpts` which will be passed down to all ZooKeeper clients (Solr, zkcli.sh, etc.) managed by the operator.
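-
-A minimal sketch of a connection to an existing ensemble is shown below; the host names and chroot are purely illustrative:
-```yaml
-spec:
-  zookeeperRef:
-    connectionInfo:
-      externalConnectionString: "zoo-0.zk:2181,zoo-1.zk:2181,zoo-2.zk:2181"
-      chroot: "/solr"
-```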
-
-#### ACLs
-_Since v0.2.7_
-
-The Solr Operator allows for users to specify ZK ACL references in their Solr Cloud CRDs.
-The user must specify the name of a secret that resides in the same namespace as the cloud, that contains an ACL username value and an ACL password value.
-This ACL must have admin permissions for the [chroot](#chroot) given.
-
-The ACL information can be provided through an ADMIN acl and a READ ONLY acl.  
-- Admin: `SolrCloud.spec.zookeeperRef.connectionInfo.acl`
-- Read Only: `SolrCloud.spec.zookeeperRef.connectionInfo.readOnlyAcl`
-
-All ACL fields are **required** if an ACL is used.
-
-- **`secret`** - The name of the secret, in the same namespace as the SolrCloud, that contains the admin ACL username and password.
-- **`usernameKey`** - The name of the key in the provided secret that stores the admin ACL username.
-- **`passwordKey`** - The name of the key in the provided secret that stores the admin ACL password.
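-
-A minimal sketch of the ACL configuration is shown below; the secret and key names are purely illustrative:
-```yaml
-spec:
-  zookeeperRef:
-    connectionInfo:
-      acl:
-        secret: zk-admin-acl
-        usernameKey: username
-        passwordKey: password
-      readOnlyAcl:
-        secret: zk-readonly-acl
-        usernameKey: username
-        passwordKey: password
-```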
-
-### Provided Instance
-
-If you do not require the Solr cloud to run cross-kube cluster, and do not want to manage your own Zookeeper ensemble,
-the solr-operator can manage Zookeeper ensemble(s) for you.
-
-Using the [zookeeper-operator](https://github.com/pravega/zookeeper-operator), a new Zookeeper ensemble can be spun up for 
-each solrCloud that has this option specified.
-
-The startup parameter `zookeeper-operator` must be provided on startup of the solr-operator for this option to be available.
-
-To find all Provided zookeeper options, run `kubectl explain solrcloud.spec.zookeeperRef.provided`.
-Zookeeper Conf and PodOptions provided in the linked Zookeeper Operator version should be supported in the SolrCloud CRD.
-However, this mapping is maintained manually, so not all options may be available.
-If there is an option available in the ZookeeperCluster CRD that is not exposed via the SolrCloud CRD, please create a Github Issue.
-
-#### Zookeeper Storage Options
-_Since v0.4.0_
-
-The Zookeeper Operator allows for both ephemeral and persistent storage, and the Solr Operator supports both as of `v0.4.0`.
-
-```yaml
-spec:
-  zookeeperRef:
-    provided:
-      ephemeral:
-        emptydirvolumesource: {}
-      persistence:
-        reclaimPolicy: "Retain" # Either Retain or Delete
-        spec: {} # PVC Spec for the Zookeeper volumes
-```
-
-By default, if you do not provide either `ephemeral` or `persistence`, the Solr Operator will default to the type of storage you are using for your Solr pods.
-
-However, if you provide either object above, even if the object is empty, that storage type will be used for the created Zookeeper pods.
-If both `ephemeral` and `persistence` are provided, then `persistence` is preferred.
-
-#### ACLs for Provided Ensembles
-_Since v0.3.0_
-
-If you want Solr to set ZK ACLs for znodes it creates in the `provided` ensemble, you can supply ACL credentials for an ADMIN and optionally a READ ONLY user using the following config settings: 
-- Admin: `SolrCloud.spec.zookeeperRef.provided.acl`
-- Read Only: `SolrCloud.spec.zookeeperRef.provided.readOnlyAcl`
-
-All ACL fields are **required** if an ACL is used.
-
-- **`secret`** - The name of the secret, in the same namespace as the SolrCloud, that contains the ACL username and password.
-- **`usernameKey`** - The name of the key in the provided secret that stores the admin ACL username.
-- **`passwordKey`** - The name of the key in the provided secret that stores the admin ACL password.
-
-**Warning**: There is a known issue with the Zookeeper operator where it deploys pods with `skipACL=yes`, see: https://github.com/pravega/zookeeper-operator/issues/316.
-This means that even if Solr sets the ACLs on znodes, they will not be enforced by Zookeeper. If your organization requires Solr to use ZK ACLs, then you'll need to 
-deploy Zookeeper to Kubernetes using another approach, such as using a Helm chart. 
-
-## Override Built-in Solr Configuration Files
-_Since v0.2.7_
-
-The Solr operator deploys well-configured SolrCloud instances with minimal input required from human operators. 
-As such, the operator installs various configuration files automatically, including `solr.xml` for node-level settings and `log4j2.xml` for logging. 
-However, there may come a time when you need to override the built-in configuration files with custom settings.
-
-In general, users can provide custom config files by providing a ConfigMap in the same namespace as the SolrCloud instance; 
-all custom config files should be stored in the same user-provided ConfigMap under different keys.
-Point your SolrCloud definition to a user-provided ConfigMap using the following structure:
-```yaml
-spec:
-  ...
-  customSolrKubeOptions:
-    configMapOptions:
-      providedConfigMap: <Custom-ConfigMap-Here>
-```
-
-### Custom solr.xml
-
-Solr pods load node-level configuration settings from `/var/solr/data/solr.xml`. 
-This important configuration file gets created by the `cp-solr-xml` initContainer which bootstraps the `solr.home` directory on each pod before starting the main container.
-The default `solr.xml` is mounted into the `cp-solr-xml` initContainer from a ConfigMap named `<INSTANCE>-solrcloud-configmap` (where `<INSTANCE>` is the name of your SolrCloud instance) created by the Solr operator.
-
-_Note: The data in the default ConfigMap is not editable! Any changes to the `solr.xml` in the default ConfigMap created by the operator will be overwritten during the next reconcile cycle._
-
-Many of the specific values in `solr.xml` can be set using Java system properties; for instance, the following setting controls the read timeout for the HTTP client used by Solr's `HttpShardHandlerFactory`:
-```xml
-<int name="socketTimeout">${socketTimeout:600000}</int>
-```
-The `${socketTimeout:600000}` syntax means pull the value from a Java system property named `socketTimeout` with default `600000` if not set.
-
-You can set Java system properties using the `solrOpts` string in your SolrCloud definition, such as:
-```yaml
-spec:
-  solrOpts: -DsocketTimeout=300000
-```
-This same approach works for a number of settings in `solrconfig.xml` as well.
-
-However, if you need to customize `solr.xml` beyond what can be accomplished with Java system properties, 
-then you need to supply your own `solr.xml` in a ConfigMap in the same namespace where you deploy your SolrCloud instance.
-Provide your custom XML in the ConfigMap using `solr.xml` as the key as shown in the example below:
-```yaml
----
-kind: ConfigMap
-apiVersion: v1
-metadata:
-  name: custom-solr-xml
-data:
-  solr.xml: |
-    <?xml version="1.0" encoding="UTF-8" ?>
-    <solr>
-      ... CUSTOM CONFIG HERE ...
-    </solr>
-```
-**Important: Your custom `solr.xml` must include `<int name="hostPort">${hostPort:0}</int>` as the operator relies on this element to set the port Solr pods advertise to ZooKeeper. If this element is missing, then your Solr pods will not be created.**
-
-You can get the default `solr.xml` from a Solr pod as a starting point for creating a custom config using `kubectl cp` as shown in the example below:
-```bash
-SOLR_POD_ID=$(kubectl get pod -l technology=solr-cloud --no-headers -o custom-columns=":metadata.name" | head -1)
-kubectl cp $SOLR_POD_ID:/var/solr/data/solr.xml ./custom-solr.xml
-```
-This copies the default config from the first Solr pod found in the namespace and names it `custom-solr.xml`. Customize the settings in `custom-solr.xml` as needed and then create a ConfigMap using YAML. 
-
-_Note: Using `kubectl create configmap --from-file` scrambles the XML formatting, so we recommend defining the configmap YAML as shown above to keep the XML formatted properly._
-
-Point your SolrCloud instance at the custom ConfigMap using:
-```yaml
-spec:
-  customSolrKubeOptions:
-    configMapOptions:
-      providedConfigMap: custom-solr-xml
-```
-_Note: If you set `providedConfigMap`, then the ConfigMap must include the `solr.xml` or `log4j2.xml` key, otherwise the SolrCloud will fail to reconcile._
-
-#### Changes to Custom Config Trigger Rolling Restarts
-
-The Solr operator stores the MD5 hash of your custom XML in the StatefulSet's pod spec annotations (`spec.template.metadata.annotations`). To see the current annotations for your Solr pods, you can do:
-```bash
-kubectl annotate pod -l technology=solr-cloud --list=true
-```
-If the custom `solr.xml` changes in the user-provided ConfigMap, then the operator triggers a rolling restart of Solr pods to apply the updated configuration settings automatically.
-
-To summarize, if you need to customize `solr.xml`, provide your own version in a ConfigMap and changes made to the XML in the ConfigMap are automatically applied to your Solr pods.
-
-### Custom Log Configuration
-_Since v0.3.0_
-
-By default, the Solr Docker image configures Solr to load its log configuration from `/var/solr/log4j2.xml`. 
-If you need to fine-tune the log configuration, then you can provide a custom `log4j2.xml` in a ConfigMap using the same basic process as described in the previous section for customizing `solr.xml`. If supplied, the operator overrides the log config using the `LOG4J_PROPS` env var.
-
-As with custom `solr.xml`, the operator can track the MD5 hash of your `log4j2.xml` in the pod spec annotations to trigger a rolling restart if the log config changes. 
-However, Log4j2 supports hot reloading of log configuration using the `monitorInterval` attribute on the root `<Configuration>` element. For more information on this, see: [Log4j Automatic Reconfiguration](https://logging.apache.org/log4j/2.x/manual/configuration.html#AutomaticReconfiguration). 
-If your custom log config has a `monitorInterval` set, then the operator does not watch for changes to the log config and will not trigger a rolling restart if the config changes. 
-Kubernetes will automatically update the file on each pod's filesystem when the data in the ConfigMap changes. Once Kubernetes updates the file, Log4j will pick up the changes and apply them without restarting the Solr pod.
-
-If you need to customize both `solr.xml` and `log4j2.xml` then you need to supply both in the same ConfigMap using multiple keys as shown below:
-```yaml
----
-kind: ConfigMap
-apiVersion: v1
-metadata:
-  name: custom-solr-xml
-data:
-  log4j2.xml: |
-    <?xml version="1.0" encoding="UTF-8"?>
-    <Configuration monitorInterval="30">
-     ... YOUR CUSTOM LOG4J CONFIG HERE ...
-    </Configuration>
-
-
-  solr.xml: |
-    <?xml version="1.0" encoding="UTF-8" ?>
-    <solr>
-     ... YOUR CUSTOM SOLR XML CONFIG HERE ...
-    </solr>
-```
-
-## Enable TLS Between Solr Pods
-_Since v0.3.0_
-
-A common approach to securing traffic to your Solr cluster is to perform [**TLS termination** at the Ingress](#enable-ingress-tls-termination) and leave all traffic between Solr pods un-encrypted.
-However, depending on how you expose Solr on your network, you may also want to encrypt traffic between Solr pods.
-The Solr operator provides **optional** configuration settings to enable TLS for encrypting traffic between Solr pods.
-
-Enabling TLS for Solr is a straightforward process once you have a [**PKCS12 keystore**](https://en.wikipedia.org/wiki/PKCS_12) containing an [X.509](https://en.wikipedia.org/wiki/X.509) certificate and private key; as of Java 8, PKCS12 is the default keystore format supported by the JVM.
-
-There are three basic use cases supported by the Solr operator. First, you can use cert-manager to issue a certificate and store the resulting PKCS12 keystore in a Kubernetes TLS secret. 
-Alternatively, you can create the TLS secret manually from a certificate obtained by some other means. In both cases, you simply point your SolrCloud CRD to the resulting TLS secret and corresponding keystore password secret.
-Lastly, as of **v0.4.0**, you can supply the path to a directory containing TLS files that are mounted by some external agent or CSI driver.  
-
-### Use cert-manager to issue the certificate
-
-[cert-manager](https://cert-manager.io/docs/) is a popular Kubernetes controller for managing TLS certificates, including renewing certificates prior to expiration. 
-One of the primary benefits of cert-manager is it supports pluggable certificate `Issuer` implementations, including a self-signed Issuer for local development and an [ACME compliant](https://tools.ietf.org/html/rfc8555) Issuer for working with services like [Let’s Encrypt](https://letsencrypt.org/).
-
-If you already have a TLS certificate you want to use for Solr, then you don't need cert-manager and can skip down to [I already have a TLS Certificate](#i-already-have-a-tls-certificate) later in this section.
-If you do not have a TLS certificate, then we recommend installing **cert-manager** as it makes working with TLS in Kubernetes much easier.
-
-#### Install cert-manager
-
-Given its popularity, cert-manager may already be installed in your Kubernetes cluster. To check if `cert-manager` is already installed, do:
-```bash
-kubectl get crds -l app.kubernetes.io/instance=cert-manager
-```
-If installed, you should see the following cert-manager related CRDs:
-```
-certificaterequests.cert-manager.io
-certificates.cert-manager.io
-challenges.acme.cert-manager.io
-clusterissuers.cert-manager.io
-issuers.cert-manager.io
-orders.acme.cert-manager.io
-```
-
-If not installed, use Helm to install it into the `cert-manager` namespace:
-```bash
-if ! helm repo list | grep -q "https://charts.jetstack.io"; then
-  helm repo add jetstack https://charts.jetstack.io
-  helm repo update
-fi
-
-kubectl create ns cert-manager
-helm upgrade --install cert-manager jetstack/cert-manager \
-  --namespace cert-manager \
-  --version v1.1.0 \
-  --set installCRDs=true
-``` 
-You’ll need admin privileges to install the CRDs in a shared K8s cluster, so work with your friendly Kubernetes admin to install if needed (most likely cert-manager will already be installed).
-Refer to the [cert-manager Installation](https://cert-manager.io/docs/installation/kubernetes/) instructions for more information.
-
-#### Create cert-manager Certificate
-
-Once cert-manager is installed, you need to create an `Issuer` or `ClusterIssuer` CRD and then request a certificate using a [Certificate CRD](https://cert-manager.io/docs/usage/certificate/).
-Refer to the [cert-manager docs](https://cert-manager.io/docs/) on how to define a certificate.
-
-Certificate Issuers are typically platform specific. For instance, on GKE, to create a Let’s Encrypt Issuer you need a service account with various cloud DNS permissions granted for DNS01 challenges to work, see: https://cert-manager.io/docs/configuration/acme/dns01/google/.
-
-The DNS names in your certificate should match the Solr addressability settings in your SolrCloud CRD. For instance, if your SolrCloud CRD uses the following settings:
-```yaml
-spec:
-  solrAddressability:
-    external:
-      domainName: k8s.solr.cloud
-``` 
-Then your certificate needs the following domains specified:
-```yaml
-apiVersion: cert-manager.io/v1
-kind: Certificate
-metadata:
-  ...
-spec:
-  dnsNames:
-  - '*.k8s.solr.cloud'
-  - k8s.solr.cloud
-```
-The wildcard DNS name will cover all SolrCloud nodes such as `<NS>-solrcloud-1.k8s.solr.cloud`.
-
-Also, when requesting your certificate, keep in mind that internal DNS names in Kubernetes are not valid for public certificates. 
-For instance `<svc>.<namespace>.svc.cluster.local` is internal to Kubernetes and certificate issuer services like LetsEncrypt 
-will not generate a certificate for K8s internal DNS names (you'll get errors during certificate issuing).
-
-Another benefit is that cert-manager can create a [PKCS12](https://cert-manager.io/docs/release-notes/release-notes-0.15/#general-availability-of-jks-and-pkcs-12-keystores) keystore automatically when issuing a `Certificate`, 
-which allows the Solr operator to mount the keystore directly on your Solr pods. Ensure your `Certificate` instance requests that a **pkcs12 keystore** be created, using config similar to the following:
-```yaml
-  keystores:
-    pkcs12:
-      create: true
-      passwordSecretRef:
-        key: password-key
-        name: pkcs12-password-secret
-```
-_Note: the example structure above goes in your certificate CRD YAML, not SolrCloud._
-
-You need to create the keystore secret (e.g. `pkcs12-password-secret`) in the same namespace before requesting the certificate, see: https://cert-manager.io/docs/reference/api-docs/#cert-manager.io/v1.PKCS12Keystore.
-Although a keystore password is not required for PKCS12, **cert-manager** requires a password when requesting a `pkcs12` keystore for your certificate.
-Moreover, most JVMs require a password for pkcs12 keystores; not supplying a password typically results in errors like the following:
-```
-Caused by: java.security.UnrecoverableKeyException: Get Key failed: null
-	at java.base/sun.security.pkcs12.PKCS12KeyStore.engineGetKey(Unknown Source)
-	at java.base/sun.security.util.KeyStoreDelegator.engineGetKey(Unknown Source)
-	at java.base/java.security.KeyStore.getKey(Unknown Source)
-	at java.base/sun.security.ssl.SunX509KeyManagerImpl.<init>(Unknown Source)
-```
-Consequently, the Solr operator requires you to use a non-null password for your keystore. 
-
-Here's an example of how to use cert-manager to generate a self-signed certificate:
-```yaml
----
-apiVersion: v1
-kind: Secret
-metadata:
-  name: pkcs12-password-secret
-stringData:
-  password-key: SOME_PASSWORD_HERE # plain-text value; using `data:` instead would require a base64-encoded value
-
----
-apiVersion: cert-manager.io/v1
-kind: Issuer
-metadata:
-  name: selfsigned-issuer
-spec:
-  selfSigned: {}
-
----
-apiVersion: cert-manager.io/v1
-kind: Certificate
-metadata:
-  name: selfsigned-cert
-spec:
-  subject:
-    organizations: ["dev"]
-  dnsNames:
-    - localhost
-    - dev-dev-solrcloud.ing.local.domain
-    - "*.ing.local.domain"
-  secretName: dev-selfsigned-cert-tls
-  issuerRef:
-    name: selfsigned-issuer
-  keystores:
-    pkcs12:
-      create: true
-      passwordSecretRef:
-        key: password-key
-        name: pkcs12-password-secret
-```
-
-Once created, simply point the SolrCloud deployment at the TLS and keystore password secrets, e.g.
-```yaml
-spec:
-  ... other SolrCloud CRD settings ...
-
-  solrTLS:
-    keyStorePasswordSecret:
-      name: pkcs12-password-secret
-      key: password-key
-    pkcs12Secret:
-      name: selfsigned-cert
-      key: keystore.p12
-```
-_Note: when using self-signed certificates, you'll have to configure HTTP client libraries to skip hostname and CA verification._
-
-### I already have a TLS Certificate
-
-Users may bring their own cert stored in a `kubernetes.io/tls` secret; for this use case, cert-manager is not required. 
-There are many ways to get a certificate, such as from the GKE managed certificate process or from a CA directly. 
-Regardless of how you obtain a Certificate, it needs to be stored in a [Kubernetes TLS secret](https://kubernetes.io/docs/concepts/configuration/secret/#tls-secrets) 
-that contains a `tls.crt` file (x.509 certificate with a public key and info about the issuer) and a `tls.key` file (the private key).
-
-Ideally, the TLS secret will also have a `pkcs12` keystore. 
-If the supplied TLS secret does not contain a `keystore.p12` key, then the Solr operator creates an `initContainer` on the StatefulSet to generate the keystore from the TLS secret using the following command:
-```bash
-openssl pkcs12 -export -in tls.crt -inkey tls.key -out keystore.p12 -passout "pass:${SOLR_SSL_KEY_STORE_PASSWORD}"
-```
-_The `initContainer` uses the main Solr image as it has `openssl` installed._
-
-Configure the SolrCloud deployment to point to the user-provided keystore and TLS secrets:
-```yaml
-spec:
-  ... other SolrCloud CRD settings ...
-
-  solrTLS:
-    keyStorePasswordSecret:
-      name: pkcs12-keystore-manual
-      key: password-key
-    pkcs12Secret:
-      name: pkcs12-keystore-manual
-      key: keystore.p12
-```
-
-### Separate TrustStore
-
-A truststore holds public keys for certificates you trust. By default, Solr pods are configured to use the keystore as the truststore.
-However, you may have a separate truststore you want to use for Solr TLS. As with the keystore, you need to provide a PKCS12 truststore in a secret and then configure your SolrCloud TLS settings as shown below:
-```yaml
-spec:
-  ... other SolrCloud CRD settings ...
-
-  solrTLS:
-    keyStorePasswordSecret:
-      name: pkcs12-keystore-manual
-      key: password-key
-    pkcs12Secret:
-      name: pkcs12-keystore-manual
-      key: keystore.p12
-    trustStorePasswordSecret:
-      name: pkcs12-truststore
-      key: password-key
-    trustStoreSecret:
-      name: pkcs12-truststore
-      key: truststore.p12
-``` 
-_Tip: if your truststore is not in PKCS12 format, use `openssl` to convert it._ 
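-
-For reference, a certificate-only PKCS12 truststore can be built from a PEM bundle of trusted CA certificates with a command along these lines; the file names and password variable are placeholders:
-```bash
-openssl pkcs12 -export -nokeys -in trusted-ca-certs.pem \
-  -out truststore.p12 -passout "pass:${TRUSTSTORE_PASSWORD}"
-```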
-
-### Mounted TLS Directory
-_Since v0.4.0_
-
-The options discussed to this point require that all Solr pods share the same certificate and truststore. An emerging pattern in the Kubernetes ecosystem is to issue a unique certificate for each pod.
-Typically this operation is performed by an external agent, such as a cert-manager extension, that uses mutating webhooks to mount a unique certificate and supporting files on each pod dynamically.
-How the pod-specific certificates get issued is beyond the scope of the Solr operator. Under this scheme, you can use `spec.solrTLS.mountedTLSDir.path` to specify the path where the TLS files are mounted on the main pod.
-The following example illustrates how to configure a keystore and truststore in PKCS12 format using the `mountedTLSDir` option:
-```yaml
-spec:
-  ... other SolrCloud CRD settings ...
-
-  solrTLS:
-    clientAuth: Want
-    checkPeerName: true
-    verifyClientHostname: true
-    mountedTLSDir:
-      path: /pod-server-tls
-      keystoreFile: keystore.p12
-      keystorePasswordFile: keystore-password
-      truststoreFile: truststore.p12
-```
-
-When using the mounted TLS directory option, you need to ensure each Solr pod gets restarted before the certificate expires. Solr does not support hot reloading of the keystore or truststore.
-Consequently, we recommend using the `spec.updateStrategy.restartSchedule` to restart pods before the certificate expires. 
-Typically, with this scheme, a new certificate is issued whenever a pod is restarted.
-
-### Client TLS
-_Since v0.4.0_
-
-Solr supports using separate client and server TLS certificates. Solr uses the client certificate in mutual TLS (mTLS) scenarios to make requests to other Solr pods.
-Use the `spec.solrClientTLS` configuration options to configure a separate client certificate. 
-As this is an advanced option, the supplied client certificate keystore and truststore must already be in PKCS12 format.
-As with the server certificate loaded from `spec.solrTLS.pkcs12Secret`, 
-you can have the operator restart Solr pods after the client TLS secret updates by setting `spec.solrClientTLS.restartOnTLSSecretUpdate` to `true`.
-
-You may need to increase the timeout for the liveness / readiness probes when using mTLS with a separate client certificate, such as: 
-```yaml
-spec:
-  ... other SolrCloud CRD settings ...
-
-  customSolrKubeOptions:
-    podOptions:
-      livenessProbe:
-        timeoutSeconds: 10
-      readinessProbe:
-        timeoutSeconds: 10
-```
-
-You may also use the `spec.solrClientTLS.mountedTLSDir` option to load a pod specific client certificate from a directory mounted by an external agent or CSI driver.  
-
-### Ingress with TLS protected Solr
-
-The Solr operator may create an Ingress for exposing Solr pods externally. When TLS is enabled, the operator adds the following annotation and TLS settings to the Ingress manifest, such as:
-```yaml
-apiVersion: networking.k8s.io/v1
-kind: Ingress
-metadata:
-  annotations:
-    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
-spec:
-  rules:
-    ...
-  tls:
-  - secretName: my-selfsigned-cert-tls
-```
-
-If using the mounted TLS Directory option with an Ingress, you will need to inject the ingress with TLS information as well.
-The [Ingress TLS Termination section below](#enable-ingress-tls-termination) shows how this can be done when using cert-manager.
-
-
-### Certificate Renewal and Rolling Restarts
-
-cert-manager automatically handles certificate renewal. From the docs:
-
-> The default duration for all certificates is 90 days and the default renewal windows is 30 days. This means that certificates are considered valid for 3 months and renewal will be attempted within 1 month of expiration.
->          https://docs.cert-manager.io/en/release-0.8/reference/certificates.html
-
-However, this only covers updating the underlying TLS secret; the mounted secret files in each Solr pod are updated on the filesystem automatically, see: https://kubernetes.io/docs/concepts/configuration/secret/#mounted-secrets-are-updated-automatically. 
-The JVM, however, only reads key and trust stores once during initialization and does not reload them if they change. Thus, we need to recycle the Solr container in each pod to pick up the updated keystore.
-
-The operator tracks the MD5 hash of the `tls.crt` from the TLS secret in an annotation on the StatefulSet pod spec so that when the TLS secret changes, it will trigger a rolling restart of the affected Solr pods.
-The operator guards this behavior with an **opt-in** flag `restartOnTLSSecretUpdate` as some users may not want to restart Solr pods when the TLS secret holding the cert changes and may instead choose to restart the pods during a maintenance window (presumably before the certs expire).
-```yaml
-spec:
-  ... other SolrCloud CRD settings ...
-
-  solrTLS:
-    restartOnTLSSecretUpdate: true
-    ...
-
-```
-
-### Misc Config Settings for TLS Enabled Solr
-
-Although not required, we recommend setting the `commonServicePort` and `nodePortOverride` to `443` instead of the default port `80` under `solrAddressability` to avoid confusion when working with `https`. 
-```yaml
-spec:
-  ... other SolrCloud CRD settings ...
-
-  solrAddressability:
-    commonServicePort: 443
-    external:
-      nodePortOverride: 443
-
-```
-
-#### Prometheus Exporter
-
-If you're relying on a self-signed certificate (or any certificate that requires importing the CA into the Java trust store) for Solr pods, then the Prometheus Exporter will not be able to make requests for metrics. 
-You'll need to duplicate your TLS config from your SolrCloud CRD definition to your Prometheus exporter CRD definition as shown in the example below:
-```yaml
-  solrReference:
-    cloud:
-      name: "dev"
-    solrTLS:
-      restartOnTLSSecretUpdate: true
-      keyStorePasswordSecret:
-        name: pkcs12-password-secret
-        key: password-key
-      pkcs12Secret:
-        name: dev-selfsigned-cert-tls
-        key: keystore.p12
-```
-_This only applies to the SolrJ client the exporter uses to make requests to your TLS-enabled Solr pods and does not enable HTTPS for the exporter service._
-
-#### Public / Private Domain Names
-
-If your Solr pods use Kubernetes internal domain names, such as `<cloud>-solrcloud-<ordinal>.<ns>` or 
-`<cloud>-solrcloud-<ordinal>.<ns>.svc.cluster.local`, then you **cannot** request a certificate from a service like LetsEncrypt. 
-You'll receive an error like (from the cert-manager controller pod logs):
-```
-   Cannot issue for \"*.<ns>.svc.cluster.local\": Domain name does not end with a valid public suffix (TLD)"
-```
-This is a policy enforced by trusted certificate authorities, see: https://www.digicert.com/kb/advisories/internal-names.htm.
-Intuitively, this makes sense because services like LetsEncrypt cannot determine if you own a private domain because they cannot reach it from the Internet. 
-
-Some CA's provide TLS certificates for private domains but that topic is beyond the scope of the Solr operator.
-You may want to use a self-signed certificate for internal traffic and then a public certificate for your Ingress.
-Alternatively, you can choose to expose Solr pods with an external name using SolrCloud `solrAddressability` settings:
-```yaml
-kind: SolrCloud
-metadata:
-  name: search
-spec:
-  ... other SolrCloud CRD settings ...
-
-  solrAddressability:    
-    commonServicePort: 443
-    external:
-      nodePortOverride: 443
-      domainName: k8s.solr.cloud
-      method: Ingress
-      useExternalAddress: true
-```
-The example settings above will result in your Solr pods getting names like: `<ns>-search-solrcloud-0.k8s.solr.cloud` 
-which you can request TLS certificates from LetsEncrypt assuming you own the `k8s.solr.cloud` domain.
-
-#### mTLS
-
-Mutual TLS (mTLS) provides an additional layer of security by ensuring the client applications sending requests to Solr are trusted.
-To enable mTLS, simply set `spec.solrTLS.clientAuth` to either `Want` or `Need`. When mTLS is enabled, the Solr operator needs to
-supply a client certificate that is trusted by Solr; the operator makes API calls to Solr to get cluster status. 
-To configure the client certificate for the operator, see [Running the Operator > mTLS](../running-the-operator.md#client-auth-for-mtls-enabled-solr-clusters)
-
-When mTLS is enabled, the liveness and readiness probes are configured to execute a local command on each Solr pod instead of the default HTTP Get request.
-Using a command is required so that we can use the correct TLS certificate when making an HTTPS call to the probe endpoints.
-
-To help with debugging the TLS handshake between client and server,
-you can add the `-Djavax.net.debug=SSL,keymanager,trustmanager,ssl:handshake` Java system property to the `spec.solrOpts` for your SolrCloud instance. 
-
-To verify mTLS is working for your Solr pods, you can supply the client certificate (and CA cert if needed) via curl after opening a port-forward to one of your Solr pods:
-```bash
-curl "https://localhost:8983/solr/admin/info/system" -v \
-  --key client/private_key.pem \
-  --cert client/client.pem \
-  --cacert root-ca/root-ca.pem
-```
-The `--cacert` option supplies the CA's certificate needed to trust the server certificate provided by the Solr pods during TLS handshake.
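-
-For reference, a port-forward to one of the Solr pods can be opened with a command like the following; the pod name is illustrative and depends on your SolrCloud name and namespace:
-```bash
-kubectl port-forward pod/dev-solrcloud-0 8983:8983
-```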
-
-## Enable Ingress TLS Termination
-_Since v0.4.0_
-
-A common approach to securing traffic to your Solr cluster is to perform **TLS termination** at the Ingress and either leave all traffic between Solr pods un-encrypted or use private CAs for inter-pod communication.
-The operator supports this paradigm, to ensure all external traffic is encrypted.
-
-```yaml
-kind: SolrCloud
-metadata:
-  name: search
-spec:
-  ... other SolrCloud CRD settings ...
-
-  solrAddressability:
-    external:
-      domainName: k8s.solr.cloud
-      method: Ingress
-      hideNodes: true
-      useExternalAddress: false
-      ingressTLSTermination:
-        tlsSecret: my-selfsigned-cert-tls
-```
-
-The only additional settings required here are:
-- Making sure that you are not using the external TLS address for Solr to communicate internally via `useExternalAddress: false`.
-  This will be ignored, even if it is set to `true`.
-- Adding a TLS secret through `ingressTLSTermination.tlsSecret`, this is passed to the Kubernetes Ingress to handle the TLS termination.
-  _This ensures that the only way to communicate with your Solr cluster externally is through the TLS protected common-endpoint._
-
-To generate a TLS secret, follow the [instructions above](#use-cert-manager-to-issue-the-certificate) and use the templated Hostname: `<namespace>-<name>-solrcloud.<domain>`
-
-If you configure your SolrCloud correctly, cert-manager can auto-inject the TLS secrets for you as well:
-
-```yaml
-kind: SolrCloud
-metadata:
-  name: search
-  namespace: explore
-spec:
-  ... other SolrCloud CRD settings ...
-  customSolrKubeOptions:
-    ingressOptions:
-      annotations:
-        kubernetes.io/ingress.class: "nginx"
-        cert-manager.io/issuer: "<issuer-name>"
-        cert-manager.io/common-name: explore-search-solrcloud.k8s.solr.cloud
-  solrAddressability:
-    external:
-      domainName: k8s.solr.cloud
-      method: Ingress
-      hideNodes: true
-      useExternalAddress: false
-      ingressTLSTermination:
-        tlsSecret: myingress-cert
-```
-
-For more information on the Ingress TLS Termination options for cert-manager, [refer to the documentation](https://cert-manager.io/docs/usage/ingress/).
-
-## Authentication and Authorization
-_Since v0.3.0_
-
-All well-configured Solr clusters should require users to authenticate, even for read-only operations. Even if you want
-to allow anonymous query requests from unknown users, you should make this explicit using Solr's rule-based authorization
-plugin. In other words, always enforce security and then relax constraints as needed for specific endpoints based on your
-use case. The Solr operator can bootstrap a default security configuration for your SolrCloud during initialization. As such,
-there is no reason to deploy an unsecured SolrCloud cluster when using the Solr operator. In most cases, you'll want to combine
-basic authentication with TLS to ensure credentials are never passed in clear text.
-
-For background on Solr security, please refer to the [Reference Guide](https://solr.apache.org/guide) for your version of Solr.
-
-The Solr operator only supports the `Basic` authentication scheme. In general, you have two primary options for configuring authentication with the Solr operator:
-1. Let the Solr operator bootstrap the `security.json` to configure *basic authentication* for Solr.
-2. Supply your own `security.json` to Solr, which must define a user account that the operator can use to make API requests to secured Solr pods.
-
-If you choose option 2, then you need to provide the credentials the Solr operator should use to make requests to Solr via a Kubernetes secret. 
-With option 1, the operator creates a Basic Authentication Secret for you, which contains the username and password for the `k8s-oper` user.
-
-### Option 1: Bootstrap Security
-
-The easiest way to get started with Solr security is to have the operator bootstrap a `security.json` (stored in ZK) as part of the initial deployment process.
-To activate this feature, add the following configuration to your SolrCloud CRD definition YAML:
-```yaml
-spec:
-  ...
-  solrSecurity:
-    authenticationType: Basic
-```
-
-Once the cluster is up, you'll need the `admin` user password to login to the Solr Admin UI.
-The `admin` user will have a random password generated by the operator during `security.json` bootstrapping.
-Use the following command to retrieve the password from the bootstrap secret created by the operator:
-```bash
-kubectl get secret <CLOUD>-solrcloud-security-bootstrap -o jsonpath='{.data.admin}' | base64 --decode
-```
-_where `<CLOUD>` is the name of your SolrCloud_
-
-Once `security.json` is bootstrapped, the operator will not update it! You're expected to use the `admin` user to access the Security API to make further changes.
-In addition to the `admin` user, the operator defines a `solr` user, which has basic read access to Solr resources. You can retrieve the `solr` user password using:
-```bash
-kubectl get secret <CLOUD>-solrcloud-security-bootstrap -o jsonpath='{.data.solr}' | base64 --decode
-```
-
-After your SolrCloud deploys with the bootstrapped `security.json`, you can safely delete the bootstrap secret, provided you've captured the `admin` password.
-However, deleting the secret will trigger a rolling restart across all pods, as the `setup-zk` initContainer definition changes.
-
-#### k8s-oper user
-
-The operator makes requests to secured Solr endpoints as the `k8s-oper` user; credentials for the `k8s-oper` user are stored in a separate secret of type `kubernetes.io/basic-auth`
-with name `<CLOUD>-solrcloud-basic-auth`. The `k8s-oper` user is configured with read-only access to a minimal set of endpoints; see details in the **Authorization** sub-section below.
-Remember, if you change the `k8s-oper` password using the Solr security API, then you **must** update the secret with the new password or the operator will be locked out.
-Also, changing the password for the `k8s-oper` user in the K8s secret after bootstrapping will not update Solr! You're responsible for changing the password in both places.
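-
-If you need to rotate this password, one hedged approach (the new password value below is a placeholder) is to change it via the Solr Security API first, then patch the Kubernetes secret to match:
-```bash
-NEW_PASSWORD="change-me"
-
-# Update the password stored in the operator's basic-auth secret so that it matches
-# the value already set for the k8s-oper user through the Solr Security API.
-kubectl patch secret <CLOUD>-solrcloud-basic-auth --type='json' \
-  -p='[{"op":"replace","path":"/data/password","value":"'"$(echo -n "$NEW_PASSWORD" | base64)"'"}]'
-```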
-
-#### Liveness and Readiness Probes
-
-We recommend configuring Solr to allow unauthenticated access over HTTP to the probe endpoint(s), and the bootstrapped `security.json` does this for you automatically (see next sub-section).
-However, if you want to secure the probe endpoints, then you need to set `probesRequireAuth: true` as shown below:
-```yaml
-spec:
-  ...
-  solrSecurity:
-    authenticationType: Basic
-    probesRequireAuth: true
-```
-When `probesRequireAuth` is set to `true`, the liveness and readiness probes execute a command instead of using HTTP. 
-The operator configures a command instead of setting the `Authorization` header for the HTTP probes, as that would require a restart of all pods if the password changes. 
-With a command, we can load the username and password from a secret; Kubernetes will automatically
-[update the mounted secret files](https://kubernetes.io/docs/concepts/configuration/secret/#mounted-secrets-are-updated-automatically) when the secret changes.
-
-If you customize the HTTP path for any probes (under `spec.customSolrKubeOptions.podOptions`), 
-then you must use `probesRequireAuth=false` as the operator does not reconfigure custom HTTP probes to use the command needed to support `probesRequireAuth=true`.
-
-If you're running Solr 8+, then we recommend using the `/admin/info/health` endpoint for your probes with the following config:
-```yaml
-spec:
-  ...
-  customSolrKubeOptions:
-    podOptions:
-      livenessProbe:
-        httpGet:
-          scheme: HTTP
-          path: /solr/admin/info/health
-          port: 8983
-      readinessProbe:
-        httpGet:
-          scheme: HTTP
-          path: /solr/admin/info/health
-          port: 8983
-```
-Consequently, the bootstrapped `security.json` will include an additional rule to allow access to the `/admin/info/health` endpoint:
-```json
-      {
-        "name": "k8s-probe-1",
-        "role": null,
-        "collection": null,
-        "path": "/admin/info/health"
-      }
-```
-
-Note: if you change the probes after creating your SolrCloud, the new probe paths will not be added to `security.json`.
-The security file is bootstrapped just once, so if your probes need to change, you must add the new paths to the allowed paths via the Solr Security API using the admin credentials.
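-
-For example, a minimal sketch of such a call (assuming a port-forward to a Solr pod and the bootstrapped `admin` password in `$ADMIN_PASSWORD`; the permission name is arbitrary) mirrors the `k8s-probe-1` rule shown above:
-```bash
-# Allow anonymous access to the new probe path by adding a permission with no role attached.
-curl -u "admin:$ADMIN_PASSWORD" -H 'Content-type:application/json' \
-  -d '{"set-permission": {"name": "k8s-probe-custom", "path": "/admin/info/health", "role": null}}' \
-  "http://localhost:8983/solr/admin/authorization"
-```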
-
-#### Authorization
-
-The default `security.json` created by the operator during initialization is shown below; the passwords for each user are randomized for every SolrCloud you create.
-In addition to configuring the `solr.BasicAuthPlugin`, the operator initializes a set of authorization rules for the default user accounts: `admin`, `solr`, and `k8s-oper`.
-Take a moment to review these authorization rules so that you're aware of the roles and access granted to each user in your cluster.
-```json
-{
-  "authentication": {
-    "blockUnknown": false,
-    "class": "solr.BasicAuthPlugin",
-    "credentials": {
-      "admin": "...",
-      "k8s-oper": "...",
-      "solr": "..."
-    },
-    "realm": "Solr Basic Auth",
-    "forwardCredentials": false
-  },
-  "authorization": {
-    "class": "solr.RuleBasedAuthorizationPlugin",
-    "user-role": {
-      "admin": [ "admin", "k8s" ],
-      "k8s-oper": [ "k8s" ],
-      "solr": [ "users", "k8s" ]
-    },
-    "permissions": [
-      {
-        "name": "k8s-probe-0",
-        "role": null,
-        "collection": null,
-        "path": "/admin/info/system"
-      },
-      {
-        "name": "k8s-status",
-        "role": "k8s",
-        "collection": null,
-        "path": "/admin/collections"
-      },
-      {
-        "name": "k8s-metrics",
-        "role": "k8s",
-        "collection": null,
-        "path": "/admin/metrics"
-      },
-      { 
-         "name": "k8s-zk", 
-         "role":"k8s", 
-         "collection": null, 
-         "path":"/admin/zookeeper/status" 
-      },
-      {
-        "name": "k8s-ping",
-        "role": "k8s",
-        "collection": "*",
-        "path": "/admin/ping"
-      },
-      {
-        "name": "read",
-        "role": [ "admin", "users" ]
-      },
-      {
-        "name": "update",
-        "role": [ "admin" ]
-      },
-      {
-        "name": "security-read",
-        "role": [ "admin" ]
-      },
-      {
-        "name": "security-edit",
-        "role": [ "admin" ]
-      },
-      {
-        "name": "all",
-        "role": [ "admin" ]
-      }
-    ]
-  }
-}
-```
-A few aspects of the default `security.json` configuration warrant a closer look. First, the `probesRequireAuth` setting 
-(defaults to `false`) governs the value for `blockUnknown` (under `authentication`) and whether the probe endpoint(s) require authentication:
-```json
-      {
-        "name": "k8s-probe-0",
-        "role": null,
-        "collection": null,
-        "path": "/admin/info/system"
-      }
-``` 
-In this case, the `"role":null` indicates this endpoint allows anonymous access by unknown users. 
-The `"collection":null` value indicates the path is not associated with any collection, i.e. it is a top-level system path.
-
-The operator sends GET requests to the `/admin/collections` endpoint to get cluster status to determine the rolling restart order:
-```json
-      {
-        "name": "k8s-status",
-        "role": "k8s",
-        "collection": null,
-        "path": "/admin/collections"
-      },
-``` 
-In this case, the `"role":"k8s"` indicates the requesting user must be in the `k8s` role; notice that all default users have the `k8s` role.
-
-The Prometheus exporter sends GET requests to the `/admin/metrics` endpoint to collect metrics from each pod. 
-The exporter also hits the `/admin/ping` endpoint for every collection, which requires the following authorization rules:
-```json
-      {
-        "name": "k8s-metrics",
-        "role": "k8s",
-        "collection": null,
-        "path": "/admin/metrics"
-      },
-      {
-        "name": "k8s-ping",
-        "role": "k8s",
-        "collection": "*",
-        "path": "/admin/ping"
-      },
-      { 
-         "name": "k8s-zk", 
-         "role":"k8s", 
-         "collection": null, 
-         "path":"/admin/zookeeper/status" 
-      },
-```
-The `"collection":"*"` setting indicates this path applies to all collections, which maps to endpoint `/collections/<COLL>/admin/ping` at runtime.
-
-The initial authorization config grants the `read` permission to the `users` role, which allows `users` to send query requests but not to add / update / delete documents.
-For instance, the `solr` user is mapped to the `users` role, so the `solr` user can send query requests only. 
-In general, please verify the initial authorization rules for each role before sharing user credentials.
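-
-If you need accounts beyond the defaults, a hedged sketch of adding a query-only user with the Security API (the username and password are placeholders, and `$ADMIN_PASSWORD` holds the bootstrapped admin password) might look like:
-```bash
-# Create the new user account.
-curl -u "admin:$ADMIN_PASSWORD" -H 'Content-type:application/json' \
-  -d '{"set-user": {"query-user": "a-strong-password"}}' \
-  "http://localhost:8983/solr/admin/authentication"
-
-# Map the new account to the "users" role so it is covered by the "read" permission above.
-curl -u "admin:$ADMIN_PASSWORD" -H 'Content-type:application/json' \
-  -d '{"set-user-role": {"query-user": ["users"]}}' \
-  "http://localhost:8983/solr/admin/authorization"
-```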
-
-### Option 2: User-provided `security.json` and credentials secret
-
-If you want full control over your cluster's security config, you can supply your own Solr `security.json` via a Secret,
-along with a second Secret containing the credentials the operator should use to make requests to Solr.
-
-#### Custom `security.json` Secret
-_Since v0.5.0_
-
-For full control over the Solr security configuration, supply a `security.json` in a Secret. The following example illustrates how to point the operator to a Secret containing a custom `security.json`:
-
-```yaml
-spec:
-  ...
-  solrSecurity:
-    authenticationType: Basic
-    bootstrapSecurityJson:
-      name: my-custom-security-json
-      key: security.json
-```
-For `Basic` authentication, if you don't supply a `security.json` Secret, then the operator assumes you are bootstrapping the security configuration via some other means.
-
-Refer to the example `security.json` shown in the Authorization section above to help you get started crafting your own custom configuration. 
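-
-As a hedged example, the `my-custom-security-json` Secret referenced above could be created from a local `security.json` file with:
-```bash
-kubectl create secret generic my-custom-security-json \
-  --from-file=security.json=./security.json
-```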
-
-#### Basic Authentication 
-
-For `Basic` authentication, the supplied secret must be of type [Basic Authentication Secret](https://kubernetes.io/docs/concepts/configuration/secret/#basic-authentication-secret) and define both a `username` and `password`.
- 
-```yaml
-spec:
-  ...
-  solrSecurity:
-    authenticationType: Basic
-    basicAuthSecret: user-provided-secret
-```
-
-Here is an example of how to define a basic auth secret using YAML:
-```yaml
-apiVersion: v1
-kind: Secret
-metadata:
-  name: my-basic-auth-secret
-type: kubernetes.io/basic-auth
-stringData:
-  username: k8s-oper
-  password: Test1234
-```
-With this config, the operator will make API requests to secured Solr pods as the `k8s-oper` user. 
-_Note: be sure to use a stronger password for real deployments_
-
-Users need to ensure their `security.json` grants the user supplied in the `basicAuthSecret` read access to the following endpoints:
-```
-/admin/info/system
-/admin/info/health
-/admin/collections
-/admin/metrics
-/admin/ping (for collection="*")
-/admin/zookeeper/status
-```
-_Tip: see the authorization rules defined by the default `security.json` as a guide for configuring access for the operator user_
-
-##### Changing the Password
-
-If you change the password for the user configured in your `basicAuthSecret` using the Solr security API, then you **must** update the secret with the new password or the operator will be locked out.
-Also, changing the password for this user in the K8s secret will not update Solr! You're responsible for changing the password in both places.
-
-##### Prometheus Exporter with Basic Auth
-
-If you enable basic auth for your SolrCloud cluster, then you need to point the Prometheus exporter at the basic auth secret; 
-refer to [Prometheus Exporter with Basic Auth](../solr-prometheus-exporter/README.md#prometheus-exporter-with-basic-auth) for more details.
-
-## Various Runtime Parameters
-
-There are various runtime parameters that allow you to customize the running of your Solr Cloud via the Solr Operator.
-
-### Time to wait for Solr to be killed gracefully
-_Since v0.3.0_
-
-The Solr Operator manages the Solr StatefulSet so that, when a Solr pod needs to be stopped or deleted, Kubernetes and Solr agree on how long to wait for the process to shut down gracefully.
-
-The default is 60 seconds, after which Solr or Kubernetes will forcefully stop the Solr process.
-You can override this default with the field:
-
-```yaml
-spec:
-  ...
-  customSolrKubeOptions:
-    podOptions:
-      terminationGracePeriodSeconds: 120
-```
diff --git a/docs/docs/solr-prometheus-exporter/README.md b/docs/docs/solr-prometheus-exporter/README.md
deleted file mode 100644
index f223904..0000000
--- a/docs/docs/solr-prometheus-exporter/README.md
+++ /dev/null
@@ -1,320 +0,0 @@
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    (the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
-
-        http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-# Solr Prometheus Exporter
-
-Solr metrics can be collected from SolrClouds or standalone Solr instances running either inside or outside the Kubernetes cluster.
-To use the Prometheus exporter, the easiest thing to do is just provide a reference to a Solr instance. That can be any of the following:
-- The name and namespace of the Solr Cloud CRD
-- The Zookeeper connection information of the Solr Cloud
-- The address of the standalone Solr instance
-
-You can also provide a custom Prometheus Exporter config, Solr version, and exporter options as described in the
-[Solr ref-guide](https://solr.apache.org/guide/monitoring-solr-with-prometheus-and-grafana.html#command-line-parameters).
-
-Note that a few of the official Solr docker images do not include the Prometheus Exporter.
-Versions `6.6` - `7.x` and `8.2` - `master` should have the exporter available. 
-
-## Finding the Solr Cluster to monitor
-
-The Prometheus Exporter supports metrics for both standalone solr as well as Solr Cloud.
-
-### Cloud
-
-You have two options for letting the Prometheus exporter find the Zookeeper connection information that your SolrCloud uses.
-
-- Provide the name of a `SolrCloud` object in the same Kubernetes cluster, and optional namespace.
-The Solr Operator will keep the ZK Connection info up to date from the SolrCloud object.  
-This name can be provided at: `SolrPrometheusExporter.spec.solrRef.cloud.name`
-- Provide explicit Zookeeper Connection info for the prometheus exporter to use.  
-  This info can be provided at: `SolrPrometheusExporter.spec.solrRef.cloud.zkConnectionInfo`, with keys `internalConnectionString` and `chroot`
-
-If `SolrPrometheusExporter.spec.solrRef.cloud.name` is used and no image information is passed via `SolrPrometheusExporter.spec.image.*` options, then the Prometheus Exporter will use the same image as the SolrCloud it is listening to.
-If any `SolrPrometheusExporter.spec.image.*` option is provided, then the Prometheus Exporter will use its own image.
-
-#### ACLs
-_Since v0.2.7_
-
-The Prometheus Exporter can be set up to use ZK ACLs when connecting to Zookeeper.
-
-If the prometheus exporter has been provided the name of a solr cloud, through `cloud.name`, then the solr operator will load up the ZK ACL Secret information found in the [SolrCloud spec](../solr-cloud/solr-cloud-crd.md#acls).
-In order for the prometheus exporter to have visibility to these secrets, it must be deployed to the same namespace as the referenced SolrCloud or the same exact secrets must exist in both namespaces.
-
-If explicit Zookeeper connection information has been provided, through `cloud.zkConnectionInfo`, then ACL information must be provided in the same section.
-The ACL information can be provided through an ADMIN acl and a READ ONLY acl.  
-- Admin: `SolrPrometheusExporter.spec.solrRef.cloud.zkConnectionInfo.acl`
-- Read Only: `SolrPrometheusExporter.spec.solrRef.cloud.zkConnectionInfo.readOnlyAcl`
-
-All ACL fields are **required** if an ACL is used.
-
-- **`secret`** - The name of the secret, in the same namespace as the SolrCloud, that contains the admin ACL username and password.
-- **`usernameKey`** - The name of the key in the provided secret that stores the admin ACL username.
-- **`passwordKey`** - The name of the key in the provided secret that stores the admin ACL password.
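-
-Putting the explicit connection info and an admin ACL together, a hedged sketch (the ZK hosts, secret name, and keys are placeholders, and the top-level key follows the `solrReference` form used in the examples below) might look like:
-```yaml
-spec:
-  solrReference:
-    cloud:
-      zkConnectionInfo:
-        internalConnectionString: "zk-0.zk-headless:2181,zk-1.zk-headless:2181"
-        chroot: "/solr"
-        acl:
-          secret: zk-admin-acl-secret
-          usernameKey: username
-          passwordKey: password
-```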
-
-### Standalone
-
-The Prometheus Exporter can be set up to scrape a standalone Solr instance.
-In order to use this functionality, use the following spec field:
-
-`SolrPrometheusExporter.spec.solrRef.standalone.address`
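-
-A minimal sketch (the address value is just a placeholder for your standalone Solr host and port):
-```yaml
-spec:
-  solrReference:
-    standalone:
-      address: "standalone-solr.default:8983"
-```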
-
-
-### Solr TLS
-_Since v0.3.0_
-
-If you're relying on a self-signed certificate (or any certificate that requires importing the CA into the Java trust store) for Solr pods, then the Prometheus Exporter will not be able to make requests for metrics.
-You'll need to duplicate your TLS config from your SolrCloud CRD definition to your Prometheus exporter CRD definition as shown in the example below:
-
-```yaml
-spec:
-  solrReference:
-    cloud:
-      name: "dev"
-    solrTLS:
-      restartOnTLSSecretUpdate: true
-      keyStorePasswordSecret:
-        name: pkcs12-password-secret
-        key: password-key
-      pkcs12Secret:
-        name: dev-selfsigned-cert-tls
-        key: keystore.p12
-```
-
-**This only applies to the SolrJ client the exporter uses to make requests to your TLS-enabled Solr pods and does not enable HTTPS for the exporter service.**
-
-#### Mounted TLS Directory
-_Since v0.4.0_
-
-You can use the `spec.solrReference.solrTLS.mountedTLSDir.path` to point to a directory containing certificate files mounted by an external agent or CSI driver.
-
-### Prometheus Exporter with Basic Auth
-_Since v0.3.0_
-
-If you enable basic auth for your SolrCloud cluster, then you need to point the Prometheus exporter at the basic auth secret containing the credentials for making API requests to `/admin/metrics` and `/admin/ping` for all collections.
-
-```yaml
-spec:
-  solrReference:
-    basicAuthSecret: user-provided-secret
-```
-If you chose option #1 to have the operator bootstrap `security.json` for you, then the name of the secret will be:
-`<CLOUD>-solrcloud-basic-auth`. If you chose option #2, then pass the same secret name that you configured on your SolrCloud CRD instance.
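-
-For example, with the bootstrapped secret from option #1 and a SolrCloud named `dev`, the reference might look like (a sketch following the naming convention above):
-```yaml
-spec:
-  solrReference:
-    cloud:
-      name: "dev"
-    basicAuthSecret: dev-solrcloud-basic-auth
-```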
-
-This user account will need access to the following endpoints in Solr:
-```json
-      {
-        "name": "k8s-metrics",
-        "role": "k8s",
-        "collection": null,
-        "path": "/admin/metrics"
-      },
-      {
-        "name": "k8s-ping",
-        "role": "k8s",
-        "collection": "*",
-        "path": "/admin/ping"
-      },
-```
-
-For more details on configuring Solr security with the operator, see [Authentication and Authorization](../solr-cloud/solr-cloud-crd.md#authentication-and-authorization)
-
-## Prometheus Stack
-
-In this section, we'll walk through how to use the Prometheus exporter with the [Prometheus Stack](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack).
-
-The Prometheus Stack provides all the services you need for monitoring Kubernetes applications like Solr and is the recommended way of deploying Prometheus and Grafana.
-
-### Install Prometheus Stack
-
-Begin by installing the Prometheus Stack in the `monitoring` namespace with Helm release name `mon`:
-```bash
-MONITOR_NS=monitoring
-PROM_OPER_REL=mon
-
-kubectl create ns ${MONITOR_NS}
-
-# see: https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack
-if ! helm repo list | grep -q "https://prometheus-community.github.io/helm-charts"; then
-  echo -e "\nAdding the prometheus-community repo to helm"
-  helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
-  helm repo add stable https://charts.helm.sh/stable
-  helm repo update
-fi
-
-helm upgrade --install ${PROM_OPER_REL} prometheus-community/kube-prometheus-stack \
---namespace ${MONITOR_NS} \
---set kubeStateMetrics.enabled=false \
---set nodeExporter.enabled=false
-```
-_Refer to the Prometheus stack documentation for detailed instructions._
-
-Verify you have Prometheus / Grafana pods running in the `monitoring` namespace:
-```bash
-kubectl get pods -n monitoring
-```
-
-### Deploy Prometheus Exporter for Solr Metrics
-
-Next, deploy a Solr Prometheus exporter for the SolrCloud you want to capture metrics from in the namespace where you're running SolrCloud, not in the `monitoring` namespace. 
-For instance, the following example creates a Prometheus exporter named `dev-prom-exporter` for a SolrCloud named `dev` deployed in the `dev` namespace:
-```yaml
-apiVersion: solr.apache.org/v1beta1
-kind: SolrPrometheusExporter
-metadata:
-  name: dev-prom-exporter
-spec:
-  customKubeOptions:
-    podOptions:
-      resources:
-        requests:
-          cpu: 300m
-          memory: 900Mi
-  solrReference:
-    cloud:
-      name: "dev"
-  numThreads: 6
-```
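-
-Save this definition to a file and apply it in the namespace where the `dev` SolrCloud runs (the file name here is arbitrary):
-```bash
-kubectl apply -f dev-prom-exporter.yaml -n dev
-```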
-
-Look at the logs for your exporter pod to ensure it is running properly (notice we're using a label filter vs. addressing the pod by name):
-```bash
-kubectl logs -l solr-prometheus-exporter=dev-prom-exporter
-```
-You should see some log messages that look similar to:
-```
-INFO  - <timestamp>; org.apache.solr.prometheus.collector.SchedulerMetricsCollector; Beginning metrics collection
-INFO  - <timestamp>; org.apache.solr.prometheus.collector.SchedulerMetricsCollector; Completed metrics collection
-```
-
-You can also see the metrics that are exported by the pod by opening a port-forward to the exporter pod and hitting port 8080 with cURL:
-```bash
-kubectl port-forward $(kubectl get pod -l solr-prometheus-exporter=dev-prom-exporter --no-headers -o custom-columns=":metadata.name") 8080
-
-curl http://localhost:8080/metrics
-```
-
-#### Customize Prometheus Exporter Config
-_Since v0.3.0_
-
-Each Solr pod exposes metrics as JSON from the `/solr/admin/metrics` endpoint. To see this in action, open a port-forward to a Solr pod and send a request to `http://localhost:8983/solr/admin/metrics`.
-
-The Prometheus exporter requests metrics from each pod and then extracts the desired metrics using a series of [jq](https://stedolan.github.io/jq/) queries against the JSON returned by each pod.
-
-By default, the Solr operator configures the exporter to use the config from `/opt/solr/contrib/prometheus-exporter/conf/solr-exporter-config.xml`.
-
-If you need to customize the metrics exposed to Prometheus, you'll need to provide a custom config XML via a ConfigMap and then configure the exporter CRD to point to it.
-
-For instance, let's imagine you need to expose a new metric to Prometheus. Start by pulling the default config from the exporter pod using:
-```bash
-EXPORTER_POD_ID=$(kubectl get pod -l solr-prometheus-exporter=dev-prom-exporter --no-headers -o custom-columns=":metadata.name")
-
-kubectl cp $EXPORTER_POD_ID:/opt/solr/contrib/prometheus-exporter/conf/solr-exporter-config.xml ./solr-exporter-config.xml
-```
-Create a ConfigMap with your customized XML config under the `solr-prometheus-exporter.xml` key.
-```yaml
-apiVersion: v1
-data:
-  solr-prometheus-exporter.xml: |
-    <?xml version="1.0" encoding="UTF-8" ?>
-    <config>
-    ... YOUR CUSTOM CONFIG HERE ...
-    </config>
-kind: ConfigMap
-metadata:
-  name: custom-exporter-xml
-```
-_Note: Using `kubectl create configmap --from-file` scrambles the XML formatting, so we recommend defining the configmap YAML as shown above to keep the XML formatted properly._
-
-Point to the custom ConfigMap in your Prometheus exporter definition using:
-```yaml
-spec:
-  customKubeOptions:
-    configMapOptions:
-      providedConfigMap: custom-exporter-xml
-``` 
-The ConfigMap needs to be defined in the same namespace where you're running the exporter.
-
-The Solr operator automatically triggers a restart of the exporter pods whenever the exporter config XML changes in the ConfigMap.
-
-#### Solr Prometheus Exporter Service
-The Solr operator creates a K8s `ClusterIP` service for load-balancing across exporter pods; there will typically only be one active exporter pod per SolrCloud managed by a K8s deployment.
-
-For our example `dev-prom-exporter`, the service name is: `dev-prom-exporter-solr-metrics`
-
-Take a quick look at the labels on the service as you'll need them to define a service monitor in the next step.
-```bash
-kubectl get svc dev-prom-exporter-solr-metrics --show-labels
-```
-
-Also notice the ports that are exposed for this service:
-```bash
-kubectl get svc dev-prom-exporter-solr-metrics --output jsonpath="{@.spec.ports}"
-```
-You should see output similar to:
-```json
-[{"name":"solr-metrics","port":80,"protocol":"TCP","targetPort":8080}]
-```
-
-### Create a Service Monitor
-The Prometheus operator (deployed with the Prometheus stack) uses service monitors to find which services to scrape metrics from. Thus, we need to define a service monitor for our exporter service `dev-prom-exporter-solr-metrics`.
-If you're not using the Prometheus operator, then you do not need a service monitor as Prometheus will scrape metrics using the `prometheus.io/*` pod annotations on the exporter service; see [Prometheus Configuration](https://prometheus.io/docs/prometheus/latest/configuration/configuration/). 
-
-```yaml
-apiVersion: monitoring.coreos.com/v1
-kind: ServiceMonitor
-metadata:
-  name: solr-metrics
-  labels:
-    release: mon
-spec:
-  selector:
-    matchLabels:
-      solr-prometheus-exporter: dev-prom-exporter
-  namespaceSelector:
-    matchNames:
-    - dev
-  endpoints:
-  - port: solr-metrics
-    interval: 20s
-```
-
-There are a few important aspects of this service monitor definition:
-* The `release: mon` label associates this service monitor with the Prometheus operator; recall that we used `mon` as the Helm release when installing our Prometheus stack
-* The Prometheus operator uses the `solr-prometheus-exporter: dev-prom-exporter` label selector to find the service to scrape metrics from, which of course is the `dev-prom-exporter-solr-metrics` service created by the Solr operator.
-* The Prometheus operator uses `dev` to match the namespace (`namespaceSelector.matchNames`) where our SolrCloud and Prometheus exporter services are deployed
-* The `endpoints` section identifies the port to scrape metrics from and the scrape interval; recall our service exposes the port as `solr-metrics`  
-
-Save the service monitor YAML to a file, such as `dev-prom-service-monitor.yaml` and apply to the `monitoring` namespace:
-```bash
-kubectl apply -f dev-prom-service-monitor.yaml -n monitoring
-```
-
-Prometheus is now configured to scrape metrics from the exporter service.
-
-### Load Solr Dashboard in Grafana
-
-You can expose Grafana via a LoadBalancer (or Ingress) but for now, we'll just open a port-forward to port 3000 to access Grafana:
-```bash
-GRAFANA_POD_ID=$(kubectl get pod -l app.kubernetes.io/name=grafana --no-headers -o custom-columns=":metadata.name" -n monitoring)
-kubectl port-forward -n monitoring $GRAFANA_POD_ID 3000
-```
-Open Grafana using `localhost:3000` and login with username `admin` and password `prom-operator`.
-
-Once logged into Grafana, import the Solr dashboard JSON corresponding to the version of Solr you're running.
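-
-One hedged way to grab a dashboard matching your Solr version is to copy it out of a running Solr pod (the pod name below assumes a SolrCloud named `dev`, and the path mirrors the exporter config location used earlier):
-```bash
-kubectl cp dev-solrcloud-0:/opt/solr/contrib/prometheus-exporter/conf/grafana-solr-dashboard.json ./grafana-solr-dashboard.json -n dev
-```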
-
-Solr does not export any useful metrics until you have at least one collection defined.
-
-_Note: Solr 8.8.0 and newer versions include an updated dashboard that provides better metrics for monitoring query performance._ 
diff --git a/docs/docs/upgrade-notes.md b/docs/docs/upgrade-notes.md
deleted file mode 100644
index 4d05649..0000000
--- a/docs/docs/upgrade-notes.md
+++ /dev/null
@@ -1,265 +0,0 @@
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    (the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
-
-        http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-# Solr Operator Upgrade Notes
-
-Please carefully read the entries for all versions between the version you are running and the version you want to upgrade to.
-
-Be sure to read the [Upgrade Warnings and Notes](#upgrade-warnings-and-notes) for the version you are upgrading to, as well as for any versions you are skipping.
-
-If you want to skip versions when upgrading, be sure to check out the [upgrading minor versions](#upgrading-minor-versions-v_x_) and [upgrading patch versions](#upgrading-patch-versions-v__x) sections.
-
-## Version Compatibility Matrixes
-
-### Kubernetes Versions
-
-| Solr Operator Version | `1.15` | `1.16` - `1.18` |  `1.19` - `1.21` | `1.22`+ |
-|:---------------------:| :---: | :---: | :---: | :---: |
-|       `v0.2.6`        | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :x: |
-|       `v0.2.7`        | :x: | :heavy_check_mark: | :heavy_check_mark: | :x: |
-|       `v0.2.8`        | :x: | :heavy_check_mark: | :heavy_check_mark: | :x: |
-|       `v0.3.x`        | :x: | :heavy_check_mark: | :heavy_check_mark: | :x: |
-|       `v0.4.x`        | :x: | :heavy_check_mark: | :heavy_check_mark: | :x: |
-|       `v0.5.x`        | :x: | :x: | :heavy_check_mark: | :heavy_check_mark: |
-|       `v0.6.x`        | :x: | :x: | :heavy_check_mark: | :heavy_check_mark: |
-
-### Solr Versions
-
-| Solr Operator Version | `6.6` | `7.7` | `8.0` - `8.5` | `8.6`+ |
-|:---------------------:| :---: | :---: | :---: | :---: |
-|       `v0.2.6`        | :grey_question: | :heavy_check_mark: | :heavy_check_mark: | :x: |
-|       `v0.2.7`        | :grey_question: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-|       `v0.2.8`        | :grey_question: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-|       `v0.3.x`        | :grey_question: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-|       `v0.4.x`        | :grey_question: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-|       `v0.5.x`        | :grey_question: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-|       `v0.6.x`        | :grey_question: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-
-Please note that this represents basic compatibility with the Solr Operator.
-There may be options and features that require newer versions of Solr.
-(e.g. S3/GCS Backup Support)
-
-Please test to make sure the features you plan to use are compatible with the version of Solr you choose to run.
-
-
-### Upgrading from `v0.2.x` to `v0.3.x`
-If you are upgrading from `v0.2.x` to `v0.3.x`, please follow the [Upgrading to Apache guide](upgrading-to-apache.md).
-This is a special upgrade that requires different instructions.
-
-### Upgrading minor versions (`v_.X._`)
-
-In order to upgrade across minor versions (e.g. `v0.2.5` -> `v0.4.0`), you must upgrade one minor version at a time (e.g. `v0.2.x` -> `v0.3.0` -> `v0.4.0`).
-It is also necessary to upgrade to the latest patch version before upgrading to the next minor version.
-Therefore if you are running `v0.2.5` and you want to upgrade to `v0.3.0`, you must first upgrade to `v0.2.8` before upgrading to `v0.3.0`.
-
-### Upgrading patch versions (`v_._.X`)
-
-You should be able to upgrade from a version to any patch version with the same minor and major versions.
-It is always encouraged to upgrade to the latest patch version of the minor and major version you are running.
-There is no need to upgrade one patch version at a time (e.g. `v0.2.5` -> `v0.2.6` -> `v0.2.7` -> `v0.2.8`),
-instead you can leap to the latest patch version (e.g. `v0.2.5` -> `v0.2.8`).
-
-## Installing the Solr Operator vs Solr CRDs
-
-Installing the Solr Operator, especially via the [Helm Chart](https://artifacthub.io/packages/helm/apache-solr/solr-operator),
-does not necessarily mean that you are installing the required CRDs for that version of the Solr Operator.
-
-If the Solr CRDs already exist in your Kubernetes cluster, then Helm will not update them even if the CRDs have changed between the Helm chart versions.
-Instead, you will need to manually install the CRDs whenever upgrading your Solr Operator.  
-**You should always upgrade your CRDs before upgrading the Operator**
-
-You can do this via the following command, replacing `<version>` with the version of the Solr Operator you are installing:
-```bash
-# Just replace the Solr CRDs
-kubectl replace -f "http://solr.apache.org/operator/downloads/crds/<version>/all.yaml"
-# Just replace the Solr CRDs and all CRDs it might depend on (e.g. ZookeeperCluster)
-kubectl replace -f "http://solr.apache.org/operator/downloads/crds/<version>/all-with-dependencies.yaml"
-```
-
-It is **strongly recommended** to use `kubectl create` or `kubectl replace`, instead of `kubectl apply` when creating/updating CRDs.
-
-### Upgrading the Zookeeper Operator
-
-When upgrading the Solr Operator, you may need to upgrade the [Zookeeper Operator](https://github.com/pravega/zookeeper-operator) at the same time.
-If you are using the Solr Helm chart to deploy the Zookeeper Operator, then you won't need to do anything besides installing the CRDs with dependencies and upgrading the Solr Operator helm deployment.
-
-```bash
-# Just replace the Solr CRDs and all CRDs it might depend on (e.g. ZookeeperCluster)
-kubectl replace -f "http://solr.apache.org/operator/downloads/crds/v0.6.0/all-with-dependencies.yaml"
-helm upgrade solr-operator apache-solr/solr-operator --version 0.6.0
-```
-
-_Note that the Helm chart version does not contain a `v` prefix, which the downloads version does. The Helm chart version is the only part of the Solr Operator release that does not use the `v` prefix._
-
-## Upgrade Warnings and Notes
-
-### v0.6.0
-- The default Solr version for the `SolrCloud` and `SolrPrometheusExporter` resources has been upgraded from `8.9` to `8.11`.
-  This will not affect any existing resources, as default versions are hard-written to the resources immediately.
-  Only new resources created after the Solr Operator is upgraded to `v0.6.0` will be affected.
-
-- The required version of the [Zookeeper Operator](https://github.com/pravega/zookeeper-operator) to use with this version has been upgraded from `v0.2.12` to `v0.2.14`.
-  If you use the Solr Operator helm chart, then by default the new version of the Zookeeper Operator will be installed as well.
-  Refer to the helm chart documentation if you want to manage the Zookeeper Operator installation yourself.  
-  Please refer to the [Zookeeper Operator release notes](https://github.com/pravega/zookeeper-operator/releases) before upgrading.
-  Make sure to install the correct version of the Zookeeper Operator CRDS, as [shown above](#upgrading-the-zookeeper-operator).
-
-- The SolrCloud CRD field `Spec.solrAddressability.external.additionalDomains` has been renamed to `additionalDomainNames`.
-  In this release `additionalDomains` is still accepted, but all values will automatically be added to `additionalDomainNames` and the field will be set to `nil` by the operator.
-  The `additionalDomains` option will be removed in a future version.
-
-- The SolrCloud CRD field `Spec.solrAddressability.external.ingressTLSTerminationSecret` has been moved to `Spec.solrAddressability.external.ingressTLSTermination.tlsSecret`.
-  In this release `ingressTLSTerminationSecret` is still accepted, but all values will automatically be changed to `ingressTLSTermination.tlsSecret` and the original field will be set to `nil` by the operator.
-  The `ingressTLSTerminationSecret` option will be removed in a future version.
-
-- `SolrPrometheusExporter` resources without any image specifications (`SolrPrometheusExporter.Spec.image.*`) will use the referenced `SolrCloud` image, if the reference is by `name`, not `zkConnectionString`.
-  If any `SolrPrometheusExporter.Spec.image.*` option is provided, then those values will be defaulted by the Solr Operator and the `SolrCloud` image will not be used.
-  When upgrading from `v0.5.*` to `v0.6.0`, only new `SolrPrometheusExporter` resources will use this new feature.
-  To enable it on existing resources, update the resources and remove the `SolrPrometheusExporter.Spec.image` section.
-
-- CRD options deprecated in `v0.5.0` have been removed.
-  This includes field `SolrCloud.spec.dataStorage.backupRestoreOptions`, `SolrBackup.spec.persistence` and `SolrBackup.status.persistenceStatus`.
-  Upgrading to `v0.5.*` will remove these options on existing and new SolrCloud and SolrBackup resources.
-  However, once the Solr CRDs are upgraded to `v0.6.0`, you will no longer be able to submit resources with the options listed above.
-  Please migrate your systems to use the new options while running `v0.5.*`, before upgrading to `v0.6.0`. 
-
-### v0.5.0
-- Due to the deprecation and removal of `networking.k8s.io/v1beta1` in Kubernetes v1.22, `networking.k8s.io/v1` will be used for Ingresses.
-
-  **This means that Kubernetes support is now limited to 1.19+.**  
-  If you are unable to use a newer version of Kubernetes, please install the `v0.4.0` version of the Solr Operator for use with Kubernetes 1.18 and below.
-  See the [version compatibility matrix](#kubernetes-versions) for more information.
-
-  This also means that if you specify a custom `ingressClass` via an annotation, you should change to use the `SolrCloud.spec.customSolrKubeOptions.ingressOptions.ingressClassName` instead.
-  The ability to set the class through annotations is now deprecated in Kubernetes and will be removed in future versions.
-
-- The legacy way of specifying a backupRepository has been **DEPRECATED**.
-  Instead of using `SolrCloud.spec.dataStorage.backupRestoreOptions`, use `SolrCloud.spec.backupRepositories`.
-  The `SolrCloud.spec.dataStorage.backupRestoreOptions` option **will be removed in `v0.6.0`**.  
-  **Note**: Do not take backups while upgrading from the Solr Operator `v0.4.0` to `v0.5.0`.
-  Wait for the SolrClouds to be updated, after the Solr Operator is upgraded, and complete their rolling restarts before continuing to use the Backup functionality.
-
-- The location of Solr backup data as well as the name of the Solr backups have been changed, when using volume repositories.
-  Previously the name of the backup (in solr) was set to the name of the collection.
-  Now the name given to the backup in Solr will be set to `<backup-resource-name>-<collection-name>`, without the `<` or `>` characters, where the `backup-resource-name` is the name of the SolrBackup resource.
-
-  The directory in the Read-Write-Many Volume, required for volume repositories, that backups are written to is now `/cloud/<solr-cloud-name>/backups` by default, instead of `/cloud/<solr-cloud-name>/backups/<backup-name>`.
-  Because the backup name in Solr uses both the SolrBackup resource name and the collection name, there should be no collisions in this directory.
-  However, this can be overridden using the `SolrBackup.spec.location` option, which is appended to `/cloud/<solr-cloud-name>`.
-
-- The SolrBackup persistence option has been removed as of `v0.5.0`.
-  Users should plan to keep their backup data in the shared volume if using a Volume Backup repository.
-  If `SolrBackup.spec.persistence` is provided, it will be removed and written back to Kubernetes.
-
-  Users using the S3 persistence option should try to use the [S3 backup repository](solr-backup/README.md#s3-backup-repositories) instead. This requires Solr 8.10 or higher.
-
-- Default ports when using TLS are now set to 443 instead of 80.
-  This affects `solrCloud.Spec.SolrAddressability.CommonServicePort` and `solrCloud.Spec.SolrAddressability.CommonServicePort` field defaulting.
-  Users already explicitly setting these values will not be affected.
-
-### v0.4.0
-- The required version of the [Zookeeper Operator](https://github.com/pravega/zookeeper-operator) to use with this version has been upgraded from `v0.2.9` to `v0.2.12`.
-  If you use the Solr Operator helm chart, then by default the new version of the Zookeeper Operator will be installed as well.
-  Refer to the helm chart documentation if you want to manage the Zookeeper Operator installation yourself.  
-  Please refer to the [Zookeeper Operator release notes](https://github.com/pravega/zookeeper-operator/releases) before upgrading.
-  Make sure to install the correct version of the Zookeeper Operator CRDS, as [shown above](#upgrading-the-zookeeper-operator).
-
-- The deprecated Solr Operator Helm chart option `useZkOperator` has been removed, use `zookeeper-operator.use` instead.  
-  **Note**: The old option takes a _string_ `"true"`/`"false"`, while the new option takes a _boolean_ `true`/`false`.
-
-- The default Solr version for `SolrCloud` and `SolrPrometheusExporter` resources has been upgraded from `7.7.0` to `8.9`.
-  This will not affect any existing resources, as default versions are hard-written to the resources immediately.
-  Only new resources created after the Solr Operator is upgraded to `v0.4.0` will be affected.
-
-- In previous versions of the Solr Operator, the provided Zookeeper instances could only use Persistent Storage.
-  Now ephemeral storage is enabled, and used by default if Solr is using ephemeral storage.
-  The ZK storage type can be explicitly set via `Spec.zookeeperRef.provided.ephemeral` or `Spec.zookeeperRef.provided.persistence`,
-  however if neither is set, the Solr Operator will default to use the type of storage (persistent or ephemeral) that Solr is using.  
-  **This means that the default Zookeeper Storage type can change for users using ephemeral storage for Solr.
-  If you require ephemeral Solr storage and persistent Zookeeper Storage, be sure to explicitly set that starting in `v0.4.0`.**
-
-### v0.3.0
-- All deprecated CRD fields and Solr Operator options from `v0.2.*` have been removed.
-
-- The `SolrCollection` and `SolrCollectionAlias` have been removed. Please use the Solr APIs to manage these resources instead.
-  Discussion around the removal can be found in [Issue #204](https://github.com/apache/solr-operator/issues/204).
-
-- The required version of the [Zookeeper Operator](https://github.com/pravega/zookeeper-operator) to use with this version has been upgraded from `v0.2.6` to `v0.2.9`.
-  If you use the Solr Operator helm chart, then by default the new version of the Zookeeper Operator will be installed as well.
-  Refer to the helm chart documentation if you want to manage the Zookeeper Operator installation yourself.  
-  Please refer to the [Zookeeper Operator release notes](https://github.com/pravega/zookeeper-operator/releases) before upgrading.
-
-### v0.2.7
-- Due to the addition of possible sidecar/initContainers for SolrClouds, the version of CRDs used had to be upgraded to `apiextensions.k8s.io/v1`.
-
-  **This means that Kubernetes support is now limited to 1.16+.**  
-  If you are unable to use a newer version of Kubernetes, please install the `v0.2.6` version of the Solr Operator for use with Kubernetes 1.15 and below.
-
-- The location of backup-restore volume mounts in Solr containers has changed from `/var/solr/solr-backup-restore` to `/var/solr/data/backup-restore`.
-  This change was made to ensure that there were no issues using the backup API with solr 8.6+, which restricts the locations that backup data can be saved to and read from.
-  This change should be transparent if you are merely using the SolrBackup CRD.
-  All files permissions issues with SolrBackups should now be addressed.
-
-- The default `PodManagementPolicy` for StatefulSets has been changed to `Parallel` from `OrderedReady`.
-  This change will not affect existing StatefulSets, as `PodManagementPolicy` cannot be updated.
-  In order to continue using `OrderedReady` on new SolrClouds, please use the following setting:  
-  `SolrCloud.spec.customSolrKubeOptions.statefulSetOptions.podManagementPolicy`
-
-- The `SolrCloud` and `SolrPrometheusExporter` services' portNames have changed to `"solr-client"` and `"solr-metrics"` from `"ext-solr-client"` and `"ext-solr-metrics"`, respectively.
-  This is due to a bug in Kubernetes where `portName` and `targetPort` must match for services.
-
-- Support for `etcd`/`zetcd` deployments has been removed.  
-  The section for a Zookeeper cluster Spec `SolrCloud.spec.zookeeperRef.provided.zookeeper` has been **DEPRECATED**.
-  The same fields (except for the deprecated `persistentVolumeClaimSpec` option) are now available under `SolrCloud.spec.zookeeperRef.provided`.
-
-- Data Storage options have been expanded, and moved from their old locations.
-    - `SolrCloud.spec.dataPvcSpec` has been **DEPRECATED**.  
-      Please use the following instead: `SolrCloud.spec.dataStorage.persistent.pvcTemplate.spec=<spec>`
-    - `SolrCloud.spec.backupRestoreVolume` has been **DEPRECATED**.  
-      Please use the following instead: `SolrCloud.spec.dataStorage.backupRestoreOptions.Volume=<volume-source>`
-
-### v0.2.6
-- The solr-operator argument `--ingressBaseDomain` has been **DEPRECATED**.
-  In order to set the external baseDomain of your clouds, please begin to use `SolrCloud.spec.solrAddressability.external.domainName` instead.
-  You will also need to set `SolrCloud.spec.solrAddressability.external.method` to `Ingress`.
-  The `--ingressBaseDomain` argument is backwards compatible, and all existing SolrCloud objects will be auto-updated once your operator is upgraded to `v0.2.6`.
-  The argument will be removed in a future version (`v0.3.0`).
-
-### v0.2.4
-- The default supported version of the Zookeeper Operator has been upgraded to `v0.2.6`.  
-  If you are using the provided zookeeper option for your SolrClouds, then you will want to upgrade your zookeeper operator version as well as the version and image of the zookeeper that you are running.
-  You can find examples of the zookeeper operator as well as solrClouds that use provided zookeepers in the [examples](/example) directory.  
-  Please refer to the [Zookeeper Operator release notes](https://github.com/pravega/zookeeper-operator/releases) before upgrading.
-
-### v0.2.3
-- If you do not use an ingress with the Solr Operator, the Solr Hostname and Port will change when upgrading to this version. This is to fix an outstanding bug. Because of the headless service port change, you will likely see an outage for inter-node communication until all pods have been restarted.
-
-### v0.2.2
-- `SolrCloud.spec.solrPodPolicy` has been **DEPRECATED** in favor of the `SolrCloud.spec.customSolrKubeOptions.podOptions` option.  
-  This option is backwards compatible, but will be removed in a future version (`v0.3.0`).
-
-- `SolrPrometheusExporter.spec.solrPodPolicy` has been **DEPRECATED** in favor of the `SolrPrometheusExporter.spec.customKubeOptions.podOptions` option.  
-  This option is backwards compatible, but will be removed in a future version (`v0.3.0`).
-
-### v0.2.1
-- The zkConnectionString used for provided zookeepers changed from using the string provided in the `ZkCluster.Status`, which used an IP, to using the service name. This will cause a rolling restart of your solrs using the provided zookeeper option, but there will be no data loss.
-
-### v0.2.0
-- Uses `gomod` instead of `dep`
-- `SolrCloud.spec.zookeeperRef.provided.zookeeper.persistentVolumeClaimSpec` has been **DEPRECATED** in favor of the `SolrCloud.zookeeperRef.provided.zookeeper.persistence` option.  
-  This option is backwards compatible, but will be removed in a future version (`v0.3.0`).
-- An upgrade to the ZKOperator version `0.2.4` is required.
diff --git a/docs/docs/upgrading-to-apache.md b/docs/docs/upgrading-to-apache.md
deleted file mode 100644
index 2e58db8..0000000
--- a/docs/docs/upgrading-to-apache.md
+++ /dev/null
@@ -1,152 +0,0 @@
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    (the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
-
-        http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-# Upgrading the Solr Operator from Bloomberg to Apache
-
-Please read the guide fully before attempting to upgrade your system.
-
-## Guide
-
-There are many ways of managing what runs in your Kubernetes cluster.
-Therefore, we cannot give one definitive way of upgrading the Solr Operator and the Solr resources in your Cluster.
-
-Below are guides based on how to upgrade, depending on the type of management that you use.
-Please look through all the instructions to see what will work best for your setup.
-
-## If you manage your cluster declaratively
-
-If you have an external system managing your cluster (e.g. Ansible, Helm, etc), then the steps you take to upgrade the Solr Operator may be very different.
-
-1. Make sure your system is installing the `v0.3.0` version of the Solr Operator, through the [Helm chart](https://artifacthub.io/packages/helm/apache-solr/solr-operator), or other means.
-1. Make sure your system is [installing the `v0.2.9` version of the Zookeeper Operator](#zookeeper-operator-upgrade).
-1. Make sure the Solr resources you are installing are using the new `solr.apache.org` group and not `solr.bloomberg.com`.
-1. Check if the Solr resources you are managing use any fields deprecated in `v0.2.x` versions; you can check for these in the [Upgrade Notes](upgrade-notes.md#upgrade-warnings-and-notes).
-    - If your resources contain these fields, replace them with their new locations.
-    Otherwise, your `kubectl apply` commands will fail when using the `solr.apache.org` CRDs.
-
-Once you have made sure that your declarative system is checked for the above points, you can run it to install the new version of the Solr Operator and all of your new CRDs.
-
-Now you need to delete all the old `solr.bloomberg.com` resources & operator.
-
-1. Remove the `v0.2.x` version of the Solr Operator that is running, unless you merely upgraded your existing deployment to `v0.3.0`.
-1. [Remove any finalizers](#remove-solr-resource-finalizers) that the Solr Resources may contain, in all namespaces that you run Solr resources in.
-1. [Remove the `solr.bloomberg.com` CRDs](#remove-bloomberg-solr-crds), as they are no longer needed when running the `v0.3.0` version of the Solr Operator.
-
-## If you manually manage your cluster
-
-If you manually manage your cluster, then the instructions below will allow you to upgrade in-place.
-
-1. Make sure you have the latest Apache Solr Operator helm charts (If you use helm to deploy the Solr Operator)
-   ```bash
-    helm repo add apache-solr https://solr.apache.org/charts
-    helm repo update
-    ```
-1. Upgrade to `v0.2.8`, if you are running a lower version.
-   - If your Solr Operator deployment is managed with a Helm chart, merely upgrade your helm chart to version `0.2.8`
-1. Install the v0.2.8 Solr Operator CRDs
-   ```bash
-    kubectl replace -f "https://solr.apache.org/operator/downloads/crds/v0.2.8/all.yaml"
-    ```
-   _The Upgrade Notes page says to always upgrade CRDs before upgrading the operator, however for this single upgrade case this order is safer._
-1. _If you are using the ZK Operator_  
-   [Remove the outdated Zookeeper Operator deployment and other resources](#removing-the-old-zookeeper-operator-resources).
-1. Install the Apache Solr CRDs
-   ```bash
-    kubectl create -f "https://solr.apache.org/operator/downloads/crds/v0.3.0/all.yaml"
-    ```
-   _If you are using the ZK Operator_
-   ```bash
-    kubectl create -f "https://solr.apache.org/operator/downloads/crds/v0.3.0/all-with-dependencies.yaml" || \
-      kubectl replace -f "https://solr.apache.org/operator/downloads/crds/v0.3.0/all-with-dependencies.yaml"
-    ```
-1. Install the Apache Solr Operator
-   ```bash
-    helm install apache apache-solr/solr-operator --version 0.3.0
-    ```
-1. Convert `solr.bloomberg.com` resources into `solr.apache.org` resources:
-   ```bash
-   # First make sure that "yq" is installed
-   kubectl get solrclouds.solr.bloomberg.com --all-namespaces -o yaml | \
-      sed "s#solr.bloomberg.com#solr.apache.org#g" | \
-      yq eval 'del(.items.[].metadata.annotations."kubectl.kubernetes.io/last-applied-configuration", .items.[].metadata.managedFields, .items.[].metadata.resourceVersion, .items.[].metadata.creationTimestamp, .items.[].metadata.generation, .items.[].metadata.selfLink, .items.[].metadata.uid, .items.[].spec.solrPodPolicy, .items.[].spec.zookeeperRef.provided.image.tag, .items.[].status)' - \
-      | kubectl apply -f -
-   kubectl get solrprometheusexporters.solr.bloomberg.com --all-namespaces -o yaml | \
-      sed "s#solr.bloomberg.com#solr.apache.org#g" | \
-      yq eval 'del(.items.[].metadata.annotations."kubectl.kubernetes.io/last-applied-configuration", .items.[].metadata.managedFields, .items.[].metadata.resourceVersion, .items.[].metadata.creationTimestamp, .items.[].metadata.generation, .items.[].metadata.selfLink, .items.[].metadata.uid, .items.[].spec.podPolicy, .items.[].status)' - \
-      | kubectl apply -f -
-   kubectl get solrbackups.solr.bloomberg.com --all-namespaces -o yaml | \
-      sed "s#solr.bloomberg.com#solr.apache.org#g" | \
-      yq eval 'del(.items.[].metadata.annotations."kubectl.kubernetes.io/last-applied-configuration", .items.[].metadata.managedFields, .items.[].metadata.resourceVersion, .items.[].metadata.creationTimestamp, .items.[].metadata.generation, .items.[].metadata.selfLink, .items.[].metadata.uid, .items.[].status)' - \
-      | kubectl apply -f -
-   ```
-1. Delete the `v0.2.8` Solr Operator deployment:
-   ```bash
-   # If solr-operator is the name of your v0.2.8 Solr Operator deployment
-   helm delete solr-operator 
-   ```
-1. [Remove any finalizers](#remove-solr-resource-finalizers) that the Solr Resources may contain, in all namespaces that you run Solr resources in.
-1. [Remove the `solr.bloomberg.com` CRDs](#remove-bloomberg-solr-crds), as they are no longer needed when running the `v0.3.0` version of the Solr Operator.
-1. Test to make sure that your Solr resources exist as apache resources now and your `StatefulSets`/`Deployments` are still running correctly.
-
-## Other Considerations
-
-### Zookeeper Operator Upgrade
-
-The [Zookeeper Operator](https://github.com/pravega/zookeeper-operator) version that the Solr Operator requires has changed between `v0.2.6` and `v0.3.0`.
-Therefore, the Zookeeper Operator needs to be upgraded to `v0.2.9` when the Solr Operator is upgraded to `v0.3.0`.
-
-#### Removing the old Zookeeper Operator resources
-If you use the Solr Operator [Helm chart](https://artifacthub.io/packages/helm/apache-solr/solr-operator), then the correct version of the Zookeeper Operator will be deployed when upgrading.
-However, you will need to pre-emptively delete some Kubernetes resources so that they can be managed by Helm.
-
-_Only use the below `kubectl` commands if you installed the Zookeeper Operator using the URL provided in the Solr Operator repository._  
-If you already have a `v0.2.9` Zookeeper Operator running, ignore this and pass the following options when installing the Solr Operator:
-`--set zookeeper-operator.install=false --set zookeeper-operator.use=true`
-
-```bash
-kubectl delete deployment zk-operator
-kubectl delete clusterrolebinding zookeeper-operator-cluster-role-binding
-kubectl delete clusterrole zookeeper-operator
-kubectl delete serviceaccount zookeeper-operator
-```
-
-You will then be able to install the Solr Operator & dependency CRDs, and the Solr Operator Helm Chart.
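-
-For example, the install from the manual steps above might look like the following sketch, reusing the release name and chart version from this guide:
-```bash
-# Only include the two --set flags if you already run your own Zookeeper Operator (v0.2.9+).
-helm install apache apache-solr/solr-operator --version 0.3.0 \
-  --set zookeeper-operator.install=false --set zookeeper-operator.use=true
-```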
-
-
-### Remove Solr Resource Finalizers
-
-It is necessary to remove any finalizers that the old `solr.bloomberg.com` resources may have; otherwise, they will not be able to be deleted.
-You should do this for all namespaces that you are running solr resources in.
-
-**Only run these after stopping any `v0.2.x` Solr Operator that may be running in the Kubernetes cluster.**
-
-```bash
-kubectl get solrcollections.solr.bloomberg.com -o name | sed -e 's/.*\///g' | xargs -I {} kubectl patch solrcollections.solr.bloomberg.com {} --type='json' -p='[{"op": "remove", "path": "/metadata/finalizers/0"}]'
-kubectl get solrcollectionaliases.solr.bloomberg.com -o name | sed -e 's/.*\///g' | xargs -I {} kubectl patch solrcollectionaliases.solr.bloomberg.com {} --type='json' -p='[{"op": "remove", "path": "/metadata/finalizers/0"}]'
-kubectl get solrclouds.solr.bloomberg.com -o name | sed -e 's/.*\///g' | xargs -I {} kubectl patch solrclouds.solr.bloomberg.com {} --type='json' -p='[{"op": "remove", "path": "/metadata/finalizers/0"}]'
-```
-
-### Remove Bloomberg Solr CRDs
-
-After you have [removed all finalizers on Solr resources](#remove-solr-resource-finalizers), you can remove all `solr.bloomberg.com` CRDs.
-
-If you don't want to disrupt any Solr service that is running, make sure that all Solr resources have been upgraded from `solr.bloomberg.com` CRDs to `solr.apache.org` CRDs.
-This command will delete all `solr.bloomberg.com` resources, which means that only resources that exist as `solr.apache.org` will continue to run afterwards.
-
-```bash
-kubectl delete crd solrclouds.solr.bloomberg.com solrprometheusexporters.solr.bloomberg.com solrbackups.solr.bloomberg.com solrcollections.solr.bloomberg.com solrcollectionaliases.solr.bloomberg.com
-```