Posted to commits@solr.apache.org by ja...@apache.org on 2024/02/24 20:54:53 UTC

(solr) 02/02: Link Solr Operator for learning on Solr in Kubernetes, and better exp… (#1114)

This is an automated email from the ASF dual-hosted git repository.

janhoy pushed a commit to branch branch_9x
in repository https://gitbox.apache.org/repos/asf/solr.git

commit de9d4f24cb66ebb6aa8b821bc01ff7030258cc87
Author: Jeb Nix <11...@users.noreply.github.com>
AuthorDate: Sat Feb 24 21:27:52 2024 +0100

    Link Solr Operator for learning on Solr in Kubernetes, and better exp… (#1114)
    
    * Remove docker-networking.adoc page
    * Write a few paragraphs about cloud in the FAQ
    * Remove bad advice about docker links
    * Remove info about legacy v5-7
    * Add example of compose file for cloud
    
    Co-authored-by: Jan Høydahl <ja...@apache.org>
---
 dev-docs/running-in-docker.adoc                    |   6 +-
 .../modules/deployment-guide/deployment-nav.adoc   |   1 -
 .../modules/deployment-guide/pages/docker-faq.adoc |  35 +--
 .../deployment-guide/pages/docker-networking.adoc  | 279 ---------------------
 .../deployment-guide/pages/solr-in-docker.adoc     |  55 ++--
 5 files changed, 47 insertions(+), 329 deletions(-)

diff --git a/dev-docs/running-in-docker.adoc b/dev-docs/running-in-docker.adoc
index 0e455ce4ceb..202d0379b1e 100644
--- a/dev-docs/running-in-docker.adoc
+++ b/dev-docs/running-in-docker.adoc
@@ -1,4 +1,4 @@
-# Running Solr in Docker
+= Running Solr in Docker
 
 You can run Solr in Docker via the https://hub.docker.com/_/solr[official image].
 
@@ -11,7 +11,7 @@ In order to start Solr in cloud mode, run the following.
 `docker run -p 8983:8983 solr solr-fg -c`
 
 For documentation on using the official docker builds, please refer to the https://hub.docker.com/_/solr[DockerHub page].
-Up to date documentation for running locally built images of this branch can be found in the xref:_running_solr_in_docker[local reference guide].
+Up-to-date documentation for running locally built images of this branch can be found in the xref:_running_solr_in_docker[local reference guide].
 
 There is also a gradle task for building custom Solr images from your local checkout.
 These local images are built identically to the official image except for retrieving the Solr artifacts locally instead of from the official release.
@@ -29,5 +29,5 @@ For more info on building an image, run:
 
 `./gradlew helpDocker`
 
-## Additional Information
+== Additional Information
 You can find additional information in the https://solr.apache.org/guide/solr/latest/deployment-guide/solr-in-docker.html[Solr Ref Guide Docker Page]
\ No newline at end of file
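For illustration, building and running a locally built image might look like the sketch below. The `./gradlew docker` task is the gradle task referred to above; the image tag shown is an assumption, so check the build output or `./gradlew helpDocker` for the actual name on your branch.

[source,bash]
----
# Build a Solr image from this local checkout
./gradlew docker

# Run the locally built image in cloud mode; replace the tag with the one
# reported by the build (the name shown here is an assumption)
docker run -p 8983:8983 apache/solr:9-SNAPSHOT solr-fg -c
----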
diff --git a/solr/solr-ref-guide/modules/deployment-guide/deployment-nav.adoc b/solr/solr-ref-guide/modules/deployment-guide/deployment-nav.adoc
index 102746c16ac..5ba4dd59fb0 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/deployment-nav.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/deployment-nav.adoc
@@ -29,7 +29,6 @@
 ** xref:backup-restore.adoc[]
 ** xref:solr-in-docker.adoc[]
 *** xref:docker-faq.adoc[]
-*** xref:docker-networking.adoc[]
 ** xref:solr-on-hdfs.adoc[]
 
 * Scaling Solr
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/docker-faq.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/docker-faq.adoc
index a45b00c3add..8a747c193f9 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/docker-faq.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/docker-faq.adoc
@@ -49,30 +49,17 @@ docker run --rm -p 8983:8983 -v solrData:/mysolrhome solr:9-slim
 
 == Can I run ZooKeeper and Solr clusters under Docker?
 
-At the network level the ZooKeeper nodes need to be able to talk to each other,
-and the Solr nodes need to be able to talk to the ZooKeeper nodes and to each other.
-At the application level, different nodes need to be able to identify and locate each other.
-In ZooKeeper that is done with a configuration file that lists hostnames or IP addresses for each node.
-In Solr that is done with a parameter that specifies a host or IP address, which is then stored in ZooKeeper.
-
-In typical clusters, those hostnames/IP addresses are pre-defined and remain static through the lifetime of the cluster.
-In Docker, inter-container communication and multi-host networking can be facilitated by https://docs.docker.com/engine/userguide/networking/[Docker Networks].
-But, crucially, Docker does not normally guarantee that IP addresses of containers remain static during the lifetime of a container.
-In non-networked Docker, the IP address seems to change every time you stop/start.
-In a networked Docker, containers can lose their IP address in certain sequences of starting/stopping, unless you take steps to prevent that.
-
-IP changes cause problems:
-
-* If you use hardcoded IP addresses in configuration, and the addresses of your containers change after a stop/start, then your cluster will stop working and may corrupt itself.
-* If you use hostnames in configuration, and the addresses of your containers change, then you might run into problems with cached hostname lookups.
-* And if you use hostnames there is another problem: the names are not defined until the respective container is running.
-So when, for example, the first ZooKeeper node starts up, it will attempt a hostname lookup for the other nodes, and that will fail.
-This is especially a problem for ZooKeeper 3.4.6; future versions are better at recovering.
-
-Docker 1.10 has a new `--ip` configuration option that allows you to specify an IP address for a container.
-It also has a `--ip-range` option that allows you to specify the range that other containers get addresses from.
-Used together, you can implement static addresses.
-See the xref:docker-networking.adoc[] for more information.
+Yes. You can simply start your Solr containers in "Cloud mode", pointing
+them to a xref:zookeeper-ensemble.adoc[ZooKeeper ensemble].
+
+For local development, a single ZooKeeper container is enough.
+Please consult the https://hub.docker.com/_/zookeeper[ZooKeeper Docker image] for details.
+
+For production purposes, we discourage rolling your own ZooKeeper orchestration,
+as there are many pitfalls. Instead, use a well-supported container orchestrator
+with support for Solr and ZooKeeper. For Kubernetes, we provide the
+https://solr.apache.org/operator/[Solr Operator] sub-project.
+There are also third-party Helm charts available.
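+
+For illustration only, a minimal local setup might look like the sketch below
+(the network name, container names, and image tags are arbitrary):
+
+[source,bash]
+----
+# Create a shared network so the containers can reach each other by name
+docker network create solr-dev
+
+# Start a single ZooKeeper node for development
+docker run -d --name zoo --network solr-dev zookeeper:3.9
+
+# Start Solr in cloud mode, pointing it at the ZooKeeper container
+docker run -d --name solr1 --network solr-dev -p 8983:8983 -e ZK_HOST=zoo:2181 solr:9
+----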
 
 == How can I run ZooKeeper and Solr with Docker Compose?
 
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/docker-networking.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/docker-networking.adoc
deleted file mode 100644
index d8ba979b26b..00000000000
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/docker-networking.adoc
+++ /dev/null
@@ -1,279 +0,0 @@
-= Solr & ZooKeeper with Docker Networking
-// Licensed to the Apache Software Foundation (ASF) under one
-// or more contributor license agreements.  See the NOTICE file
-// distributed with this work for additional information
-// regarding copyright ownership.  The ASF licenses this file
-// to you under the Apache License, Version 2.0 (the
-// "License"); you may not use this file except in compliance
-// with the License.  You may obtain a copy of the License at
-//
-//   http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing,
-// software distributed under the License is distributed on an
-// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-// KIND, either express or implied.  See the License for the
-// specific language governing permissions and limitations
-// under the License.
-
-_Note: this article dates from Jan 2016. While this approach would still work, in Jan 2019 this would typically be done with Docker cluster and orchestration tools like Kubernetes. See for example https://lucidworks.com/post/running-solr-on-kubernetes-part-1/[this blog post]._
-
-In this example I'll create a cluster with 3 ZooKeeper nodes and 3 Solr nodes, distributed over 3 machines (trinity10, trinity20, trinity30).
-I'll use an overlay network, specify fixed IP addresses when creating containers, and I'll pass in explicit `/etc/hosts` entries to make sure they are available even when nodes are down.
-I won't show the key-value store configuration needed to enable networking; see https://docs.docker.com/engine/userguide/networking/get-started-overlay/[the docs] for that.
-I'll not use Docker Swarm in this example, but specifically place and configure containers where I want them by ssh'ing into the appropriate Docker host.
-
-To make this example easier to understand I'll just use shell commands.
-For actual use you may want to use a fancier deployment tool like http://www.fabfile.org[Fabric].
-
-NOTE: this example requires Docker 1.10.
-
-I'll run these commands from the first machine, trinity10.
-
-Create a network named "netzksolr" for this cluster.
-The `--ip-range` specifies the range of addresses to use for containers, whereas the `--subnet` specifies all possible addresses in this network.
-So effectively, addresses in the subnet but outside the range are reserved for containers that specifically use the `--ip` option.
-
-[source,bash]
-----
-docker network create --driver=overlay --subnet 192.168.22.0/24 --ip-range=192.168.22.128/25 netzksolr
-----
-
-As a simple test, check the automatic assignment and specific assignment work:
-
-[source,bash]
-----
-docker run -i --rm --net=netzksolr busybox ip -4 addr show eth0 | grep inet
-# inet 192.168.23.129/24 scope global eth0
-docker run -i --rm --net=netzksolr --ip=192.168.22.5 busybox ip -4 addr show eth0 | grep inet
-# inet 192.168.22.5/24 scope global eth0
-----
-
-So next create containers for ZooKeeper nodes.
-First define some environment variables for convenience:
-
-[source,bash]
-----
-# the machine to run the container on
-ZK1_HOST=trinity10.lan
-ZK2_HOST=trinity20.lan
-ZK3_HOST=trinity30.lan
-
-# the IP address for the container
-ZK1_IP=192.168.22.10
-ZK2_IP=192.168.22.11
-ZK3_IP=192.168.22.12
-
-# the Docker image
-ZK_IMAGE=jplock/zookeeper
-----
-
-Then create the containers:
-
-[source,bash]
-----
-ssh -n $ZK1_HOST "docker pull jplock/zookeeper && docker create --ip=$ZK1_IP --net netzksolr --name zk1 --hostname=zk1 --add-host zk2:$ZK2_IP --add-host zk3:$ZK3_IP -it $ZK_IMAGE"
-ssh -n $ZK2_HOST "docker pull jplock/zookeeper && docker create --ip=$ZK2_IP --net netzksolr --name zk2 --hostname=zk2 --add-host zk1:$ZK1_IP --add-host zk3:$ZK3_IP -it $ZK_IMAGE"
-ssh -n $ZK3_HOST "docker pull jplock/zookeeper && docker create --ip=$ZK3_IP --net netzksolr --name zk3 --hostname=zk3 --add-host zk1:$ZK1_IP --add-host zk2:$ZK2_IP -it $ZK_IMAGE"
-----
-
-Next configure those containers by creating ZooKeeper's `zoo.cfg` and `myid` files:
-
-[source,bash]
-----
-# Add ZooKeeper nodes to the ZooKeeper config.
-# If you use hostnames here, ZK will complain with UnknownHostException about the other nodes.
-# In ZooKeeper 3.4.6 that stays broken forever; in 3.4.7 that does recover.
-# If you use IP addresses you avoid the UnknownHostException and get a quorum more quickly,
-# but IP address changes can impact you.
-docker cp zk1:/opt/zookeeper/conf/zoo.cfg .
-cat >>zoo.cfg <<EOM
-server.1=zk1:2888:3888
-server.2=zk2:2888:3888
-server.3=zk3:2888:3888
-EOM
-
-cat zoo.cfg | ssh $ZK1_HOST 'dd of=zoo.cfg.tmp && docker cp zoo.cfg.tmp zk1:/opt/zookeeper/conf/zoo.cfg && rm zoo.cfg.tmp'
-cat zoo.cfg | ssh $ZK2_HOST 'dd of=zoo.cfg.tmp && docker cp zoo.cfg.tmp zk2:/opt/zookeeper/conf/zoo.cfg && rm zoo.cfg.tmp'
-cat zoo.cfg | ssh $ZK3_HOST 'dd of=zoo.cfg.tmp && docker cp zoo.cfg.tmp zk3:/opt/zookeeper/conf/zoo.cfg && rm zoo.cfg.tmp'
-rm zoo.cfg
-
-echo 1 | ssh $ZK1_HOST  'dd of=myid && docker cp myid zk1:/tmp/zookeeper/myid && rm myid'
-echo 2 | ssh $ZK2_HOST 'dd of=myid && docker cp myid zk2:/tmp/zookeeper/myid && rm myid'
-echo 3 | ssh $ZK3_HOST 'dd of=myid && docker cp myid zk3:/tmp/zookeeper/myid && rm myid'
-----
-
-Now start the containers:
-
-[source,bash]
-----
-ssh -n $ZK1_HOST 'docker start zk1'
-ssh -n $ZK2_HOST 'docker start zk2'
-ssh -n $ZK3_HOST 'docker start zk3'
-
-# Optional: verify containers are running
-ssh -n $ZK1_HOST 'docker ps'
-ssh -n $ZK2_HOST 'docker ps'
-ssh -n $ZK3_HOST 'docker ps'
-
-# Optional: inspect IP addresses of the containers
-ssh -n $ZK1_HOST "docker inspect --format '{{ .NetworkSettings.Networks.netzksolr.IPAddress }}' zk1"
-ssh -n $ZK2_HOST "docker inspect --format '{{ .NetworkSettings.Networks.netzksolr.IPAddress }}' zk2"
-ssh -n $ZK3_HOST "docker inspect --format '{{ .NetworkSettings.Networks.netzksolr.IPAddress }}' zk3"
-
-# Optional: verify connectivity and hostnames
-ssh -n $ZK1_HOST 'docker run --rm --net netzksolr -i ubuntu bash -c "echo -n zk1,zk2,zk3 | xargs -n 1 --delimiter=, /bin/ping -c 1"'
-ssh -n $ZK2_HOST 'docker run --rm --net netzksolr -i ubuntu bash -c "echo -n zk1,zk2,zk3 | xargs -n 1 --delimiter=, /bin/ping -c 1"'
-ssh -n $ZK3_HOST 'docker run --rm --net netzksolr -i ubuntu bash -c "echo -n zk1,zk2,zk3 | xargs -n 1 --delimiter=, /bin/ping -c 1"'
-
-# Optional: verify cluster got a leader
-ssh -n $ZK1_HOST "docker exec -i zk1 bash -c 'echo stat | nc localhost 2181'"
-ssh -n $ZK2_HOST "docker exec -i zk2 bash -c 'echo stat | nc localhost 2181'"
-ssh -n $ZK3_HOST "docker exec -i zk3 bash -c 'echo stat | nc localhost 2181'"
-
-# Optional: verify we can connect a zookeeper client. This should show the `[zookeeper]` znode.
-printf "ls /\nquit\n" | ssh $ZK1_HOST docker exec -i zk1 /opt/zookeeper/bin/zkCli.sh
-----
-
-That's the ZooKeeper cluster running.
-
-Next, we create Solr containers in much the same way:
-
-[source,bash]
-----
-ZKSOLR1_HOST=trinity10.lan
-ZKSOLR2_HOST=trinity20.lan
-ZKSOLR3_HOST=trinity30.lan
-
-ZKSOLR1_IP=192.168.22.20
-ZKSOLR2_IP=192.168.22.21
-ZKSOLR3_IP=192.168.22.22
-
-# the Docker image
-SOLR_IMAGE=solr
-
-HOST_OPTIONS="--add-host zk1:$ZK1_IP --add-host zk2:$ZK2_IP --add-host zk3:$ZK3_IP"
-ssh -n $ZKSOLR1_HOST "docker pull $SOLR_IMAGE && docker create --ip=$ZKSOLR1_IP --net netzksolr --name zksolr1 --hostname=zksolr1 -it $HOST_OPTIONS $SOLR_IMAGE"
-ssh -n $ZKSOLR2_HOST "docker pull $SOLR_IMAGE && docker create --ip=$ZKSOLR2_IP --net netzksolr --name zksolr2 --hostname=zksolr2 -it $HOST_OPTIONS $SOLR_IMAGE"
-ssh -n $ZKSOLR3_HOST "docker pull $SOLR_IMAGE && docker create --ip=$ZKSOLR3_IP --net netzksolr --name zksolr3 --hostname=zksolr3 -it $HOST_OPTIONS $SOLR_IMAGE"
-----
-
-Now configure Solr to know where its ZooKeeper cluster is, and start the containers:
-
-[source,bash]
-----
-for h in zksolr1 zksolr2 zksolr3; do
-  docker cp zksolr1:/opt/solr/bin/solr.in.sh .
-  sed -i -e 's/#ZK_HOST=""/ZK_HOST="zk1:2181,zk2:2181,zk3:2181"/' solr.in.sh
-  sed -i -e 's/#*SOLR_HOST=.*/SOLR_HOST="'$h'"/' solr.in.sh
-  mv solr.in.sh solr.in.sh-$h
-done
-cat solr.in.sh-zksolr1 | ssh $ZKSOLR1_HOST "dd of=solr.in.sh && docker cp solr.in.sh zksolr1:/opt/solr/bin/solr.in.sh && rm solr.in.sh"
-cat solr.in.sh-zksolr2 | ssh $ZKSOLR2_HOST "dd of=solr.in.sh && docker cp solr.in.sh zksolr2:/opt/solr/bin/solr.in.sh && rm solr.in.sh"
-cat solr.in.sh-zksolr3 | ssh $ZKSOLR3_HOST "dd of=solr.in.sh && docker cp solr.in.sh zksolr3:/opt/solr/bin/solr.in.sh && rm solr.in.sh"
-rm solr.in.sh*
-
-ssh -n $ZKSOLR1_HOST docker start zksolr1
-ssh -n $ZKSOLR2_HOST docker start zksolr2
-ssh -n $ZKSOLR3_HOST docker start zksolr3
-
-# Optional: print IP addresses to verify
-ssh -n $ZKSOLR1_HOST 'docker inspect --format "{{ .NetworkSettings.Networks.netzksolr.IPAddress }}" zksolr1'
-ssh -n $ZKSOLR2_HOST 'docker inspect --format "{{ .NetworkSettings.Networks.netzksolr.IPAddress }}" zksolr2'
-ssh -n $ZKSOLR3_HOST 'docker inspect --format "{{ .NetworkSettings.Networks.netzksolr.IPAddress }}" zksolr3'
-
-# Optional: check logs
-ssh -n $ZKSOLR1_HOST docker logs zksolr1
-ssh -n $ZKSOLR2_HOST docker logs zksolr2
-ssh -n $ZKSOLR3_HOST docker logs zksolr3
-
-# Optional: check the webserver
-ssh -n $ZKSOLR1_HOST "docker exec -i zksolr1 /bin/bash -c 'wget -O -  http://zksolr1:8983/'"
-ssh -n $ZKSOLR2_HOST "docker exec -i zksolr2 /bin/bash -c 'wget -O -  http://zksolr2:8983/'"
-ssh -n $ZKSOLR3_HOST "docker exec -i zksolr3 /bin/bash -c 'wget -O -  http://zksolr3:8983/'"
-----
-
-Next let's create a collection:
-
-[source,bash]
-----
-ssh -n $ZKSOLR1_HOST docker exec -i zksolr1 /opt/solr/bin/solr create_collection -c my_collection1 -shards 2 -p 8983
-----
-
-To load data, and see it was split over shards:
-
-[source,bash,subs="attributes"]
-----
-docker exec -it --user=solr zksolr1 bin/solr post -c my_collection1 example/exampledocs/manufacturers.xml
-# /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -classpath /opt/solr/server/lib/ext/*:/opt/solr/server/solr-webapp/webapp/WEB-INF/lib/* -Dauto=yes -Dc=my_collection1 -Ddata=files org.apache.solr.cli.SimplePostTool example/exampledocs/manufacturers.xml
-# SimplePostTool version {solr-full-version}
-# Posting files to [base] url http://localhost:8983/solr/my_collection1/update...
-# Entering auto mode. File endings considered are xml,json,csv,pdf,doc,docx,ppt,pptx,xls,xlsx,odt,odp,ods,ott,otp,ots,rtf,htm,html,txt,log
-# POSTing file manufacturers.xml (application/xml) to [base]
-# 1 files indexed.
-# COMMITting Solr index changes to http://localhost:8983/solr/my_collection1/update...
-# Time spent: 0:00:01.093
-docker exec -it --user=solr zksolr1 bash -c "wget -q -O - 'http://zksolr1:8983/solr/my_collection1/select?q=*:*&indent=true&rows=100&fl=id' | egrep '<str name=.id.>' |  wc -l"
-11
-docker exec -it --user=solr zksolr1 bash -c "wget -q -O - 'http://zksolr1:8983/solr/my_collection1/select?q=*:*&shards=shard1&rows=100&indent=true&fl=id' | grep '<str name=.id.>' | wc -l"
-4
-docker exec -it --user=solr zksolr1 bash -c "wget -q -O - 'http://zksolr1:8983/solr/my_collection1/select?q=*:*&shards=shard2&rows=100&indent=true&fl=id' | grep '<str name=.id.>' | wc -l"
-7
-----
-
-Now to get external access to this overlay network from outside we can use a container to proxy the connections.
-For a simple TCP proxy container with an exposed port on the Docker host, proxying to a single Solr node, you can use https://github.com/brandnetworks/tcpproxy[brandnetworks/tcpproxy]:
-
-[source,bash]
-----
-ssh -n trinity10.lan "docker pull brandnetworks/tcpproxy && docker run -p 8001 -p 8002 --net netzksolr --name zksolrproxy --hostname=zksolrproxy.netzksolr -tid brandnetworks/tcpproxy --connections 8002:zksolr1:8983"
-docker port zksolrproxy 8002
-----
-
-Or use a suitably configured HAProxy to round-robin between all Solr nodes.
-Or, instead of the overlay network, use http://www.projectcalico.org[Project Calico] and configure L3 routing so you do not need to mess with proxies.
-
-Now I can get to Solr on `http://trinity10:32774/solr/#/`.
-In the Cloud -> Tree -> /live_nodes view I see the Solr nodes.
-
-From the Solr UI select the collection1 core, and click on Cloud -> Graph to see how it has created
-two shards across our Solr nodes.
-
-Now, by way of test, we'll stop the Solr containers, and start them out-of-order, and verify the IP addresses are unchanged, and check the same results come back:
-
-[source,bash]
-----
-ssh -n $ZKSOLR1_HOST docker kill zksolr1
-ssh -n $ZKSOLR2_HOST docker kill zksolr2
-ssh -n $ZKSOLR3_HOST docker kill zksolr3
-
-ssh -n $ZKSOLR1_HOST docker start zksolr1
-sleep 3
-ssh -n $ZKSOLR3_HOST docker start zksolr3
-sleep 3
-ssh -n $ZKSOLR2_HOST docker start zksolr2
-
-ssh -n $ZKSOLR1_HOST 'docker inspect --format "{{ .NetworkSettings.Networks.netzksolr.IPAddress }}" zksolr1'
-ssh -n $ZKSOLR2_HOST 'docker inspect --format "{{ .NetworkSettings.Networks.netzksolr.IPAddress }}" zksolr2'
-ssh -n $ZKSOLR3_HOST 'docker inspect --format "{{ .NetworkSettings.Networks.netzksolr.IPAddress }}" zksolr3'
-
-docker exec -it --user=solr zksolr1 bash -c "wget -q -O - 'http://zksolr1:8983/solr/my_collection1/select?q=*:*&indent=true&rows=100&fl=id' | egrep '<str name=.id.>' |  wc -l"
-docker exec -it --user=solr zksolr1 bash -c "wget -q -O - 'http://zksolr1:8983/solr/my_collection1/select?q=*:*&shards=shard1&rows=100&indent=true&fl=id' | grep '<str name=.id.>' | wc -l"
-docker exec -it --user=solr zksolr1 bash -c "wget -q -O - 'http://zksolr1:8983/solr/my_collection1/select?q=*:*&shards=shard2&rows=100&indent=true&fl=id' | grep '<str name=.id.>' | wc -l"
-----
-
-Good, that works.
-
-Finally, to clean up this example:
-
-[source,bash]
-----
-ssh -n $ZK1_HOST "docker kill zk1; docker rm zk1"
-ssh -n $ZK2_HOST "docker kill zk2; docker rm zk2"
-ssh -n $ZK3_HOST "docker kill zk3; docker rm zk3"
-ssh -n $ZKSOLR1_HOST "docker kill zksolr1; docker rm zksolr1"
-ssh -n $ZKSOLR2_HOST "docker kill zksolr2; docker rm zksolr2"
-ssh -n $ZKSOLR3_HOST "docker kill zksolr3; docker rm zksolr3"
-ssh -n trinity10.lan "docker kill zksolrproxy; docker rm zksolrproxy"
-docker network rm netzksolr
-----
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/solr-in-docker.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/solr-in-docker.adoc
index 491b2ba248c..66b90e0d713 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/solr-in-docker.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/solr-in-docker.adoc
@@ -1,6 +1,5 @@
 = Solr in Docker
-:page-children: docker-faq, \
-    docker-networking
+:page-children: docker-faq
 // Licensed to the Apache Software Foundation (ASF) under one
 // or more contributor license agreements.  See the NOTICE file
 // distributed with this work for additional information
@@ -89,9 +88,38 @@ you can simply run:
 
 [source,bash]
 ----
-docker-compose up -d
+docker compose up -d
 ----
 
+Below is an example compose file that starts a SolrCloud cluster with ZooKeeper.
+By creating a Docker network, Solr can reach the ZooKeeper container by its internal
+name `zoo`:
+
+[source,yaml]
+----
+version: '3'
+services:
+  solr:
+    image: solr:9-slim
+    ports:
+      - "8983:8983"
+    networks: [search]
+    environment:
+      ZK_HOST: "zoo:2181"
+    depends_on: [zoo]
+
+  zoo:
+    image: zookeeper:3.9
+    networks: [search]
+    environment:
+      ZOO_4LW_COMMANDS_WHITELIST: "mntr,conf,ruok"
+
+networks:
+  search:
+    driver: bridge
+----
+
+
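+Once the stack is up (`docker compose up -d`), you can try out the cluster, for
+example by creating a collection inside the running Solr container.
+This is only a sketch; the collection name is arbitrary:
+
+[source,bash]
+----
+docker compose exec solr bin/solr create -c demo
+----
+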
 === Single-Command Demo
 
 For quick demos of Solr docker, there is a single command that starts Solr, creates a collection called "demo", and loads sample data into it:
@@ -174,7 +202,7 @@ docker run -p 8983:8983 -v $PWD/mysetup.sh:/mysetup.sh --name my_solr solr bash
 
 In a "SolrCloud" cluster you create "collections" to store data; and again you have several options for creating a core.
 
-These examples assume you're running a <<docker-compose,docker compose cluster>>.
+These examples assume you're running a xref:docker-compose[docker compose cluster].
 
 The first way to create a collection is to go to the http://localhost:8983/[Solr Admin UI], select "Collections" from the left-hand side navigation menu, then press the "Add Collection" button, give it a name, select the `_default` config set, then press the "Add Collection" button.
 
@@ -220,7 +248,7 @@ wget -O mydata/books.csv https://raw.githubusercontent.com/apache/solr/main/solr
 docker run --rm -v "$PWD/mydata:/mydata" --network=host solr post -c books /mydata/books.csv
 ----
 
-The same works if you use the <<docker-compose,example docker compose cluster>>, or you can just start your loading container in the same network:
+The same works if you use the xref:docker-compose[example docker compose cluster], or you can just start your loading container in the same network:
 
 [source,bash]
 ----
@@ -323,23 +351,6 @@ jattach 10 threaddump
 jattach 10 jcmd GC.heap_info
 ----
 
-== Updating from Solr 5-7 to 8+
-
-In Solr 8, the Solr Docker image switched from just extracting the Solr tar, to using the xref:taking-solr-to-production.adoc#service-installation-script[service installation script].
-This was done for various reasons: to bring it in line with the recommendations by the Solr Ref Guide and to make it easier to mount volumes.
-
-This is a backwards incompatible change, and means that if you're upgrading from an older version, you will most likely need to make some changes.
-If you don't want to upgrade at this time, specify `solr:7` as your container image.
-If you use `solr:8` you will use the new style.
-If you use just `solr` then you risk being tripped up by backwards incompatible changes; always specify at least a major version.
-
-Changes:
-
-* The Solr data is now stored in `/var/solr/data` rather than `/opt/solr/server/solr`.
-The `/opt/solr/server/solr/mycores` no longer exists.
-* The custom `SOLR_HOME` can no longer be used, because various scripts depend on the new locations.
-Consequently, `INIT_SOLR_HOME` is also no longer supported.
-
 == Running under tini
 
 The Solr docker image runs Solr under https://github.com/krallin/tini[tini], to make signal handling work better; in particular, this allows you to `kill -9` the JVM.