Posted to commits@solr.apache.org by ep...@apache.org on 2023/03/21 17:48:26 UTC

[solr] branch main updated: SOLR-16610: Support Copy n Paste of Command Line commands in Ref Guide (#1273)

This is an automated email from the ASF dual-hosted git repository.

epugh pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/solr.git


The following commit(s) were added to refs/heads/main by this push:
     new 3e445a6fe76 SOLR-16610: Support Copy n Paste of Command Line commands in Ref Guide (#1273)
3e445a6fe76 is described below

commit 3e445a6fe76560a61bd08f5dbda749664e475192
Author: Eric Pugh <ep...@opensourceconnections.com>
AuthorDate: Tue Mar 21 13:48:20 2023 -0400

    SOLR-16610: Support Copy n Paste of Command Line commands in Ref Guide (#1273)
    
    * Use the built in formatting in Antora to make commands cut'n'pasteable.
---
 .../getting-started/pages/solr-tutorial.adoc       |  13 +-
 .../getting-started/pages/tutorial-aws.adoc        |  35 ++++--
 .../getting-started/pages/tutorial-diy.adoc        |  18 +--
 .../getting-started/pages/tutorial-films.adoc      | 138 +++++++++++++++------
 .../pages/tutorial-five-minutes.adoc               |  28 ++---
 .../getting-started/pages/tutorial-solrcloud.adoc  |  59 +++++----
 .../pages/tutorial-techproducts.adoc               |  72 ++++++-----
 7 files changed, 221 insertions(+), 142 deletions(-)
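
Every hunk below applies the same two-part convention, so here is a minimal before/after sketch of it (the command shown is illustrative, not taken from the diff). The old markup:

[source,bash]
----
bin/solr start -c
----

becomes:

[,console]
----
$ bin/solr start -c
----

`[,console]` is AsciiDoc shorthand for a source block whose language is `console`; with that language, the copy-to-clipboard button in Antora's default UI is meant to copy only the `$`-prefixed command text, stripping the prompt and skipping output lines, which is what makes the examples cut-and-pasteable. That also explains why chained one-liners such as `bin/solr stop -all ; rm -Rf example/cloud/` are split into separate `$`-prefixed commands in the hunks below.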

diff --git a/solr/solr-ref-guide/modules/getting-started/pages/solr-tutorial.adoc b/solr/solr-ref-guide/modules/getting-started/pages/solr-tutorial.adoc
index 062faa39518..df73ff2917b 100644
--- a/solr/solr-ref-guide/modules/getting-started/pages/solr-tutorial.adoc
+++ b/solr/solr-ref-guide/modules/getting-started/pages/solr-tutorial.adoc
@@ -45,14 +45,14 @@ For best results, please run the browser showing this tutorial and the Solr serv
 Begin by unzipping the Solr release and changing your working directory to the subdirectory where Solr was installed.
 For example, with a shell in UNIX, Cygwin, or MacOS:
 
-[source,bash,subs="verbatim,attributes+"]
+[,console]
 ----
-~$ ls solr*
+$ ls solr*
 solr-{solr-full-version}.tgz
 
-~$ tar -xzf solr-{solr-full-version}.tgz
+$ tar -xzf solr-{solr-full-version}.tgz
 
-~$ cd solr-{solr-full-version}/
+$ cd solr-{solr-full-version}/
 ----
 
 If you'd like to know more about Solr's directory layout before moving to the first exercise, see the section xref:deployment-guide:installing-solr.adoc#directory-layout[Directory Layout] for details.
@@ -84,9 +84,10 @@ Nice work!
 As you work through this tutorial, you may want to stop Solr and reset the environment back to the starting point.
 The following commands will stop Solr and remove the directories for each of the two nodes that were created all the way back in Exercise 1:
 
-[source,bash]
+[,console]
 ----
-bin/solr stop -all ; rm -Rf example/cloud/
+$ bin/solr stop -all
+$ rm -Rf example/cloud/
 ----
 
 == Where to next?
diff --git a/solr/solr-ref-guide/modules/getting-started/pages/tutorial-aws.adoc b/solr/solr-ref-guide/modules/getting-started/pages/tutorial-aws.adoc
index 4a7d7bdee4a..83d5039fd46 100644
--- a/solr/solr-ref-guide/modules/getting-started/pages/tutorial-aws.adoc
+++ b/solr/solr-ref-guide/modules/getting-started/pages/tutorial-aws.adoc
@@ -123,13 +123,16 @@ You cannot use the instances until they become *“running”*.
 +
 Using SSH, if your AWS identity key file is `aws-key.pem` and the AMI uses `ec2-user` as login user, on each AWS instance, do the following:
 +
-[source,bash]
+[,console]
+----
 $ ssh-add aws-key.pem
 $ ssh -A ec2-user@<instance-public-dns>
+----
 +
 . While logged in to each of the AWS EC2 instances, configure Java 1.8 and download Solr:
 +
-[source,bash]
+[,console]
+----
 # verify default java version packaged with AWS instances is 1.7
 $ java -version
 $ sudo yum install java-1.8.0
@@ -137,8 +140,10 @@ $ sudo /usr/sbin/alternatives --config java
 # select jdk-1.8
 # verify default java version is now java-1.8
 $ java -version
+----
 +
-[source,bash,subs="verbatim,attributes+"]
+[,console]
+----
 # download desired version of Solr
 $ wget http://archive.apache.org/dist/solr/solr/{solr-full-version}/solr-{solr-full-version}.tgz
 # untar
@@ -148,6 +153,7 @@ $ export SOLR_HOME=$PWD/solr-{solr-full-version}
 # put the env variable in .bashrc
 # vim ~/.bashrc
 export SOLR_HOME=/home/ec2-user/solr-{solr-full-version}
+----
 
 . Resolve the Public DNS to simpler hostnames.
 +
@@ -158,10 +164,12 @@ Let’s assume the AWS instances' public DNS names with IPv4 public IPs are as follows:
 +
 Edit `/etc/hosts`, and add entries for the above machines:
 +
-[source,bash]
+[,console]
+----
 $ sudo vim /etc/hosts
 54.1.2.3 solr-node-1
 54.4.5.6 solr-node-2
+----
 
 . Configure Solr in running EC2 instances.
 +
@@ -171,18 +179,21 @@ See <<Deploying with External ZooKeeper>> for configuring external ZooKeeper.
 +
 Inside the `ec2-101-1-2-3.us-east-2.compute.amazonaws.com` (`solr-node-1`)
 +
-[source,bash]
+[,console]
+----
 $ cd $SOLR_HOME
 # start Solr node on 8983 and ZooKeeper will start on 8983+1000 9983
 $ bin/solr start -c -p 8983 -h solr-node-1
-
+----
 +
 On the other node, `ec2-101-4-5-6.us-east-2.compute.amazonaws.com` (`solr-node-2`)
 +
-[source,bash]
+[,console]
+----
 $ cd $SOLR_HOME
 # start Solr node on 8983 and connect to ZooKeeper running on first node
 $ bin/solr start -c -p 8983 -h solr-node-2 -z solr-node-1:9983
+----
 
 . Inspect and Verify.
 Inspect the Solr nodes state from browser on local machine:
@@ -228,7 +239,7 @@ See the section xref:deployment-guide:zookeeper-ensemble.adoc[] for information
 In this example we're using ZooKeeper v{dep-version-zookeeper}.
 On the node you're using to host ZooKeeper (`zookeeper-node`), download the package and untar it:
 +
-[source,bash,subs="attributes"]
+[,console]
 ----
 # download stable version of ZooKeeper, here {dep-version-zookeeper}
 $ wget https://archive.apache.org/dist/zookeeper/zookeeper-{dep-version-zookeeper}/zookeeper-{dep-version-zookeeper}.tar.gz
@@ -249,7 +260,7 @@ export ZOO_HOME=/home/ec2-user/zookeeper-{dep-version-zookeeper}
 ----
 . Change directories to `ZOO_HOME`, and create the ZooKeeper configuration by using the template provided by ZooKeeper.
 +
-[source,bash]
+[,console]
 ----
 $ cd $ZOO_HOME
 # create ZooKeeper config by using zoo_sample.cfg
@@ -270,7 +281,7 @@ dataDir=data
 ----
 . Start ZooKeeper.
 +
-[source,bash]
+[,console]
 ----
 $ cd $ZOO_HOME
 # start ZooKeeper, default port: 2181
@@ -279,7 +290,7 @@ $ bin/zkServer.sh start
 
 . On the first node being used for Solr (`solr-node-1`), start Solr and tell it where to find ZooKeeper.
 +
-[source,bash]
+[,console]
 ----
 $ cd $SOLR_HOME
 # start Solr node on 8983 and connect to ZooKeeper running on ZooKeeper node
@@ -288,7 +299,7 @@ $ bin/solr start -c -p 8983 -h solr-node-1 -z zookeeper-node:2181
 +
 . On the second Solr node (`solr-node-2`), again start Solr and tell it where to find ZooKeeper.
 +
-[source,bash]
+[,console]
 ----
 $ cd $SOLR_HOME
 # start Solr node on 8983 and connect to ZooKeeper running on ZooKeeper node
diff --git a/solr/solr-ref-guide/modules/getting-started/pages/tutorial-diy.adoc b/solr/solr-ref-guide/modules/getting-started/pages/tutorial-diy.adoc
index 1283240dfd5..57d4197b740 100644
--- a/solr/solr-ref-guide/modules/getting-started/pages/tutorial-diy.adoc
+++ b/solr/solr-ref-guide/modules/getting-started/pages/tutorial-diy.adoc
@@ -1,4 +1,4 @@
-= Exercise 3 Index Your Own Data
+= Exercise 3: Index Your Own Data
 :experimental:
 // Licensed to the Apache Software Foundation (ASF) under one
 // or more contributor license agreements.  See the NOTICE file
@@ -35,9 +35,9 @@ This exercise is intended to get you thinking about what you will need to do for
 Before you get started, create a new collection, named whatever you'd like.
 In this example, the collection will be named "localDocs"; replace that name with whatever name you choose if you want to.
 
-[source,bash]
+[,console]
 ----
-./bin/solr create -c localDocs -s 2 -rf 2
+$ bin/solr create -c localDocs -s 2 -rf 2
 ----
 
 Again, as we saw from Exercise 2 above, this will use the `_default` configset and all the schemaless features it provides.
@@ -58,9 +58,9 @@ We used only JSON, XML and CSV in our exercises, but the Post Tool can also hand
 In this example, assume there is a directory named "Documents" locally.
 To index it, we would issue a command like this (correcting the collection name after the `-c` parameter as needed):
 +
-[source,bash]
+[,console]
 ----
-./bin/post -c localDocs ~/Documents
+$ bin/post -c localDocs ~/Documents
 ----
 +
 You may get errors as it works through your documents.
@@ -101,16 +101,16 @@ We can use `bin/post` to delete documents also if we structure the request prope
 
 Execute the following command to delete a specific document:
 
-[source,bash]
+[,console]
 ----
-bin/post -c localDocs -d "<delete><id>SP2514N</id></delete>"
+$ bin/post -c localDocs -d "<delete><id>SP2514N</id></delete>"
 ----
 
 To delete all documents, you can use the "delete-by-query" command, like:
 
-[source,bash]
+[,console]
 ----
-bin/post -c localDocs -d "<delete><query>*:*</query></delete>"
+$ bin/post -c localDocs -d "<delete><query>*:*</query></delete>"
 ----
 
 You can also modify the above to only delete documents that match a specific query.
diff --git a/solr/solr-ref-guide/modules/getting-started/pages/tutorial-films.adoc b/solr/solr-ref-guide/modules/getting-started/pages/tutorial-films.adoc
index 049e753c692..2a171d54c9f 100644
--- a/solr/solr-ref-guide/modules/getting-started/pages/tutorial-films.adoc
+++ b/solr/solr-ref-guide/modules/getting-started/pages/tutorial-films.adoc
@@ -29,17 +29,17 @@ Then go ahead to the next section.
 
 If you did, though, and need to restart Solr, issue these commands:
 
-[source,bash]
+[,console]
 ----
-./bin/solr start -c -p 8983 -s example/cloud/node1/solr
+$ bin/solr start -c -p 8983 -s example/cloud/node1/solr
 ----
 
 This starts the first node.
 When it's done start the second node, and tell it how to connect to ZooKeeper:
 
-[source,bash]
+[,console]
 ----
-./bin/solr start -c -p 7574 -s example/cloud/node2/solr -z localhost:9983
+$ bin/solr start -c -p 7574 -s example/cloud/node2/solr -z localhost:9983
 ----
 
 NOTE: If you have defined `ZK_HOST` in `solr.in.sh`/`solr.in.cmd` (see xref:deployment-guide:zookeeper-ensemble#updating-solr-include-files[Updating Solr Include Files]) you can omit `-z <zk host string>` from the above command.
@@ -72,9 +72,9 @@ This time, we're going to use a configset that has a very minimal schema and let
 
 The data you're going to index is related to movies, so start by creating a collection named "films" that uses the `_default` configset:
 
-[source,bash]
+[,console]
 ----
-bin/solr create -c films -s 2 -rf 2
+$ bin/solr create -c films -s 2 -rf 2
 ----
 
 Whoa, wait.
@@ -87,7 +87,7 @@ This is equivalent to the options we had during the interactive example from the
 
 You should see output like:
 
-[source,subs="verbatim,attributes+"]
+[,console]
 ----
 WARNING: Using _default configset. Data driven schema functionality is enabled by default, which is
          NOT RECOMMENDED for production use.
@@ -165,8 +165,10 @@ That's not going to get us very far.
 What we can do is set up the "name" field in Solr before we index the data to be sure Solr always interprets it as a string.
 At the command line, enter this curl command:
 
-[source,bash]
-curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": {"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' http://localhost:8983/solr/films/schema
+[,console]
+----
+$ curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": {"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' http://localhost:8983/solr/films/schema
+----
 
 This command uses the Schema API to explicitly define a field named "name" that has the field type "text_general" (a text field).
 It will not be permitted to have multiple values, but it will be stored (meaning it can be retrieved by queries).
@@ -192,8 +194,10 @@ You can use either the Admin UI or the Schema API for this.
 
 At the command line, use the Schema API again to define a copy field:
 
-[source,bash]
-curl -X POST -H 'Content-type:application/json' --data-binary '{"add-copy-field" : {"source":"*","dest":"_text_"}}' http://localhost:8983/solr/films/schema
+[,console]
+----
+$ curl -X POST -H 'Content-type:application/json' --data-binary '{"add-copy-field" : {"source":"*","dest":"_text_"}}' http://localhost:8983/solr/films/schema
+----
 
 In the Admin UI, choose btn:[Add Copy Field], then fill out the source and destination for your field, as in this screenshot.
 
@@ -215,28 +219,81 @@ It comes in three formats: JSON, XML and CSV.
 Pick one of the formats and index it into the "films" collection (in each example, one command is for Unix/MacOS and the other is for Windows):
 
 .To Index JSON Format
-[source,subs="verbatim,attributes+"]
+[.dynamic-tabs]
+--
+[example.tab-pane#unixindexjson]
+====
+[.tab-label]*Linux/Mac*
+
+[,console]
 ----
-bin/post -c films example/films/films.json
+$ bin/post -c films example/films/films.json
 
-C:\solr-{solr-full-version}> java -jar -Dc=films -Dauto example\exampledocs\post.jar example\films\*.json
 ----
+====
+
+[example.tab-pane#winindexjson]
+====
+[.tab-label]*Windows*
+
+[,console]
+----
+$ java -jar -Dc=films -Dauto example\exampledocs\post.jar example\films\*.json
+----
+====
+--
+
 
 .To Index XML Format
-[source,subs="verbatim,attributes+"]
+[.dynamic-tabs]
+--
+[example.tab-pane#unixindexxml]
+====
+[.tab-label]*Linux/Mac*
+
+[,console]
 ----
-bin/post -c films example/films/films.xml
+$ bin/post -c films example/films/films.xml
 
-C:\solr-{solr-full-version}> java -jar -Dc=films -Dauto example\exampledocs\post.jar example\films\*.xml
 ----
+====
+
+[example.tab-pane#winindexxml]
+====
+[.tab-label]*Windows*
+
+[,console]
+----
+$ java -jar -Dc=films -Dauto example\exampledocs\post.jar example\films\*.xml
+----
+====
+--
+
 
 .To Index CSV Format
-[source,subs="verbatim,attributes+"]
+[.dynamic-tabs]
+--
+[example.tab-pane#unixindexcsv]
+====
+[.tab-label]*Linux/Mac*
+
+[,console]
+----
+$ bin/post -c films example/films/films.csv -params "f.genre.split=true&f.directed_by.split=true&f.genre.separator=|&f.directed_by.separator=|"
+
 ----
-bin/post -c films example/films/films.csv -params "f.genre.split=true&f.directed_by.split=true&f.genre.separator=|&f.directed_by.separator=|"
+====
 
-C:\solr-{solr-full-version}> java -jar -Dc=films -Dparams=f.genre.split=true&f.directed_by.split=true&f.genre.separator=|&f.directed_by.separator=| -Dauto example\exampledocs\post.jar example\films\*.csv
+[example.tab-pane#winindexcsv]
+====
+[.tab-label]*Windows*
+
+[,console]
+----
+$ java -jar -Dc=films -Dparams=f.genre.split=true&f.directed_by.split=true&f.genre.separator=|&f.directed_by.separator=| -Dauto example\exampledocs\post.jar example\films\*.csv
 ----
+====
+--
 
 Each command includes these main parameters:
 
@@ -250,10 +307,10 @@ Telling Solr to split these columns this way will ensure proper indexing of the
 
 Each command will produce output similar to the below seen while indexing JSON:
 
-[source,bash,subs="verbatim,attributes"]
+[,console]
 ----
-$ ./bin/post -c films example/films/films.json
-/bin/java -classpath /solr-{solr-full-version}/server/solr-webapp/webapp/WEB-INF/lib/solr-core-{solr-full-version}.jar -Dauto=yes -Dc=films -Ddata=files org.apache.solr.util.SimplePostTool example/films/films.json
+$ bin/post -c films example/films/films.json
+# bin/java -classpath /solr-{solr-full-version}/server/solr-webapp/webapp/WEB-INF/lib/solr-core-{solr-full-version}.jar -Dauto=yes -Dc=films -Ddata=files org.apache.solr.util.SimplePostTool example/films/films.json
 SimplePostTool version 5.0.0
 Posting files to [base] url http://localhost:8983/solr/films/update...
 Entering auto mode. File endings considered are xml,json,jsonl,csv,pdf,doc,docx,ppt,pptx,xls,xlsx,odt,odp,ods,ott,otp,ots,rtf,htm,html,txt,log
@@ -292,7 +349,10 @@ To see facet counts from all documents (`q=\*:*`): turn on faceting (`facet=true
 If you only want facets, and no document contents, specify `rows=0`.
 The `curl` command below will return facet counts for the `genre_str` field:
 
-`curl "http://localhost:8983/solr/films/select?q=\*:*&rows=0&facet=true&facet.field=genre_str"`
+[,console]
+----
+$ curl "http://localhost:8983/solr/films/select?q=\*:*&rows=0&facet=true&facet.field=genre_str"`
+----
 
 In your terminal, you'll see something like:
 
@@ -330,9 +390,9 @@ Or, perhaps you do want all the facets, and you'll let your application's front-
 
 If you wanted to control the number of items in a bucket, you could do something like this:
 
-[source,bash]
+[,console]
 ----
-curl "http://localhost:8983/solr/films/select?=&q=\*:*&facet.field=genre_str&facet.mincount=200&facet=on&rows=0"
+$ curl "http://localhost:8983/solr/films/select?=&q=\*:*&facet.field=genre_str&facet.mincount=200&facet=on&rows=0"
 ----
 
 You should only see 4 facets returned.
@@ -350,16 +410,18 @@ The Solr Admin UI doesn't yet support range facet options, so you will need to u
 
 If we construct a query that looks like this:
 
-[source,bash]
-curl 'http://localhost:8983/solr/films/select?q=*:*&rows=0'\
-'&facet=true'\
-'&facet.range=initial_release_date'\
-'&facet.range.start=NOW/YEAR-25YEAR'\
-'&facet.range.end=NOW'\
-'&facet.range.gap=%2B1YEAR'
+[,console]
+----
+$ curl "http://localhost:8983/solr/films/select?q=*:*&rows=0\
+&facet=true\
+&facet.range=initial_release_date\
+&facet.range.start=NOW/YEAR-25YEAR\
+&facet.range.end=NOW\
+&facet.range.gap=%2B1YEAR"
+----
 
 This will request all films and ask for them to be grouped by year, starting 25 years ago (our earliest release date is in 2000) and ending today.
-Note that this query again URL encodes a `+` as `%2B`.
+Note that this query URL encodes a `+` as `%2B`.
 
 In the terminal you will see:
 
@@ -418,9 +480,9 @@ Another faceting type is pivot facets, also known as "decision trees", allowing
 Using the films data, pivot facets can be used to see how many of the films in the "Drama" category (the `genre_str` field) are directed by a director.
 Here's how to get at the raw data for this scenario:
 
-[source,bash]
+[,console]
 ----
-curl "http://localhost:8983/solr/films/select?q=\*:*&rows=0&facet=on&facet.pivot=genre_str,directed_by_str"
+$ curl "http://localhost:8983/solr/films/select?q=\*:*&rows=0&facet=on&facet.pivot=genre_str,directed_by_str"
 ----
 
 This results in the following response, which shows a facet for each category and director combination:
@@ -474,7 +536,7 @@ Like our previous exercise, this data may not be relevant to your needs.
 We can clean up our work by deleting the collection.
 To do that, issue this command at the command line:
 
-[source,bash]
+[,console]
 ----
-bin/solr delete -c films
+$ bin/solr delete -c films
 ----
diff --git a/solr/solr-ref-guide/modules/getting-started/pages/tutorial-five-minutes.adoc b/solr/solr-ref-guide/modules/getting-started/pages/tutorial-five-minutes.adoc
index 87b5a756b60..9bf8f6a42b5 100644
--- a/solr/solr-ref-guide/modules/getting-started/pages/tutorial-five-minutes.adoc
+++ b/solr/solr-ref-guide/modules/getting-started/pages/tutorial-five-minutes.adoc
@@ -24,9 +24,9 @@ To launch Solr, run: `bin/solr start -c` on Unix or MacOS; `bin\solr.cmd start -
 
 To start another Solr node and have it join the cluster alongside the first node,
 
-[source]
+[,console]
 ----
-bin/solr -c -z localhost:9983 -p 8984
+$ bin/solr -c -z localhost:9983 -p 8984
 ----
 
 
@@ -34,9 +34,9 @@ bin/solr -c -z localhost:9983 -p 8984
 
 Like a database system holds data in tables, Solr holds data in collections. A collection can be created as follows:
 
-[source]
+[,console]
 ----
-curl --request POST \
+$ curl --request POST \
 --url http://localhost:8983/api/collections \
 --header 'Content-Type: application/json' \
 --data '{
@@ -52,9 +52,9 @@ curl --request POST \
 
 Let us define some of the fields that our documents will contain.
 
-[source]
+[,console]
 ----
-curl --request POST \
+$ curl --request POST \
   --url http://localhost:8983/api/collections/techproducts/schema \
   --header 'Content-Type: application/json' \
   --data '{
@@ -76,9 +76,9 @@ curl --request POST \
 
 A single document can be indexed as:
 
-[source]
+[,console]
 ----
-curl --request POST \
+$ curl --request POST \
 --url 'http://localhost:8983/api/collections/techproducts/update' \
   --header 'Content-Type: application/json' \
   --data '  {
@@ -97,9 +97,9 @@ curl --request POST \
 
 Multiple documents can be indexed in the same request:
 
-[source]
+[,console]
 ----
-curl --request POST \
+$ curl --request POST \
   --url 'http://localhost:8983/api/collections/techproducts/update' \
   --header 'Content-Type: application/json' \
   --data '  [
@@ -133,9 +133,9 @@ curl --request POST \
 
 A file containing the documents can be indexed as follows:
 
-[source]
+[,console]
 ----
-curl -H "Content-Type: application/json" \
+$ curl -H "Content-Type: application/json" \
        -X POST \
        -d @example/products.json \
        --url 'http://localhost:8983/api/collections/techproducts/update?commit=true'
@@ -144,9 +144,9 @@ curl -H "Content-Type: application/json" \
 == Commit the Changes
 After documents are indexed into a collection, they are not immediately available for searching. To make them searchable, a commit operation (called `refresh` in other search engines such as OpenSearch) is needed. Commits can be scheduled at periodic intervals using auto-commits as follows.
 
-[source]
+[,console]
 ----
-curl -X POST -H 'Content-type: application/json' -d '{"set-property":{"updateHandler.autoCommit.maxTime":15000}}' http://localhost:8983/api/collections/techproducts/config
+$ curl -X POST -H 'Content-type: application/json' -d '{"set-property":{"updateHandler.autoCommit.maxTime":15000}}' http://localhost:8983/api/collections/techproducts/config
 ----
 
 == Make some Basic search queries
diff --git a/solr/solr-ref-guide/modules/getting-started/pages/tutorial-solrcloud.adoc b/solr/solr-ref-guide/modules/getting-started/pages/tutorial-solrcloud.adoc
index 7a68b1ead62..f8621017397 100644
--- a/solr/solr-ref-guide/modules/getting-started/pages/tutorial-solrcloud.adoc
+++ b/solr/solr-ref-guide/modules/getting-started/pages/tutorial-solrcloud.adoc
@@ -49,16 +49,16 @@ For more details see the section xref:deployment-guide:securing-solr.adoc#networ
 The `bin/solr` script makes it easy to get started with SolrCloud as it walks you through the process of launching Solr nodes in SolrCloud mode and adding a collection.
 To get started, simply do:
 
-[source,bash]
+[,console]
 ----
-bin/solr -e cloud
+$ bin/solr -e cloud
 ----
 
 This starts an interactive session to walk you through the steps of setting up a simple SolrCloud cluster with embedded ZooKeeper.
 
 The script starts by asking you how many Solr nodes you want to run in your local cluster, with the default being 2.
 
-[source,plain]
+[,console]
 ----
 Welcome to the SolrCloud example!
 
@@ -71,7 +71,7 @@ These nodes will each exist on a single machine, but will use different ports to
 
 Next, the script will prompt you for the port to bind each of the Solr nodes to, such as:
 
-[source,plain]
+[,console]
 ----
  Please enter the port for node1 [8983]
 ----
@@ -79,9 +79,9 @@ Next, the script will prompt you for the port to bind each of the Solr nodes to,
 Choose any available port for each node; the default for the first node is 8983 and 7574 for the second node.
 The script will start each node in order and show you the command it uses to start the server, such as:
 
-[source,bash]
+[,console]
 ----
-solr start -cloud -s example/cloud/node1/solr -p 8983
+$ bin/solr start -cloud -s example/cloud/node1/solr -p 8983
 ----
 
 The first node will also start an embedded ZooKeeper server bound to port 9983.
@@ -89,7 +89,7 @@ The Solr home for the first node is in `example/cloud/node1/solr` as indicated b
 
 After starting up all nodes in the cluster, the script prompts you for the name of the collection to create:
 
-[source,plain]
+[,console]
 ----
  Please provide a name for your new collection: [gettingstarted]
 ----
@@ -114,18 +114,18 @@ This can be done as follows (assuming your collection name is `mycollection`):
 [example.tab-pane#v1autocreatefalse]
 ====
 [.tab-label]*V1 API*
-[source,bash]
+[,console]
 ----
-curl http://host:8983/solr/mycollection/config -d '{"set-user-property": {"update.autoCreateFields":"false"}}'
+$ curl http://host:8983/solr/mycollection/config -d '{"set-user-property": {"update.autoCreateFields":"false"}}'
 ----
 ====
 
 [example.tab-pane#v2autocreatefalse]
 ====
 [.tab-label]*V2 API SolrCloud*
-[source,bash]
+[,console]
 ----
-curl http://host:8983/api/collections/mycollection/config -d '{"set-user-property": {"update.autoCreateFields":"false"}}'
+$ curl http://host:8983/api/collections/mycollection/config -d '{"set-user-property": {"update.autoCreateFields":"false"}}'
 ----
 ====
 --
@@ -133,9 +133,9 @@ curl http://host:8983/api/collections/mycollection/config -d '{"set-user-propert
 At this point, you should have a new collection created in your local SolrCloud cluster.
 To verify this, you can run the status command:
 
-[source,bash]
+[,console]
 ----
-bin/solr status
+$ bin/solr status
 ----
 
 If you encounter any errors during this process, check the Solr log files in `example/cloud/node1/logs` and `example/cloud/node2/logs`.
@@ -143,9 +143,9 @@ If you encounter any errors during this process, check the Solr log files in `ex
 You can see how your collection is deployed across the cluster by visiting the cloud panel in the Solr Admin UI: http://localhost:8983/solr/#/~cloud.
 Solr also provides a way to perform basic diagnostics for a collection using the healthcheck command:
 
-[source,bash]
+[,console]
 ----
-bin/solr healthcheck -c gettingstarted
+$ bin/solr healthcheck -c gettingstarted
 ----
 
 The healthcheck command gathers basic information about each replica in a collection, such as number of docs, current status (active, down, etc.), and address (where the replica lives in the cluster).
@@ -154,18 +154,18 @@ Documents can now be added to SolrCloud using the xref:indexing-guide:post-tool.
 
 To stop Solr in SolrCloud mode, you would use the `bin/solr` script and issue the `stop` command, as in:
 
-[source,bash]
+[,console]
 ----
-bin/solr stop -all
+$ bin/solr stop -all
 ----
 
 === Starting with -noprompt
 
 You can also get SolrCloud started with all the defaults instead of the interactive session using the following command:
 
-[source,bash]
+[,console]
 ----
-bin/solr -e cloud -noprompt
+$ bin/solr -e cloud -noprompt
 ----
 
 === Restarting Nodes
@@ -173,16 +173,16 @@ bin/solr -e cloud -noprompt
 You can restart your SolrCloud nodes using the `bin/solr` script.
 For instance, to restart node1 running on port 8983 (with an embedded ZooKeeper server), you would do:
 
-[source,bash]
+[,console]
 ----
-bin/solr restart -c -p 8983 -s example/cloud/node1/solr
+$ bin/solr restart -c -p 8983 -s example/cloud/node1/solr
 ----
 
 To restart node2 running on port 7574, you can do:
 
-[source,bash]
+[,console]
 ----
-bin/solr restart -c -p 7574 -z localhost:9983 -s example/cloud/node2/solr
+$ bin/solr restart -c -p 7574 -z localhost:9983 -s example/cloud/node2/solr
 ----
 
 Notice that you need to specify the ZooKeeper address (`-z localhost:9983`) when starting node2 so that it can join the cluster with node1.
@@ -192,21 +192,20 @@ Notice that you need to specify the ZooKeeper address (`-z localhost:9983`) when
 Adding a node to an existing cluster is a bit advanced and involves a little more understanding of Solr.
 Once you startup a SolrCloud cluster using the startup scripts, you can add a new node to it by:
 
-[source,bash]
+[,console]
 ----
-mkdir <solr.home for new Solr node>
-cp <existing solr.xml path> <new solr.home>
-bin/solr start -cloud -s solr.home/solr -p <port num> -z <zk hosts string>
+$ mkdir <solr.home for new Solr node>
+$ bin/solr start -cloud -s solr.home/solr -p <port num> -z <zk hosts string>
 ----
 
 Notice that the above requires you to create a Solr home directory.
 
 Example (with directory structure) that adds a node to an example started with "bin/solr -e cloud":
 
-[source,bash]
+[,console]
 ----
-mkdir -p example/cloud/node3/solr
-bin/solr start -cloud -s example/cloud/node3/solr -p 8987 -z localhost:9983
+$ mkdir -p example/cloud/node3/solr
+$ bin/solr start -cloud -s example/cloud/node3/solr -p 8987 -z localhost:9983
 ----
 
 The previous command will start another Solr node on port 8987 with Solr home set to `example/cloud/node3/solr`.
diff --git a/solr/solr-ref-guide/modules/getting-started/pages/tutorial-techproducts.adoc b/solr/solr-ref-guide/modules/getting-started/pages/tutorial-techproducts.adoc
index 9f634555b6a..42ef7c3a5be 100644
--- a/solr/solr-ref-guide/modules/getting-started/pages/tutorial-techproducts.adoc
+++ b/solr/solr-ref-guide/modules/getting-started/pages/tutorial-techproducts.adoc
@@ -26,32 +26,35 @@ To launch Solr, run: `bin/solr start -e cloud` on Unix or MacOS; `bin\solr.cmd s
 This will start an interactive session that will start two Solr "servers" on your machine.
 This command has an option to run without prompting you for input (`-noprompt`), but we want to modify two of the defaults so we won't use that option now.
 
-[source,subs="verbatim,attributes+"]
+[,console]
 ----
-solr-{solr-full-version}:$ ./bin/solr start -e cloud
+$ bin/solr start -e cloud
 
 Welcome to the SolrCloud example!
 
 This interactive session will help you launch a SolrCloud cluster on your local workstation.
 To begin, how many Solr nodes would you like to run in your local cluster? (specify 1-4 nodes) [2]:
 ----
+
 The first prompt asks how many nodes we want to run.
 Note the `[2]` at the end of the last line; that is the default number of nodes.
 Two is what we want for this example, so you can simply press kbd:[enter].
 
-[source,subs="verbatim,attributes+"]
+[,console]
 ----
 Ok, let's start up 2 Solr nodes for your example SolrCloud cluster.
 Please enter the port for node1 [8983]:
 ----
+
 This will be the port that the first node runs on.
 Unless you know you have something else running on port 8983 on your machine, accept this default option also by pressing kbd:[enter].
 If something is already using that port, you will be asked to choose another port.
 
-[source,subs="verbatim,attributes+"]
+[,console]
 ----
 Please enter the port for node2 [7574]:
 ----
+
 This is the port the second node will run on.
 Again, unless you know you have something else running on port 7574 on your machine, accept this default option also by pressing kbd:[enter].
 If something is already using that port, you will be asked to choose another port.
@@ -59,7 +62,7 @@ If something is already using that port, you will be asked to choose another por
 Solr will now initialize itself and start running on those two nodes.
 The script will print the commands it uses for your reference.
 
-[source,subs="verbatim,attributes+"]
+[,console]
 ----
 Starting up 2 Solr nodes for your example SolrCloud cluster.
 
@@ -88,7 +91,7 @@ Because we are starting in SolrCloud mode, and did not define any details about
 
 After startup is complete, you'll be prompted to create a collection to use for indexing data.
 
-[source,subs="verbatim,attributes+"]
+[,console]
 ----
 Now let's create a new collection for indexing documents in your 2-node cluster.
 Please provide a name for your new collection: [gettingstarted]
@@ -99,7 +102,7 @@ This tutorial will ask you to index some sample data included with Solr, called
 Let's name our collection "techproducts" so it's easy to differentiate from other collections we'll create later.
 Enter `techproducts` at the prompt and hit kbd:[enter].
 
-[source,subs="verbatim,attributes+"]
+[,console]
 ----
 How many shards would you like to split techproducts into? [2]
 ----
@@ -108,7 +111,7 @@ This is asking how many xref:solr-glossary.adoc#shard[shards] you want to split
 Choosing "2" (the default) means we will split the index relatively evenly across both nodes, which is a good way to start.
 Accept the default by hitting kbd:[enter].
 
-[source,subs="verbatim,attributes+"]
+[,console]
 ----
 How many replicas per shard would you like to create? [2]
 ----
@@ -116,7 +119,7 @@ How many replicas per shard would you like to create? [2]
 A replica is a copy of the index that's used for failover (see also the xref:solr-glossary.adoc#replica[Solr Glossary definition]).
 Again, the default of "2" is fine to start with here also, so accept the default by hitting kbd:[enter].
 
-[source,subs="verbatim,attributes+"]
+[,console]
 ----
 Please choose a configuration for the techproducts collection, available options are:
 _default or sample_techproducts_configs [_default]
@@ -132,7 +135,7 @@ This configset is specifically designed to support the sample data we want to us
 
 At this point, Solr will create the collection and again output to the screen the commands it issues.
 
-[source,subs="verbatim,attributes+"]
+[,console]
 ----
 Uploading /solr-{solr-full-version}/server/solr/configsets/_default/conf for config techproducts to ZooKeeper at localhost:9983
 
@@ -197,20 +200,20 @@ The data we will index is in the `example/exampledocs` directory.
 The documents are in a mix of document formats (JSON, CSV, etc.), and fortunately we can index them all at once:
 
 .Linux/Mac
-[source,subs="verbatim,attributes+"]
+[,console]
 ----
-solr-{solr-full-version}:$ bin/post -c techproducts example/exampledocs/*
+$ bin/post -c techproducts example/exampledocs/*
 ----
 
 .Windows
-[source,subs="verbatim,attributes+"]
+[,console]
 ----
-C:\solr-{solr-full-version}> java -jar -Dc=techproducts -Dauto example\exampledocs\post.jar example\exampledocs\*
+$ java -jar -Dc=techproducts -Dauto example\exampledocs\post.jar example\exampledocs\*
 ----
 
 You should see output similar to the following:
 
-[source,subs="verbatim,attributes+"]
+[,console]
 ----
 SimplePostTool version 5.0.0
 Posting files to [base] url http://localhost:8983/solr/techproducts/update...
@@ -260,11 +263,12 @@ If you click on it, your browser will show you the raw response.
 
 To use curl, give the same URL shown in your browser in quotes on the command line:
 
-[source,bash]
+[,console]
 ----
-curl "http://localhost:8983/solr/techproducts/select?indent=on&q=\*:*"
+$ curl "http://localhost:8983/solr/techproducts/select?indent=on&q=*:*"
 ----
 
+
 What's happening here is that we are using Solr's query parameter (`q`) with a special syntax that requests all documents in the index (`\*:*`).
 Not all of the documents are returned to us, however, because of the default for a parameter called `rows`, which you can see in the form is `10`.
 You can change the parameter in the UI or in the defaults if you wish.
@@ -280,11 +284,12 @@ Enter "foundation" and hit btn:[Execute Query] again.
 
 If you prefer curl, enter something like this:
 
-[source,bash]
+[,console]
 ----
-curl "http://localhost:8983/solr/techproducts/select?q=foundation"
+$ curl "http://localhost:8983/solr/techproducts/select?q=foundation"
 ----
 
+
 You'll see something like this:
 
 [source,json]
@@ -327,11 +332,12 @@ This is one of the available fields on the query form in the Admin UI.
 Put "id" (without quotes) in the "fl" box and hit btn:[Execute Query] agai
 Or, to specify it with curl:
 
-[source,bash]
+[,console]
 ----
-curl "http://localhost:8983/solr/techproducts/select?q=foundation&fl=id"
+$ curl "http://localhost:8983/solr/techproducts/select?q=foundation&fl=id"
 ----
 
+
 You should only see the IDs of the matching records returned.
 
 === Field Searches
@@ -423,9 +429,9 @@ For example, search for "CAS latency" by entering that phrase in quotes to the `
 
 If you're following along with curl, note that the space between terms must be converted to "+" in a URL, like so:
 
-[source,bash]
+[,console]
 ----
-curl "http://localhost:8983/solr/techproducts/select?q=\"CAS+latency\""
+$ curl "http://localhost:8983/solr/techproducts/select?q=\"CAS+latency\""
 ----
 
 We get 2 results:
@@ -484,9 +490,9 @@ To find documents that contain both terms "electronics" and "music", enter `+ele
 If you're using curl, you must encode the `+` character because it has a reserved purpose in URLs (encoding the space character).
 The encoding for `+` is `%2B` as in:
 
-[source,bash]
+[,console]
 ----
-curl "http://localhost:8983/solr/techproducts/select?q=%2Belectronics%20%2Bmusic"
+$ curl "http://localhost:8983/solr/techproducts/select?q=%2Belectronics%20%2Bmusic"
 ----
 
 You should only get a single result.
@@ -494,9 +500,9 @@ You should only get a single result.
 To search for documents that contain the term "electronics" but *don't* contain the term "music", enter `+electronics -music` in the `q` box in the Admin UI.
 For curl, again, URL encode `+` as `%2B` as in:
 
-[source,bash]
+[,console]
 ----
-curl "http://localhost:8983/solr/techproducts/select?q=%2Belectronics+-music"
+$ curl "http://localhost:8983/solr/techproducts/select?q=%2Belectronics+-music"
 ----
 
 This time you get 13 results.
@@ -514,23 +520,23 @@ You can choose now to continue to the next example which will introduce more Sol
 If you decide not to continue with this tutorial, the data we've indexed so far is likely of little value to you.
 You can delete your installation and start over, or you can use the `bin/solr` script we started out with to delete this collection:
 
-[source,bash]
+[,console]
 ----
-bin/solr delete -c techproducts
+$ bin/solr delete -c techproducts
 ----
 
 And then create a new collection:
 
-[source,bash]
+[,console]
 ----
-bin/solr create -c <yourCollection> -s 2 -rf 2
+$ bin/solr create -c <yourCollection> -s 2 -rf 2
 ----
 
 To stop both of the Solr nodes we started, issue the command:
 
-[source,bash]
+[,console]
 ----
-bin/solr stop -all
+$ bin/solr stop -all
 ----
 
 For more information on start/stop and collection options with `bin/solr`, see xref:deployment-guide:solr-control-script-reference.adoc[].