Posted to issues@solr.apache.org by "gerlowskija (via GitHub)" <gi...@apache.org> on 2023/03/14 15:16:37 UTC

[GitHub] [solr] gerlowskija opened a new pull request, #1458: SOLR-16697: Add new API to install offline-built indices into specific shards

gerlowskija opened a new pull request, #1458:
URL: https://github.com/apache/solr/pull/1458

   https://issues.apache.org/jira/browse/SOLR-16697
   
   # Description
   
   Solr has a number of ways to restore indices into a SolrCloud collection, but they all share the same assumptions: that the entire collection is being restored, and that the data for the whole collection is located in a single place within a backup repository (usually in a special metadata-rich but complex format that enables incremental operations).
   
   Users who generate indices offline (using map-reduce or some other pipeline) tend not to meet these assumptions and thus can't use `/admin/collections?action=RESTORE`.  We should give them a way to import their index files without needing to resort to `scp` or other manual file manipulation.
   
   # Solution
   
   This draft PR solves this problem by way of an "Install Shard" API.  This API (`POST /api/collections/collName/shards/shardName/install`) takes pointers to a Lucene index directory (located in a backup repository) and leverages existing restore code to install those files into the specified shard of the specified collection.  A rough usage sketch follows the list of differences below.
   
   Conceptually, it offers functionality similar to that of the core-admin RESTORECORE API, albeit with a few differences:
   
   - it provides a much cleaner API suitable for end-users, exposed as a "Collection Admin" API
   - it makes use of the "read-only" flag to prevent updates to collections while they are being restored
   - it uses the shard-term mechanism to ensure that other replicas of the specified shard replicate the installed data
   - it intentionally _doesn't_ support the "incremental" file format, as this API is a bit outside of that use case.
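   
   As a rough illustration of the intended workflow, here is a minimal SolrJ sketch built around the `CollectionAdminRequest.installDataToShard` helper used in this PR's tests.  The collection, shard, location, and repository names are placeholders, and the readOnly handling is shown only schematically since the API is still in draft:
   
   ```java
   import java.util.Map;
   import org.apache.solr.client.solrj.SolrClient;
   import org.apache.solr.client.solrj.request.CollectionAdminRequest;
   
   public class InstallShardExample {
     static void installShardData(SolrClient client) throws Exception {
       // The target collection must be in read-only mode while shard data is installed.
       CollectionAdminRequest.modifyCollection("techproducts", Map.of("readOnly", true))
           .process(client);
   
       // Point the new API at the backup-repository directory holding the offline-built index files.
       CollectionAdminRequest.installDataToShard(
               "techproducts", "shard1", "/backups/techproducts/shard1", "localBackupRepo")
           .process(client);
   
       // Once every shard has been installed, clear the readOnly property to re-enable writes.
     }
   }
   ```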
   
   # Tests
   
   Automated tests still needed.
   
   # Checklist
   
   Please review the following and check all that apply:
   
   - [x] I have reviewed the guidelines for [How to Contribute](https://wiki.apache.org/solr/HowToContribute) and my code conforms to the standards described there to the best of my ability.
   - [x] I have created a Jira issue and added the issue ID to my pull request title.
   - [x] I have given Solr maintainers [access](https://help.github.com/en/articles/allowing-changes-to-a-pull-request-branch-created-from-a-fork) to contribute to my PR branch. (optional but recommended)
   - [x] I have developed this patch against the `main` branch.
   - [ ] I have run `./gradlew check`.
   - [ ] I have added tests for my changes.
   - [ ] I have added documentation for the [Reference Guide](https://github.com/apache/solr/tree/main/solr/solr-ref-guide)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@solr.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@solr.apache.org
For additional commands, e-mail: issues-help@solr.apache.org


[GitHub] [solr] sonatype-lift[bot] commented on a diff in pull request #1458: SOLR-16697: Add new API to install offline-built indices into specific shards

Posted by "sonatype-lift[bot] (via GitHub)" <gi...@apache.org>.
sonatype-lift[bot] commented on code in PR #1458:
URL: https://github.com/apache/solr/pull/1458#discussion_r1135802948


##########
solr/core/src/java/org/apache/solr/handler/admin/api/InstallCoreDataAPI.java:
##########
@@ -0,0 +1,136 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.handler.admin.api;
+
+import com.fasterxml.jackson.annotation.JsonProperty;
+import org.apache.solr.cloud.CloudDescriptor;
+import org.apache.solr.cloud.ZkController;
+import org.apache.solr.cloud.ZkShardTerms;
+import org.apache.solr.common.SolrException;
+import org.apache.solr.common.cloud.Slice;
+import org.apache.solr.common.params.CoreAdminParams;
+import org.apache.solr.core.CoreContainer;
+import org.apache.solr.core.SolrCore;
+import org.apache.solr.core.backup.repository.BackupRepository;
+import org.apache.solr.handler.RestoreCore;
+import org.apache.solr.handler.admin.CoreAdminHandler;
+import org.apache.solr.jersey.JacksonReflectMapWriter;
+import org.apache.solr.jersey.PermissionName;
+import org.apache.solr.jersey.SolrJerseyResponse;
+import org.apache.solr.request.SolrQueryRequest;
+import org.apache.solr.response.SolrQueryResponse;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.ws.rs.POST;
+import javax.ws.rs.Path;
+import javax.ws.rs.PathParam;
+import javax.ws.rs.Produces;
+
+import java.lang.invoke.MethodHandles;
+import java.net.URI;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+import static org.apache.solr.client.solrj.impl.BinaryResponseParser.BINARY_CONTENT_TYPE_V2;
+import static org.apache.solr.security.PermissionNameProvider.Name.CORE_EDIT_PERM;
+
+@Path("/cores/{coreName}/install")
+public class InstallCoreDataAPI extends CoreAdminAPIBase {
+
+    private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
+
+    public InstallCoreDataAPI(CoreContainer coreContainer, CoreAdminHandler.CoreAdminAsyncTracker coreAdminAsyncTracker, SolrQueryRequest req, SolrQueryResponse rsp) {
+        super(coreContainer, coreAdminAsyncTracker, req, rsp);
+    }
+
+    @POST
+    @Produces({"application/json", "application/xml", BINARY_CONTENT_TYPE_V2})
+    @PermissionName(CORE_EDIT_PERM)
+    public SolrJerseyResponse installCoreData(@PathParam("coreName") String coreName, InstallCoreDataRequestBody requestBody) throws Exception {
+        log.info("JEGERLOW: In install-core-data v2 API with coreName {}", coreName);
+        final SolrJerseyResponse response = instantiateJerseyResponse(SolrJerseyResponse.class);
+
+        // TODO Actual implementation (look at RESTORECORE for example)
+        final ZkController zkController = coreContainer.getZkController();
+        if (zkController == null) {
+            throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, "Only valid for SolrCloud");
+        }
+
+        try (BackupRepository repository = coreContainer.newBackupRepository(requestBody.repository); SolrCore core = coreContainer.getCore(coreName)) {
+            String location = repository.getBackupLocation(requestBody.location);
+            if (location == null) {
+                throw new SolrException(
+                        SolrException.ErrorCode.BAD_REQUEST,
+                        "'location' is not specified as a query"
+                                + " parameter or as a default repository property");
+            }
+
+            URI locationUri = repository.createDirectoryURI(location);
+            CloudDescriptor cd = core.getCoreDescriptor().getCloudDescriptor();
+            // this core must be the only replica in its shard otherwise
+            // we cannot guarantee consistency between replicas because when we add data (or restore
+            // index) to this replica
+            Slice slice =
+                    zkController
+                            .getClusterState()
+                            .getCollection(cd.getCollectionName())
+                            .getSlice(cd.getShardId());
+            if (slice.getReplicas().size() != 1 && !core.readOnly) {
+                throw new SolrException(
+                        SolrException.ErrorCode.SERVER_ERROR,
+                        "Failed to restore core="
+                                + core.getName()
+                                + ", the core must be the only replica in its shard or it must be read only");
+            }
+
+            // TODO RestoreCore.create expects a backup 'name' that it appends to the Uri via 'BackupRepository#resolve'...how does this handle null?
+            final RestoreCore restoreCore = RestoreCore.create(repository, core, locationUri, "");
+            boolean success = restoreCore.doRestore();
+            if (!success) {
+                throw new SolrException(
+                        SolrException.ErrorCode.SERVER_ERROR, "Failed to install data to core=" + core.getName());
+            }
+
+            final Set<String> nonLeaderCoreNames = slice.getReplicas()
+                    .stream()
+                    .filter(r -> !r.isLeader())
+                    .map(r -> r.getName())
+                    .collect(Collectors.toSet());
+            log.info("JEGERLOW Non leader core names are: {} and leader is {}", nonLeaderCoreNames, coreName);

Review Comment:
   
   **[CRLF_INJECTION_LOGS](https://find-sec-bugs.github.io/bugs.htm#CRLF_INJECTION_LOGS):**  This use of org/slf4j/Logger.info(Ljava/lang/String;Ljava/lang/Object;Ljava/lang/Object;)V might be used to include CRLF characters into log messages
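   
   A minimal sketch (assuming a hypothetical `sanitizeForLog` helper, not code from this PR) of the mitigation this rule usually asks for: strip CR/LF from request-supplied values before they reach the log line.
   
   ```java
   import org.slf4j.Logger;
   import org.slf4j.LoggerFactory;
   
   class LogSanitizeSketch {
     private static final Logger log = LoggerFactory.getLogger(LogSanitizeSketch.class);
   
     // Replace CR and LF so a crafted core name cannot forge extra log lines.
     static String sanitizeForLog(String value) {
       return value == null ? null : value.replace('\r', '_').replace('\n', '_');
     }
   
     static void logCoreName(String coreName) {
       log.info("In install-core-data v2 API with coreName {}", sanitizeForLog(coreName));
     }
   }
   ```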
   



##########
solr/core/src/java/org/apache/solr/handler/admin/api/InstallCoreDataAPI.java:
##########
@@ -0,0 +1,136 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.handler.admin.api;
+
+import com.fasterxml.jackson.annotation.JsonProperty;
+import org.apache.solr.cloud.CloudDescriptor;
+import org.apache.solr.cloud.ZkController;
+import org.apache.solr.cloud.ZkShardTerms;
+import org.apache.solr.common.SolrException;
+import org.apache.solr.common.cloud.Slice;
+import org.apache.solr.common.params.CoreAdminParams;
+import org.apache.solr.core.CoreContainer;
+import org.apache.solr.core.SolrCore;
+import org.apache.solr.core.backup.repository.BackupRepository;
+import org.apache.solr.handler.RestoreCore;
+import org.apache.solr.handler.admin.CoreAdminHandler;
+import org.apache.solr.jersey.JacksonReflectMapWriter;
+import org.apache.solr.jersey.PermissionName;
+import org.apache.solr.jersey.SolrJerseyResponse;
+import org.apache.solr.request.SolrQueryRequest;
+import org.apache.solr.response.SolrQueryResponse;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.ws.rs.POST;
+import javax.ws.rs.Path;
+import javax.ws.rs.PathParam;
+import javax.ws.rs.Produces;
+
+import java.lang.invoke.MethodHandles;
+import java.net.URI;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+import static org.apache.solr.client.solrj.impl.BinaryResponseParser.BINARY_CONTENT_TYPE_V2;
+import static org.apache.solr.security.PermissionNameProvider.Name.CORE_EDIT_PERM;
+
+@Path("/cores/{coreName}/install")
+public class InstallCoreDataAPI extends CoreAdminAPIBase {
+
+    private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
+
+    public InstallCoreDataAPI(CoreContainer coreContainer, CoreAdminHandler.CoreAdminAsyncTracker coreAdminAsyncTracker, SolrQueryRequest req, SolrQueryResponse rsp) {
+        super(coreContainer, coreAdminAsyncTracker, req, rsp);
+    }
+
+    @POST
+    @Produces({"application/json", "application/xml", BINARY_CONTENT_TYPE_V2})
+    @PermissionName(CORE_EDIT_PERM)
+    public SolrJerseyResponse installCoreData(@PathParam("coreName") String coreName, InstallCoreDataRequestBody requestBody) throws Exception {
+        log.info("JEGERLOW: In install-core-data v2 API with coreName {}", coreName);

Review Comment:
   
   **[CRLF_INJECTION_LOGS](https://find-sec-bugs.github.io/bugs.htm#CRLF_INJECTION_LOGS):**  This use of org/slf4j/Logger.info(Ljava/lang/String;Ljava/lang/Object;)V might be used to include CRLF characters into log messages
   





[GitHub] [solr] dsmiley commented on a diff in pull request #1458: SOLR-16697: Add new API to install offline-built indices into specific shards

Posted by "dsmiley (via GitHub)" <gi...@apache.org>.
dsmiley commented on code in PR #1458:
URL: https://github.com/apache/solr/pull/1458#discussion_r1136476558


##########
solr/core/src/java/org/apache/solr/handler/admin/api/InstallCoreDataAPI.java:
##########
@@ -0,0 +1,136 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.handler.admin.api;
+
+import com.fasterxml.jackson.annotation.JsonProperty;
+import org.apache.solr.cloud.CloudDescriptor;
+import org.apache.solr.cloud.ZkController;
+import org.apache.solr.cloud.ZkShardTerms;
+import org.apache.solr.common.SolrException;
+import org.apache.solr.common.cloud.Slice;
+import org.apache.solr.common.params.CoreAdminParams;
+import org.apache.solr.core.CoreContainer;
+import org.apache.solr.core.SolrCore;
+import org.apache.solr.core.backup.repository.BackupRepository;
+import org.apache.solr.handler.RestoreCore;
+import org.apache.solr.handler.admin.CoreAdminHandler;
+import org.apache.solr.jersey.JacksonReflectMapWriter;
+import org.apache.solr.jersey.PermissionName;
+import org.apache.solr.jersey.SolrJerseyResponse;
+import org.apache.solr.request.SolrQueryRequest;
+import org.apache.solr.response.SolrQueryResponse;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.ws.rs.POST;
+import javax.ws.rs.Path;
+import javax.ws.rs.PathParam;
+import javax.ws.rs.Produces;
+
+import java.lang.invoke.MethodHandles;
+import java.net.URI;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+import static org.apache.solr.client.solrj.impl.BinaryResponseParser.BINARY_CONTENT_TYPE_V2;
+import static org.apache.solr.security.PermissionNameProvider.Name.CORE_EDIT_PERM;
+
+@Path("/cores/{coreName}/install")
+public class InstallCoreDataAPI extends CoreAdminAPIBase {
+
+    private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
+
+    public InstallCoreDataAPI(CoreContainer coreContainer, CoreAdminHandler.CoreAdminAsyncTracker coreAdminAsyncTracker, SolrQueryRequest req, SolrQueryResponse rsp) {
+        super(coreContainer, coreAdminAsyncTracker, req, rsp);
+    }
+
+    @POST
+    @Produces({"application/json", "application/xml", BINARY_CONTENT_TYPE_V2})
+    @PermissionName(CORE_EDIT_PERM)
+    public SolrJerseyResponse installCoreData(@PathParam("coreName") String coreName, InstallCoreDataRequestBody requestBody) throws Exception {
+        log.info("JEGERLOW: In install-core-data v2 API with coreName {}", coreName);
+        final SolrJerseyResponse response = instantiateJerseyResponse(SolrJerseyResponse.class);
+
+        // TODO Actual implementation (look at RESTORECORE for example)
+        final ZkController zkController = coreContainer.getZkController();
+        if (zkController == null) {
+            throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, "Only valid for SolrCloud");
+        }
+
+        try (BackupRepository repository = coreContainer.newBackupRepository(requestBody.repository); SolrCore core = coreContainer.getCore(coreName)) {
+            String location = repository.getBackupLocation(requestBody.location);
+            if (location == null) {
+                throw new SolrException(
+                        SolrException.ErrorCode.BAD_REQUEST,
+                        "'location' is not specified as a query"
+                                + " parameter or as a default repository property");
+            }
+
+            URI locationUri = repository.createDirectoryURI(location);
+            CloudDescriptor cd = core.getCoreDescriptor().getCloudDescriptor();
+            // this core must be the only replica in its shard otherwise
+            // we cannot guarantee consistency between replicas because when we add data (or restore
+            // index) to this replica
+            Slice slice =
+                    zkController
+                            .getClusterState()
+                            .getCollection(cd.getCollectionName())
+                            .getSlice(cd.getShardId());
+            if (slice.getReplicas().size() != 1 && !core.readOnly) {
+                throw new SolrException(
+                        SolrException.ErrorCode.SERVER_ERROR,
+                        "Failed to restore core="
+                                + core.getName()
+                                + ", the core must be the only replica in its shard or it must be read only");
+            }
+
+            // TODO RestoreCore.create expects a backup 'name' that it appends to the Uri via 'BackupRepository#resolve'...how does this handle null?
+            final RestoreCore restoreCore = RestoreCore.create(repository, core, locationUri, "");
+            boolean success = restoreCore.doRestore();
+            if (!success) {
+                throw new SolrException(
+                        SolrException.ErrorCode.SERVER_ERROR, "Failed to install data to core=" + core.getName());
+            }
+
+            final Set<String> nonLeaderCoreNames = slice.getReplicas()
+                    .stream()
+                    .filter(r -> !r.isLeader())
+                    .map(r -> r.getName())
+                    .collect(Collectors.toSet());
+            log.info("JEGERLOW Non leader core names are: {} and leader is {}", nonLeaderCoreNames, coreName);

Review Comment:
   IMO we should disable CRLF_INJECTION_LOGS from being reported.  I disagree with this log category altogether... if someone cares about this matter, it's superior to use structured logging (e.g. JSON) which is possible with configuration.  I can submit a PR for this.





[GitHub] [solr] gerlowskija commented on a diff in pull request #1458: SOLR-16697: Add new API to install offline-built indices into specific shards

Posted by "gerlowskija (via GitHub)" <gi...@apache.org>.
gerlowskija commented on code in PR #1458:
URL: https://github.com/apache/solr/pull/1458#discussion_r1144825350


##########
solr/core/src/java/org/apache/solr/handler/admin/api/InstallCoreDataAPI.java:
##########
@@ -0,0 +1,136 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.handler.admin.api;
+
+import com.fasterxml.jackson.annotation.JsonProperty;
+import org.apache.solr.cloud.CloudDescriptor;
+import org.apache.solr.cloud.ZkController;
+import org.apache.solr.cloud.ZkShardTerms;
+import org.apache.solr.common.SolrException;
+import org.apache.solr.common.cloud.Slice;
+import org.apache.solr.common.params.CoreAdminParams;
+import org.apache.solr.core.CoreContainer;
+import org.apache.solr.core.SolrCore;
+import org.apache.solr.core.backup.repository.BackupRepository;
+import org.apache.solr.handler.RestoreCore;
+import org.apache.solr.handler.admin.CoreAdminHandler;
+import org.apache.solr.jersey.JacksonReflectMapWriter;
+import org.apache.solr.jersey.PermissionName;
+import org.apache.solr.jersey.SolrJerseyResponse;
+import org.apache.solr.request.SolrQueryRequest;
+import org.apache.solr.response.SolrQueryResponse;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.ws.rs.POST;
+import javax.ws.rs.Path;
+import javax.ws.rs.PathParam;
+import javax.ws.rs.Produces;
+
+import java.lang.invoke.MethodHandles;
+import java.net.URI;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+import static org.apache.solr.client.solrj.impl.BinaryResponseParser.BINARY_CONTENT_TYPE_V2;
+import static org.apache.solr.security.PermissionNameProvider.Name.CORE_EDIT_PERM;
+
+@Path("/cores/{coreName}/install")
+public class InstallCoreDataAPI extends CoreAdminAPIBase {
+
+    private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
+
+    public InstallCoreDataAPI(CoreContainer coreContainer, CoreAdminHandler.CoreAdminAsyncTracker coreAdminAsyncTracker, SolrQueryRequest req, SolrQueryResponse rsp) {
+        super(coreContainer, coreAdminAsyncTracker, req, rsp);
+    }
+
+    @POST
+    @Produces({"application/json", "application/xml", BINARY_CONTENT_TYPE_V2})
+    @PermissionName(CORE_EDIT_PERM)
+    public SolrJerseyResponse installCoreData(@PathParam("coreName") String coreName, InstallCoreDataRequestBody requestBody) throws Exception {
+        log.info("JEGERLOW: In install-core-data v2 API with coreName {}", coreName);
+        final SolrJerseyResponse response = instantiateJerseyResponse(SolrJerseyResponse.class);
+
+        // TODO Actual implementation (look at RESTORECORE for example)
+        final ZkController zkController = coreContainer.getZkController();
+        if (zkController == null) {
+            throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, "Only valid for SolrCloud");
+        }
+
+        try (BackupRepository repository = coreContainer.newBackupRepository(requestBody.repository); SolrCore core = coreContainer.getCore(coreName)) {
+            String location = repository.getBackupLocation(requestBody.location);
+            if (location == null) {
+                throw new SolrException(
+                        SolrException.ErrorCode.BAD_REQUEST,
+                        "'location' is not specified as a query"
+                                + " parameter or as a default repository property");
+            }
+
+            URI locationUri = repository.createDirectoryURI(location);
+            CloudDescriptor cd = core.getCoreDescriptor().getCloudDescriptor();
+            // this core must be the only replica in its shard otherwise
+            // we cannot guarantee consistency between replicas because when we add data (or restore
+            // index) to this replica
+            Slice slice =
+                    zkController
+                            .getClusterState()
+                            .getCollection(cd.getCollectionName())
+                            .getSlice(cd.getShardId());
+            if (slice.getReplicas().size() != 1 && !core.readOnly) {
+                throw new SolrException(
+                        SolrException.ErrorCode.SERVER_ERROR,
+                        "Failed to restore core="
+                                + core.getName()
+                                + ", the core must be the only replica in its shard or it must be read only");
+            }
+
+            // TODO RestoreCore.create expects a backup 'name' that it appends to the Uri via 'BackupRepository#resolve'...how does this handle null?
+            final RestoreCore restoreCore = RestoreCore.create(repository, core, locationUri, "");
+            boolean success = restoreCore.doRestore();
+            if (!success) {
+                throw new SolrException(
+                        SolrException.ErrorCode.SERVER_ERROR, "Failed to install data to core=" + core.getName());
+            }
+
+            final Set<String> nonLeaderCoreNames = slice.getReplicas()
+                    .stream()
+                    .filter(r -> !r.isLeader())
+                    .map(r -> r.getName())
+                    .collect(Collectors.toSet());
+            log.info("JEGERLOW Non leader core names are: {} and leader is {}", nonLeaderCoreNames, coreName);

Review Comment:
   Disabling makes sense to me 👍 





[GitHub] [solr] gerlowskija commented on pull request #1458: SOLR-16697: Add new API to install offline-built indices into specific shards

Posted by "gerlowskija (via GitHub)" <gi...@apache.org>.
gerlowskija commented on PR #1458:
URL: https://github.com/apache/solr/pull/1458#issuecomment-1468314277

   Definitely not ready to commit - needs some input validation, tests, docs, etc.  But should be enough to showcase the general approach for anyone interested.




[GitHub] [solr] gerlowskija merged pull request #1458: SOLR-16697: Add new API to install offline-built indices into specific shards

Posted by "gerlowskija (via GitHub)" <gi...@apache.org>.
gerlowskija merged PR #1458:
URL: https://github.com/apache/solr/pull/1458




[GitHub] [solr] sonatype-lift[bot] commented on a diff in pull request #1458: SOLR-16697: Add new API to install offline-built indices into specific shards

Posted by "sonatype-lift[bot] (via GitHub)" <gi...@apache.org>.
sonatype-lift[bot] commented on code in PR #1458:
URL: https://github.com/apache/solr/pull/1458#discussion_r1147706168


##########
solr/core/src/java/org/apache/solr/handler/admin/api/InstallCoreDataAPI.java:
##########
@@ -0,0 +1,135 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.handler.admin.api;
+
+import static org.apache.solr.client.solrj.impl.BinaryResponseParser.BINARY_CONTENT_TYPE_V2;
+import static org.apache.solr.security.PermissionNameProvider.Name.CORE_EDIT_PERM;
+
+import com.fasterxml.jackson.annotation.JsonProperty;
+import java.lang.invoke.MethodHandles;
+import java.net.URI;
+import javax.ws.rs.POST;
+import javax.ws.rs.Path;
+import javax.ws.rs.PathParam;
+import javax.ws.rs.Produces;
+import org.apache.solr.cloud.CloudDescriptor;
+import org.apache.solr.cloud.ZkController;
+import org.apache.solr.common.SolrException;
+import org.apache.solr.core.CoreContainer;
+import org.apache.solr.core.SolrCore;
+import org.apache.solr.core.backup.repository.BackupRepository;
+import org.apache.solr.handler.RestoreCore;
+import org.apache.solr.handler.admin.CoreAdminHandler;
+import org.apache.solr.jersey.JacksonReflectMapWriter;
+import org.apache.solr.jersey.PermissionName;
+import org.apache.solr.jersey.SolrJerseyResponse;
+import org.apache.solr.request.SolrQueryRequest;
+import org.apache.solr.response.SolrQueryResponse;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * v2 implementation of the "Install Core Data" Core-Admin API
+ *
+ * <p>This is an internal API intended for use only by the Collection Admin "Install Shard Data"
+ * API.
+ */
+@Path("/cores/{coreName}/install")
+public class InstallCoreDataAPI extends CoreAdminAPIBase {
+
+  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
+
+  public InstallCoreDataAPI(
+      CoreContainer coreContainer,
+      CoreAdminHandler.CoreAdminAsyncTracker coreAdminAsyncTracker,
+      SolrQueryRequest req,
+      SolrQueryResponse rsp) {
+    super(coreContainer, coreAdminAsyncTracker, req, rsp);
+  }
+
+  @POST
+  @Produces({"application/json", "application/xml", BINARY_CONTENT_TYPE_V2})
+  @PermissionName(CORE_EDIT_PERM)
+  public SolrJerseyResponse installCoreData(
+      @PathParam("coreName") String coreName, InstallCoreDataRequestBody requestBody)
+      throws Exception {
+    final SolrJerseyResponse response = instantiateJerseyResponse(SolrJerseyResponse.class);
+
+    if (requestBody == null) {
+      throw new SolrException(
+          SolrException.ErrorCode.BAD_REQUEST, "Required request body is missing");
+    }
+
+    final ZkController zkController = coreContainer.getZkController();
+    if (zkController == null) {
+      throw new SolrException(
+          SolrException.ErrorCode.BAD_REQUEST,
+          "'Install Core Data' API only supported in SolrCloud clusters");
+    }
+
+    try (BackupRepository repository = coreContainer.newBackupRepository(requestBody.repository);
+        SolrCore core = coreContainer.getCore(coreName)) {
+      String location = repository.getBackupLocation(requestBody.location);
+      if (location == null) {
+        throw new SolrException(
+            SolrException.ErrorCode.BAD_REQUEST,
+            "'location' is not specified as a" + " parameter or as a default repository property");
+      }
+
+      final URI locationUri = repository.createDirectoryURI(location);
+      final CloudDescriptor cd = core.getCoreDescriptor().getCloudDescriptor();

Review Comment:
   
   **NULL_DEREFERENCE:**  object `core` last assigned on line 86 could be null and is dereferenced at line 95.
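   
   A minimal sketch (assuming a hypothetical `getCoreOrThrow` helper, not code from this PR) of the null guard this finding suggests: fail fast instead of dereferencing a possibly-null `SolrCore` later in the try-with-resources block.
   
   ```java
   import org.apache.solr.common.SolrException;
   import org.apache.solr.core.CoreContainer;
   import org.apache.solr.core.SolrCore;
   
   class CoreLookupSketch {
     // CoreContainer#getCore returns null when no such core exists; turn that into a 400.
     static SolrCore getCoreOrThrow(CoreContainer coreContainer, String coreName) {
       final SolrCore core = coreContainer.getCore(coreName);
       if (core == null) {
         throw new SolrException(
             SolrException.ErrorCode.BAD_REQUEST, "Unable to locate core: " + coreName);
       }
       return core;
     }
   }
   ```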
   





[GitHub] [solr] sonatype-lift[bot] commented on a diff in pull request #1458: SOLR-16697: Add new API to install offline-built indices into specific shards

Posted by "sonatype-lift[bot] (via GitHub)" <gi...@apache.org>.
sonatype-lift[bot] commented on code in PR #1458:
URL: https://github.com/apache/solr/pull/1458#discussion_r1146943436


##########
solr/test-framework/src/java/org/apache/solr/cloud/api/collections/AbstractInstallShardTest.java:
##########
@@ -0,0 +1,358 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.cloud.api.collections;
+
+import java.lang.invoke.MethodHandles;
+import java.net.URI;
+import java.nio.file.Path;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+import java.util.Map;
+import java.util.Random;
+import java.util.UUID;
+import java.util.concurrent.Callable;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Future;
+import java.util.concurrent.TimeUnit;
+import org.apache.lucene.store.Directory;
+import org.apache.solr.client.solrj.SolrClient;
+import org.apache.solr.client.solrj.SolrQuery;
+import org.apache.solr.client.solrj.impl.BaseHttpSolrClient;
+import org.apache.solr.client.solrj.impl.CloudSolrClient;
+import org.apache.solr.client.solrj.request.CollectionAdminRequest;
+import org.apache.solr.cloud.SolrCloudTestCase;
+import org.apache.solr.common.SolrInputDocument;
+import org.apache.solr.common.util.ExecutorUtil;
+import org.apache.solr.common.util.SolrNamedThreadFactory;
+import org.apache.solr.core.CoreContainer;
+import org.apache.solr.core.CoreDescriptor;
+import org.apache.solr.core.DirectoryFactory;
+import org.apache.solr.core.SolrCore;
+import org.apache.solr.core.backup.repository.BackupRepository;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Base class for testing the "Install Shard API" with various backup repositories.
+ *
+ * <p>Subclasses are expected to bootstrap a Solr cluster with a single configured backup
+ * repository. This base-class will populate that backup repository with all data necessary for these
+ * tests.
+ *
+ * @see org.apache.solr.handler.admin.api.InstallShardDataAPI
+ */
+public abstract class AbstractInstallShardTest extends SolrCloudTestCase {
+
+  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
+
+  protected static final String INSTALL_DATA_BASE_LOCATION = "/";
+  protected static final String BACKUP_REPO_NAME = "trackingBackupRepository";
+
+  private static long docsSeed; // see indexDocs()
+
+  @BeforeClass
+  public static void seedDocGenerator() {
+    docsSeed = random().nextLong();
+    System.setProperty("solr.directoryFactory", "solr.StandardDirectoryFactory");
+  }
+
+  // Populated by 'bootstrapBackupRepositoryData'
+  private static int singleShardNumDocs = -1;
+  private static int replicasPerShard = -1;
+  private static int multiShardNumDocs = -1;
+  private static URI singleShard1Uri = null;
+  private static URI[] multiShardUris = null;
+
+  public static void bootstrapBackupRepositoryData(String baseRepositoryLocation) throws Exception {
+    final int numShards = random().nextInt(3) + 2;
+    multiShardUris = new URI[numShards];
+    replicasPerShard = random().nextInt(3) + 1;
+    // replicasPerShard = 1;
+    CloudSolrClient solrClient = cluster.getSolrClient();
+
+    // Create collections and index docs
+    final String singleShardCollName = createAndAwaitEmptyCollection(1, replicasPerShard);
+    singleShardNumDocs = indexDocs(singleShardCollName, true);
+    assertCollectionHasNumDocs(singleShardCollName, singleShardNumDocs);
+    final String multiShardCollName = createAndAwaitEmptyCollection(numShards, replicasPerShard);
+    multiShardNumDocs = indexDocs(multiShardCollName, true);
+    assertCollectionHasNumDocs(multiShardCollName, multiShardNumDocs);
+
+    // Upload shard data to BackupRepository - single shard collection
+    singleShard1Uri =
+        createBackupRepoDirectoryForShardData(
+            baseRepositoryLocation, singleShardCollName, "shard1");
+    copyShardDataToBackupRepository(singleShardCollName, "shard1", singleShard1Uri);
+    // Upload shard data to BackupRepository - multi-shard collection
+    for (int i = 0; i < multiShardUris.length; i++) {
+      final String shardName = "shard" + (i + 1);
+      multiShardUris[i] =
+          createBackupRepoDirectoryForShardData(
+              baseRepositoryLocation, multiShardCollName, shardName);
+      copyShardDataToBackupRepository(multiShardCollName, shardName, multiShardUris[i]);
+    }
+
+    // Nuke collections now that we've populated the BackupRepository
+    CollectionAdminRequest.deleteCollection(singleShardCollName).process(solrClient);
+    CollectionAdminRequest.deleteCollection(multiShardCollName).process(solrClient);
+  }
+
+  @Test
+  public void testInstallFailsIfCollectionIsNotInReadOnlyMode() throws Exception {
+    final String collectionName = createAndAwaitEmptyCollection(1, replicasPerShard);
+
+    final String singleShardLocation = singleShard1Uri.toString();
+    final BaseHttpSolrClient.RemoteSolrException rse =
+        expectThrows(
+            BaseHttpSolrClient.RemoteSolrException.class,
+            () -> {
+              CollectionAdminRequest.installDataToShard(
+                      collectionName, "shard1", singleShardLocation, BACKUP_REPO_NAME)
+                  .process(cluster.getSolrClient());
+            });
+    assertEquals(400, rse.code());
+    assertTrue(rse.getMessage().contains("Collection must be in readOnly mode"));
+
+    // Shard-install has failed so collection should still be empty.
+    assertCollectionHasNumDocs(collectionName, 0);
+  }
+
+  @Test
+  public void testInstallToSingleShardCollection() throws Exception {
+    final String collectionName = createAndAwaitEmptyCollection(1, replicasPerShard);
+    enableReadOnly(collectionName);
+
+    final String singleShardLocation = singleShard1Uri.toString();
+    CollectionAdminRequest.installDataToShard(
+            collectionName, "shard1", singleShardLocation, BACKUP_REPO_NAME)
+        .process(cluster.getSolrClient());
+
+    // Shard-install succeeded, so the collection should now contain the installed docs.
+    assertCollectionHasNumDocs(collectionName, singleShardNumDocs);
+  }
+
+  @Test
+  public void testSerialInstallToMultiShardCollection() throws Exception {
+    final String collectionName =
+        createAndAwaitEmptyCollection(multiShardUris.length, replicasPerShard);
+    enableReadOnly(collectionName);
+
+    for (int i = 1; i <= multiShardUris.length; i++) {
+      CollectionAdminRequest.installDataToShard(
+              collectionName, "shard" + i, multiShardUris[i - 1].toString(), BACKUP_REPO_NAME)
+          .process(cluster.getSolrClient());
+    }
+
+    assertCollectionHasNumDocs(collectionName, multiShardNumDocs);
+  }
+
+  @Test
+  public void testParallelInstallToMultiShardCollection() throws Exception {
+    final String collectionName =
+        createAndAwaitEmptyCollection(multiShardUris.length, replicasPerShard);
+    enableReadOnly(collectionName);
+
+    runParallelShardInstalls(collectionName, multiShardUris);
+
+    assertCollectionHasNumDocs(collectionName, multiShardNumDocs);
+  }
+
+  /**
+   * Builds a string representation of a valid solr.xml configuration, with the provided
+   * backup-repository configuration inserted
+   *
+   * @param backupRepositoryText a string representing the 'backup' XML tag to put in the
+   *     constructed solr.xml
+   */
+  public static String defaultSolrXmlTextWithBackupRepository(String backupRepositoryText) {
+    return "<solr>\n"
+        + "\n"
+        + "  <str name=\"shareSchema\">${shareSchema:false}</str>\n"
+        + "  <str name=\"configSetBaseDir\">${configSetBaseDir:configsets}</str>\n"
+        + "  <str name=\"coreRootDirectory\">${coreRootDirectory:.}</str>\n"
+        + "\n"
+        + "  <shardHandlerFactory name=\"shardHandlerFactory\" class=\"HttpShardHandlerFactory\">\n"
+        + "    <str name=\"urlScheme\">${urlScheme:}</str>\n"
+        + "    <int name=\"socketTimeout\">${socketTimeout:90000}</int>\n"
+        + "    <int name=\"connTimeout\">${connTimeout:15000}</int>\n"
+        + "  </shardHandlerFactory>\n"
+        + "\n"
+        + "  <solrcloud>\n"
+        + "    <str name=\"host\">127.0.0.1</str>\n"
+        + "    <int name=\"hostPort\">${hostPort:8983}</int>\n"
+        + "    <str name=\"hostContext\">${hostContext:solr}</str>\n"
+        + "    <int name=\"zkClientTimeout\">${solr.zkclienttimeout:30000}</int>\n"
+        + "    <bool name=\"genericCoreNodeNames\">${genericCoreNodeNames:true}</bool>\n"
+        + "    <int name=\"leaderVoteWait\">10000</int>\n"
+        + "    <int name=\"distribUpdateConnTimeout\">${distribUpdateConnTimeout:45000}</int>\n"
+        + "    <int name=\"distribUpdateSoTimeout\">${distribUpdateSoTimeout:340000}</int>\n"
+        + "  </solrcloud>\n"
+        + "  \n"
+        + backupRepositoryText
+        + "  \n"
+        + "</solr>\n";
+  }
+
+  private static void assertCollectionHasNumDocs(String collection, int expectedNumDocs)
+      throws Exception {
+    final SolrClient solrClient = cluster.getSolrClient();
+    assertEquals(
+        expectedNumDocs,
+        solrClient.query(collection, new SolrQuery("*:*")).getResults().getNumFound());
+  }
+
+  private static void copyShardDataToBackupRepository(
+      String collectionName, String shardName, URI destinationUri) throws Exception {
+    final CoreContainer cc = cluster.getJettySolrRunner(0).getCoreContainer();
+    final Collection<String> coreNames = cc.getAllCoreNames();
+    final String coreName =
+        coreNames.stream()
+            .filter(name -> name.contains(collectionName) && name.contains(shardName))
+            .findFirst()
+            .get();
+    final CoreDescriptor cd = cc.getCoreDescriptor(coreName);
+    final Path coreInstanceDir = cd.getInstanceDir();
+    assert coreInstanceDir.toFile().exists();
+    assert coreInstanceDir.toFile().isDirectory();
+
+    final Path coreIndexDir = coreInstanceDir.resolve("data").resolve("index");
+    assert coreIndexDir.toFile().exists();
+    assert coreIndexDir.toFile().isDirectory();
+
+    try (final BackupRepository backupRepository = cc.newBackupRepository(BACKUP_REPO_NAME);
+        final SolrCore core = cc.getCore(coreName)) {
+      final Directory dir =
+          core.getDirectoryFactory()
+              .get(
+                  coreIndexDir.toString(),
+                  DirectoryFactory.DirContext.DEFAULT,
+                  core.getSolrConfig().indexConfig.lockType);
+      try {
+        for (final String dirContent : dir.listAll()) {
+          if (dirContent.contains("write.lock")) continue;
+          backupRepository.copyFileFrom(dir, dirContent, destinationUri);
+        }
+      } finally {
+        core.getDirectoryFactory().release(dir);
+      }
+    }
+  }
+
+  private static URI createBackupRepoDirectoryForShardData(
+      String baseLocation, String collectionName, String shardName) throws Exception {
+    final CoreContainer cc = cluster.getJettySolrRunner(0).getCoreContainer();
+    try (final BackupRepository backupRepository = cc.newBackupRepository(BACKUP_REPO_NAME)) {
+      final URI baseLocationUri = backupRepository.createURI(baseLocation);
+      final URI collectionLocation = backupRepository.resolve(baseLocationUri, collectionName);
+      backupRepository.createDirectory(collectionLocation);
+      final URI shardLocation = backupRepository.resolve(collectionLocation, shardName);
+      backupRepository.createDirectory(shardLocation);
+      return shardLocation;
+    }
+  }
+
+  private static int indexDocs(String collectionName, boolean useUUID) throws Exception {
+    Random random =

Review Comment:
   
   **[PREDICTABLE_RANDOM](https://find-sec-bugs.github.io/bugs.htm#PREDICTABLE_RANDOM):**  This random generator (java.util.Random) is predictable
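   
   A minimal sketch of the substitution this rule generally suggests (`java.security.SecureRandom` for security-sensitive values); for reproducible test data, a seeded `java.util.Random` such as the `docsSeed`-based one here is usually intentional, so this finding is often ignored for tests.
   
   ```java
   import java.security.SecureRandom;
   import java.util.Random;
   
   class RandomChoiceSketch {
     // Security-sensitive values (tokens, salts, ...) should come from SecureRandom.
     static final Random SECURE = new SecureRandom();
   
     // Reproducible test-data generation deliberately uses a seeded Random instead.
     static Random reproducible(long seed) {
       return new Random(seed);
     }
   }
   ```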
   



##########
solr/test-framework/src/java/org/apache/solr/cloud/api/collections/AbstractInstallShardTest.java:
##########
@@ -0,0 +1,358 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.cloud.api.collections;
+
+import java.lang.invoke.MethodHandles;
+import java.net.URI;
+import java.nio.file.Path;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+import java.util.Map;
+import java.util.Random;
+import java.util.UUID;
+import java.util.concurrent.Callable;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Future;
+import java.util.concurrent.TimeUnit;
+import org.apache.lucene.store.Directory;
+import org.apache.solr.client.solrj.SolrClient;
+import org.apache.solr.client.solrj.SolrQuery;
+import org.apache.solr.client.solrj.impl.BaseHttpSolrClient;
+import org.apache.solr.client.solrj.impl.CloudSolrClient;
+import org.apache.solr.client.solrj.request.CollectionAdminRequest;
+import org.apache.solr.cloud.SolrCloudTestCase;
+import org.apache.solr.common.SolrInputDocument;
+import org.apache.solr.common.util.ExecutorUtil;
+import org.apache.solr.common.util.SolrNamedThreadFactory;
+import org.apache.solr.core.CoreContainer;
+import org.apache.solr.core.CoreDescriptor;
+import org.apache.solr.core.DirectoryFactory;
+import org.apache.solr.core.SolrCore;
+import org.apache.solr.core.backup.repository.BackupRepository;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Base class for testing the "Install Shard API" with various backup repositories.
+ *
+ * <p>Subclasses are expected to bootstrap a Solr cluster with a single configured backup
+ * repository. This base class will populate that backup repository with all data necessary
+ * for these tests.
+ *
+ * @see org.apache.solr.handler.admin.api.InstallShardDataAPI
+ */
+public abstract class AbstractInstallShardTest extends SolrCloudTestCase {
+
+  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
+
+  protected static final String INSTALL_DATA_BASE_LOCATION = "/";
+  protected static final String BACKUP_REPO_NAME = "trackingBackupRepository";
+
+  private static long docsSeed; // see indexDocs()
+
+  @BeforeClass
+  public static void seedDocGenerator() {
+    docsSeed = random().nextLong();
+    System.setProperty("solr.directoryFactory", "solr.StandardDirectoryFactory");
+  }
+
+  // Populated by 'bootstrapBackupRepositoryData'
+  private static int singleShardNumDocs = -1;
+  private static int replicasPerShard = -1;
+  private static int multiShardNumDocs = -1;
+  private static URI singleShard1Uri = null;
+  private static URI[] multiShardUris = null;
+
+  public static void bootstrapBackupRepositoryData(String baseRepositoryLocation) throws Exception {
+    final int numShards = random().nextInt(3) + 2;
+    multiShardUris = new URI[numShards];
+    replicasPerShard = random().nextInt(3) + 1;
+    // replicasPerShard = 1;
+    CloudSolrClient solrClient = cluster.getSolrClient();
+
+    // Create collections and index docs
+    final String singleShardCollName = createAndAwaitEmptyCollection(1, replicasPerShard);
+    singleShardNumDocs = indexDocs(singleShardCollName, true);
+    assertCollectionHasNumDocs(singleShardCollName, singleShardNumDocs);
+    final String multiShardCollName = createAndAwaitEmptyCollection(numShards, replicasPerShard);
+    multiShardNumDocs = indexDocs(multiShardCollName, true);
+    assertCollectionHasNumDocs(multiShardCollName, multiShardNumDocs);
+
+    // Upload shard data to BackupRepository - single shard collection
+    singleShard1Uri =
+        createBackupRepoDirectoryForShardData(
+            baseRepositoryLocation, singleShardCollName, "shard1");
+    copyShardDataToBackupRepository(singleShardCollName, "shard1", singleShard1Uri);
+    // Upload shard data to BackupRepository - multi-shard collection
+    for (int i = 0; i < multiShardUris.length; i++) {
+      final String shardName = "shard" + (i + 1);
+      multiShardUris[i] =
+          createBackupRepoDirectoryForShardData(
+              baseRepositoryLocation, multiShardCollName, shardName);
+      copyShardDataToBackupRepository(multiShardCollName, shardName, multiShardUris[i]);
+    }
+
+    // Nuke collections now that we've populated the BackupRepository
+    CollectionAdminRequest.deleteCollection(singleShardCollName).process(solrClient);
+    CollectionAdminRequest.deleteCollection(multiShardCollName).process(solrClient);
+  }
+
+  @Test
+  public void testInstallFailsIfCollectionIsNotInReadOnlyMode() throws Exception {
+    final String collectionName = createAndAwaitEmptyCollection(1, replicasPerShard);
+
+    final String singleShardLocation = singleShard1Uri.toString();
+    final BaseHttpSolrClient.RemoteSolrException rse =
+        expectThrows(
+            BaseHttpSolrClient.RemoteSolrException.class,
+            () -> {
+              CollectionAdminRequest.installDataToShard(
+                      collectionName, "shard1", singleShardLocation, BACKUP_REPO_NAME)
+                  .process(cluster.getSolrClient());
+            });
+    assertEquals(400, rse.code());
+    assertTrue(rse.getMessage().contains("Collection must be in readOnly mode"));
+
+    // Shard-install has failed so collection should still be empty.
+    assertCollectionHasNumDocs(collectionName, 0);
+  }
+
+  @Test
+  public void testInstallToSingleShardCollection() throws Exception {
+    final String collectionName = createAndAwaitEmptyCollection(1, replicasPerShard);
+    enableReadOnly(collectionName);
+
+    final String singleShardLocation = singleShard1Uri.toString();
+    CollectionAdminRequest.installDataToShard(
+            collectionName, "shard1", singleShardLocation, BACKUP_REPO_NAME)
+        .process(cluster.getSolrClient());
+
+    // Shard-install succeeded, so the collection should now contain the installed docs.
+    assertCollectionHasNumDocs(collectionName, singleShardNumDocs);
+  }
+
+  @Test
+  public void testSerialInstallToMultiShardCollection() throws Exception {
+    final String collectionName =
+        createAndAwaitEmptyCollection(multiShardUris.length, replicasPerShard);
+    enableReadOnly(collectionName);
+
+    for (int i = 1; i <= multiShardUris.length; i++) {
+      CollectionAdminRequest.installDataToShard(
+              collectionName, "shard" + i, multiShardUris[i - 1].toString(), BACKUP_REPO_NAME)
+          .process(cluster.getSolrClient());
+    }
+
+    assertCollectionHasNumDocs(collectionName, multiShardNumDocs);
+  }
+
+  @Test
+  public void testParallelInstallToMultiShardCollection() throws Exception {
+    final String collectionName =
+        createAndAwaitEmptyCollection(multiShardUris.length, replicasPerShard);
+    enableReadOnly(collectionName);
+
+    runParallelShardInstalls(collectionName, multiShardUris);
+
+    assertCollectionHasNumDocs(collectionName, multiShardNumDocs);
+  }
+
+  /**
+   * Builds a string representation of a valid solr.xml configuration, with the provided
+   * backup-repository configuration inserted
+   *
+   * @param backupRepositoryText a string representing the 'backup' XML tag to put in the
+   *     constructed solr.xml
+   */
+  public static String defaultSolrXmlTextWithBackupRepository(String backupRepositoryText) {
+    return "<solr>\n"
+        + "\n"
+        + "  <str name=\"shareSchema\">${shareSchema:false}</str>\n"
+        + "  <str name=\"configSetBaseDir\">${configSetBaseDir:configsets}</str>\n"
+        + "  <str name=\"coreRootDirectory\">${coreRootDirectory:.}</str>\n"
+        + "\n"
+        + "  <shardHandlerFactory name=\"shardHandlerFactory\" class=\"HttpShardHandlerFactory\">\n"
+        + "    <str name=\"urlScheme\">${urlScheme:}</str>\n"
+        + "    <int name=\"socketTimeout\">${socketTimeout:90000}</int>\n"
+        + "    <int name=\"connTimeout\">${connTimeout:15000}</int>\n"
+        + "  </shardHandlerFactory>\n"
+        + "\n"
+        + "  <solrcloud>\n"
+        + "    <str name=\"host\">127.0.0.1</str>\n"
+        + "    <int name=\"hostPort\">${hostPort:8983}</int>\n"
+        + "    <str name=\"hostContext\">${hostContext:solr}</str>\n"
+        + "    <int name=\"zkClientTimeout\">${solr.zkclienttimeout:30000}</int>\n"
+        + "    <bool name=\"genericCoreNodeNames\">${genericCoreNodeNames:true}</bool>\n"
+        + "    <int name=\"leaderVoteWait\">10000</int>\n"
+        + "    <int name=\"distribUpdateConnTimeout\">${distribUpdateConnTimeout:45000}</int>\n"
+        + "    <int name=\"distribUpdateSoTimeout\">${distribUpdateSoTimeout:340000}</int>\n"
+        + "  </solrcloud>\n"
+        + "  \n"
+        + backupRepositoryText
+        + "  \n"
+        + "</solr>\n";
+  }
+
+  private static void assertCollectionHasNumDocs(String collection, int expectedNumDocs)
+      throws Exception {
+    final SolrClient solrClient = cluster.getSolrClient();
+    assertEquals(
+        expectedNumDocs,
+        solrClient.query(collection, new SolrQuery("*:*")).getResults().getNumFound());
+  }
+
+  private static void copyShardDataToBackupRepository(
+      String collectionName, String shardName, URI destinationUri) throws Exception {
+    final CoreContainer cc = cluster.getJettySolrRunner(0).getCoreContainer();
+    final Collection<String> coreNames = cc.getAllCoreNames();
+    final String coreName =
+        coreNames.stream()
+            .filter(name -> name.contains(collectionName) && name.contains(shardName))
+            .findFirst()
+            .get();
+    final CoreDescriptor cd = cc.getCoreDescriptor(coreName);
+    final Path coreInstanceDir = cd.getInstanceDir();
+    assert coreInstanceDir.toFile().exists();
+    assert coreInstanceDir.toFile().isDirectory();
+
+    final Path coreIndexDir = coreInstanceDir.resolve("data").resolve("index");
+    assert coreIndexDir.toFile().exists();
+    assert coreIndexDir.toFile().isDirectory();
+
+    try (final BackupRepository backupRepository = cc.newBackupRepository(BACKUP_REPO_NAME);
+        final SolrCore core = cc.getCore(coreName)) {
+      final Directory dir =
+          core.getDirectoryFactory()
+              .get(
+                  coreIndexDir.toString(),
+                  DirectoryFactory.DirContext.DEFAULT,
+                  core.getSolrConfig().indexConfig.lockType);
+      try {
+        for (final String dirContent : dir.listAll()) {
+          if (dirContent.contains("write.lock")) continue;
+          backupRepository.copyFileFrom(dir, dirContent, destinationUri);
+        }
+      } finally {
+        core.getDirectoryFactory().release(dir);
+      }
+    }
+  }
+
+  private static URI createBackupRepoDirectoryForShardData(
+      String baseLocation, String collectionName, String shardName) throws Exception {
+    final CoreContainer cc = cluster.getJettySolrRunner(0).getCoreContainer();
+    try (final BackupRepository backupRepository = cc.newBackupRepository(BACKUP_REPO_NAME)) {

Review Comment:
   *15% of developers fix this issue*
   
   <b>NULL_DEREFERENCE:</b> object `cc`, last assigned on line 262, could be null and is dereferenced at line 263.
   
   ❗❗ <b>2 similar findings have been found in this PR</b>
   
   <details><summary>🔎 Expand here to view all instances of this finding</summary><br/>
     
     
   <div align="center">
   
   
   | **File Path** | **Line Number** |
   | ------------- | ------------- |
   | solr/core/src/java/org/apache/solr/handler/admin/api/InstallShardDataAPI.java | [100](https://github.com/apache/solr/blob/4fd67205b96c5c4b290b14c15c691b17fbff6495/solr/core/src/java/org/apache/solr/handler/admin/api/InstallShardDataAPI.java#L100) |
   | solr/test-framework/src/java/org/apache/solr/cloud/api/collections/AbstractInstallShardTest.java | [226](https://github.com/apache/solr/blob/4fd67205b96c5c4b290b14c15c691b17fbff6495/solr/test-framework/src/java/org/apache/solr/cloud/api/collections/AbstractInstallShardTest.java#L226) |
   <p><a href="https://lift.sonatype.com/results/github.com/apache/solr/01GW86YY8J27H55NP8ZPK636RD?t=Infer|NULL_DEREFERENCE" target="_blank">Visit the Lift Web Console</a> to find more details in your report.</p></div></details>
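   
   For context, here is a minimal sketch of one way to address this finding, assuming the `getCoreContainer()` lookup really can return null at this point rather than this being an Infer false positive. The `requireCoreContainer()` helper below is illustrative and not part of the PR; `cluster` is the static `MiniSolrCloudCluster` field inherited from `SolrCloudTestCase`.
   
   ```java
   // Hypothetical helper for AbstractInstallShardTest: fail fast with a clear message
   // instead of risking a NullPointerException if the CoreContainer lookup returns null.
   private static CoreContainer requireCoreContainer() {
     return java.util.Objects.requireNonNull(
         cluster.getJettySolrRunner(0).getCoreContainer(),
         "CoreContainer not available on the first Jetty node");
   }
   ```
   
   Call sites such as `copyShardDataToBackupRepository` and `createBackupRepoDirectoryForShardData` could then use `requireCoreContainer()` instead of calling `getCoreContainer()` directly.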
   
   
   
   ---
   
   <details><summary>ℹī¸ Expand to see all <b>@sonatype-lift</b> commands</summary>
   
   You can reply with the following commands. For example, reply with ***@sonatype-lift ignoreall*** to leave out all findings.
   | **Command** | **Usage** |
   | ------------- | ------------- |
   | `@sonatype-lift ignore` | Leave out the above finding from this PR |
   | `@sonatype-lift ignoreall` | Leave out all the existing findings from this PR |
   | `@sonatype-lift exclude <file\|issue\|path\|tool>` | Exclude specified `file\|issue\|path\|tool` from Lift findings by updating your config.toml file |
   
   **Note:** When talking to LiftBot, you need to **refresh** the page to see its response.
   <sub>[Click here](https://github.com/apps/sonatype-lift/installations/new) to add LiftBot to another repo.</sub></details>
   
   
   



##########
solr/test-framework/src/java/org/apache/solr/cloud/api/collections/AbstractInstallShardTest.java:
##########
@@ -0,0 +1,358 @@
+  private static URI createBackupRepoDirectoryForShardData(
+      String baseLocation, String collectionName, String shardName) throws Exception {
+    final CoreContainer cc = cluster.getJettySolrRunner(0).getCoreContainer();
+    try (final BackupRepository backupRepository = cc.newBackupRepository(BACKUP_REPO_NAME)) {

Review Comment:
   *5% of developers fix this issue*
   
   <b>NULLPTR_DEREFERENCE:</b> `cc` could be null (last assigned on line 262) and is dereferenced.
   
   ❗❗ <b>2 similar findings have been found in this PR</b>
   
   <details><summary>🔎 Expand here to view all instances of this finding</summary><br/>
     
     
   <div align="center">
   
   
   | **File Path** | **Line Number** |
   | ------------- | ------------- |
   | solr/test-framework/src/java/org/apache/solr/cloud/api/collections/AbstractInstallShardTest.java | [233](https://github.com/apache/solr/blob/4fd67205b96c5c4b290b14c15c691b17fbff6495/solr/test-framework/src/java/org/apache/solr/cloud/api/collections/AbstractInstallShardTest.java#L233) |
   | solr/test-framework/src/java/org/apache/solr/cloud/api/collections/AbstractInstallShardTest.java | [226](https://github.com/apache/solr/blob/4fd67205b96c5c4b290b14c15c691b17fbff6495/solr/test-framework/src/java/org/apache/solr/cloud/api/collections/AbstractInstallShardTest.java#L226) |
   <p><a href="https://lift.sonatype.com/results/github.com/apache/solr/01GW86YY8J27H55NP8ZPK636RD?t=Infer|NULLPTR_DEREFERENCE" target="_blank">Visit the Lift Web Console</a> to find more details in your report.</p></div></details>
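   
   If an explicit guard is preferred over suppressing the finding, a call-site assertion along these lines might work (sketch only, not part of the PR; `assertNotNull` is available through the JUnit base classes of this test).
   
   ```java
   // Illustrative guard in createBackupRepoDirectoryForShardData: verify the CoreContainer
   // lookup before opening the BackupRepository, so a broken test setup fails with a clear message.
   final CoreContainer cc = cluster.getJettySolrRunner(0).getCoreContainer();
   assertNotNull("Expected a live CoreContainer on the first Jetty node", cc);
   try (final BackupRepository backupRepository = cc.newBackupRepository(BACKUP_REPO_NAME)) {
     // ... create the shard-data directory as before ...
   }
   ```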
   
   
   
   ---
   
   <details><summary>ℹī¸ Expand to see all <b>@sonatype-lift</b> commands</summary>
   
   You can reply with the following commands. For example, reply with ***@sonatype-lift ignoreall*** to leave out all findings.
   | **Command** | **Usage** |
   | ------------- | ------------- |
   | `@sonatype-lift ignore` | Leave out the above finding from this PR |
   | `@sonatype-lift ignoreall` | Leave out all the existing findings from this PR |
   | `@sonatype-lift exclude <file\|issue\|path\|tool>` | Exclude specified `file\|issue\|path\|tool` from Lift findings by updating your config.toml file |
   
   **Note:** When talking to LiftBot, you need to **refresh** the page to see its response.
   <sub>[Click here](https://github.com/apps/sonatype-lift/installations/new) to add LiftBot to another repo.</sub></details>
   
   
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@solr.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@solr.apache.org
For additional commands, e-mail: issues-help@solr.apache.org