Posted to issues@solr.apache.org by GitBox <gi...@apache.org> on 2021/07/08 23:12:47 UTC

[GitHub] [solr] madrob commented on a change in pull request #120: SOLR-15089: Allow backup/restoration to Amazon's S3 blobstore

madrob commented on a change in pull request #120:
URL: https://github.com/apache/solr/pull/120#discussion_r666567722



##########
File path: solr/contrib/blob-repository/src/java/org/apache/solr/s3/S3IndexInput.java
##########
@@ -0,0 +1,112 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.s3;
+
+import org.apache.lucene.store.BufferedIndexInput;
+
+import java.io.EOFException;
+import java.io.IOException;
+import java.io.InputStream;
+import java.nio.ByteBuffer;
+import java.util.Locale;
+
+class S3IndexInput extends BufferedIndexInput {
+
+    static final int LOCAL_BUFFER_SIZE = 16 * 1024;
+
+    private final InputStream inputStream;
+    private final long length;
+
+    private long position;
+
+    S3IndexInput(InputStream inputStream, String path, long length) {
+        super(path);
+
+        this.inputStream = inputStream;
+        this.length = length;
+    }
+
+    @Override
+    protected void readInternal(ByteBuffer b) throws IOException {
+
+        int expectedLength = b.remaining();
+
+        byte[] localBuffer;
+        if (b.hasArray()) {

Review comment:
       Might be personal style, but I feel like the array/non-array logic paths are different enough that we should split them into two separate methods instead of handling both sets of logic in the same place and continually checking which one we are in.
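       Roughly the split I have in mind (just a sketch; the helper names are illustrative):

```java
// Sketch only: the two-method split, with illustrative helper names.
@Override
protected void readInternal(ByteBuffer b) throws IOException {
    int expectedLength = b.remaining();

    if (b.hasArray()) {
        readToArrayBackedBuffer(b);
    } else {
        readToDirectBuffer(b);
    }

    if (b.remaining() > 0) {
        throw new IOException(String.format(Locale.ROOT,
            "Failed to read %d bytes; only %d available", expectedLength, expectedLength - b.remaining()));
    }
    position += expectedLength;
}

// Fills the buffer by reading directly into its backing array.
private void readToArrayBackedBuffer(ByteBuffer b) throws IOException {
    while (b.hasRemaining()) {
        int read = inputStream.read(b.array(), b.position(), b.remaining());
        if (read < 0) {
            break;
        }
        b.position(b.position() + read);
    }
}

// Copies through a local buffer for direct (non-array-backed) buffers.
private void readToDirectBuffer(ByteBuffer b) throws IOException {
    byte[] localBuffer = new byte[LOCAL_BUFFER_SIZE];
    while (b.hasRemaining()) {
        int read = inputStream.read(localBuffer, 0, Math.min(b.remaining(), LOCAL_BUFFER_SIZE));
        if (read < 0) {
            break;
        }
        b.put(localBuffer, 0, read);
    }
}
```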

##########
File path: solr/contrib/blob-repository/src/java/org/apache/solr/s3/S3BackupRepository.java
##########
@@ -0,0 +1,349 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.s3;
+
+import com.google.common.base.Preconditions;
+import com.google.common.base.Strings;
+import org.apache.lucene.index.CorruptIndexException;
+import org.apache.lucene.store.Directory;
+import org.apache.lucene.store.IOContext;
+import org.apache.lucene.store.IndexInput;
+import org.apache.lucene.store.IndexOutput;
+import org.apache.solr.common.SolrException;
+import org.apache.solr.common.util.NamedList;
+import org.apache.solr.core.backup.repository.BackupRepository;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.lang.invoke.MethodHandles;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.nio.file.Paths;
+import java.time.Duration;
+import java.time.Instant;
+import java.util.Collection;
+import java.util.Objects;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+/**
+ * A concrete implementation of {@link BackupRepository} interface supporting backup/restore of Solr indexes to a blob store like S3, GCS.
+ */
+public class S3BackupRepository implements BackupRepository {
+
+    private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
+
+    private static final int CHUNK_SIZE = 16 * 1024 * 1024; // 16 MBs
+    static final String S3_SCHEME = "s3";
+
+    private NamedList<String> config;
+    private S3StorageClient client;
+
+    @Override
+    @SuppressWarnings({"rawtypes", "unchecked"})
+    public void init(NamedList args) {
+        this.config = (NamedList<String>) args;
+        S3BackupRepositoryConfig backupConfig = new S3BackupRepositoryConfig(this.config);
+
+        // If a client was already created, close it to avoid any resource leak
+        if (client != null) {
+            client.close();
+        }
+
+        this.client = backupConfig.buildClient();
+    }
+
+    @Override
+    @SuppressWarnings("unchecked")
+    public <T> T getConfigProperty(String name) {

Review comment:
       This doesn't benefit from generic types AFAICT; we should fix the interface to not do this. We can do so in this issue or a separate one.
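       For illustration, the non-generic shape I'd expect (treat as a sketch; the real change would be to the `BackupRepository` interface itself):

```java
// Hypothetical non-generic signature; requires the interface method to drop <T> as well.
@Override
public String getConfigProperty(String name) {
    return this.config.get(name);
}
```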

##########
File path: solr/contrib/blob-repository/src/java/org/apache/solr/s3/S3BackupRepository.java
##########
@@ -0,0 +1,349 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.s3;
+
+import com.google.common.base.Preconditions;
+import com.google.common.base.Strings;
+import org.apache.lucene.index.CorruptIndexException;
+import org.apache.lucene.store.Directory;
+import org.apache.lucene.store.IOContext;
+import org.apache.lucene.store.IndexInput;
+import org.apache.lucene.store.IndexOutput;
+import org.apache.solr.common.SolrException;
+import org.apache.solr.common.util.NamedList;
+import org.apache.solr.core.backup.repository.BackupRepository;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.lang.invoke.MethodHandles;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.nio.file.Paths;
+import java.time.Duration;
+import java.time.Instant;
+import java.util.Collection;
+import java.util.Objects;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+/**
+ * A concrete implementation of {@link BackupRepository} interface supporting backup/restore of Solr indexes to a blob store like S3, GCS.
+ */
+public class S3BackupRepository implements BackupRepository {
+
+    private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
+
+    private static final int CHUNK_SIZE = 16 * 1024 * 1024; // 16 MBs
+    static final String S3_SCHEME = "s3";
+
+    private NamedList<String> config;
+    private S3StorageClient client;
+
+    @Override
+    @SuppressWarnings({"rawtypes", "unchecked"})
+    public void init(NamedList args) {
+        this.config = (NamedList<String>) args;
+        S3BackupRepositoryConfig backupConfig = new S3BackupRepositoryConfig(this.config);
+
+        // If a client was already created, close it to avoid any resource leak
+        if (client != null) {
+            client.close();
+        }
+
+        this.client = backupConfig.buildClient();
+    }
+
+    @Override
+    @SuppressWarnings("unchecked")
+    public <T> T getConfigProperty(String name) {
+        return (T) this.config.get(name);
+    }
+
+    @Override
+    public URI createURI(String location) {
+        Preconditions.checkArgument(!Strings.isNullOrEmpty(location));
+        URI result;
+        try {
+            result = new URI(location);
+            if (!result.isAbsolute()) {
+                if (location.startsWith("/")) {
+                    return new URI(S3_SCHEME, null, location, null);
+                } else {
+                    return new URI(S3_SCHEME, null, "/" + location, null);
+                }
+            }
+        } catch (URISyntaxException ex) {
+            throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, ex);
+        }
+
+        return result;
+    }
+
+    @Override
+    public URI resolve(URI baseUri, String... pathComponents) {
+        Objects.requireNonNull(baseUri);

Review comment:
       Do we expect this in the normal course of operations as a condition that we need to signal back to the caller, or are these mostly our own internal consistency checks? Are we (i.e. Solr) the caller of this code, or is it users? If it's us, then we should switch all of these to asserts that can get triggered during tests but don't have an impact on production. If the methods are expected to be called by users, then we need to have useful error messages beyond a stack trace.
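       To spell out the two options (the messages below are only examples):

```java
// Option 1: internal consistency checks, if Solr is the only caller.
assert baseUri != null && baseUri.isAbsolute() : "baseUri must be an absolute URI";
assert pathComponents.length > 0 : "at least one path component is required";

// Option 2: user-facing validation with an actionable message.
if (!baseUri.isAbsolute()) {
    throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
        "Backup location must be an absolute URI, got: " + baseUri);
}
```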

##########
File path: solr/contrib/blob-repository/src/java/org/apache/solr/s3/S3BackupRepository.java
##########
@@ -0,0 +1,349 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.s3;
+
+import com.google.common.base.Preconditions;
+import com.google.common.base.Strings;
+import org.apache.lucene.index.CorruptIndexException;
+import org.apache.lucene.store.Directory;
+import org.apache.lucene.store.IOContext;
+import org.apache.lucene.store.IndexInput;
+import org.apache.lucene.store.IndexOutput;
+import org.apache.solr.common.SolrException;
+import org.apache.solr.common.util.NamedList;
+import org.apache.solr.core.backup.repository.BackupRepository;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.lang.invoke.MethodHandles;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.nio.file.Paths;
+import java.time.Duration;
+import java.time.Instant;
+import java.util.Collection;
+import java.util.Objects;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+/**
+ * A concrete implementation of {@link BackupRepository} interface supporting backup/restore of Solr indexes to a blob store like S3, GCS.
+ */
+public class S3BackupRepository implements BackupRepository {
+
+    private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
+
+    private static final int CHUNK_SIZE = 16 * 1024 * 1024; // 16 MBs
+    static final String S3_SCHEME = "s3";
+
+    private NamedList<String> config;
+    private S3StorageClient client;
+
+    @Override
+    @SuppressWarnings({"rawtypes", "unchecked"})
+    public void init(NamedList args) {
+        this.config = (NamedList<String>) args;
+        S3BackupRepositoryConfig backupConfig = new S3BackupRepositoryConfig(this.config);
+
+        // If a client was already created, close it to avoid any resource leak
+        if (client != null) {
+            client.close();
+        }
+
+        this.client = backupConfig.buildClient();
+    }
+
+    @Override
+    @SuppressWarnings("unchecked")
+    public <T> T getConfigProperty(String name) {
+        return (T) this.config.get(name);
+    }
+
+    @Override
+    public URI createURI(String location) {
+        Preconditions.checkArgument(!Strings.isNullOrEmpty(location));
+        URI result;
+        try {
+            result = new URI(location);
+            if (!result.isAbsolute()) {
+                if (location.startsWith("/")) {
+                    return new URI(S3_SCHEME, null, location, null);
+                } else {
+                    return new URI(S3_SCHEME, null, "/" + location, null);
+                }
+            }
+        } catch (URISyntaxException ex) {
+            throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, ex);
+        }
+
+        return result;
+    }
+
+    @Override
+    public URI resolve(URI baseUri, String... pathComponents) {
+        Objects.requireNonNull(baseUri);
+        Preconditions.checkArgument(baseUri.isAbsolute());
+        Preconditions.checkArgument(pathComponents.length > 0);
+        Preconditions.checkArgument(baseUri.getScheme().equalsIgnoreCase(S3_SCHEME));
+
+        // If paths contains unnecessary '/' separators, they'll be removed by URI.normalize()
+        String path = baseUri.toString() + "/" + String.join("/", pathComponents);
+        return URI.create(path).normalize();
+    }
+
+    @Override
+    public void createDirectory(URI path) throws IOException {
+        Objects.requireNonNull(path);
+
+        String blobPath = getS3Path(path);
+
+        if (log.isDebugEnabled()) {
+            log.debug("Create directory '{}'", blobPath);
+        }
+
+        client.createDirectory(blobPath);
+    }
+
+    @Override
+    public void deleteDirectory(URI path) throws IOException {
+        Objects.requireNonNull(path);
+
+        String blobPath = getS3Path(path);
+
+        if (log.isDebugEnabled()) {
+            log.debug("Delete directory '{}'", blobPath);
+        }
+
+        client.deleteDirectory(blobPath);
+    }
+
+    @Override
+    public void delete(URI path, Collection<String> files, boolean ignoreNoSuchFileException) throws IOException {
+        Objects.requireNonNull(path);
+        Objects.requireNonNull(files);
+
+        if (log.isDebugEnabled()) {
+            log.debug("Delete files {} from {}", files, getS3Path(path));
+        }
+        Set<String> filesToDelete = files.stream()
+            .map(file -> resolve(path, file))
+            .map(S3BackupRepository::getS3Path)
+            .collect(Collectors.toSet());
+
+        try {
+            client.delete(filesToDelete);
+        } catch (S3NotFoundException e) {
+            if (!ignoreNoSuchFileException) {
+                throw e;
+            }
+        }
+    }
+
+    @Override
+    public boolean exists(URI path) throws IOException {
+        Objects.requireNonNull(path);
+
+        String blobPath = getS3Path(path);
+
+        if (log.isDebugEnabled()) {
+            log.debug("Path exists '{}'", blobPath);
+        }
+
+        return client.pathExists(blobPath);
+    }
+
+    @Override
+    public IndexInput openInput(URI path, String fileName, IOContext ctx) throws IOException {
+        Objects.requireNonNull(path);
+        Preconditions.checkArgument(!Strings.isNullOrEmpty(fileName));
+
+        URI filePath = resolve(path, fileName);
+        String blobPath = getS3Path(filePath);
+
+        if (log.isDebugEnabled()) {
+            log.debug("Read from blob '{}'", blobPath);
+        }
+
+        return new S3IndexInput(client.pullStream(blobPath), blobPath, client.length(blobPath));
+    }
+
+    @Override
+    public OutputStream createOutput(URI path) throws IOException {
+        Objects.requireNonNull(path);
+
+        String blobPath = getS3Path(path);
+
+        if (log.isDebugEnabled()) {
+            log.debug("Write to blob '{}'", blobPath);
+        }
+
+        return client.pushStream(blobPath);
+    }
+
+    /**
+     * This method returns all the entries (files and directories) in the specified directory.
+     *
+     * @param path The directory path
+     * @return an array of strings, one for each entry in the directory
+     */
+    @Override
+    public String[] listAll(URI path) throws IOException {
+        Objects.requireNonNull(path);
+
+        String blobPath = getS3Path(path);
+
+        if (log.isDebugEnabled()) {
+            log.debug("listAll for '{}'", blobPath);
+        }
+
+        return client.listDir(blobPath);
+    }
+
+    @Override
+    public PathType getPathType(URI path) throws IOException {
+        Objects.requireNonNull(path);
+
+        String blobPath = getS3Path(path);
+
+        if (log.isDebugEnabled()) {
+            log.debug("getPathType for '{}'", blobPath);
+        }
+
+        return client.isDirectory(blobPath) ? PathType.DIRECTORY : PathType.FILE;
+    }
+
+    /**
+     * Copy an index file from specified <code>sourceDir</code> to the destination repository (i.e. backup).
+     *
+     * @param sourceDir
+     *          The source directory hosting the file to be copied.
+     * @param sourceFileName
+     *          The name of the file to be copied
+     * @param dest
+     *          The destination backup location.
+     * @throws IOException
+     *          in case of errors
+     * @throws CorruptIndexException
+     *          in case checksum of the file does not match with precomputed checksum stored at the end of the file
+     * @since 8.3.0
+     */
+    @Override
+    public void copyIndexFileFrom(Directory sourceDir, String sourceFileName, URI dest, String destFileName) throws IOException {
+        Objects.requireNonNull(sourceDir);
+        Objects.requireNonNull(dest);
+        Preconditions.checkArgument(!Strings.isNullOrEmpty(sourceFileName));
+        Preconditions.checkArgument(!Strings.isNullOrEmpty(destFileName));
+
+        URI filePath = resolve(dest, destFileName);
+        String blobPath = getS3Path(filePath);
+        Instant start = Instant.now();
+        if (log.isDebugEnabled()) {
+            log.debug("Upload started to blob'{}'", blobPath);
+        }
+
+        IndexInput indexInput = null;
+        OutputStream outputStream = null;

Review comment:
       try-with-resources?
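       Something like the following, assuming both resources stay local to this method (copy body elided):

```java
// Sketch: both resources are closed automatically, even if the copy fails.
try (IndexInput indexInput = sourceDir.openInput(sourceFileName, IOContext.DEFAULT);
     OutputStream outputStream = client.pushStream(blobPath)) {
    // ... copy CHUNK_SIZE bytes at a time, then the checksum ...
}
```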

##########
File path: solr/contrib/blob-repository/src/java/org/apache/solr/s3/S3BackupRepository.java
##########
@@ -0,0 +1,349 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.s3;
+
+import com.google.common.base.Preconditions;
+import com.google.common.base.Strings;
+import org.apache.lucene.index.CorruptIndexException;
+import org.apache.lucene.store.Directory;
+import org.apache.lucene.store.IOContext;
+import org.apache.lucene.store.IndexInput;
+import org.apache.lucene.store.IndexOutput;
+import org.apache.solr.common.SolrException;
+import org.apache.solr.common.util.NamedList;
+import org.apache.solr.core.backup.repository.BackupRepository;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.lang.invoke.MethodHandles;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.nio.file.Paths;
+import java.time.Duration;
+import java.time.Instant;
+import java.util.Collection;
+import java.util.Objects;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+/**
+ * A concrete implementation of {@link BackupRepository} interface supporting backup/restore of Solr indexes to a blob store like S3, GCS.
+ */
+public class S3BackupRepository implements BackupRepository {
+
+    private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
+
+    private static final int CHUNK_SIZE = 16 * 1024 * 1024; // 16 MBs
+    static final String S3_SCHEME = "s3";
+
+    private NamedList<String> config;
+    private S3StorageClient client;
+
+    @Override
+    @SuppressWarnings({"rawtypes", "unchecked"})
+    public void init(NamedList args) {

Review comment:
       This needs to be updated with the correct signature on the main branch.
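       If I remember right, `init` takes a typed `NamedList` on main now (worth double-checking), so this would become something like:

```java
// Assumed signature on main; verify against the current BackupRepository interface.
@Override
public void init(NamedList<?> args) {
    // same body as before
}
```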

##########
File path: solr/contrib/blob-repository/src/java/org/apache/solr/s3/AdobeMockS3StorageClient.java
##########
@@ -0,0 +1,79 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.s3;
+
+import com.amazonaws.client.builder.AwsClientBuilder;
+import com.amazonaws.regions.Regions;
+import com.amazonaws.services.s3.AmazonS3;
+import com.amazonaws.services.s3.AmazonS3ClientBuilder;
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Preconditions;
+import com.google.common.base.Strings;
+
+/**
+ * This storage client exists to work around some of the incongruencies Adobe S3Mock has with the S3 API.
+ * The main difference is that S3Mock does not support paths with a leading '/', but S3 does, and our code
+ * in {@link S3StorageClient} requires all paths to have a leading '/'.
+ */
+class AdobeMockS3StorageClient extends S3StorageClient {
+
+    static final int DEFAULT_MOCK_S3_PORT = 9090;
+    private static final String DEFAULT_MOCK_S3_ENDPOINT = "http://localhost:" + DEFAULT_MOCK_S3_PORT;
+
+    AdobeMockS3StorageClient(String bucketName) {
+        super(createInternalClient(), bucketName);
+    }
+
+    @VisibleForTesting
+    AdobeMockS3StorageClient(AmazonS3 s3client, String bucketName) {
+        super(s3client, bucketName);
+    }
+
+    private static AmazonS3 createInternalClient() {
+        String s3MockEndpoint = System.getenv().getOrDefault("MOCK_S3_ENDPOINT", DEFAULT_MOCK_S3_ENDPOINT);
+
+        return AmazonS3ClientBuilder.standard()
+            .enablePathStyleAccess()
+            .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(s3MockEndpoint, Regions.US_EAST_1.name()))
+            .build();
+    }
+
+    /**
+     * Ensures path adheres to some rules (different than the rules that S3 cares about):
+     * -Trims leading slash, if given
+     * -If it's a file, throw an error if it ends with a trailing slash
+     */
+    @Override
+    String sanitizedPath(String path, boolean isFile) throws S3Exception {
+        Preconditions.checkArgument(!Strings.isNullOrEmpty(path));

Review comment:
       Use `org.apache.solr.common.StringUtils.isEmpty`, which is null-safe as well.
   
   I'm not in favor of pulling in Guava just for Preconditions; can we check these manually and throw IllegalArgumentException?
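       Roughly what I mean (the message text is just an example):

```java
// Null-safe emptiness check without the Guava dependency.
if (org.apache.solr.common.StringUtils.isEmpty(path)) {
    throw new IllegalArgumentException("Path must not be null or empty");
}
```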

##########
File path: solr/contrib/blob-repository/src/java/org/apache/solr/s3/S3IndexInput.java
##########
@@ -0,0 +1,112 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.s3;
+
+import org.apache.lucene.store.BufferedIndexInput;
+
+import java.io.EOFException;
+import java.io.IOException;
+import java.io.InputStream;
+import java.nio.ByteBuffer;
+import java.util.Locale;
+
+class S3IndexInput extends BufferedIndexInput {
+
+    static final int LOCAL_BUFFER_SIZE = 16 * 1024;
+
+    private final InputStream inputStream;
+    private final long length;
+
+    private long position;
+
+    S3IndexInput(InputStream inputStream, String path, long length) {
+        super(path);
+
+        this.inputStream = inputStream;
+        this.length = length;
+    }
+
+    @Override
+    protected void readInternal(ByteBuffer b) throws IOException {
+
+        int expectedLength = b.remaining();
+
+        byte[] localBuffer;
+        if (b.hasArray()) {
+            localBuffer = b.array();
+        } else {
+            localBuffer = new byte[LOCAL_BUFFER_SIZE];
+        }
+
+        // We have no guarantee we read all the requested bytes from the underlying InputStream
+        // in a single call. Loop until we reached the requested number of bytes.
+        while (b.hasRemaining()) {
+            int read;
+
+            if (b.hasArray()) {
+                read = inputStream.read(localBuffer, b.position(), b.remaining());
+            } else {
+                read = inputStream.read(localBuffer, 0, Math.min(b.remaining(), LOCAL_BUFFER_SIZE));
+            }
+
+            // Abort if we can't read any more data
+            if (read < 0) {
+                break;
+            }
+
+            if (b.hasArray()) {
+                b.position(b.position() + read);
+            } else {
+                b.put(localBuffer, 0, read);
+            }
+        }
+
+        if (b.remaining() > 0) {
+            throw new IOException(String.format(Locale.ROOT, "Failed to read %d bytes; only %d available", expectedLength, (expectedLength - b.remaining())));
+        }
+
+        position += expectedLength;
+    }
+
+    @Override
+    protected void seekInternal(long pos) throws IOException {
+        if (pos > length()) {
+            throw new EOFException("read past EOF: pos=" + pos + " vs length=" + length() + ": " + this);
+        }
+
+        long diff = pos - this.position;
+
+        // If we seek forward, skip unread bytes
+        if (diff > 0) {
+            inputStream.skip(diff);

Review comment:
       `skip` may skip fewer than n bytes; we need to check the return value here.
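       A sketch of what checking the return value could look like (exact failure handling is up for discussion):

```java
// InputStream.skip() may advance fewer bytes than requested, so loop until done.
long remaining = diff;
while (remaining > 0) {
    long skipped = inputStream.skip(remaining);
    if (skipped <= 0) {
        throw new EOFException("Failed to skip to pos=" + pos + ": " + this);
    }
    remaining -= skipped;
}
```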

##########
File path: solr/contrib/blob-repository/src/java/org/apache/solr/s3/S3OutputStream.java
##########
@@ -0,0 +1,250 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.s3;
+
+import com.amazonaws.AmazonClientException;
+import com.amazonaws.event.ProgressEvent;
+import com.amazonaws.event.SyncProgressListener;
+import com.amazonaws.services.s3.AmazonS3;
+import com.amazonaws.services.s3.model.*;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.ByteArrayInputStream;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.lang.invoke.MethodHandles;
+import java.nio.ByteBuffer;
+import java.util.ArrayList;
+import java.util.List;
+
+/**
+ * Implementation is adapted from https://github.com/confluentinc/kafka-connect-storage-cloud/blob/master/kafka-connect-s3/src/main/java/io/confluent/connect/s3/storage/S3OutputStream.java
+ */
+public class S3OutputStream extends OutputStream {
+    private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
+
+    // 16 MB. Part sizes must be between 5MB to 5GB.
+    // https://docs.aws.amazon.com/AmazonS3/latest/dev/qfacts.html
+    static final int PART_SIZE = 16777216;
+    static final int MIN_PART_SIZE = 5242880;
+
+    private final AmazonS3 s3Client;
+    private final String bucketName;
+    private final String key;
+    private final SyncProgressListener progressListener;
+    private volatile boolean closed;
+    private final ByteBuffer buffer;
+    private MultipartUpload multiPartUpload;
+
+    public S3OutputStream(AmazonS3 s3Client, String key, String bucketName) {
+        this.s3Client = s3Client;
+        this.bucketName = bucketName;
+        this.key = key;
+        this.closed = false;
+        this.buffer = ByteBuffer.allocate(PART_SIZE);
+        this.progressListener = new ConnectProgressListener();
+        this.multiPartUpload = null;
+
+        if (log.isDebugEnabled()) {
+            log.debug("Created S3OutputStream for bucketName '{}' key '{}'", bucketName, key);
+        }
+    }
+
+    @Override
+    public void write(int b) throws IOException {
+        if (closed) {
+            throw new IOException("Stream closed");
+        }
+
+        buffer.put((byte) b);
+
+        // If the buffer is now full, push it to remote S3.
+        if (!buffer.hasRemaining()) {
+            uploadPart(false);
+        }
+    }
+
+    @Override
+    public void write(byte[] b, int off, int len) throws IOException {
+        if (closed) {
+            throw new IOException("Stream closed");
+        }
+
+        if (b == null) {
+            throw new NullPointerException();
+        } else if (outOfRange(off, b.length) || len < 0 || outOfRange(off + len, b.length)) {

Review comment:
       I think the second and third conditions here imply the first.

##########
File path: solr/contrib/blob-repository/src/java/org/apache/solr/s3/S3OutputStream.java
##########
@@ -0,0 +1,250 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.s3;
+
+import com.amazonaws.AmazonClientException;
+import com.amazonaws.event.ProgressEvent;
+import com.amazonaws.event.SyncProgressListener;
+import com.amazonaws.services.s3.AmazonS3;
+import com.amazonaws.services.s3.model.*;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.ByteArrayInputStream;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.lang.invoke.MethodHandles;
+import java.nio.ByteBuffer;
+import java.util.ArrayList;
+import java.util.List;
+
+/**
+ * Implementation is adapted from https://github.com/confluentinc/kafka-connect-storage-cloud/blob/master/kafka-connect-s3/src/main/java/io/confluent/connect/s3/storage/S3OutputStream.java
+ */
+public class S3OutputStream extends OutputStream {
+    private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
+
+    // 16 MB. Part sizes must be between 5MB to 5GB.
+    // https://docs.aws.amazon.com/AmazonS3/latest/dev/qfacts.html
+    static final int PART_SIZE = 16777216;
+    static final int MIN_PART_SIZE = 5242880;
+
+    private final AmazonS3 s3Client;
+    private final String bucketName;
+    private final String key;
+    private final SyncProgressListener progressListener;
+    private volatile boolean closed;
+    private final ByteBuffer buffer;
+    private MultipartUpload multiPartUpload;
+
+    public S3OutputStream(AmazonS3 s3Client, String key, String bucketName) {
+        this.s3Client = s3Client;
+        this.bucketName = bucketName;
+        this.key = key;
+        this.closed = false;
+        this.buffer = ByteBuffer.allocate(PART_SIZE);
+        this.progressListener = new ConnectProgressListener();
+        this.multiPartUpload = null;
+
+        if (log.isDebugEnabled()) {
+            log.debug("Created S3OutputStream for bucketName '{}' key '{}'", bucketName, key);
+        }
+    }
+
+    @Override
+    public void write(int b) throws IOException {
+        if (closed) {
+            throw new IOException("Stream closed");
+        }
+
+        buffer.put((byte) b);
+
+        // If the buffer is now full, push it to remote S3.
+        if (!buffer.hasRemaining()) {
+            uploadPart(false);
+        }
+    }
+
+    @Override
+    public void write(byte[] b, int off, int len) throws IOException {
+        if (closed) {
+            throw new IOException("Stream closed");
+        }
+
+        if (b == null) {
+            throw new NullPointerException();
+        } else if (outOfRange(off, b.length) || len < 0 || outOfRange(off + len, b.length)) {
+            throw new IndexOutOfBoundsException();
+        } else if (len == 0) {
+            return;
+        }
+
+        if (buffer.remaining() <= len) {
+            int firstPart = buffer.remaining();
+            buffer.put(b, off, firstPart);
+            uploadPart(false);
+            write(b, off + firstPart, len - firstPart);

Review comment:
       I don't think we want a recursive call here; can we switch this to a loop?
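       A sketch of the iterative form, assuming the (non-quoted) else branch just does `buffer.put(b, off, len)`:

```java
// Same behavior as the recursive version, expressed as a loop.
while (buffer.remaining() <= len) {
    int firstPart = buffer.remaining();
    buffer.put(b, off, firstPart);
    uploadPart(false);
    off += firstPart;
    len -= firstPart;
}
buffer.put(b, off, len);
```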

##########
File path: solr/contrib/blob-repository/src/java/org/apache/solr/s3/S3StorageClient.java
##########
@@ -0,0 +1,475 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.s3;
+
+import com.amazonaws.AmazonClientException;
+import com.amazonaws.AmazonServiceException;
+import com.amazonaws.ClientConfiguration;
+import com.amazonaws.Protocol;
+import com.amazonaws.regions.Regions;
+import com.amazonaws.services.s3.AmazonS3;
+import com.amazonaws.services.s3.AmazonS3ClientBuilder;
+import com.amazonaws.services.s3.model.*;
+import com.amazonaws.services.s3.model.DeleteObjectsRequest.KeyVersion;
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Preconditions;
+import com.google.common.base.Strings;
+import com.google.common.collect.ImmutableSet;
+import com.google.common.collect.Lists;
+import org.apache.commons.io.input.ClosedInputStream;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.Closeable;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.lang.invoke.MethodHandles;
+import java.util.*;

Review comment:
       Please don't use wildcard imports.

##########
File path: solr/contrib/blob-repository/src/java/org/apache/solr/s3/S3BackupRepositoryConfig.java
##########
@@ -0,0 +1,98 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.s3;
+
+import com.google.common.base.Strings;
+import org.apache.solr.common.util.NamedList;
+
+import java.util.Locale;
+import java.util.Map;
+
+/**
+ * Class representing the {@code backup} blob config bundle specified in solr.xml. All user-provided config can be
+ * overridden via environment variables (use uppercase, with '_' instead of '.'), see {@link S3BackupRepositoryConfig#toEnvVar}.
+ */
+public class S3BackupRepositoryConfig {
+
+    public static final String BUCKET_NAME = "blob.s3.bucket.name";
+    public static final String REGION = "blob.s3.region";
+    public static final String PROXY_HOST = "blob.s3.proxy.host";
+    public static final String PROXY_PORT = "blob.s3.proxy.port";
+    public static final String S3MOCK = "blob.s3.mock";
+
+    private final String bucketName;
+    private final String region;
+    private final String proxyHost;
+    private final int proxyPort;
+    private final boolean s3mock;
+
+    @SuppressWarnings({"rawtypes", "unchecked"})
+    public S3BackupRepositoryConfig(NamedList args) {

Review comment:
       Please use a generic type.

##########
File path: solr/contrib/blob-repository/src/test/org/apache/solr/s3/S3PathsTest.java
##########
@@ -0,0 +1,187 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.s3;
+
+import com.google.common.collect.Sets;
+import org.junit.Test;
+
+import java.util.ArrayList;
+import java.util.List;
+
+/**
+ * Test creating and deleting objects at different paths.
+ */
+public class S3PathsTest extends AbstractS3ClientTest {
+
+    /**
+     * The root must always exist.
+     */
+    @Test
+    public void testRoot() throws S3Exception {
+        assertTrue(client.pathExists("/"));
+    }
+
+    /**
+     * Simple tests with files.
+     */
+    @Test
+    public void testFiles() throws S3Exception {
+        assertFalse(client.pathExists("/simple-file"));
+        assertFalse(client.pathExists("/simple-file/"));
+
+        pushContent("/simple-file", "blah");
+        assertTrue(client.pathExists("/simple-file"));
+        assertTrue(client.pathExists("/simple-file/"));
+    }
+
+    /**
+     * Simple tests with a directory.
+     */
+    @Test
+    public void testDirectory() throws S3Exception {
+
+        client.createDirectory("/simple-directory");
+        assertTrue(client.pathExists("/simple-directory"));
+        assertTrue(client.pathExists("/simple-directory/"));
+
+    }
+
+    /**
+     * Happy path of deleting a directory 
+     */
+    @Test
+    public void testDeleteDirectory() throws S3Exception {
+
+        client.createDirectory("/delete-dir");
+
+        pushContent("/delete-dir/file1", "file1");
+        pushContent("/delete-dir/file2", "file2");
+
+        client.deleteDirectory("/delete-dir");
+        
+        assertFalse(client.pathExists("/delete-dir"));
+        assertFalse(client.pathExists("/delete-dir/file1"));
+        assertFalse(client.pathExists("/delete-dir/file2"));
+    }
+
+    /**
+     * Ensure directory deletion is recursive.
+     */
+    @Test
+    public void testDeleteDirectoryMultipleLevels() throws S3Exception {
+
+        client.createDirectory("/delete-dir");
+        pushContent("/delete-dir/file1", "file1");
+
+        client.createDirectory("/delete-dir/sub-dir1");
+        pushContent("/delete-dir/sub-dir1/file2", "file2");
+
+        client.createDirectory("/delete-dir/sub-dir1/sub-dir2");
+        pushContent("/delete-dir/sub-dir1/sub-dir2/file3", "file3");
+
+        client.deleteDirectory("/delete-dir");
+
+        assertFalse(client.pathExists("/delete-dir"));
+        assertFalse(client.pathExists("/delete-dir/file1"));
+        assertFalse(client.pathExists("/delete-dir/sub-dir1"));
+        assertFalse(client.pathExists("/delete-dir/sub-dir1/file2"));
+        assertFalse(client.pathExists("/delete-dir/sub-dir1/sub-dir2"));
+        assertFalse(client.pathExists("/delete-dir/sub-dir1/sub-dir2/file3"));
+    }
+
+    /**
+     * S3StorageClient batches deletes (1000 per request) to adhere to S3's hard limit. Since the S3Mock does not
+     * enforce this limitation, however, the exact batch size doesn't matter here: all we're really testing is that
+     * the partition logic works and doesn't miss any files.
+     */
+    @Test
+    public void testDeleteBatching() throws S3Exception {
+
+        client.createDirectory("/delete-dir");
+
+        List<String> pathsToDelete = new ArrayList<>();
+        for (int i = 0; i < 101; i++) {
+            String path = "delete-dir/file" + i;
+            pathsToDelete.add(path);
+            pushContent(path, "foo");
+        }
+
+        client.deleteObjects(pathsToDelete, 10);
+        for (String path : pathsToDelete) {
+            assertFalse("file " + path + " does exist", client.pathExists(path));
+        }
+    }
+
+    @Test
+    public void testDeleteMultipleFiles() throws S3Exception {
+
+        client.createDirectory("/my");
+        pushContent("/my/file1", "file1");
+        pushContent("/my/file2", "file2");
+        pushContent("/my/file3", "file3");
+
+        client.delete(List.of("/my/file1", "my/file3"));
+
+        assertFalse(client.pathExists("/my/file1"));
+        assertFalse(client.pathExists("/my/file3"));
+
+        // Other files with same prefix should be there
+        assertTrue(client.pathExists("/my/file2"));
+    }
+
+    /**
+     * Test deleting a directory which is the prefix of another objects (without deleting them).
+     */
+    @Test
+    public void testDeletePrefix() throws S3Exception {
+
+        client.createDirectory("/my");
+        pushContent("/my/file", "file");
+
+        pushContent("/my-file1", "file1");
+        pushContent("/my-file2", "file2");
+
+        client.deleteDirectory("/my");
+
+        // Deleted directory and its file should be gone
+        assertFalse(client.pathExists("/my/file"));
+        assertFalse(client.pathExists("/my"));
+
+        // Other files with same prefix should be there
+        assertTrue(client.pathExists("/my-file1"));
+        assertTrue(client.pathExists("/my-file2"));
+    }
+
+    /**
+     * Check listing objects of a directory.
+     */
+    @Test
+    public void testListDir() throws S3Exception {
+
+        client.createDirectory("/list-dir");
+        client.createDirectory("/list-dir/sub-dir");
+        pushContent("/list-dir/file", "file");
+        pushContent("/list-dir/sub-dir/file", "file");
+
+        // These files have same prefix in name, but should not be returned
+        pushContent("/list-dir-file1", "file1");
+        pushContent("/list-dir-file2", "file2");
+
+        String[] items = client.listDir("/list-dir");
+        assertEquals(Sets.newHashSet("file", "sub-dir"), Sets.newHashSet(items));

Review comment:
       Use `java.util.Set.of`.
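       i.e. something like (with a `java.util.Set` import):

```java
assertEquals(Set.of("file", "sub-dir"), Set.of(items));
```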

##########
File path: solr/contrib/blob-repository/src/java/org/apache/solr/s3/S3BackupRepository.java
##########
@@ -0,0 +1,349 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.s3;
+
+import com.google.common.base.Preconditions;
+import com.google.common.base.Strings;
+import org.apache.lucene.index.CorruptIndexException;
+import org.apache.lucene.store.Directory;
+import org.apache.lucene.store.IOContext;
+import org.apache.lucene.store.IndexInput;
+import org.apache.lucene.store.IndexOutput;
+import org.apache.solr.common.SolrException;
+import org.apache.solr.common.util.NamedList;
+import org.apache.solr.core.backup.repository.BackupRepository;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.lang.invoke.MethodHandles;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.nio.file.Paths;
+import java.time.Duration;
+import java.time.Instant;
+import java.util.Collection;
+import java.util.Objects;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+/**
+ * A concrete implementation of {@link BackupRepository} interface supporting backup/restore of Solr indexes to a blob store like S3, GCS.
+ */
+public class S3BackupRepository implements BackupRepository {
+
+    private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
+
+    private static final int CHUNK_SIZE = 16 * 1024 * 1024; // 16 MBs
+    static final String S3_SCHEME = "s3";
+
+    private NamedList<String> config;
+    private S3StorageClient client;
+
+    @Override
+    @SuppressWarnings({"rawtypes", "unchecked"})
+    public void init(NamedList args) {
+        this.config = (NamedList<String>) args;
+        S3BackupRepositoryConfig backupConfig = new S3BackupRepositoryConfig(this.config);
+
+        // If a client was already created, close it to avoid any resource leak
+        if (client != null) {
+            client.close();
+        }
+
+        this.client = backupConfig.buildClient();
+    }
+
+    @Override
+    @SuppressWarnings("unchecked")
+    public <T> T getConfigProperty(String name) {
+        return (T) this.config.get(name);
+    }
+
+    @Override
+    public URI createURI(String location) {
+        Preconditions.checkArgument(!Strings.isNullOrEmpty(location));
+        URI result;
+        try {
+            result = new URI(location);
+            if (!result.isAbsolute()) {
+                if (location.startsWith("/")) {
+                    return new URI(S3_SCHEME, null, location, null);
+                } else {
+                    return new URI(S3_SCHEME, null, "/" + location, null);
+                }
+            }
+        } catch (URISyntaxException ex) {
+            throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, ex);
+        }
+
+        return result;
+    }
+
+    @Override
+    public URI resolve(URI baseUri, String... pathComponents) {
+        Objects.requireNonNull(baseUri);
+        Preconditions.checkArgument(baseUri.isAbsolute());
+        Preconditions.checkArgument(pathComponents.length > 0);
+        Preconditions.checkArgument(baseUri.getScheme().equalsIgnoreCase(S3_SCHEME));
+
+        // If paths contains unnecessary '/' separators, they'll be removed by URI.normalize()
+        String path = baseUri.toString() + "/" + String.join("/", pathComponents);
+        return URI.create(path).normalize();
+    }
+
+    @Override
+    public void createDirectory(URI path) throws IOException {
+        Objects.requireNonNull(path);
+
+        String blobPath = getS3Path(path);
+
+        if (log.isDebugEnabled()) {
+            log.debug("Create directory '{}'", blobPath);
+        }
+
+        client.createDirectory(blobPath);
+    }
+
+    @Override
+    public void deleteDirectory(URI path) throws IOException {
+        Objects.requireNonNull(path);
+
+        String blobPath = getS3Path(path);
+
+        if (log.isDebugEnabled()) {
+            log.debug("Delete directory '{}'", blobPath);
+        }
+
+        client.deleteDirectory(blobPath);
+    }
+
+    @Override
+    public void delete(URI path, Collection<String> files, boolean ignoreNoSuchFileException) throws IOException {
+        Objects.requireNonNull(path);
+        Objects.requireNonNull(files);
+
+        if (log.isDebugEnabled()) {
+            log.debug("Delete files {} from {}", files, getS3Path(path));
+        }
+        Set<String> filesToDelete = files.stream()
+            .map(file -> resolve(path, file))
+            .map(S3BackupRepository::getS3Path)
+            .collect(Collectors.toSet());
+
+        try {
+            client.delete(filesToDelete);
+        } catch (S3NotFoundException e) {
+            if (!ignoreNoSuchFileException) {
+                throw e;
+            }
+        }
+    }
+
+    @Override
+    public boolean exists(URI path) throws IOException {
+        Objects.requireNonNull(path);
+
+        String blobPath = getS3Path(path);
+
+        if (log.isDebugEnabled()) {
+            log.debug("Path exists '{}'", blobPath);
+        }
+
+        return client.pathExists(blobPath);
+    }
+
+    @Override
+    public IndexInput openInput(URI path, String fileName, IOContext ctx) throws IOException {
+        Objects.requireNonNull(path);
+        Preconditions.checkArgument(!Strings.isNullOrEmpty(fileName));
+
+        URI filePath = resolve(path, fileName);
+        String blobPath = getS3Path(filePath);
+
+        if (log.isDebugEnabled()) {
+            log.debug("Read from blob '{}'", blobPath);
+        }
+
+        return new S3IndexInput(client.pullStream(blobPath), blobPath, client.length(blobPath));
+    }
+
+    @Override
+    public OutputStream createOutput(URI path) throws IOException {
+        Objects.requireNonNull(path);
+
+        String blobPath = getS3Path(path);
+
+        if (log.isDebugEnabled()) {
+            log.debug("Write to blob '{}'", blobPath);
+        }
+
+        return client.pushStream(blobPath);
+    }
+
+    /**
+     * This method returns all the entries (files and directories) in the specified directory.
+     *
+     * @param path The directory path
+     * @return an array of strings, one for each entry in the directory
+     */
+    @Override
+    public String[] listAll(URI path) throws IOException {
+        Objects.requireNonNull(path);
+
+        String blobPath = getS3Path(path);
+
+        if (log.isDebugEnabled()) {
+            log.debug("listAll for '{}'", blobPath);
+        }
+
+        return client.listDir(blobPath);
+    }
+
+    @Override
+    public PathType getPathType(URI path) throws IOException {
+        Objects.requireNonNull(path);
+
+        String blobPath = getS3Path(path);
+
+        if (log.isDebugEnabled()) {
+            log.debug("getPathType for '{}'", blobPath);
+        }
+
+        return client.isDirectory(blobPath) ? PathType.DIRECTORY : PathType.FILE;
+    }
+
+    /**
+     * Copy an index file from specified <code>sourceDir</code> to the destination repository (i.e. backup).
+     *
+     * @param sourceDir
+     *          The source directory hosting the file to be copied.
+     * @param sourceFileName
+     *          The name of the file to be copied
+     * @param dest
+     *          The destination backup location.
+     * @throws IOException
+     *          in case of errors
+     * @throws CorruptIndexException
+     *          in case checksum of the file does not match with precomputed checksum stored at the end of the file
+     * @since 8.3.0

Review comment:
       I don't think this is correct?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@solr.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


