Posted to commits@ambari.apache.org by ol...@apache.org on 2018/06/16 00:30:39 UTC

[ambari] branch trunk updated (623834f -> 16a19c9)

This is an automated email from the ASF dual-hosted git repository.

oleewere pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/ambari.git.


    from 623834f  AMBARI-24094 ADDENDUM Hive Upgrade in Atlantic (#1553)
     new e691c7b  AMBARI-23945. Infra Solr migration: Add dump collections support & refactor.
     new 16a19c9  AMBARI-23945. Infra Solr migration - Update asciinema links

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 ambari-infra/ambari-infra-solr-client/README.md    |   4 +-
 .../ambari/infra/solr/AmbariSolrCloudCLI.java      |  28 ++-
 .../ambari/infra/solr/AmbariSolrCloudClient.java   |  14 ++
 .../infra/solr/AmbariSolrCloudClientBuilder.java   |   6 +
 .../solr/commands/DumpCollectionsCommand.java      | 145 +++++++++++++++
 .../infra/solr/domain/json/SolrCollection.java     |  80 ++++++++
 .../infra/solr/domain/json/SolrCoreData.java       |  37 +++-
 .../ambari/infra/solr/domain/json/SolrShard.java   |  31 +++-
 .../src/main/python/migrationConfigGenerator.py    |   4 +-
 .../src/main/python/migrationHelper.py             | 205 ++++++++++++++-------
 .../0.1.0/package/scripts/collection.py            |  56 +++---
 .../0.1.0/package/scripts/command_commons.py       |  99 +---------
 12 files changed, 501 insertions(+), 208 deletions(-)
 create mode 100644 ambari-infra/ambari-infra-solr-client/src/main/java/org/apache/ambari/infra/solr/commands/DumpCollectionsCommand.java
 create mode 100644 ambari-infra/ambari-infra-solr-client/src/main/java/org/apache/ambari/infra/solr/domain/json/SolrCollection.java
 copy ambari-logsearch/ambari-logsearch-server/src/main/java/org/apache/ambari/logsearch/model/request/impl/query/ServiceLogHostComponentQueryRequest.java => ambari-infra/ambari-infra-solr-client/src/main/java/org/apache/ambari/infra/solr/domain/json/SolrCoreData.java (56%)
 copy ambari-logsearch/ambari-logsearch-server/src/main/java/org/apache/ambari/logsearch/web/model/Privilege.java => ambari-infra/ambari-infra-solr-client/src/main/java/org/apache/ambari/infra/solr/domain/json/SolrShard.java (64%)

-- 
To stop receiving notification emails like this one, please contact
oleewere@apache.org.

[ambari] 01/02: AMBARI-23945. Infra Solr migration: Add dump collections support & refactor.

Posted by ol...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

oleewere pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/ambari.git

commit e691c7bcde8fd36b343b274da7b4ca41068296fa
Author: Oliver Szabo <ol...@gmail.com>
AuthorDate: Sat Jun 16 02:22:24 2018 +0200

    AMBARI-23945. Infra Solr migration: Add dump collections support & refactor.
---
 .../ambari/infra/solr/AmbariSolrCloudCLI.java      |  28 ++-
 .../ambari/infra/solr/AmbariSolrCloudClient.java   |  14 ++
 .../infra/solr/AmbariSolrCloudClientBuilder.java   |   6 +
 .../solr/commands/DumpCollectionsCommand.java      | 145 +++++++++++++++
 .../infra/solr/domain/json/SolrCollection.java     |  80 ++++++++
 .../infra/solr/domain/json/SolrCoreData.java       |  57 ++++++
 .../ambari/infra/solr/domain/json/SolrShard.java   |  55 ++++++
 .../src/main/python/migrationConfigGenerator.py    |   4 +-
 .../src/main/python/migrationHelper.py             | 205 ++++++++++++++-------
 .../0.1.0/package/scripts/collection.py            |  56 +++---
 .../0.1.0/package/scripts/command_commons.py       |  99 +---------
 11 files changed, 562 insertions(+), 187 deletions(-)

diff --git a/ambari-infra/ambari-infra-solr-client/src/main/java/org/apache/ambari/infra/solr/AmbariSolrCloudCLI.java b/ambari-infra/ambari-infra-solr-client/src/main/java/org/apache/ambari/infra/solr/AmbariSolrCloudCLI.java
index 818ccf0..44db3af 100644
--- a/ambari-infra/ambari-infra-solr-client/src/main/java/org/apache/ambari/infra/solr/AmbariSolrCloudCLI.java
+++ b/ambari-infra/ambari-infra-solr-client/src/main/java/org/apache/ambari/infra/solr/AmbariSolrCloudCLI.java
@@ -55,6 +55,7 @@ public class AmbariSolrCloudCLI {
   private static final String REMOVE_ADMIN_HANDLERS = "remove-admin-handlers";
   private static final String TRANSFER_ZNODE_COMMAND = "transfer-znode";
   private static final String DELETE_ZNODE_COMMAND = "delete-znode";
+  private static final String DUMP_COLLECTIONS_DATA_COMMAND = "dump-collections";
   private static final String CMD_LINE_SYNTAX =
     "\n./solrCloudCli.sh --create-collection -z host1:2181,host2:2181/ambari-solr -c collection -cs conf_set"
       + "\n./solrCloudCli.sh --upload-config -z host1:2181,host2:2181/ambari-solr -d /tmp/myconfig_dir -cs config_set"
@@ -62,6 +63,7 @@ public class AmbariSolrCloudCLI {
       + "\n./solrCloudCli.sh --check-config -z host1:2181,host2:2181/ambari-solr -cs config_set"
       + "\n./solrCloudCli.sh --create-shard -z host1:2181,host2:2181/ambari-solr -c collection -sn myshard"
       + "\n./solrCloudCli.sh --remove-admin-handlers -z host1:2181,host2:2181/ambari-solr -c collection"
+      + "\n./solrCloudCli.sh --dump-collections -z host1:2181,host2:2181/ambari-solr -o collection-data.json"
       + "\n./solrCloudCli.sh --create-znode -z host1:2181,host2:2181 -zn /ambari-solr"
       + "\n./solrCloudCli.sh --check-znode -z host1:2181,host2:2181 -zn /ambari-solr"
       + "\n./solrCloudCli.sh --delete-znode -z host1:2181,host2:2181 -zn /ambari-solr"
@@ -158,6 +160,11 @@ public class AmbariSolrCloudCLI {
       .desc("Delete znode")
       .build();
 
+    final Option dumpCollectionsOption = Option.builder("dcd")
+      .longOpt(DUMP_COLLECTIONS_DATA_COMMAND)
+      .desc("Dump collections data")
+      .build();
+
     final Option shardNameOption = Option.builder("sn")
       .longOpt("shard-name")
       .desc("Name of the shard for create-shard command")
@@ -361,6 +368,12 @@ public class AmbariSolrCloudCLI {
       .desc("Flag for enable/disable kerberos (with --setup-kerberos or --setup-kerberos-plugin)")
       .build();
 
+    final Option outputOption = Option.builder("o")
+      .longOpt("output")
+      .desc("File output for collections dump")
+      .numberOfArgs(1)
+      .build();
+
     options.addOption(helpOption);
     options.addOption(retryOption);
     options.addOption(removeAdminHandlerOption);
@@ -404,8 +417,10 @@ public class AmbariSolrCloudCLI {
     options.addOption(saslUsersOption);
     options.addOption(checkZnodeOption);
     options.addOption(deleteZnodeOption);
+    options.addOption(dumpCollectionsOption);
     options.addOption(setupKerberosPluginOption);
     options.addOption(securityJsonLocationOption);
+    options.addOption(outputOption);
 
     AmbariSolrCloudClient solrCloudClient = null;
 
@@ -463,10 +478,14 @@ public class AmbariSolrCloudCLI {
       } else if (cli.hasOption("dz")) {
         command = DELETE_ZNODE_COMMAND;
         validateRequiredOptions(cli, command, zkConnectStringOption, znodeOption);
+      } else if (cli.hasOption("dcd")) {
+        command = DUMP_COLLECTIONS_DATA_COMMAND;
+        validateRequiredOptions(cli, command, zkConnectStringOption, outputOption);
       } else {
         List<String> commands = Arrays.asList(CREATE_COLLECTION_COMMAND, CREATE_SHARD_COMMAND, UPLOAD_CONFIG_COMMAND,
           DOWNLOAD_CONFIG_COMMAND, CONFIG_CHECK_COMMAND, SET_CLUSTER_PROP, CREATE_ZNODE, SECURE_ZNODE_COMMAND, UNSECURE_ZNODE_COMMAND,
-          SECURE_SOLR_ZNODE_COMMAND, CHECK_ZNODE, SETUP_KERBEROS_PLUGIN, REMOVE_ADMIN_HANDLERS, TRANSFER_ZNODE_COMMAND, DELETE_ZNODE_COMMAND);
+          SECURE_SOLR_ZNODE_COMMAND, CHECK_ZNODE, SETUP_KERBEROS_PLUGIN, REMOVE_ADMIN_HANDLERS, TRANSFER_ZNODE_COMMAND, DELETE_ZNODE_COMMAND,
+          DUMP_COLLECTIONS_DATA_COMMAND);
         helpFormatter.printHelp(CMD_LINE_SYNTAX, options);
         exit(1, String.format("One of the supported commands is required (%s)", StringUtils.join(commands, "|")));
       }
@@ -500,6 +519,7 @@ public class AmbariSolrCloudCLI {
       String copySrc = cli.hasOption("cps") ? cli.getOptionValue("cps") : null;
       String copyDest = cli.hasOption("cpd") ? cli.getOptionValue("cpd") : null;
       String transferMode = cli.hasOption("tm") ? cli.getOptionValue("tm") : "NONE";
+      String output = cli.hasOption("o") ? cli.getOptionValue("o") : null;
 
       AmbariSolrCloudClientBuilder clientBuilder = new AmbariSolrCloudClientBuilder()
         .withZkConnectString(zkConnectString)
@@ -526,6 +546,7 @@ public class AmbariSolrCloudCLI {
         .withTransferMode(transferMode)
         .withCopySrc(copySrc)
         .withCopyDest(copyDest)
+        .withOutput(output)
         .withSecurityJsonLocation(securityJsonLocation)
         .withZnode(znode)
         .withSecure(isSecure)
@@ -606,6 +627,11 @@ public class AmbariSolrCloudCLI {
           solrCloudClient = clientBuilder.build();
           solrCloudClient.deleteZnode();
           break;
+        case DUMP_COLLECTIONS_DATA_COMMAND:
+          solrCloudClient = clientBuilder
+            .withSolrCloudClient().build();
+          solrCloudClient.outputCollectionData();
+          break;
         default:
           throw new AmbariSolrCloudClientException(String.format("Not found command: '%s'", command));
       }
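
For illustration, here is a minimal sketch (not part of the patch) of how the new dump-collections command could be invoked from a script, following the CMD_LINE_SYNTAX example above; the ZooKeeper connect string, znode and output path are placeholder values, and the solrCloudCli.sh location mirrors INFRA_SOLR_CLIENT_BASE_PATH used in migrationHelper.py below:

from subprocess import PIPE, Popen

# Hypothetical helper, not part of the patch: runs the new --dump-collections
# command and returns its stdout; raises on a non-zero exit code.
def dump_collections(zk_connect_string, znode, output_file,
                     cli='/usr/lib/ambari-infra-solr-client/solrCloudCli.sh'):
  cmd = '{0} --dump-collections -z {1}{2} -o {3}'.format(
    cli, zk_connect_string, znode, output_file)
  process = Popen(cmd, stdout=PIPE, stderr=PIPE, shell=True)
  out, err = process.communicate()
  if process.returncode != 0:
    raise Exception("{0} command failed: {1}".format(cmd, str(err)))
  return out

# Example with placeholder values:
# dump_collections('host1:2181,host2:2181', '/ambari-solr', '/tmp/collection-data.json')
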
diff --git a/ambari-infra/ambari-infra-solr-client/src/main/java/org/apache/ambari/infra/solr/AmbariSolrCloudClient.java b/ambari-infra/ambari-infra-solr-client/src/main/java/org/apache/ambari/infra/solr/AmbariSolrCloudClient.java
index 2632fcc..5aecae0 100644
--- a/ambari-infra/ambari-infra-solr-client/src/main/java/org/apache/ambari/infra/solr/AmbariSolrCloudClient.java
+++ b/ambari-infra/ambari-infra-solr-client/src/main/java/org/apache/ambari/infra/solr/AmbariSolrCloudClient.java
@@ -24,6 +24,7 @@ import org.apache.ambari.infra.solr.commands.CreateShardCommand;
 import org.apache.ambari.infra.solr.commands.CreateSolrZnodeZkCommand;
 import org.apache.ambari.infra.solr.commands.DeleteZnodeZkCommand;
 import org.apache.ambari.infra.solr.commands.DownloadConfigZkCommand;
+import org.apache.ambari.infra.solr.commands.DumpCollectionsCommand;
 import org.apache.ambari.infra.solr.commands.EnableKerberosPluginSolrZkCommand;
 import org.apache.ambari.infra.solr.commands.GetShardsCommand;
 import org.apache.ambari.infra.solr.commands.GetSolrHostsCommand;
@@ -77,6 +78,7 @@ public class AmbariSolrCloudClient {
   private final String transferMode;
   private final String copySrc;
   private final String copyDest;
+  private final String output;
 
   public AmbariSolrCloudClient(AmbariSolrCloudClientBuilder builder) {
     this.zkConnectString = builder.zkConnectString;
@@ -103,6 +105,7 @@ public class AmbariSolrCloudClient {
     this.transferMode = builder.transferMode;
     this.copySrc = builder.copySrc;
     this.copyDest = builder.copyDest;
+    this.output = builder.output;
   }
 
   /**
@@ -129,6 +132,13 @@ public class AmbariSolrCloudClient {
     return getCollection();
   }
 
+  public String outputCollectionData() throws Exception {
+    List<String> collections = listCollections();
+    String result = new DumpCollectionsCommand(getRetryTimes(), getInterval(), collections).run(this);
+    LOG.info("Dump collections response: {}", result);
+    return result;
+  }
+
   /**
    * Set cluster property in clusterprops.json.
    */
@@ -382,4 +392,8 @@ public class AmbariSolrCloudClient {
   public String getCopyDest() {
     return copyDest;
   }
+
+  public String getOutput() {
+    return output;
+  }
 }
diff --git a/ambari-infra/ambari-infra-solr-client/src/main/java/org/apache/ambari/infra/solr/AmbariSolrCloudClientBuilder.java b/ambari-infra/ambari-infra-solr-client/src/main/java/org/apache/ambari/infra/solr/AmbariSolrCloudClientBuilder.java
index f33ca9e..87ad1be 100644
--- a/ambari-infra/ambari-infra-solr-client/src/main/java/org/apache/ambari/infra/solr/AmbariSolrCloudClientBuilder.java
+++ b/ambari-infra/ambari-infra-solr-client/src/main/java/org/apache/ambari/infra/solr/AmbariSolrCloudClientBuilder.java
@@ -57,6 +57,7 @@ public class AmbariSolrCloudClientBuilder {
   String transferMode;
   String copySrc;
   String copyDest;
+  String output;
 
   public AmbariSolrCloudClient build() {
     return new AmbariSolrCloudClient(this);
@@ -215,6 +216,11 @@ public class AmbariSolrCloudClientBuilder {
     return this;
   }
 
+  public AmbariSolrCloudClientBuilder withOutput(String output) {
+    this.output = output;
+    return this;
+  }
+
   public AmbariSolrCloudClientBuilder withSecurityJsonLocation(String securityJson) {
     this.securityJsonLocation = securityJson;
     return this;
diff --git a/ambari-infra/ambari-infra-solr-client/src/main/java/org/apache/ambari/infra/solr/commands/DumpCollectionsCommand.java b/ambari-infra/ambari-infra-solr-client/src/main/java/org/apache/ambari/infra/solr/commands/DumpCollectionsCommand.java
new file mode 100644
index 0000000..1167a51
--- /dev/null
+++ b/ambari-infra/ambari-infra-solr-client/src/main/java/org/apache/ambari/infra/solr/commands/DumpCollectionsCommand.java
@@ -0,0 +1,145 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.ambari.infra.solr.commands;
+
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.fasterxml.jackson.databind.ObjectWriter;
+import org.apache.ambari.infra.solr.AmbariSolrCloudClient;
+import org.apache.ambari.infra.solr.domain.json.SolrCollection;
+import org.apache.ambari.infra.solr.domain.json.SolrCoreData;
+import org.apache.ambari.infra.solr.domain.json.SolrShard;
+import org.apache.solr.client.solrj.impl.CloudSolrClient;
+import org.apache.solr.common.cloud.DocCollection;
+import org.apache.solr.common.cloud.Replica;
+import org.apache.solr.common.cloud.Slice;
+import org.apache.solr.common.cloud.SolrZkClient;
+import org.apache.solr.common.cloud.SolrZooKeeper;
+import org.apache.solr.common.cloud.ZkStateReader;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.File;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+public class DumpCollectionsCommand extends AbstractZookeeperRetryCommand<String> {
+
+  private static final Logger logger = LoggerFactory.getLogger(DumpCollectionsCommand.class);
+
+  private final List<String> collections;
+
+  public DumpCollectionsCommand(int maxRetries, int interval, List<String> collections) {
+    super(maxRetries, interval);
+    this.collections = collections;
+  }
+
+  @Override
+  protected String executeZkCommand(AmbariSolrCloudClient client, SolrZkClient zkClient, SolrZooKeeper solrZooKeeper) throws Exception {
+    if (!this.collections.isEmpty()) {
+      ObjectMapper objectMapper = new ObjectMapper();
+      Map<String, SolrCollection> collectionMap = new HashMap<>();
+      for (String collection : this.collections) {
+        SolrCollection solrCollection = new SolrCollection();
+        Collection<Slice> slices = getSlices(client.getSolrCloudClient(), collection);
+        Integer numShards = slices.size();
+        Map<String, SolrShard> solrShardMap = new HashMap<>();
+        Map<String, List<String>> leaderHostCoreMap = new HashMap<>();
+        Map<String, SolrCoreData> leaderCoreDataMap = new HashMap<>();
+        Map<String, List<String>> leaderShardCoreMap = new HashMap<>();
+        Map<String, String> leaderCoreHostMap = new HashMap<>();
+        for (Slice slice : slices) {
+          SolrShard solrShard = new SolrShard();
+          solrShard.setName(slice.getName());
+          solrShard.setState(slice.getState());
+          Collection<Replica> replicas = slice.getReplicas();
+          Map<String, Replica> replicaMap = new HashMap<>();
+          leaderShardCoreMap.put(slice.getName(), new ArrayList<>());
+          for (Replica replica : replicas) {
+            replicaMap.put(replica.getName(), replica);
+            Replica.State state = replica.getState();
+            if (Replica.State.ACTIVE.equals(state)
+              && replica.getProperties().get("leader") != null && "true".equals(replica.getProperties().get("leader"))) {
+              String coreName = replica.getCoreName();
+              String hostName = getHostFromNodeName(replica.getNodeName());
+              if (leaderHostCoreMap.containsKey(hostName)) {
+                List<String> coresList = leaderHostCoreMap.get(hostName);
+                coresList.add(coreName);
+              } else {
+                List<String> coreList = new ArrayList<>();
+                coreList.add(coreName);
+                leaderHostCoreMap.put(hostName, coreList);
+              }
+              Map<String, String> properties = new HashMap<>();
+              properties.put("name", coreName);
+              properties.put("coreNodeName", replica.getName());
+              properties.put("shard", slice.getName());
+              properties.put("collection", collection);
+              properties.put("numShards", numShards.toString());
+              properties.put("replicaType", replica.getType().name());
+              SolrCoreData solrCoreData = new SolrCoreData(replica.getName(), hostName, properties);
+              leaderCoreDataMap.put(coreName, solrCoreData);
+              leaderShardCoreMap.get(slice.getName()).add(coreName);
+              leaderCoreHostMap.put(coreName, hostName);
+            }
+          }
+          solrShard.setReplicas(replicaMap);
+          solrShardMap.put(slice.getName(), solrShard);
+        }
+        solrCollection.setShards(solrShardMap);
+        solrCollection.setLeaderHostCoreMap(leaderHostCoreMap);
+        solrCollection.setLeaderSolrCoreDataMap(leaderCoreDataMap);
+        solrCollection.setLeaderShardsMap(leaderShardCoreMap);
+        solrCollection.setLeaderCoreHostMap(leaderCoreHostMap);
+        solrCollection.setName(collection);
+        collectionMap.put(collection, solrCollection);
+      }
+      File file = new File(client.getOutput());
+      if (!file.exists()) {
+        file.createNewFile();
+      }
+      final ObjectWriter objectWriter = objectMapper
+        .writerWithDefaultPrettyPrinter();
+      objectWriter.writeValue(file, collectionMap);
+      return objectWriter.writeValueAsString(collectionMap);
+    }
+    return null;
+  }
+
+  private String getHostFromNodeName(String nodeName) {
+    String[] splitted = nodeName.split(":");
+    if (splitted.length > 0) {
+      return splitted[0];
+    } else {
+      if (nodeName.endsWith("_solr")) {
+        String[] splitted_ = nodeName.split("_");
+        return splitted_[0];
+      }
+      return nodeName;
+    }
+  }
+
+  private Collection<Slice> getSlices(CloudSolrClient solrClient, String collection) {
+    ZkStateReader reader = solrClient.getZkStateReader();
+    DocCollection docCollection = reader.getClusterState().getCollection(collection);
+    return docCollection.getSlices();
+  }
+}
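
The command above writes a pretty-printed JSON object keyed by collection name into the file passed with --output. Below is a minimal sketch (not part of the patch) of reading that file back on the Python side, in the spirit of get_collections_data() and fill_params_for_backup() added to migrationHelper.py further down; the file path is a placeholder:

import json

def load_collections_dump(path):
  with open(path) as f:
    return json.load(f)

collections_data = load_collections_dump('/tmp/collection-data.json')  # placeholder path
for name, data in collections_data.items():
  # Per SolrCollection below: leader host -> core names, and core -> leader host.
  host_cores = data.get('leaderHostCoreMap', {})
  core_hosts = data.get('leaderCoreHostMap', {})
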
diff --git a/ambari-infra/ambari-infra-solr-client/src/main/java/org/apache/ambari/infra/solr/domain/json/SolrCollection.java b/ambari-infra/ambari-infra-solr-client/src/main/java/org/apache/ambari/infra/solr/domain/json/SolrCollection.java
new file mode 100644
index 0000000..86751e5
--- /dev/null
+++ b/ambari-infra/ambari-infra-solr-client/src/main/java/org/apache/ambari/infra/solr/domain/json/SolrCollection.java
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.ambari.infra.solr.domain.json;
+
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+public class SolrCollection {
+  private String name;
+  private Map<String, SolrShard> shards = new HashMap<>();
+  private Map<String, List<String>> leaderHostCoreMap = new HashMap<>();
+  private Map<String, SolrCoreData> leaderSolrCoreDataMap = new HashMap<>();
+  private Map<String, List<String>> leaderShardsMap = new HashMap<>();
+  private Map<String, String> leaderCoreHostMap = new HashMap<>();
+
+  public Map<String, SolrShard> getShards() {
+    return shards;
+  }
+
+  public void setShards(Map<String, SolrShard> shards) {
+    this.shards = shards;
+  }
+
+  public String getName() {
+    return name;
+  }
+
+  public void setName(String name) {
+    this.name = name;
+  }
+
+  public Map<String, List<String>> getLeaderHostCoreMap() {
+    return leaderHostCoreMap;
+  }
+
+  public void setLeaderHostCoreMap(Map<String, List<String>> leaderHostCoreMap) {
+    this.leaderHostCoreMap = leaderHostCoreMap;
+  }
+
+  public Map<String, SolrCoreData> getLeaderSolrCoreDataMap() {
+    return leaderSolrCoreDataMap;
+  }
+
+  public void setLeaderSolrCoreDataMap(Map<String, SolrCoreData> leaderSolrCoreDataMap) {
+    this.leaderSolrCoreDataMap = leaderSolrCoreDataMap;
+  }
+
+  public Map<String, List<String>> getLeaderShardsMap() {
+    return leaderShardsMap;
+  }
+
+  public void setLeaderShardsMap(Map<String, List<String>> leaderShardsMap) {
+    this.leaderShardsMap = leaderShardsMap;
+  }
+
+  public Map<String, String> getLeaderCoreHostMap() {
+    return leaderCoreHostMap;
+  }
+
+  public void setLeaderCoreHostMap(Map<String, String> leaderCoreHostMap) {
+    this.leaderCoreHostMap = leaderCoreHostMap;
+  }
+}
diff --git a/ambari-infra/ambari-infra-solr-client/src/main/java/org/apache/ambari/infra/solr/domain/json/SolrCoreData.java b/ambari-infra/ambari-infra-solr-client/src/main/java/org/apache/ambari/infra/solr/domain/json/SolrCoreData.java
new file mode 100644
index 0000000..5724a51
--- /dev/null
+++ b/ambari-infra/ambari-infra-solr-client/src/main/java/org/apache/ambari/infra/solr/domain/json/SolrCoreData.java
@@ -0,0 +1,57 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.ambari.infra.solr.domain.json;
+
+import java.util.Map;
+
+public class SolrCoreData {
+  private String coreNodeName;
+  private String hostName;
+  private Map<String, String> properties;
+
+  public SolrCoreData(String coreNodeName, String hostName, Map<String, String> properties) {
+    this.coreNodeName = coreNodeName;
+    this.hostName = hostName;
+    this.properties = properties;
+  }
+
+  public String getCoreNodeName() {
+    return coreNodeName;
+  }
+
+  public void setCoreNodeName(String coreNodeName) {
+    this.coreNodeName = coreNodeName;
+  }
+
+  public String getHostName() {
+    return hostName;
+  }
+
+  public void setHostName(String hostName) {
+    this.hostName = hostName;
+  }
+
+  public Map<String, String> getProperties() {
+    return properties;
+  }
+
+  public void setProperties(Map<String, String> properties) {
+    this.properties = properties;
+  }
+}
diff --git a/ambari-infra/ambari-infra-solr-client/src/main/java/org/apache/ambari/infra/solr/domain/json/SolrShard.java b/ambari-infra/ambari-infra-solr-client/src/main/java/org/apache/ambari/infra/solr/domain/json/SolrShard.java
new file mode 100644
index 0000000..f121663
--- /dev/null
+++ b/ambari-infra/ambari-infra-solr-client/src/main/java/org/apache/ambari/infra/solr/domain/json/SolrShard.java
@@ -0,0 +1,55 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.ambari.infra.solr.domain.json;
+
+import org.apache.solr.common.cloud.Replica;
+import org.apache.solr.common.cloud.Slice.State;
+
+import java.util.Map;
+
+public class SolrShard {
+
+  private String name;
+  private State state;
+  private Map<String, Replica> replicas;
+
+  public Map<String, Replica> getReplicas() {
+    return replicas;
+  }
+
+  public void setReplicas(Map<String, Replica> replicas) {
+    this.replicas = replicas;
+  }
+
+  public String getName() {
+    return name;
+  }
+
+  public void setName(String name) {
+    this.name = name;
+  }
+
+  public State getState() {
+    return state;
+  }
+
+  public void setState(State state) {
+    this.state = state;
+  }
+}
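
Taken together, the three domain classes above define the per-collection entries of the dump. The following is a rough illustration (an assumption added for readability, not output copied from the patch) of one such entry, limited to the plain string/map fields; the 'replicas' map and shard 'state' hold Solr types whose exact JSON serialization is not shown here:

# Approximate shape of one dumped collection entry (illustrative values only).
example_entry = {
  "vertex_index": {
    "name": "vertex_index",
    "shards": {"shard0": {"name": "shard0", "state": "...", "replicas": {}}},
    "leaderHostCoreMap": {"host1.example.com": ["vertex_index_shard0_replica1"]},
    "leaderCoreHostMap": {"vertex_index_shard0_replica1": "host1.example.com"},
    "leaderShardsMap": {"shard0": ["vertex_index_shard0_replica1"]},
    "leaderSolrCoreDataMap": {
      "vertex_index_shard0_replica1": {
        "coreNodeName": "core_node1",
        "hostName": "host1.example.com",
        "properties": {
          "name": "vertex_index_shard0_replica1",
          "coreNodeName": "core_node1",
          "shard": "shard0",
          "collection": "vertex_index",
          "numShards": "1",
          "replicaType": "NRT"
        }
      }
    }
  }
}
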
diff --git a/ambari-infra/ambari-infra-solr-client/src/main/python/migrationConfigGenerator.py b/ambari-infra/ambari-infra-solr-client/src/main/python/migrationConfigGenerator.py
index 0dbba82..b8e45f9 100755
--- a/ambari-infra/ambari-infra-solr-client/src/main/python/migrationConfigGenerator.py
+++ b/ambari-infra/ambari-infra-solr-client/src/main/python/migrationConfigGenerator.py
@@ -301,6 +301,7 @@ def generate_ambari_solr_migration_ini_file(options, accessor, protocol):
   infra_solr_user = infra_solr_env_props['infra_solr_user'] if 'infra_solr_user' in infra_solr_env_props else 'infra-solr'
   infra_solr_kerberos_keytab = infra_solr_env_props['infra_solr_kerberos_keytab'] if 'infra_solr_kerberos_keytab' in infra_solr_env_props else '/etc/security/keytabs/ambari-infra-solr.service.keytab'
   infra_solr_kerberos_principal = infra_solr_user + "/" + host
+  infra_solr_port = infra_solr_env_props['infra_solr_port'] if 'infra_solr_port' in infra_solr_env_props else '8886'
 
   config.add_section('local')
   config.set('local', 'java_home', options.java_home)
@@ -315,10 +316,11 @@ def generate_ambari_solr_migration_ini_file(options, accessor, protocol):
 
   config.add_section('infra_solr')
   config.set('infra_solr', 'protocol', solr_protocol)
-  config.set('infra_solr', 'urls', solr_urls)
+  config.set('infra_solr', 'hosts', ','.join(solr_hosts))
   config.set('infra_solr', 'zk_connect_string', zk_connect_string)
   config.set('infra_solr', 'znode', solr_znode)
   config.set('infra_solr', 'user', infra_solr_user)
+  config.set('infra_solr', 'port', infra_solr_port)
   if security_enabled == 'true':
     config.set('infra_solr', 'keytab', infra_solr_kerberos_keytab)
     config.set('infra_solr', 'principal', infra_solr_kerberos_principal)
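
With the generated ini now carrying hosts, port and protocol instead of a pre-built urls list, consumers can assemble the Solr base URLs themselves, which is what the reworked get_solr_urls() in migrationHelper.py below does. A minimal sketch (not part of the patch) with a placeholder ini path:

try:
  import ConfigParser as configparser  # Python 2, as used by the scripts in this patch
except ImportError:
  import configparser

config = configparser.RawConfigParser()
config.read('/tmp/ambari_solr_migration.ini')  # placeholder path
protocol = config.get('infra_solr', 'protocol')
port = config.get('infra_solr', 'port')
hosts = config.get('infra_solr', 'hosts').split(',')
solr_urls = ['{0}://{1}:{2}/solr'.format(protocol, host, port) for host in hosts]
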
diff --git a/ambari-infra/ambari-infra-solr-client/src/main/python/migrationHelper.py b/ambari-infra/ambari-infra-solr-client/src/main/python/migrationHelper.py
index 29ae933..87a64ae 100755
--- a/ambari-infra/ambari-infra-solr-client/src/main/python/migrationHelper.py
+++ b/ambari-infra/ambari-infra-solr-client/src/main/python/migrationHelper.py
@@ -65,6 +65,7 @@ RELOAD_SOLR_COLLECTION_URL = '{0}/admin/collections?action=RELOAD&name={1}&wt=js
 INFRA_SOLR_CLIENT_BASE_PATH = '/usr/lib/ambari-infra-solr-client/'
 RANGER_NEW_SCHEMA = 'migrate/managed-schema'
 SOLR_CLOUD_CLI_SCRIPT = 'solrCloudCli.sh'
+COLLECTIONS_DATA_JSON_LOCATION = INFRA_SOLR_CLIENT_BASE_PATH + "migrate/data/{0}"
 
 logger = logging.getLogger()
 handler = logging.StreamHandler()
@@ -152,7 +153,7 @@ def create_solr_api_request_command(request_url, config, output=None):
   logger.debug("Solr API command: {0}".format(api_cmd))
   return api_cmd
 
-def create_infra_solr_client_command(options, config, command):
+def create_infra_solr_client_command(options, config, command, appendZnode=False):
   user='infra-solr'
   kerberos_enabled='false'
   infra_solr_cli_opts = ''
@@ -179,6 +180,13 @@ def create_infra_solr_client_command(options, config, command):
     raise Exception("'local' section or 'java_home' is missing (or empty) from the configuration")
   if not zkConnectString:
     raise Exception("'zk_connect_string' section or 'external_zk_connect_string' is missing (or empty) from the configuration")
+  if appendZnode:
+    if config.has_option('infra_solr', 'znode'):
+      znode_to_append=config.get('infra_solr', 'znode')
+      zkConnectString+="{0}".format(znode_to_append)
+    else:
+      raise Exception("'znode' option is required for infra_solr section")
+
   set_java_home_= 'JAVA_HOME={0}'.format(java_home)
   set_infra_solr_cli_opts = ' INFRA_SOLR_CLI_OPTS="{0}"'.format(infra_solr_cli_opts) if infra_solr_cli_opts != '' else ''
   solr_cli_cmd = '{0} {1}{2} /usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string {3} {4}'\
@@ -187,28 +195,8 @@ def create_infra_solr_client_command(options, config, command):
   return solr_cli_cmd
 
 def get_random_solr_url(solr_urls, options):
-  splitted_solr_urls = solr_urls.split(',')
-
-  if options.include_solr_hosts:
-    # keep only included ones, do not override any
-    include_solr_hosts_list = options.include_solr_hosts.split(',')
-    new_splitted_urls = []
-    for url in splitted_solr_urls:
-      if any(inc_solr_host in url for inc_solr_host in include_solr_hosts_list):
-        new_splitted_urls.append(url)
-    splitted_solr_urls = new_splitted_urls
-
-  if options.exclude_solr_hosts:
-    exclude_solr_hosts_list = options.exclude_solr_hosts.split(',')
-    urls_to_exclude = []
-    for url in splitted_solr_urls:
-      if any(exc_solr_host in url for exc_solr_host in exclude_solr_hosts_list):
-        urls_to_exclude.append(url)
-    for excluded_url in urls_to_exclude:
-      splitted_solr_urls.remove(excluded_url)
-
-  random_index = randrange(0, len(splitted_solr_urls))
-  result = splitted_solr_urls[random_index]
+  random_index = randrange(0, len(solr_urls))
+  result = solr_urls[random_index]
   logger.debug("Use {0} solr address for next request.".format(result))
   return result
 
@@ -326,6 +314,33 @@ def create_command_request(command, parameters, hosts, cluster, context, service
   request["Requests/resource_filters"] = resource_filters
   return request
 
+def fill_params_for_backup(params, collection):
+  collections_data = get_collections_data(COLLECTIONS_DATA_JSON_LOCATION.format("backup_collections.json"))
+  if collection in collections_data and 'leaderHostCoreMap' in collections_data[collection]:
+    params["solr_backup_host_cores_map"] = json.dumps(collections_data[collection]['leaderHostCoreMap'])
+  if collection in collections_data and 'leaderCoreHostMap' in collections_data[collection]:
+    params["solr_backup_core_host_map"] = json.dumps(collections_data[collection]['leaderCoreHostMap'])
+  return params
+
+def fill_params_for_restore(params, original_collection, collection, config_set):
+  backup_collections_data = get_collections_data(COLLECTIONS_DATA_JSON_LOCATION.format("backup_collections.json"))
+  if original_collection in backup_collections_data and 'leaderHostCoreMap' in backup_collections_data[original_collection]:
+    params["solr_backup_host_cores_map"] = json.dumps(backup_collections_data[original_collection]['leaderHostCoreMap'])
+  if original_collection in backup_collections_data and 'leaderCoreHostMap' in backup_collections_data[original_collection]:
+    params["solr_backup_core_host_map"] = json.dumps(backup_collections_data[original_collection]['leaderCoreHostMap'])
+
+  collections_data = get_collections_data(COLLECTIONS_DATA_JSON_LOCATION.format("restore_collections.json"))
+  if collection in collections_data and 'leaderHostCoreMap' in collections_data[collection]:
+    params["solr_restore_host_cores_map"] = json.dumps(collections_data[collection]['leaderHostCoreMap'])
+  if collection in collections_data and 'leaderCoreHostMap' in collections_data[collection]:
+    params["solr_restore_core_host_map"] = json.dumps(collections_data[collection]['leaderCoreHostMap'])
+  if collection in collections_data and 'leaderSolrCoreDataMap' in collections_data[collection]:
+    params["solr_restore_core_data"] = json.dumps(collections_data[collection]['leaderSolrCoreDataMap'])
+  if config_set:
+    params["solr_restore_config_set"] = config_set
+
+  return params
+
 def fill_parameters(options, config, collection, index_location, hdfs_path=None, shards=None):
   params = {}
   if collection:
@@ -396,7 +411,7 @@ def get_solr_hosts(options, accessor, cluster):
         component_hosts.remove(exclude_host)
   return component_hosts
 
-def restore(options, accessor, parser, config, collection, index_location, hdfs_path, shards):
+def restore(options, accessor, parser, config, original_collection, collection, config_set, index_location, hdfs_path, shards):
   """
   Send restore solr collection custom command request to ambari-server
   """
@@ -404,6 +419,7 @@ def restore(options, accessor, parser, config, collection, index_location, hdfs_
 
   component_hosts = get_solr_hosts(options, accessor, cluster)
   parameters = fill_parameters(options, config, collection, index_location, hdfs_path, shards)
+  parameters = fill_params_for_restore(parameters, original_collection, collection, config_set)
 
   cmd_request = create_command_request("RESTORE", parameters, component_hosts, cluster, 'Restore Solr Collection: ' + collection)
   return post_json(accessor, CLUSTERS_URL.format(cluster) + REQUESTS_API_URL, cmd_request)
@@ -429,6 +445,8 @@ def backup(options, accessor, parser, config, collection, index_location):
   component_hosts = get_solr_hosts(options, accessor, cluster)
   parameters = fill_parameters(options, config, collection, index_location)
 
+  parameters = fill_params_for_backup(parameters, collection)
+
   cmd_request = create_command_request("BACKUP", parameters, component_hosts, cluster, 'Backup Solr Collection: ' + collection)
   return post_json(accessor, CLUSTERS_URL.format(cluster) + REQUESTS_API_URL, cmd_request)
 
@@ -613,10 +631,41 @@ def filter_collections(options, collections):
   else:
     return collections
 
-def get_solr_urls(config):
-  solr_urls = None
-  if config.has_section('infra_solr') and config.has_option('infra_solr', 'urls'):
-    return config.get('infra_solr', 'urls')
+def get_solr_urls(options, config, collection, collections_json):
+  solr_urls = []
+  solr_hosts = None
+  solr_port = "8886"
+  solr_protocol = "http"
+  if config.has_section("infra_solr") and config.has_option("infra_solr", "port"):
+    solr_port = config.get('infra_solr', 'port')
+  if config.has_section("infra_solr") and config.has_option("infra_solr", "protocol"):
+    solr_protocol = config.get('infra_solr', 'protocol')
+  if config.has_section('infra_solr') and config.has_option('infra_solr', 'hosts'):
+    solr_hosts = config.get('infra_solr', 'hosts')
+
+  splitted_solr_hosts = solr_hosts.split(',')
+  if options.include_solr_hosts:
+    # keep only included ones, do not override any
+    include_solr_hosts_list = options.include_solr_hosts.split(',')
+    new_splitted_hosts = []
+    for host in splitted_solr_hosts:
+      if any(inc_solr_host in host for inc_solr_host in include_solr_hosts_list):
+        new_splitted_hosts.append(host)
+    splitted_solr_hosts = new_splitted_hosts
+
+  if options.exclude_solr_hosts:
+    exclude_solr_hosts_list = options.exclude_solr_hosts.split(',')
+    hosts_to_exclude = []
+    for host in splitted_solr_hosts:
+      if any(exc_solr_host in host for exc_solr_host in exclude_solr_hosts_list):
+        hosts_to_exclude.append(host)
+    for excluded_url in hosts_to_exclude:
+      splitted_solr_hosts.remove(excluded_url)
+
+  for solr_host in splitted_solr_hosts:
+    solr_addr = "{0}://{1}:{2}/solr".format(solr_protocol, solr_host, solr_port)
+    solr_urls.append(solr_addr)
+
   return solr_urls
 
 def is_atlas_available(config, service_filter):
@@ -646,20 +695,6 @@ def delete_collection(options, config, collection, solr_urls):
   else:
     raise Exception("DELETE collection ('{0}') failed. Response: {1}".format(collection, str(out)))
 
-def list_collections(options, config, solr_urls):
-  request = LIST_SOLR_COLLECTION_URL.format(get_random_solr_url(solr_urls, options))
-  logger.debug("Solr request: {0}".format(request))
-  list_collection_json_cmd=create_solr_api_request_command(request, config)
-  process = Popen(list_collection_json_cmd, stdout=PIPE, stderr=PIPE, shell=True)
-  out, err = process.communicate()
-  if process.returncode != 0:
-    raise Exception("{0} command failed: {1}".format(list_collection_json_cmd, str(err)))
-  response=json.loads(str(out))
-  if 'collections' in response:
-    return response['collections']
-  else:
-    raise Exception("LIST collections failed ({0}). Response: {1}".format(request, str(out)))
-
 def create_collection(options, config, solr_urls, collection, config_set, shards, replica, max_shards_per_node):
   request = CREATE_SOLR_COLLECTION_URL.format(get_random_solr_url(solr_urls, options), collection, config_set, shards, replica, max_shards_per_node)
   logger.debug("Solr request: {0}".format(request))
@@ -724,57 +759,87 @@ def copy_znode(options, config, copy_src, copy_dest, copy_from_local=False, copy
   sys.stdout.flush()
   logger.debug(str(out))
 
-def delete_logsearch_collections(options, config, solr_urls, collections):
+def list_collections(options, config, output_file):
+  solr_cli_command=create_infra_solr_client_command(options, config, '--dump-collections --output {0}'.format(output_file), appendZnode=True)
+  logger.debug("Solr cli command: {0}".format(solr_cli_command))
+  sys.stdout.write('Dumping collections data to {0} ... '.format(output_file))
+  sys.stdout.flush()
+  process = Popen(solr_cli_command, stdout=PIPE, stderr=PIPE, shell=True)
+  out, err = process.communicate()
+  if process.returncode != 0:
+    sys.stdout.write(colors.FAIL + 'FAILED\n' + colors.ENDC)
+    sys.stdout.flush()
+    raise Exception("{0} command failed: {1}".format(solr_cli_command, str(err)))
+  sys.stdout.write(colors.OKGREEN + 'DONE\n' + colors.ENDC)
+  sys.stdout.flush()
+  logger.debug(str(out))
+  collections_data = get_collections_data(output_file)
+  return collections_data.keys() if collections_data is not None else []
+
+def get_collections_data(output_file):
+  return read_json(output_file)
+
+def get_collection_data(collections_data, collection):
+  return collections_data[collection] if collection in collections_data else None
+
+def delete_logsearch_collections(options, config, collections_json_location, collections):
   service_logs_collection = config.get('logsearch_collections', 'hadoop_logs_collection_name')
   audit_logs_collection = config.get('logsearch_collections', 'audit_logs_collection_name')
   history_collection = config.get('logsearch_collections', 'history_collection_name')
   if service_logs_collection in collections:
+    solr_urls = get_solr_urls(options, config, service_logs_collection, collections_json_location)
     retry(delete_collection, options, config, service_logs_collection, solr_urls, context='[Delete {0} collection]'.format(service_logs_collection))
   else:
     print 'Collection {0} does not exist or filtered out. Skipping delete operation.'.format(service_logs_collection)
   if audit_logs_collection in collections:
+    solr_urls = get_solr_urls(options, config, audit_logs_collection, collections_json_location)
     retry(delete_collection, options, config, audit_logs_collection, solr_urls, context='[Delete {0} collection]'.format(audit_logs_collection))
   else:
     print 'Collection {0} does not exist or filtered out. Skipping delete operation.'.format(audit_logs_collection)
   if history_collection in collections:
+    solr_urls = get_solr_urls(options, config, history_collection, collections_json_location)
     retry(delete_collection, options, config, history_collection, solr_urls, context='[Delete {0} collection]'.format(history_collection))
   else:
     print 'Collection {0} does not exist or filtered out. Skipping delete operation.'.format(history_collection)
 
-def delete_atlas_collections(options, config, solr_urls, collections):
+def delete_atlas_collections(options, config, collections_json_location, collections):
   fulltext_collection = config.get('atlas_collections', 'fulltext_index_name')
   edge_index_collection = config.get('atlas_collections', 'edge_index_name')
   vertex_index_collection = config.get('atlas_collections', 'vertex_index_name')
   if fulltext_collection in collections:
+    solr_urls = get_solr_urls(options, config, fulltext_collection, collections_json_location)
     retry(delete_collection, options, config, fulltext_collection, solr_urls, context='[Delete {0} collection]'.format(fulltext_collection))
   else:
     print 'Collection {0} does not exist or filtered out. Skipping delete operation.'.format(fulltext_collection)
   if edge_index_collection in collections:
+    solr_urls = get_solr_urls(options, config, edge_index_collection, collections_json_location)
     retry(delete_collection, options, config, edge_index_collection, solr_urls, context='[Delete {0} collection]'.format(edge_index_collection))
   else:
     print 'Collection {0} does not exist or filtered out. Skipping delete operation.'.format(edge_index_collection)
   if vertex_index_collection in collections:
+    solr_urls = get_solr_urls(options, config, vertex_index_collection, collections_json_location)
     retry(delete_collection, options, config, vertex_index_collection, solr_urls, context='[Delete {0} collection]'.format(vertex_index_collection))
   else:
     print 'Collection {0} does not exist or filtered out. Skipping delete operation.'.format(vertex_index_collection)
 
-def delete_ranger_collection(options, config, solr_urls, collections):
+def delete_ranger_collection(options, config, collections_json_location, collections):
   ranger_collection_name = config.get('ranger_collection', 'ranger_collection_name')
   if ranger_collection_name in collections:
+    solr_urls = get_solr_urls(options, config, ranger_collection_name, collections_json_location)
     retry(delete_collection, options, config, ranger_collection_name, solr_urls, context='[Delete {0} collection]'.format(ranger_collection_name))
   else:
     print 'Collection {0} does not exist or filtered out. Skipping delete operation'.format(ranger_collection_name)
 
 def delete_collections(options, config, service_filter):
-  solr_urls = get_solr_urls(config)
-  collections=retry(list_collections, options, config, solr_urls, context="[List Solr Collections]")
+  collections_json_location = COLLECTIONS_DATA_JSON_LOCATION.format("delete_collections.json")
+  collections=list_collections(options, config, collections_json_location)
   collections=filter_collections(options, collections)
   if is_ranger_available(config, service_filter):
-    delete_ranger_collection(options, config, solr_urls, collections)
+    delete_ranger_collection(options, config, collections_json_location, collections)
   if is_atlas_available(config, service_filter):
-    delete_atlas_collections(options, config, solr_urls, collections)
+    delete_atlas_collections(options, config, collections_json_location, collections)
   if is_logsearch_available(config, service_filter):
-    delete_logsearch_collections(options, config, solr_urls, collections)
+    delete_logsearch_collections(options, config, collections_json_location, collections)
 
 def upgrade_ranger_schema(options, config, service_filter):
   solr_znode='/infra-solr'
@@ -928,11 +993,11 @@ def do_migrate_request(options, accessor, parser, config, collection, index_loca
     monitor_request(options, accessor, cluster, request_id, 'Migrate Solr collection index: ' + collection)
     print "Migrate index '{0}'... {1}DONE{2}".format(collection, colors.OKGREEN, colors.ENDC)
 
-def do_restore_request(options, accessor, parser, config, collection, index_location, shards, hdfs_path):
+def do_restore_request(options, accessor, parser, config, original_collection, collection, config_set, index_location, shards, hdfs_path):
   sys.stdout.write("Sending restore collection request ('{0}') to Ambari to process (backup location: '{1}')..."
                    .format(collection, index_location))
   sys.stdout.flush()
-  response = restore(options, accessor, parser, config, collection, index_location, hdfs_path, shards)
+  response = restore(options, accessor, parser, config, original_collection, collection, config_set, index_location, hdfs_path, shards)
   request_id = get_request_id(response)
   sys.stdout.write(colors.OKGREEN + 'DONE\n' + colors.ENDC)
   sys.stdout.flush()
@@ -976,8 +1041,7 @@ def get_atlas_index_location(collection, config, options):
   return atlas_index_location
 
 def backup_collections(options, accessor, parser, config, service_filter):
-  solr_urls = get_solr_urls(config)
-  collections=retry(list_collections, options, config, solr_urls, context="[List Solr Collections]")
+  collections=list_collections(options, config, COLLECTIONS_DATA_JSON_LOCATION.format("backup_collections.json"))
   collections=filter_collections(options, collections)
   if is_ranger_available(config, service_filter):
     collection_name = config.get('ranger_collection', 'ranger_collection_name')
@@ -1035,8 +1099,8 @@ def migrate_snapshots(options, accessor, parser, config, service_filter):
       print "Collection ('{0}') backup index has filtered out. Skipping migrate operation.".format(edge_index_collection)
 
 def create_backup_collections(options, accessor, parser, config, service_filter):
-  solr_urls = get_solr_urls(config)
-  collections=retry(list_collections, options, config, solr_urls, context="[List Solr Collections]")
+  collections_json_location = COLLECTIONS_DATA_JSON_LOCATION.format("before_restore_collections.json")
+  collections=list_collections(options, config, collections_json_location)
   replica_number = "1" # hard coded
   if is_ranger_available(config, service_filter):
     backup_ranger_collection = config.get('ranger_collection', 'backup_ranger_collection_name')
@@ -1044,6 +1108,7 @@ def create_backup_collections(options, accessor, parser, config, service_filter)
       if options.collection is not None and options.collection != backup_ranger_collection:
         print "Collection {0} has filtered out. Skipping create operation.".format(backup_ranger_collection)
       else:
+        solr_urls = get_solr_urls(options, config, backup_ranger_collection, collections_json_location)
         backup_ranger_config_set = config.get('ranger_collection', 'backup_ranger_config_set_name')
         backup_ranger_shards = config.get('ranger_collection', 'ranger_collection_shards')
         backup_ranger_max_shards = config.get('ranger_collection', 'ranger_collection_max_shards_per_node')
@@ -1058,6 +1123,7 @@ def create_backup_collections(options, accessor, parser, config, service_filter)
       if options.collection is not None and options.collection != backup_fulltext_index_name:
         print "Collection {0} has filtered out. Skipping create operation.".format(backup_fulltext_index_name)
       else:
+        solr_urls = get_solr_urls(options, config, backup_fulltext_index_name, collections_json_location)
         backup_fulltext_index_shards = config.get('atlas_collections', 'fulltext_index_shards')
         backup_fulltext_index_max_shards = config.get('atlas_collections', 'fulltext_index_max_shards_per_node')
         retry(create_collection, options, config, solr_urls, backup_fulltext_index_name, backup_atlas_config_set,
@@ -1069,6 +1135,7 @@ def create_backup_collections(options, accessor, parser, config, service_filter)
       if options.collection is not None and options.collection != backup_edge_index_name:
         print "Collection {0} has filtered out. Skipping create operation.".format(backup_edge_index_name)
       else:
+        solr_urls = get_solr_urls(options, config, backup_edge_index_name, collections_json_location)
         backup_edge_index_shards = config.get('atlas_collections', 'edge_index_shards')
         backup_edge_index_max_shards = config.get('atlas_collections', 'edge_index_max_shards_per_node')
         retry(create_collection, options, config, solr_urls, backup_edge_index_name, backup_atlas_config_set,
@@ -1080,6 +1147,7 @@ def create_backup_collections(options, accessor, parser, config, service_filter)
       if options.collection is not None and options.collection != backup_vertex_index_name:
         print "Collection {0} has filtered out. Skipping create operation.".format(backup_vertex_index_name)
       else:
+        solr_urls = get_solr_urls(options, config, backup_vertex_index_name, collections_json_location)
         backup_vertex_index_shards = config.get('atlas_collections', 'vertex_index_shards')
         backup_vertex_index_max_shards = config.get('atlas_collections', 'vertex_index_max_shards_per_node')
         retry(create_collection, options, config, solr_urls, backup_vertex_index_name, backup_atlas_config_set,
@@ -1088,13 +1156,13 @@ def create_backup_collections(options, accessor, parser, config, service_filter)
       print "Collection {0} has already exist. Skipping create operation.".format(backup_fulltext_index_name)
 
 def restore_collections(options, accessor, parser, config, service_filter):
-  solr_urls = get_solr_urls(config)
-  collections=retry(list_collections, options, config, solr_urls, context="[List Solr Collections]")
+  collections=list_collections(options, config, COLLECTIONS_DATA_JSON_LOCATION.format("restore_collections.json"))
   collections=filter_collections(options, collections)
   if 'RANGER' in service_filter and config.has_section('ranger_collection') and config.has_option('ranger_collection', 'enabled') \
     and config.get('ranger_collection', 'enabled') == 'true':
     collection_name = config.get('ranger_collection', 'ranger_collection_name')
     backup_ranger_collection = config.get('ranger_collection', 'backup_ranger_collection_name')
+    backup_ranger_config_set_name = config.get('ranger_collection', 'backup_ranger_config_set_name')
 
     hdfs_base_path = None
     if options.ranger_hdfs_base_path:
@@ -1106,7 +1174,7 @@ def restore_collections(options, accessor, parser, config, service_filter):
     if backup_ranger_collection in collections:
       backup_ranger_shards = config.get('ranger_collection', 'ranger_collection_shards')
       ranger_index_location=get_ranger_index_location(collection_name, config, options)
-      do_restore_request(options, accessor, parser, config, backup_ranger_collection, ranger_index_location, backup_ranger_shards, hdfs_base_path)
+      do_restore_request(options, accessor, parser, config, collection_name, backup_ranger_collection, backup_ranger_config_set_name, ranger_index_location, backup_ranger_shards, hdfs_base_path)
     else:
       print "Collection ('{0}') does not exist or filtered out. Skipping restore operation.".format(backup_ranger_collection)
 
@@ -1118,12 +1186,14 @@ def restore_collections(options, accessor, parser, config, service_filter):
       hdfs_base_path = options.hdfs_base_path
     elif config.has_option('atlas_collections', 'hdfs_base_path'):
       hdfs_base_path = config.get('atlas_collections', 'hdfs_base_path')
+    atlas_config_set = config.get('atlas_collections', 'config_set')
+
     fulltext_index_collection = config.get('atlas_collections', 'fulltext_index_name')
     backup_fulltext_index_name = config.get('atlas_collections', 'backup_fulltext_index_name')
     if backup_fulltext_index_name in collections:
       backup_fulltext_index_shards = config.get('atlas_collections', 'fulltext_index_shards')
       fulltext_index_location=get_atlas_index_location(fulltext_index_collection, config, options)
-      do_restore_request(options, accessor, parser, config, backup_fulltext_index_name, fulltext_index_location, backup_fulltext_index_shards, hdfs_base_path)
+      do_restore_request(options, accessor, parser, config, fulltext_index_collection, backup_fulltext_index_name, atlas_config_set, fulltext_index_location, backup_fulltext_index_shards, hdfs_base_path)
     else:
       print "Collection ('{0}') does not exist or filtered out. Skipping restore operation.".format(fulltext_index_collection)
 
@@ -1132,7 +1202,7 @@ def restore_collections(options, accessor, parser, config, service_filter):
     if backup_edge_index_name in collections:
       backup_edge_index_shards = config.get('atlas_collections', 'edge_index_shards')
       edge_index_location=get_atlas_index_location(edge_index_collection, config, options)
-      do_restore_request(options, accessor, parser, config, backup_edge_index_name, edge_index_location, backup_edge_index_shards, hdfs_base_path)
+      do_restore_request(options, accessor, parser, config, edge_index_collection, backup_edge_index_name, atlas_config_set, edge_index_location, backup_edge_index_shards, hdfs_base_path)
     else:
       print "Collection ('{0}') does not exist or filtered out. Skipping restore operation.".format(edge_index_collection)
 
@@ -1141,33 +1211,37 @@ def restore_collections(options, accessor, parser, config, service_filter):
     if backup_vertex_index_name in collections:
       backup_vertex_index_shards = config.get('atlas_collections', 'vertex_index_shards')
       vertex_index_location=get_atlas_index_location(vertex_index_collection, config, options)
-      do_restore_request(options, accessor, parser, config, backup_vertex_index_name, vertex_index_location, backup_vertex_index_shards, hdfs_base_path)
+      do_restore_request(options, accessor, parser, config, vertex_index_collection, backup_vertex_index_name, atlas_config_set, vertex_index_location, backup_vertex_index_shards, hdfs_base_path)
     else:
       print "Collection ('{0}') does not exist or filtered out. Skipping restore operation.".format(vertex_index_collection)
 
 def reload_collections(options, accessor, parser, config, service_filter):
-  solr_urls = get_solr_urls(config)
-  collections=retry(list_collections, options, config, solr_urls, context="[List Solr Collections]")
+  collections_json_location = COLLECTIONS_DATA_JSON_LOCATION.format("reload_collections.json")
+  collections=list_collections(options, config, collections_json_location)
   collections=filter_collections(options, collections)
   if is_ranger_available(config, service_filter):
     backup_ranger_collection = config.get('ranger_collection', 'backup_ranger_collection_name')
     if backup_ranger_collection in collections:
+      solr_urls = get_solr_urls(options, config, backup_ranger_collection, collections_json_location)
       retry(reload_collection, options, config, solr_urls, backup_ranger_collection, context="[Reload Solr Collections]")
     else:
       print "Collection ('{0}') does not exist or filtered out. Skipping reload operation.".format(backup_ranger_collection)
   if is_atlas_available(config, service_filter):
     backup_fulltext_index_name = config.get('atlas_collections', 'backup_fulltext_index_name')
     if backup_fulltext_index_name in collections:
+      solr_urls = get_solr_urls(options, config, backup_fulltext_index_name, collections_json_location)
       retry(reload_collection, options, config, solr_urls, backup_fulltext_index_name, context="[Reload Solr Collections]")
     else:
       print "Collection ('{0}') does not exist or filtered out. Skipping reload operation.".format(backup_fulltext_index_name)
     backup_edge_index_name = config.get('atlas_collections', 'backup_edge_index_name')
     if backup_edge_index_name in collections:
+      solr_urls = get_solr_urls(options, config, backup_edge_index_name, collections_json_location)
       retry(reload_collection, options, config, solr_urls, backup_edge_index_name, context="[Reload Solr Collections]")
     else:
       print "Collection ('{0}') does not exist or filtered out. Skipping reload operation.".format(backup_edge_index_name)
     backup_vertex_index_name = config.get('atlas_collections', 'backup_vertex_index_name')
     if backup_vertex_index_name in collections:
+      solr_urls = get_solr_urls(options, config, backup_vertex_index_name, collections_json_location)
       retry(reload_collection, options, config, solr_urls, backup_vertex_index_name, context="[Reload Solr Collections]")
     else:
       print "Collection ('{0}') does not exist or filtered out. Skipping reload operation.".format(backup_fulltext_index_name)
@@ -1192,8 +1266,7 @@ def rolling_restart_solr(options, accessor, parser, config):
   print "Rolling Restart Infra Solr Instances request sent. (check Ambari UI about the requests)"
 
 def update_state_jsons(options, accessor, parser, config, service_filter):
-  solr_urls = get_solr_urls(config)
-  collections=retry(list_collections, options, config, solr_urls, context="[List Solr Collections]")
+  collections=list_collections(options, config, COLLECTIONS_DATA_JSON_LOCATION.format("collections.json"))
   collections=filter_collections(options, collections)
   if is_ranger_available(config, service_filter):
     backup_ranger_collection = config.get('ranger_collection', 'backup_ranger_collection_name')
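
Note on the migrationHelper.py hunks above: the refactor replaces live collection listing with a pre-generated collections dump, so list_collections() now reads a JSON file (COLLECTIONS_DATA_JSON_LOCATION) and get_solr_urls() resolves URLs per collection from that same file. The following is only a minimal sketch of that lookup flow; the dump layout and the helper names here are illustrative assumptions, not the actual migration helper code.

    import json

    # Assumed dump layout (illustrative only), loosely mirroring the old clusterstate parsing:
    # {"<collection>": {"shards": {"shard1": {"replicas": {"core_node1":
    #     {"core": "...", "base_url": "http://host:8886/solr", "state": "active"}}}}}}
    def list_collections_from_dump(dump_path):
        with open(dump_path) as f:
            return list(json.load(f).keys())

    def solr_urls_for_collection(dump_path, collection_name):
        with open(dump_path) as f:
            dump = json.load(f)
        urls = set()
        for shard in dump.get(collection_name, {}).get("shards", {}).values():
            for replica in shard.get("replicas", {}).values():
                if replica.get("state") == "active":
                    urls.add(replica["base_url"])
        # Deduplicated, stable ordering keeps retries predictable
        return sorted(urls)

With per-collection resolution along these lines, reload_collection() only needs to be retried against hosts that actually serve the given collection, which is why get_solr_urls() now takes the collection name and the dump location as arguments.
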
diff --git a/ambari-server/src/main/resources/common-services/AMBARI_INFRA_SOLR/0.1.0/package/scripts/collection.py b/ambari-server/src/main/resources/common-services/AMBARI_INFRA_SOLR/0.1.0/package/scripts/collection.py
index 3d83e14..ca9c19a 100644
--- a/ambari-server/src/main/resources/common-services/AMBARI_INFRA_SOLR/0.1.0/package/scripts/collection.py
+++ b/ambari-server/src/main/resources/common-services/AMBARI_INFRA_SOLR/0.1.0/package/scripts/collection.py
@@ -39,11 +39,10 @@ def backup_collection(env):
             owner=params.infra_solr_user,
             group=params.user_group
             )
-  host_cores_data_map = command_commons.get_host_cores_for_collection()
 
   Logger.info(format("Backup Solr Collection {collection} to {index_location}"))
 
-  host_core_map = host_cores_data_map[command_commons.HOST_CORES]
+  host_core_map = command_commons.solr_backup_host_cores_map
 
   host_or_ip = params.hostname
   # IP resolve - for unsecure cluster
@@ -92,11 +91,14 @@ def restore_collection(env):
   if command_commons.solr_num_shards == 0:
     raise Exception(format("The 'solr_shards' command parameter is required to set."))
 
-  host_cores_backup_map = command_commons.read_backup_json()
-  host_cores_map = command_commons.get_host_cores_for_collection(backup=False)
+  if not command_commons.solr_restore_config_set:
+    raise Exception(format("The 'solr_restore_config_set' command parameter is required to set."))
 
-  original_core_host_pairs = command_commons.sort_core_host_pairs(host_cores_backup_map[command_commons.CORE_HOST])
-  new_core_host_pairs = command_commons.sort_core_host_pairs(host_cores_map[command_commons.CORE_HOST])
+  Logger.info("Original core / host map: " + str(command_commons.solr_backup_core_host_map))
+  Logger.info("New core / host map: " + str(command_commons.solr_restore_core_host_map))
+
+  original_core_host_pairs = command_commons.sort_core_host_pairs(command_commons.solr_backup_core_host_map)
+  new_core_host_pairs = command_commons.sort_core_host_pairs(command_commons.solr_restore_core_host_map)
 
   core_pairs = command_commons.create_core_pairs(original_core_host_pairs, new_core_host_pairs)
   Logger.info("Generated core pairs: " + str(core_pairs))
@@ -115,9 +117,9 @@ def restore_collection(env):
 
   hdfs_cores_on_host=[]
 
-  for core_data in core_pairs:
-    src_core = core_data['src_core']
-    target_core = core_data['target_core']
+  for core_pair in core_pairs:
+    src_core = core_pair['src_core']
+    target_core = core_pair['target_core']
 
     if src_core in command_commons.skip_cores:
       Logger.info(format("Core '{src_core}' (src) is filtered out."))
@@ -126,7 +128,7 @@ def restore_collection(env):
       Logger.info(format("Core '{target_core}' (target) is filtered out."))
       continue
 
-    core_data = host_cores_map[command_commons.CORE_DATA]
+    core_data = command_commons.solr_restore_core_data
     only_if_cmd = format("test -d {index_location}/snapshot.{src_core}")
     core_root_dir = format("{solr_datadir}/backup_{target_core}")
     core_root_without_backup_dir = format("{solr_datadir}/{target_core}")
@@ -152,17 +154,17 @@ def restore_collection(env):
                 only_if=only_if_cmd
                 )
 
-    core_details = core_data[target_core]
+    core_details = core_data[target_core]['properties']
     core_properties = {}
-    core_properties['numShards'] = command_commons.solr_num_shards
-    core_properties['collection.configName'] = "ranger_audits"
+    core_properties['numShards'] = core_details['numShards']
+    core_properties['collection.configName'] = "ranger_audits" # TODO
     core_properties['name'] = target_core
-    core_properties['replicaType'] = core_details['type']
+    core_properties['replicaType'] = core_details['replicaType']
     core_properties['collection'] = command_commons.collection
     if command_commons.solr_hdfs_path:
-      core_properties['coreNodeName'] = 'backup_' + core_details['node']
+      core_properties['coreNodeName'] = 'backup_' + core_details['coreNodeName']
     else:
-      core_properties['coreNodeName'] = core_details['node']
+      core_properties['coreNodeName'] = core_details['coreNodeName']
     core_properties['shard'] = core_details['shard']
     if command_commons.solr_hdfs_path:
       hdfs_solr_node_folder=command_commons.solr_hdfs_path + format("/backup_{collection}/") + core_details['coreNodeName']
@@ -211,10 +213,10 @@ def restore_collection(env):
   Execute(format("rm -rf {solr_datadir}/{collection}*"),
           user=params.infra_solr_user,
           logoutput=True)
-  for core_data in core_pairs:
-    src_core = core_data['src_core']
-    src_host = core_data['src_host']
-    target_core = core_data['target_core']
+  for core_pair in core_pairs:
+    src_core = core_pair['src_core']
+    src_host = core_pair['src_host']
+    target_core = core_pair['target_core']
 
     if src_core in command_commons.skip_cores:
       Logger.info(format("Core '{src_core}' (src) is filtered out."))
@@ -225,12 +227,12 @@ def restore_collection(env):
 
     if os.path.exists(format("{index_location}/snapshot.{src_core}")):
       data_to_save = {}
-      host_core_data=host_cores_map[command_commons.CORE_DATA]
-      core_details=host_core_data[target_core]
-      core_node=core_details['node']
+      host_core_data=command_commons.solr_restore_core_data
+      core_details=host_core_data[target_core]['properties']
+      core_node=core_details['coreNodeName']
       data_to_save['core']=target_core
       data_to_save['core_node']=core_node
-      data_to_save['old_host']=core_data['target_host']
+      data_to_save['old_host']=core_pair['target_host']
       data_to_save['new_host']=src_host
       if command_commons.solr_hdfs_path:
         data_to_save['new_core_node']="backup_" + core_node
@@ -250,10 +252,10 @@ def restore_collection(env):
       if target_core in hdfs_cores_on_host:
 
         Logger.info(format("Core data '{target_core}' is located on this host, processing..."))
-        host_core_data=host_cores_map[command_commons.CORE_DATA]
-        core_details=host_core_data[target_core]
+        host_core_data=command_commons.solr_restore_core_data
+        core_details=host_core_data[target_core]['properties']
 
-        core_node=core_details['node']
+        core_node=core_details['coreNodeName']
         collection_core_dir=command_commons.solr_hdfs_path + format("/{collection}/{core_node}")
         backup_collection_core_dir=command_commons.solr_hdfs_path + format("/backup_{collection}/{core_node}")
         command_commons.HdfsResource(collection_core_dir,
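
To make the core-property handling in the restore hunks above easier to follow, here is a small, self-contained sketch of how a target core's properties could be assembled from the per-core JSON that command_commons.solr_restore_core_data now carries. The sample input and the write_properties() helper are illustrative assumptions, not the script's actual code; note that the hunk above still hard-codes collection.configName to "ranger_audits" behind a TODO, while this sketch passes the config set explicitly (in line with the new solr_restore_config_set parameter).

    # Assumed shape of solr_restore_core_data[<core>]['properties'] (illustrative only)
    sample_core_details = {
        "numShards": 2,
        "replicaType": "NRT",
        "coreNodeName": "core_node1",
        "shard": "shard1",
    }

    def build_core_properties(target_core, collection, config_set, details, on_hdfs=False):
        props = {
            "numShards": details["numShards"],
            "collection.configName": config_set,
            "name": target_core,
            "replicaType": details["replicaType"],
            "collection": collection,
            "shard": details["shard"],
        }
        # HDFS-backed restores register the copied index under a "backup_" core node first
        prefix = "backup_" if on_hdfs else ""
        props["coreNodeName"] = prefix + details["coreNodeName"]
        return props

    def write_properties(path, props):
        # core.properties is a plain key=value file
        with open(path, "w") as f:
            for key, value in sorted(props.items()):
                f.write("%s=%s\n" % (key, value))

For example, build_core_properties("ranger_audits_shard1_replica1", "ranger_audits", "ranger_audits", sample_core_details, on_hdfs=True) would yield a property set with coreNodeName "backup_core_node1", matching the backup-then-rename flow in the hunks above.
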
diff --git a/ambari-server/src/main/resources/common-services/AMBARI_INFRA_SOLR/0.1.0/package/scripts/command_commons.py b/ambari-server/src/main/resources/common-services/AMBARI_INFRA_SOLR/0.1.0/package/scripts/command_commons.py
index 8051d6c..561a0c4 100644
--- a/ambari-server/src/main/resources/common-services/AMBARI_INFRA_SOLR/0.1.0/package/scripts/command_commons.py
+++ b/ambari-server/src/main/resources/common-services/AMBARI_INFRA_SOLR/0.1.0/package/scripts/command_commons.py
@@ -87,6 +87,13 @@ solr_num_shards = int(default("/commandParams/solr_shards", "0"))
 
 solr_hdfs_path=default("/commandParams/solr_hdfs_path", None)
 
+solr_backup_host_cores_map = json.loads(default("/commandParams/solr_backup_host_cores_map", "{}"))
+solr_backup_core_host_map = json.loads(default("/commandParams/solr_backup_core_host_map", "{}"))
+solr_restore_host_cores_map = json.loads(default("/commandParams/solr_restore_host_cores_map", "{}"))
+solr_restore_core_host_map = json.loads(default("/commandParams/solr_restore_core_host_map", "{}"))
+solr_restore_core_data = json.loads(default("/commandParams/solr_restore_core_data", "{}"))
+solr_restore_config_set = default("/commandParams/solr_restore_config_set", None)
+
 keytab = None
 principal = None
 if params.security_enabled:
@@ -139,15 +146,9 @@ if solr_hdfs_path:
 
 hostname_suffix = params.hostname.replace(".", "_")
 
-HOST_CORES='host-cores'
-CORE_HOST='core-host'
-HOST_SHARDS='host-shards'
-CORE_DATA='core-data'
-
 if shared_fs:
   index_location = format("{index_location}_{hostname_suffix}")
 
-
 def get_files_by_pattern(directory, pattern):
   for root, dirs, files in os.walk(directory):
     for basename in files:
@@ -266,97 +267,11 @@ def __get_domain_name(url):
   dm = spltAr[i].split('/')[0].split(':')[0].lower()
   return dm
 
-def __read_host_cores_from_clusterstate_json(json_zk_state_path, json_host_cores_path):
-  """
-  Fill (and write to file) a JSON object with core data from state.json (znode).
-  """
-  json_content={}
-  hosts_core_map={}
-  hosts_shard_map={}
-  core_host_map={}
-  core_data_map={}
-  with open(json_zk_state_path) as json_file:
-    json_data = json.load(json_file)
-    znode = json_data['znode']
-    data = json.loads(znode['data'])
-    collection_data = data[collection]
-    shards = collection_data['shards']
-
-    for shard in shards:
-      Logger.info(format("Found shard: {shard}"))
-      replicas = shards[shard]['replicas']
-      for replica in replicas:
-        core_data = replicas[replica]
-        core = core_data['core']
-        base_url = core_data['base_url']
-        state = core_data['state']
-        leader = core_data['leader'] if 'leader' in core_data else 'false'
-        domain = __get_domain_name(base_url)
-        if state == 'active' and leader == 'true':
-          if domain not in hosts_core_map:
-            hosts_core_map[domain]=[]
-          if domain not in hosts_shard_map:
-            hosts_shard_map[domain]=[]
-          if core not in core_data_map:
-            core_data_map[core]={}
-          hosts_core_map[domain].append(core)
-          hosts_shard_map[domain].append(shard)
-          core_host_map[core]=domain
-          core_data_map[core]['host']=domain
-          core_data_map[core]['node']=replica
-          if 'type' in core_data:
-            core_data_map[core]['type']=core_data['type']
-          else:
-            core_data_map[core]['type']='NRT'
-          core_data_map[core]['shard']=shard
-          Logger.info(format("Found leader/active replica: {replica} (core '{core}') in {shard} on {domain}"))
-        else:
-          Logger.info(format("Found non-leader/active replica: {replica} (core '{core}') in {shard} on {domain}"))
-  json_content[HOST_CORES]=hosts_core_map
-  json_content[CORE_HOST]=core_host_map
-  json_content[HOST_SHARDS]=hosts_shard_map
-  json_content[CORE_DATA]=core_data_map
-  with open(json_host_cores_path, 'w') as outfile:
-    json.dump(json_content, outfile)
-  return json_content
-
 def write_core_file(core, core_data):
   core_json_location = format("{index_location}/{core}.json")
   with open(core_json_location, 'w') as outfile:
     json.dump(core_data, outfile)
 
-def __read_host_cores_from_file(json_host_cores_path):
-  """
-  Read host cores from file, can be useful if you do not want to regenerate host core data (with that you can generate your own host core pairs for restore)
-  """
-  with open(json_host_cores_path) as json_file:
-    host_cores_json_data = json.load(json_file)
-    return host_cores_json_data
-
-
-def get_host_cores_for_collection(backup=True):
-  """
-  Get core details to an object and write them to a file as well. Backup data will be used during restore.
-  :param backup: if enabled, save file into backup_host_cores.json, otherwise use restore_host_cores.json
-  :return: detailed json about the cores
-  """
-  request_path = 'admin/zookeeper?wt=json&detail=true&path=%2Fclusterstate.json&view=graph'
-  json_folder = format("{index_location}")
-  json_zk_state_path = format("{json_folder}/zk_state.json")
-  if backup:
-    json_host_cores_path = format("{json_folder}/backup_host_cores.json")
-  else:
-    json_host_cores_path = format("{json_folder}/restore_host_cores.json")
-  api_request = create_solr_api_request_command(request_path, output=json_zk_state_path)
-  Execute(api_request, user=params.infra_solr_user)
-  return __read_host_cores_from_file(json_host_cores_path) if skip_generate_restore_host_cores \
-    else __read_host_cores_from_clusterstate_json(json_zk_state_path, json_host_cores_path)
-
-def read_backup_json():
-  with open(format("{index_location}/backup_host_cores.json")) as json_file:
-    json_data = json.load(json_file)
-    return json_data
-
 def create_core_pairs(original_cores, new_cores):
   """
   Create core pairs from the original and new cores (backups -> restored ones), using alphabetical order
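
The command_commons.py changes above move the backup/restore topology into JSON-valued /commandParams entries (solr_backup_core_host_map, solr_restore_core_host_map, and so on) instead of re-reading clusterstate.json on the agent. As a reading aid for the pairing step they feed, here is a minimal sketch of alphabetical core pairing, assuming the maps are simple core-to-host dictionaries; it is not the actual command_commons implementation.

    # Assumed inputs: {"<core>": "<host>"} maps decoded from the JSON command parameters
    def sort_core_host_pairs(core_host_map):
        # Alphabetical order by core name keeps the backup and restore sides aligned
        return sorted(core_host_map.items())

    def pair_cores(original_pairs, new_pairs):
        pairs = []
        for (src_core, src_host), (target_core, target_host) in zip(original_pairs, new_pairs):
            pairs.append({
                "src_core": src_core,
                "src_host": src_host,
                "target_core": target_core,
                "target_host": target_host,
            })
        return pairs

Each generated pair then drives the per-core restore shown earlier, with skip_cores filtering applied on both the source and the target side.
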


[ambari] 02/02: AMBARI-23945. Infra Solr migration - Update asciinema links

Posted by ol...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

oleewere pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/ambari.git

commit 16a19c9a16bc25917479533fbc6e222c7f4be1bd
Author: Oliver Szabo <ol...@gmail.com>
AuthorDate: Sat Jun 16 02:24:34 2018 +0200

    AMBARI-23945. Infra Solr migration - Update asciinema links
---
 ambari-infra/ambari-infra-solr-client/README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/ambari-infra/ambari-infra-solr-client/README.md b/ambari-infra/ambari-infra-solr-client/README.md
index 901573a..a75fb74 100644
--- a/ambari-infra/ambari-infra-solr-client/README.md
+++ b/ambari-infra/ambari-infra-solr-client/README.md
@@ -158,7 +158,7 @@ These tasks can be done with 1 [migrationHelper.py](#solr-migration-helper-scrip
 
 If the script finished successfully and everything looks green on the Ambari UI as well, you can go ahead with [Infra Solr package upgrade](#iii.-upgrade-infra-solr-packages). Otherwise (or if you prefer not to use the single command above), you have the option to run the tasks step by step (or manually as well). Those tasks are described in the next sections.
 
-[![asciicast](https://asciinema.org/a/187124.png)](https://asciinema.org/a/187124?speed=2)
+[![asciicast](https://asciinema.org/a/187421.png)](https://asciinema.org/a/187421?speed=2)
 
 #### <a id="ii/1.-backup-ranger-collection">III/1. Backup Ranger collection</a>
 
@@ -434,7 +434,7 @@ The collection creation and restore part can be done with 1 command:
 
 If the script finished successfully and everything looks green on the Ambari UI as well, you can go ahead with [Restart Solr Instances](#viii.-restart-infra-solr-instances). Otherwise (or if you prefer not to use the single command above), you have the option to run the tasks step by step (or manually as well). Those tasks are described in the next sections.
 
-[![asciicast](https://asciinema.org/a/187126.png)](https://asciinema.org/a/187126?speed=2)
+[![asciicast](https://asciinema.org/a/187423.png)](https://asciinema.org/a/187423?speed=2)
 
 #### <a id="vi/1.-restore-old-ranger-collection">VII/1. Restore Old Ranger collection</a>
 
