Posted to commits@lucene.apache.org by ab...@apache.org on 2018/04/23 17:35:05 UTC

[01/40] lucene-solr:jira/solr-11833: LUCENE-8253: Mute test while a fix is worked on

Repository: lucene-solr
Updated Branches:
  refs/heads/jira/solr-11833 14824ca38 -> 880ce3f90


LUCENE-8253: Mute test while a fix is worked on


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/f7f12a51
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/f7f12a51
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/f7f12a51

Branch: refs/heads/jira/solr-11833
Commit: f7f12a51f313bf406f0fa3d48e74864268338c6d
Parents: 4ee92c2
Author: Alan Woodward <ro...@apache.org>
Authored: Tue Apr 17 11:58:59 2018 +0100
Committer: Alan Woodward <ro...@apache.org>
Committed: Tue Apr 17 11:58:59 2018 +0100

----------------------------------------------------------------------
 .../apache/solr/handler/admin/SegmentsInfoRequestHandlerTest.java  | 2 ++
 1 file changed, 2 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/f7f12a51/solr/core/src/test/org/apache/solr/handler/admin/SegmentsInfoRequestHandlerTest.java
----------------------------------------------------------------------
diff --git a/solr/core/src/test/org/apache/solr/handler/admin/SegmentsInfoRequestHandlerTest.java b/solr/core/src/test/org/apache/solr/handler/admin/SegmentsInfoRequestHandlerTest.java
index 3173c12..c501f5f 100644
--- a/solr/core/src/test/org/apache/solr/handler/admin/SegmentsInfoRequestHandlerTest.java
+++ b/solr/core/src/test/org/apache/solr/handler/admin/SegmentsInfoRequestHandlerTest.java
@@ -16,6 +16,7 @@
  */
 package org.apache.solr.handler.admin;
 
+import org.apache.lucene.util.LuceneTestCase;
 import org.apache.lucene.util.Version;
 import org.apache.solr.index.LogDocMergePolicyFactory;
 import org.apache.solr.SolrTestCaseJ4;
@@ -26,6 +27,7 @@ import org.junit.Test;
 /**
  * Tests for SegmentsInfoRequestHandler. Plugin entry, returning data of created segment.
  */
+@LuceneTestCase.AwaitsFix(bugUrl = "https://issues.apache.org/jira/browse/LUCENE-8253")
 public class SegmentsInfoRequestHandlerTest extends SolrTestCaseJ4 {
   private static final int DOC_COUNT = 5;
   


[10/40] lucene-solr:jira/solr-11833: Explicitly call out the fact that schema api modification request bodies are in JSON format.

Posted by ab...@apache.org.
Explicitly call out the fact that schema api modification request bodies are in JSON format.


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/79ed3bdf
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/79ed3bdf
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/79ed3bdf

Branch: refs/heads/jira/solr-11833
Commit: 79ed3bdf5349ae987cc7b5debd8855daa663d2d7
Parents: dd39128
Author: Steve Rowe <sa...@apache.org>
Authored: Wed Apr 18 14:35:01 2018 -0400
Committer: Steve Rowe <sa...@apache.org>
Committed: Wed Apr 18 14:58:54 2018 -0400

----------------------------------------------------------------------
 solr/solr-ref-guide/src/schema-api.adoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/79ed3bdf/solr/solr-ref-guide/src/schema-api.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/schema-api.adoc b/solr/solr-ref-guide/src/schema-api.adoc
index 7a61857..798cc44 100644
--- a/solr/solr-ref-guide/src/schema-api.adoc
+++ b/solr/solr-ref-guide/src/schema-api.adoc
@@ -71,7 +71,7 @@ bin/solr -e cloud -noprompt
 
 == Modify the Schema
 
-To add, remove or replace fields, dynamic field rules, copy field rules, or new field types, you can send a POST request to the `/collection/schema/` endpoint with a sequence of commands to perform the requested actions. The following commands are supported:
+To add, remove or replace fields, dynamic field rules, copy field rules, or new field types, you can send a POST request to the `/collection/schema/` endpoint with a sequence of commands in JSON format to perform the requested actions. The following commands are supported:
 
 * `add-field`: add a new field with parameters you provide.
 * `delete-field`: delete a field.
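
For illustration, a minimal sketch of issuing the same kind of JSON-bodied add-field command through SolrJ rather than raw HTTP; the collection name and field attributes are hypothetical, and the SchemaRequest classes are used as recalled from the SolrJ schema API, so verify them against your SolrJ version.

import java.util.LinkedHashMap;
import java.util.Map;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.schema.SchemaRequest;
import org.apache.solr.client.solrj.response.schema.SchemaResponse;

public class AddFieldSketch {
  public static void main(String[] args) throws Exception {
    // Hypothetical collection URL; point this at your own collection.
    try (SolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr/gettingstarted").build()) {
      Map<String, Object> field = new LinkedHashMap<>();
      field.put("name", "sell_by");   // hypothetical field name
      field.put("type", "pdate");
      field.put("stored", true);
      // Serializes to the JSON body {"add-field": {...}} and POSTs it to the
      // collection's /schema endpoint.
      SchemaResponse.UpdateResponse rsp = new SchemaRequest.AddField(field).process(client);
      System.out.println("status: " + rsp.getStatus());
    }
  }
}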


[18/40] lucene-solr:jira/solr-11833: SOLR-12028: BadApple and AwaitsFix annotations usage

Posted by ab...@apache.org.
SOLR-12028: BadApple and AwaitsFix annotations usage


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/5ef43e90
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/5ef43e90
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/5ef43e90

Branch: refs/heads/jira/solr-11833
Commit: 5ef43e900f8abeeb56cb9bba8ca1d050ec956f21
Parents: 46037dc
Author: Erick Erickson <er...@apache.org>
Authored: Thu Apr 19 13:14:12 2018 -0700
Committer: Erick Erickson <er...@apache.org>
Committed: Thu Apr 19 13:14:12 2018 -0700

----------------------------------------------------------------------
 .../test/org/apache/solr/cloud/autoscaling/NodeLostTriggerTest.java | 1 +
 .../apache/solr/cloud/autoscaling/sim/TestTriggerIntegration.java   | 1 +
 2 files changed, 2 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/5ef43e90/solr/core/src/test/org/apache/solr/cloud/autoscaling/NodeLostTriggerTest.java
----------------------------------------------------------------------
diff --git a/solr/core/src/test/org/apache/solr/cloud/autoscaling/NodeLostTriggerTest.java b/solr/core/src/test/org/apache/solr/cloud/autoscaling/NodeLostTriggerTest.java
index 00f7d42..be2b4a7 100644
--- a/solr/core/src/test/org/apache/solr/cloud/autoscaling/NodeLostTriggerTest.java
+++ b/solr/core/src/test/org/apache/solr/cloud/autoscaling/NodeLostTriggerTest.java
@@ -224,6 +224,7 @@ public class NodeLostTriggerTest extends SolrCloudTestCase {
   }
 
   @Test
+  @BadApple(bugUrl="https://issues.apache.org/jira/browse/SOLR-12028") // 16-Apr-2018
   public void testListenerAcceptance() throws Exception {
     CoreContainer container = cluster.getJettySolrRunners().get(0).getCoreContainer();
     Map<String, Object> props = createTriggerProps(0);

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/5ef43e90/solr/core/src/test/org/apache/solr/cloud/autoscaling/sim/TestTriggerIntegration.java
----------------------------------------------------------------------
diff --git a/solr/core/src/test/org/apache/solr/cloud/autoscaling/sim/TestTriggerIntegration.java b/solr/core/src/test/org/apache/solr/cloud/autoscaling/sim/TestTriggerIntegration.java
index 25f8e9e..41316ae 100644
--- a/solr/core/src/test/org/apache/solr/cloud/autoscaling/sim/TestTriggerIntegration.java
+++ b/solr/core/src/test/org/apache/solr/cloud/autoscaling/sim/TestTriggerIntegration.java
@@ -613,6 +613,7 @@ public class TestTriggerIntegration extends SimSolrCloudTestCase {
   public static long eventQueueActionWait = 5000;
 
   @Test
+  @BadApple(bugUrl="https://issues.apache.org/jira/browse/SOLR-12028") // 16-Apr-2018
   public void testEventQueue() throws Exception {
     waitForSeconds = 1;
     SolrClient solrClient = cluster.simGetSolrClient();
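
For readers unfamiliar with the two annotations appearing in this digest (LuceneTestCase.AwaitsFix in the first commit above, BadApple here): as generally understood for the Lucene/Solr test framework, AwaitsFix marks a test that is skipped until the linked issue is fixed, while BadApple marks a test known to fail intermittently so CI jobs can opt in or out of running it via a system property (the exact property name is not shown in these commits and should be checked in the build). A small hypothetical sketch of both, assuming the standard Lucene test framework:

import org.apache.lucene.util.LuceneTestCase;
import org.apache.lucene.util.LuceneTestCase.AwaitsFix;
import org.apache.lucene.util.LuceneTestCase.BadApple;
import org.junit.Test;

public class AnnotatedTestsSketch extends LuceneTestCase {

  // Skipped entirely until the referenced issue is resolved.
  @Test
  @AwaitsFix(bugUrl = "https://issues.apache.org/jira/browse/LUCENE-8253")
  public void testWaitingOnFix() throws Exception {
    // hypothetical test body
  }

  // Known to fail intermittently; the trailing date comment records when it was annotated.
  @Test
  @BadApple(bugUrl = "https://issues.apache.org/jira/browse/SOLR-12028") // 16-Apr-2018
  public void testFlaky() throws Exception {
    // hypothetical test body
  }
}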


[08/40] lucene-solr:jira/solr-11833: SOLR-12155: making TestUnInvertedFieldException more thread-safe

Posted by ab...@apache.org.
SOLR-12155: making TestUnInvertedFieldException more thread-safe


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/dbdedf3e
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/dbdedf3e
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/dbdedf3e

Branch: refs/heads/jira/solr-11833
Commit: dbdedf3e3f4b4f839348cf4759dc65092f7d5baf
Parents: 507c439
Author: Mikhail Khludnev <mk...@apache.org>
Authored: Wed Apr 18 14:57:49 2018 +0300
Committer: Mikhail Khludnev <mk...@apache.org>
Committed: Wed Apr 18 14:57:49 2018 +0300

----------------------------------------------------------------------
 solr/CHANGES.txt                                             | 3 ++-
 .../apache/solr/request/TestUnInvertedFieldException.java    | 8 +++++---
 2 files changed, 7 insertions(+), 4 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/dbdedf3e/solr/CHANGES.txt
----------------------------------------------------------------------
diff --git a/solr/CHANGES.txt b/solr/CHANGES.txt
index c1efc85..df7df15 100644
--- a/solr/CHANGES.txt
+++ b/solr/CHANGES.txt
@@ -144,7 +144,8 @@ Bug Fixes
 * SOLR-12207: Just rethrowing AssertionError caused by jdk bug in reflection with invocation details.
  (ab, Dawid Weiss, Mikhail Khludnev)
 
-* SOLR-12155: Exception from UnInvertedField constructor puts threads to infinite wait. (Mikhail Khludnev)
+* SOLR-12155: Exception from UnInvertedField constructor puts threads to infinite wait.
+ (Andrey Kudryavtsev, Mikhail Khludnev)
 
 * SOLR-12201: TestReplicationHandler.doTestIndexFetchOnMasterRestart(): handle unexpected replication failures
   (Steve Rowe)

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/dbdedf3e/solr/core/src/test/org/apache/solr/request/TestUnInvertedFieldException.java
----------------------------------------------------------------------
diff --git a/solr/core/src/test/org/apache/solr/request/TestUnInvertedFieldException.java b/solr/core/src/test/org/apache/solr/request/TestUnInvertedFieldException.java
index 56addf6..f927baf 100644
--- a/solr/core/src/test/org/apache/solr/request/TestUnInvertedFieldException.java
+++ b/solr/core/src/test/org/apache/solr/request/TestUnInvertedFieldException.java
@@ -34,6 +34,7 @@ import org.apache.solr.SolrTestCaseJ4;
 import org.apache.solr.common.SolrException;
 import org.apache.solr.common.SolrException.ErrorCode;
 import org.apache.solr.common.util.ExecutorUtil.MDCAwareThreadPoolExecutor;
+import org.apache.solr.search.SolrIndexSearcher;
 import org.apache.solr.search.facet.UnInvertedField;
 import org.apache.solr.util.TestInjection;
 import org.junit.After;
@@ -78,10 +79,11 @@ public class TestUnInvertedFieldException extends SolrTestCaseJ4 {
   @Test
   public void testConcurrentInit() throws Exception {
     final SolrQueryRequest req = req("*:*");
+    final SolrIndexSearcher searcher = req.getSearcher();
 
     List<Callable<UnInvertedField>> initCallables = new ArrayList<>();
     for (int i=0;i< TestUtil.nextInt(random(), 10, 30);i++) {
-      initCallables.add(()-> UnInvertedField.getUnInvertedField(proto.field(), req.getSearcher()));
+      initCallables.add(()-> UnInvertedField.getUnInvertedField(proto.field(), searcher));
     }
 
     final ThreadPoolExecutor pool  = new MDCAwareThreadPoolExecutor(3, 
@@ -101,7 +103,7 @@ public class TestUnInvertedFieldException extends SolrTestCaseJ4 {
             assertEquals(ErrorCode.SERVER_ERROR.code, solrException.code());
             assertSame(solrException.getCause().getClass(), OutOfMemoryError.class);
           }
-          assertNull(UnInvertedField.checkUnInvertedField(proto.field(), req.getSearcher()));
+          assertNull(UnInvertedField.checkUnInvertedField(proto.field(), searcher));
         }
         TestInjection.uifOutOfMemoryError = false;
       }
@@ -111,7 +113,7 @@ public class TestUnInvertedFieldException extends SolrTestCaseJ4 {
       for (Future<UnInvertedField> uifuture : futures) {
         final UnInvertedField uif = uifuture.get();
         assertNotNull(uif);
-        assertSame(uif, UnInvertedField.checkUnInvertedField(proto.field(), req.getSearcher()));
+        assertSame(uif, UnInvertedField.checkUnInvertedField(proto.field(), searcher));
         if (prev != null) {
           assertSame(prev, uif);
         }
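
The substance of the change above is that the searcher is obtained once, on the test thread, and captured by the callables, rather than each worker calling req.getSearcher() concurrently; presumably the request's lazy searcher lookup is not meant to be hit from many threads at once. A generic sketch of the same capture-once pattern, with purely hypothetical names unrelated to the Solr code base:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CaptureOncePattern {
  /** Stand-in for a lazily initialized, request-scoped resource such as a searcher. */
  static class Resource {
    String describe() { return "shared resource"; }
  }

  public static void main(String[] args) throws Exception {
    // Obtain the resource once, on the submitting thread...
    final Resource resource = new Resource();

    List<Callable<String>> tasks = new ArrayList<>();
    for (int i = 0; i < 10; i++) {
      // ...and let every task capture the same instance instead of re-fetching it.
      tasks.add(() -> resource.describe());
    }

    ExecutorService pool = Executors.newFixedThreadPool(3);
    try {
      for (Future<String> f : pool.invokeAll(tasks)) {
        System.out.println(f.get());
      }
    } finally {
      pool.shutdown();
    }
  }
}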


[04/40] lucene-solr:jira/solr-11833: SOLR-11924: Added CloudCollectionsListener to watch the list of collections in a cloud. This closes #313

Posted by ab...@apache.org.
SOLR-11924: Added CloudCollectionsListener to watch the list of collections in a cloud. This closes #313


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/ae0190b6
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/ae0190b6
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/ae0190b6

Branch: refs/heads/jira/solr-11833
Commit: ae0190b696396bc2fc4d239a22d568c8438b8c4f
Parents: d904112
Author: Houston Putman <hp...@bloomberg.net>
Authored: Tue Apr 17 13:43:53 2018 +0000
Committer: Dennis Gove <dp...@gmail.com>
Committed: Tue Apr 17 18:57:04 2018 -0400

----------------------------------------------------------------------
 .../common/cloud/CloudCollectionsListener.java  |  40 +++
 .../apache/solr/common/cloud/ZkStateReader.java |  52 +++-
 .../cloud/TestCloudCollectionsListeners.java    | 311 +++++++++++++++++++
 3 files changed, 402 insertions(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/ae0190b6/solr/solrj/src/java/org/apache/solr/common/cloud/CloudCollectionsListener.java
----------------------------------------------------------------------
diff --git a/solr/solrj/src/java/org/apache/solr/common/cloud/CloudCollectionsListener.java b/solr/solrj/src/java/org/apache/solr/common/cloud/CloudCollectionsListener.java
new file mode 100644
index 0000000..9920e59
--- /dev/null
+++ b/solr/solrj/src/java/org/apache/solr/common/cloud/CloudCollectionsListener.java
@@ -0,0 +1,40 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.common.cloud;
+
+import java.util.Set;
+
+/**
+ * Callback registered with {@link ZkStateReader#registerCloudCollectionsListener(CloudCollectionsListener)}
+ * and called whenever the cloud's set of collections changes.
+ */
+public interface CloudCollectionsListener {
+
+  /**
+   * Called when a collection is created, a collection is deleted or a watched collection's state changes.
+   *
+   * Note that, due to the way ZooKeeper watchers are implemented, a single call may be
+   * the result of several collection creations or deletions, or of none at all. Also, multiple calls to this
+   * method can be made with the same set of collections, i.e. without any new updates.
+   *
+   * @param oldCollections       the previous set of collections
+   * @param newCollections       the new set of collections
+   */
+  void onChange(Set<String> oldCollections, Set<String> newCollections);
+
+}
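
Because the interface above declares a single onChange method, it can be registered as a lambda. A brief usage sketch, assuming access to a ZkStateReader (the new test below obtains one via CloudSolrClient.getZkStateReader()); note that, per the ZkStateReader change in this commit, a newly registered listener is notified immediately with an empty old set and the current set of collections:

import java.util.Set;

import org.apache.solr.common.cloud.CloudCollectionsListener;
import org.apache.solr.common.cloud.ZkStateReader;

public class CollectionsWatcherSketch {

  public static void watch(ZkStateReader zkStateReader) {
    CloudCollectionsListener listener = (Set<String> oldCollections, Set<String> newCollections) ->
        System.out.println("collections changed: " + oldCollections + " -> " + newCollections);

    // Fires once right away with the current collections, then on every subsequent change.
    zkStateReader.registerCloudCollectionsListener(listener);

    // ... later, when notifications are no longer wanted:
    zkStateReader.removeCloudCollectionsListener(listener);
  }
}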

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/ae0190b6/solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java
----------------------------------------------------------------------
diff --git a/solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java b/solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java
index 7d5401d..a73e4c1 100644
--- a/solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java
+++ b/solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java
@@ -171,6 +171,8 @@ public class ZkStateReader implements Closeable {
 
   private ConcurrentHashMap<String, CollectionWatch<CollectionPropsWatcher>> collectionPropsWatches = new ConcurrentHashMap<>();
 
+  private Set<CloudCollectionsListener> cloudCollectionsListeners = ConcurrentHashMap.newKeySet();
+
   private final ExecutorService notifications = ExecutorUtil.newMDCAwareCachedThreadPool("watches");
   
   /** Used to submit notifications to Collection Properties watchers in order **/
@@ -545,6 +547,8 @@ public class ZkStateReader implements Closeable {
           clusterState.getCollectionStates());
     }
 
+    notifyCloudCollectionsListeners();
+
     for (String collection : changedCollections) {
       notifyStateWatchers(liveNodes, collection, clusterState.getCollectionOrNull(collection));
     }
@@ -650,6 +654,52 @@ public class ZkStateReader implements Closeable {
     }
   }
 
+  // We don't get a Stat or track versions on getChildren() calls, so force linearization.
+  private final Object refreshCollectionsSetLock = new Object();
+  // Ensures that only the latest getChildren fetch gets applied.
+  private final AtomicReference<Set<String>> lastFetchedCollectionSet = new AtomicReference<>();
+
+  /**
+   * Register a CloudCollectionsListener to be called when the set of collections within a cloud changes.
+   */
+  public void registerCloudCollectionsListener(CloudCollectionsListener cloudCollectionsListener) {
+    cloudCollectionsListeners.add(cloudCollectionsListener);
+    notifyNewCloudCollectionsListener(cloudCollectionsListener);
+  }
+
+  /**
+   * Remove a registered CloudCollectionsListener.
+   */
+  public void removeCloudCollectionsListener(CloudCollectionsListener cloudCollectionsListener) {
+    cloudCollectionsListeners.remove(cloudCollectionsListener);
+  }
+
+  private void notifyNewCloudCollectionsListener(CloudCollectionsListener listener) {
+    listener.onChange(Collections.emptySet(), lastFetchedCollectionSet.get());
+  }
+
+  private void notifyCloudCollectionsListeners() {
+    notifyCloudCollectionsListeners(false);
+  }
+
+  private void notifyCloudCollectionsListeners(boolean notifyIfSame) {
+    synchronized (refreshCollectionsSetLock) {
+      final Set<String> newCollections = getCurrentCollections();
+      final Set<String> oldCollections = lastFetchedCollectionSet.getAndSet(newCollections);
+      if (!newCollections.equals(oldCollections) || notifyIfSame) {
+        cloudCollectionsListeners.forEach(listener -> listener.onChange(oldCollections, newCollections));
+      }
+    }
+  }
+
+  private Set<String> getCurrentCollections() {
+    Set<String> collections = new HashSet<>();
+    collections.addAll(legacyCollectionStates.keySet());
+    collections.addAll(watchedCollectionStates.keySet());
+    collections.addAll(lazyCollectionStates.keySet());
+    return collections;
+  }
+
   private class LazyCollectionRef extends ClusterState.CollectionRef {
     private final String collName;
     private long lastUpdateTime;
@@ -1364,7 +1414,7 @@ public class ZkStateReader implements Closeable {
     if (watchSet.get()) {
       new StateWatcher(collection).refreshAndWatch();
     }
-    
+
     DocCollection state = clusterState.getCollectionOrNull(collection);
     if (stateWatcher.onStateChanged(liveNodes, state) == true) {
       removeCollectionStateWatcher(collection, stateWatcher);

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/ae0190b6/solr/solrj/src/test/org/apache/solr/common/cloud/TestCloudCollectionsListeners.java
----------------------------------------------------------------------
diff --git a/solr/solrj/src/test/org/apache/solr/common/cloud/TestCloudCollectionsListeners.java b/solr/solrj/src/test/org/apache/solr/common/cloud/TestCloudCollectionsListeners.java
new file mode 100644
index 0000000..6d08180
--- /dev/null
+++ b/solr/solrj/src/test/org/apache/solr/common/cloud/TestCloudCollectionsListeners.java
@@ -0,0 +1,311 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.common.cloud;
+
+import java.lang.invoke.MethodHandles;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.Callable;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Future;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+
+import org.apache.solr.client.solrj.impl.CloudSolrClient;
+import org.apache.solr.client.solrj.request.CollectionAdminRequest;
+import org.apache.solr.cloud.SolrCloudTestCase;
+import org.apache.solr.common.util.ExecutorUtil;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class TestCloudCollectionsListeners extends SolrCloudTestCase {
+
+  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
+
+  private static final int CLUSTER_SIZE = 4;
+
+  private static final ExecutorService executor = ExecutorUtil.newMDCAwareCachedThreadPool("backgroundWatchers");
+
+  private static final int MAX_WAIT_TIMEOUT = 30;
+
+  @BeforeClass
+  public static void startCluster() throws Exception {
+    configureCluster(CLUSTER_SIZE)
+        .addConfig("config", getFile("solrj/solr/collection1/conf").toPath())
+        .configure();
+  }
+
+  @AfterClass
+  public static void shutdownBackgroundExecutors() {
+    executor.shutdown();
+  }
+
+  @Before
+  public void prepareCluster() throws Exception {
+    int missingServers = CLUSTER_SIZE - cluster.getJettySolrRunners().size();
+    for (int i = 0; i < missingServers; i++) {
+      cluster.startJettySolrRunner();
+    }
+    cluster.waitForAllNodes(30);
+  }
+
+  @Test
+  public void testSimpleCloudCollectionsListener() throws Exception {
+
+    CloudSolrClient client = cluster.getSolrClient();
+
+    Map<Integer, Set<String>> oldResults = new HashMap<>();
+    Map<Integer, Set<String>> newResults = new HashMap<>();
+
+    CloudCollectionsListener watcher1 = (oldCollections, newCollections) -> {
+      log.info("New set of collections: {}, {}", oldCollections, newCollections);
+      oldResults.put(1, oldCollections);
+      newResults.put(1, newCollections);
+    };
+    client.getZkStateReader().registerCloudCollectionsListener(watcher1);
+    CloudCollectionsListener watcher2 = (oldCollections, newCollections) -> {
+      log.info("New set of collections: {}, {}", oldCollections, newCollections);
+      oldResults.put(2, oldCollections);
+      newResults.put(2, newCollections);
+    };
+    client.getZkStateReader().registerCloudCollectionsListener(watcher2);
+
+    assertFalse("CloudCollectionsListener not triggered after registration", oldResults.get(1).contains("testcollection1"));
+    assertFalse("CloudCollectionsListener not triggered after registration", oldResults.get(2).contains("testcollection1"));
+
+    assertFalse("CloudCollectionsListener not triggered after registration", newResults.get(1).contains("testcollection1"));
+    assertFalse("CloudCollectionsListener not triggered after registration", newResults.get(2).contains("testcollection1"));
+
+    CollectionAdminRequest.createCollection("testcollection1", "config", 4, 1)
+        .processAndWait(client, MAX_WAIT_TIMEOUT);
+    client.waitForState("testcollection1", MAX_WAIT_TIMEOUT, TimeUnit.SECONDS,
+        (n, c) -> DocCollection.isFullyActive(n, c, 4, 1));
+
+    assertFalse("CloudCollectionsListener has new collection in old set of collections", oldResults.get(1).contains("testcollection1"));
+    assertFalse("CloudCollectionsListener has new collection in old set of collections", oldResults.get(2).contains("testcollection1"));
+
+    assertTrue("CloudCollectionsListener doesn't have new collection in new set of collections", newResults.get(1).contains("testcollection1"));
+    assertTrue("CloudCollectionsListener doesn't have new collection in new set of collections", newResults.get(2).contains("testcollection1"));
+
+    client.getZkStateReader().removeCloudCollectionsListener(watcher1);
+
+    CollectionAdminRequest.createCollection("testcollection2", "config", 4, 1)
+        .processAndWait(client, MAX_WAIT_TIMEOUT);
+    client.waitForState("testcollection2", MAX_WAIT_TIMEOUT, TimeUnit.SECONDS,
+        (n, c) -> DocCollection.isFullyActive(n, c, 4, 1));
+
+
+    assertFalse("CloudCollectionsListener notified after removal", oldResults.get(1).contains("testcollection1"));
+    assertTrue("CloudCollectionsListener does not contain old collection in list of old collections", oldResults.get(2).contains("testcollection1"));
+    assertFalse("CloudCollectionsListener contains new collection in old collection set", oldResults.get(1).contains("testcollection2"));
+    assertFalse("CloudCollectionsListener contains new collection in old collection set", oldResults.get(2).contains("testcollection2"));
+
+    assertFalse("CloudCollectionsListener notified after removal", newResults.get(1).contains("testcollection2"));
+    assertTrue("CloudCollectionsListener does not contain new collection in list of new collections", newResults.get(2).contains("testcollection2"));
+
+    CollectionAdminRequest.deleteCollection("testcollection1").processAndWait(client, MAX_WAIT_TIMEOUT);
+
+    CollectionAdminRequest.deleteCollection("testcollection2").processAndWait(client, MAX_WAIT_TIMEOUT);
+
+    client.getZkStateReader().removeCloudCollectionsListener(watcher2);
+  }
+
+  @Test
+  public void testCollectionDeletion() throws Exception {
+
+    CloudSolrClient client = cluster.getSolrClient();
+
+    CollectionAdminRequest.createCollection("testcollection1", "config", 4, 1)
+        .processAndWait(client, MAX_WAIT_TIMEOUT);
+    client.waitForState("testcollection1", MAX_WAIT_TIMEOUT, TimeUnit.SECONDS,
+        (n, c) -> DocCollection.isFullyActive(n, c, 4, 1));
+
+    CollectionAdminRequest.createCollection("testcollection2", "config", 4, 1)
+        .processAndWait(client, MAX_WAIT_TIMEOUT);
+    client.waitForState("testcollection2", MAX_WAIT_TIMEOUT, TimeUnit.SECONDS,
+        (n, c) -> DocCollection.isFullyActive(n, c, 4, 1));
+
+    Map<Integer, Set<String>> oldResults = new HashMap<>();
+    Map<Integer, Set<String>> newResults = new HashMap<>();
+
+    CloudCollectionsListener watcher1 = (oldCollections, newCollections) -> {
+      log.info("New set of collections: {}, {}", oldCollections, newCollections);
+      oldResults.put(1, oldCollections);
+      newResults.put(1, newCollections);
+    };
+    client.getZkStateReader().registerCloudCollectionsListener(watcher1);
+    CloudCollectionsListener watcher2 = (oldCollections, newCollections) -> {
+      log.info("New set of collections: {}, {}", oldCollections, newCollections);
+      oldResults.put(2, oldCollections);
+      newResults.put(2, newCollections);
+    };
+    client.getZkStateReader().registerCloudCollectionsListener(watcher2);
+
+
+    assertEquals("CloudCollectionsListener has old collection with size > 0 after registration", 0, oldResults.get(1).size());
+    assertEquals("CloudCollectionsListener has old collection with size > 0 after registration", 0, oldResults.get(2).size());
+
+    assertTrue("CloudCollectionsListener not notified of all collections after registration", newResults.get(1).contains("testcollection1"));
+    assertTrue("CloudCollectionsListener not notified of all collections after registration", newResults.get(1).contains("testcollection2"));
+    assertTrue("CloudCollectionsListener not notified of all collections after registration", newResults.get(2).contains("testcollection1"));
+    assertTrue("CloudCollectionsListener not notified of all collections after registration", newResults.get(2).contains("testcollection2"));
+
+    CollectionAdminRequest.deleteCollection("testcollection1").processAndWait(client, MAX_WAIT_TIMEOUT);
+
+    assertEquals("CloudCollectionsListener missing old collection after collection removal", 2, oldResults.get(1).size());
+    assertEquals("CloudCollectionsListener missing old collection after collection removal", 2, oldResults.get(2).size());
+
+    assertFalse("CloudCollectionsListener notifies with collection that no longer exists", newResults.get(1).contains("testcollection1"));
+    assertTrue("CloudCollectionsListener doesn't notify of collection that exists", newResults.get(1).contains("testcollection2"));
+    assertFalse("CloudCollectionsListener notifies with collection that no longer exists", newResults.get(2).contains("testcollection1"));
+    assertTrue("CloudCollectionsListener doesn't notify of collection that exists", newResults.get(2).contains("testcollection2"));
+
+    client.getZkStateReader().removeCloudCollectionsListener(watcher2);
+
+    CollectionAdminRequest.deleteCollection("testcollection2").processAndWait(client, MAX_WAIT_TIMEOUT);
+
+    assertEquals("CloudCollectionsListener has incorrect number of old collections", 1, oldResults.get(1).size());
+    assertTrue("CloudCollectionsListener has incorrect old collection after collection removal", oldResults.get(1).contains("testcollection2"));
+    assertEquals("CloudCollectionsListener called after removal", 2, oldResults.get(2).size());
+
+    assertFalse("CloudCollectionsListener shows live collection after removal", newResults.get(1).contains("testcollection1"));
+    assertFalse("CloudCollectionsListener shows live collection after removal", newResults.get(1).contains("testcollection2"));
+    assertFalse("CloudCollectionsListener called after removal", newResults.get(2).contains("testcollection1"));
+    assertTrue("CloudCollectionsListener called after removal", newResults.get(2).contains("testcollection2"));
+
+    client.getZkStateReader().removeCloudCollectionsListener(watcher1);
+  }
+
+  @Test
+  public void testWatchesWorkForBothStateFormats() throws Exception {
+    CloudSolrClient client = cluster.getSolrClient();
+
+    Map<Integer, Set<String>> oldResults = new HashMap<>();
+    Map<Integer, Set<String>> newResults = new HashMap<>();
+
+    CloudCollectionsListener watcher1 = (oldCollections, newCollections) -> {
+      log.info("New set of collections: {}, {}", oldCollections, newCollections);
+      oldResults.put(1, oldCollections);
+      newResults.put(1, newCollections);
+    };
+    client.getZkStateReader().registerCloudCollectionsListener(watcher1);
+    CloudCollectionsListener watcher2 = (oldCollections, newCollections) -> {
+      log.info("New set of collections: {}, {}", oldCollections, newCollections);
+      oldResults.put(2, oldCollections);
+      newResults.put(2, newCollections);
+    };
+    client.getZkStateReader().registerCloudCollectionsListener(watcher2);
+
+    assertEquals("CloudCollectionsListener has old collections with size > 0 after registration", 0, oldResults.get(1).size());
+    assertEquals("CloudCollectionsListener has old collections with size > 0 after registration", 0, oldResults.get(2).size());
+    assertEquals("CloudCollectionsListener has new collections with size > 0 after registration", 0, newResults.get(1).size());
+    assertEquals("CloudCollectionsListener has new collections with size > 0 after registration", 0, newResults.get(2).size());
+
+    // Creating old state format collection
+
+    CollectionAdminRequest.createCollection("testcollection1", "config", 4, 1)
+        .setStateFormat(1)
+        .processAndWait(client, MAX_WAIT_TIMEOUT);
+    client.waitForState("testcollection1", MAX_WAIT_TIMEOUT, TimeUnit.SECONDS,
+        (n, c) -> DocCollection.isFullyActive(n, c, 4, 1));
+
+    assertEquals("CloudCollectionsListener has old collections with size > 0 after collection created with old stateFormat", 0, oldResults.get(1).size());
+    assertEquals("CloudCollectionsListener has old collections with size > 0 after collection created with old stateFormat", 0, oldResults.get(2).size());
+    assertEquals("CloudCollectionsListener not updated with created collection with old stateFormat", 1, newResults.get(1).size());
+    assertTrue("CloudCollectionsListener not updated with created collection with old stateFormat", newResults.get(1).contains("testcollection1"));
+    assertEquals("CloudCollectionsListener not updated with created collection with old stateFormat", 1, newResults.get(2).size());
+    assertTrue("CloudCollectionsListener not updated with created collection with old stateFormat", newResults.get(2).contains("testcollection1"));
+
+    // Creating new state format collection
+
+    CollectionAdminRequest.createCollection("testcollection2", "config", 4, 1)
+        .processAndWait(client, MAX_WAIT_TIMEOUT);
+    client.waitForState("testcollection2", MAX_WAIT_TIMEOUT, TimeUnit.SECONDS,
+        (n, c) -> DocCollection.isFullyActive(n, c, 4, 1));
+
+    assertEquals("CloudCollectionsListener has incorrect old collections after collection created with new stateFormat", 1, oldResults.get(1).size());
+    assertEquals("CloudCollectionsListener has incorrect old collections after collection created with new stateFormat", 1, oldResults.get(2).size());
+    assertEquals("CloudCollectionsListener not updated with created collection with new stateFormat", 2, newResults.get(1).size());
+    assertTrue("CloudCollectionsListener not updated with created collection with new stateFormat", newResults.get(1).contains("testcollection2"));
+    assertEquals("CloudCollectionsListener not updated with created collection with new stateFormat", 2, newResults.get(2).size());
+    assertTrue("CloudCollectionsListener not updated with created collection with new stateFormat", newResults.get(2).contains("testcollection2"));
+
+    client.getZkStateReader().removeCloudCollectionsListener(watcher2);
+
+    // Creating old state format collection
+
+    CollectionAdminRequest.createCollection("testcollection3", "config", 4, 1)
+        .setStateFormat(1)
+        .processAndWait(client, MAX_WAIT_TIMEOUT);
+    client.waitForState("testcollection1", MAX_WAIT_TIMEOUT, TimeUnit.SECONDS,
+        (n, c) -> DocCollection.isFullyActive(n, c, 4, 1));
+
+    assertEquals("CloudCollectionsListener has incorrect old collections after collection created with old stateFormat", 2, oldResults.get(1).size());
+    assertEquals("CloudCollectionsListener updated after removal", 1, oldResults.get(2).size());
+    assertEquals("CloudCollectionsListener not updated with created collection with old stateFormat", 3, newResults.get(1).size());
+    assertTrue("CloudCollectionsListener not updated with created collection with old stateFormat", newResults.get(1).contains("testcollection3"));
+    assertEquals("CloudCollectionsListener updated after removal", 2, newResults.get(2).size());
+    assertFalse("CloudCollectionsListener updated after removal", newResults.get(2).contains("testcollection3"));
+
+    // Adding back listener
+    client.getZkStateReader().registerCloudCollectionsListener(watcher2);
+
+    assertEquals("CloudCollectionsListener has old collections after registration", 0, oldResults.get(2).size());
+    assertEquals("CloudCollectionsListener doesn't have all collections after registration", 3, newResults.get(2).size());
+
+    // Deleting old state format collection
+
+    CollectionAdminRequest.deleteCollection("testcollection1").processAndWait(client, MAX_WAIT_TIMEOUT);
+
+    assertEquals("CloudCollectionsListener doesn't have all old collections after collection removal", 3, oldResults.get(1).size());
+    assertEquals("CloudCollectionsListener doesn't have all old collections after collection removal", 3, oldResults.get(2).size());
+    assertEquals("CloudCollectionsListener doesn't have correct new collections after collection removal", 2, newResults.get(1).size());
+    assertEquals("CloudCollectionsListener doesn't have correct new collections after collection removal", 2, newResults.get(2).size());
+    assertFalse("CloudCollectionsListener not updated with deleted collection with old stateFormat", newResults.get(1).contains("testcollection1"));
+    assertFalse("CloudCollectionsListener not updated with deleted collection with old stateFormat", newResults.get(2).contains("testcollection1"));
+
+    CollectionAdminRequest.deleteCollection("testcollection2").processAndWait(client, MAX_WAIT_TIMEOUT);
+
+    assertEquals("CloudCollectionsListener doesn't have all old collections after collection removal", 2, oldResults.get(1).size());
+    assertEquals("CloudCollectionsListener doesn't have all old collections after collection removal", 2, oldResults.get(2).size());
+    assertEquals("CloudCollectionsListener doesn't have correct new collections after collection removal", 1, newResults.get(1).size());
+    assertEquals("CloudCollectionsListener doesn't have correct new collections after collection removal", 1, newResults.get(2).size());
+    assertFalse("CloudCollectionsListener not updated with deleted collection with new stateFormat", newResults.get(1).contains("testcollection2"));
+    assertFalse("CloudCollectionsListener not updated with deleted collection with new stateFormat", newResults.get(2).contains("testcollection2"));
+
+    client.getZkStateReader().removeCloudCollectionsListener(watcher1);
+
+    CollectionAdminRequest.deleteCollection("testcollection3").processAndWait(client, MAX_WAIT_TIMEOUT);
+
+    assertEquals("CloudCollectionsListener updated after removal", 2, oldResults.get(1).size());
+    assertEquals("CloudCollectionsListener doesn't have all old collections after collection removal", 1, oldResults.get(2).size());
+    assertEquals("CloudCollectionsListener updated after removal", 1, newResults.get(1).size());
+    assertEquals("CloudCollectionsListener doesn't have correct new collections after collection removal", 0, newResults.get(2).size());
+    assertTrue("CloudCollectionsListener updated after removal", newResults.get(1).contains("testcollection3"));
+    assertFalse("CloudCollectionsListener not updated with deleted collection with old stateFormat", newResults.get(2).contains("testcollection3"));
+
+    client.getZkStateReader().removeCloudCollectionsListener(watcher2);
+  }
+
+}


[20/40] lucene-solr:jira/solr-11833: Merge branch 'master' of https://git-wip-us.apache.org/repos/asf/lucene-solr

Posted by ab...@apache.org.
Merge branch 'master' of https://git-wip-us.apache.org/repos/asf/lucene-solr


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/48e071f3
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/48e071f3
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/48e071f3

Branch: refs/heads/jira/solr-11833
Commit: 48e071f350c76cd8783839199ef2b1c372919ec8
Parents: 493bdec 5ef43e9
Author: Karl Wright <Da...@gmail.com>
Authored: Fri Apr 20 03:30:33 2018 -0400
Committer: Karl Wright <Da...@gmail.com>
Committed: Fri Apr 20 03:30:33 2018 -0400

----------------------------------------------------------------------
 solr/CHANGES.txt                                |   4 +
 .../cloud/autoscaling/NodeLostTriggerTest.java  |   1 +
 .../autoscaling/sim/TestTriggerIntegration.java |   1 +
 .../solr/handler/TestReplicationHandler.java    |  36 +-
 solr/solr-ref-guide/src/css/customstyles.css    |   2 +-
 ...tting-up-an-external-zookeeper-ensemble.adoc | 335 ++++++++++++++-----
 6 files changed, 276 insertions(+), 103 deletions(-)
----------------------------------------------------------------------



[11/40] lucene-solr:jira/solr-11833: Add Log and Run URPs to example OpenNLP NER URP chain

Posted by ab...@apache.org.
Add Log and Run URPs to example OpenNLP NER URP chain


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/29cbd031
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/29cbd031
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/29cbd031

Branch: refs/heads/jira/solr-11833
Commit: 29cbd031c9431a060c7747a95f16d87a851b2d09
Parents: 79ed3bd
Author: Steve Rowe <sa...@apache.org>
Authored: Wed Apr 18 14:38:04 2018 -0400
Committer: Steve Rowe <sa...@apache.org>
Committed: Wed Apr 18 14:58:54 2018 -0400

----------------------------------------------------------------------
 .../OpenNLPExtractNamedEntitiesUpdateProcessorFactory.java         | 2 ++
 1 file changed, 2 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/29cbd031/solr/contrib/analysis-extras/src/java/org/apache/solr/update/processor/OpenNLPExtractNamedEntitiesUpdateProcessorFactory.java
----------------------------------------------------------------------
diff --git a/solr/contrib/analysis-extras/src/java/org/apache/solr/update/processor/OpenNLPExtractNamedEntitiesUpdateProcessorFactory.java b/solr/contrib/analysis-extras/src/java/org/apache/solr/update/processor/OpenNLPExtractNamedEntitiesUpdateProcessorFactory.java
index a3df416..aa6a97b 100644
--- a/solr/contrib/analysis-extras/src/java/org/apache/solr/update/processor/OpenNLPExtractNamedEntitiesUpdateProcessorFactory.java
+++ b/solr/contrib/analysis-extras/src/java/org/apache/solr/update/processor/OpenNLPExtractNamedEntitiesUpdateProcessorFactory.java
@@ -166,6 +166,8 @@ import org.slf4j.LoggerFactory;
  *     &lt;str name="source"&gt;summary&lt;/str&gt;
  *     &lt;str name="dest"&gt;summary_{EntityType}_s&lt;/str&gt;
  *   &lt;/processor&gt;
+ *   &lt;processor class="solr.LogUpdateProcessorFactory" /&gt;
+ *   &lt;processor class="solr.RunUpdateProcessorFactory" /&gt;
  * &lt;/updateRequestProcessorChain&gt;
  * </pre>
  *


[24/40] lucene-solr:jira/solr-11833: SOLR-12252: Fix jira issue in CHANGES.txt

Posted by ab...@apache.org.
SOLR-12252: Fix jira issue in CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/a4b335c9
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/a4b335c9
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/a4b335c9

Branch: refs/heads/jira/solr-11833
Commit: a4b335c942cb46a61cb4022c567a0977b5cdc229
Parents: 86b34fe
Author: Shalin Shekhar Mangar <sh...@apache.org>
Authored: Fri Apr 20 20:10:40 2018 +0530
Committer: Shalin Shekhar Mangar <sh...@apache.org>
Committed: Fri Apr 20 20:10:40 2018 +0530

----------------------------------------------------------------------
 solr/CHANGES.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/a4b335c9/solr/CHANGES.txt
----------------------------------------------------------------------
diff --git a/solr/CHANGES.txt b/solr/CHANGES.txt
index f5808ec..ed36d79 100644
--- a/solr/CHANGES.txt
+++ b/solr/CHANGES.txt
@@ -239,7 +239,7 @@ Other Changes
 
 * SOLR-12142: EmbeddedSolrServer should use req.getContentWriter (noble)
 
-* SOLR-11252: Fix minor compiler and intellij warnings in autoscaling policy framework. (shalin)
+* SOLR-12252: Fix minor compiler and intellij warnings in autoscaling policy framework. (shalin)
 
 ==================  7.3.1 ==================
 


[27/40] lucene-solr:jira/solr-11833: SOLR-11646: more v2 examples; redesign Implicit Handler page to add v2 api paths where they exist

Posted by ab...@apache.org.
SOLR-11646: more v2 examples; redesign Implicit Handler page to add v2 api paths where they exist


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/b99e07c7
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/b99e07c7
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/b99e07c7

Branch: refs/heads/jira/solr-11833
Commit: b99e07c7531f1fe61e9d33dfa17b33600f12a00c
Parents: d08e62d
Author: Cassandra Targett <ct...@apache.org>
Authored: Fri Apr 20 14:27:34 2018 -0500
Committer: Cassandra Targett <ct...@apache.org>
Committed: Fri Apr 20 14:28:31 2018 -0500

----------------------------------------------------------------------
 solr/solr-ref-guide/src/about-this-guide.adoc   |   2 +
 solr/solr-ref-guide/src/blob-store-api.adoc     |  96 ++++-
 solr/solr-ref-guide/src/config-api.adoc         |   6 +-
 solr/solr-ref-guide/src/config-sets.adoc        |  36 +-
 solr/solr-ref-guide/src/configsets-api.adoc     | 244 +++++++-----
 .../src/configuring-solrconfig-xml.adoc         |  42 ++-
 .../src/implicit-requesthandlers.adoc           | 374 ++++++++++++++++---
 .../src/requestdispatcher-in-solrconfig.adoc    |   2 +-
 8 files changed, 629 insertions(+), 173 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/b99e07c7/solr/solr-ref-guide/src/about-this-guide.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/about-this-guide.adoc b/solr/solr-ref-guide/src/about-this-guide.adoc
index 3d7fc24..956d361 100644
--- a/solr/solr-ref-guide/src/about-this-guide.adoc
+++ b/solr/solr-ref-guide/src/about-this-guide.adoc
@@ -60,6 +60,8 @@ Throughout this Guide, we have added examples of both styles with sections label
 
 The section <<v2-api.adoc#v2-api,V2 API>> provides more information about how to work with the new API structure, including how to disable it if you choose to do so.
 
+All APIs return a response header that includes the status of the request and the time to process it. Some APIs will also include the parameters used for the request. Many of the examples in this Guide omit this header information, which you can do locally by adding the parameter `omitHeader=true` to any request.
+
 == Special Inline Notes
 
 Special notes are included throughout these pages. There are several types of notes:

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/b99e07c7/solr/solr-ref-guide/src/blob-store-api.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/blob-store-api.adoc b/solr/solr-ref-guide/src/blob-store-api.adoc
index dd92141..77cb2c4 100644
--- a/solr/solr-ref-guide/src/blob-store-api.adoc
+++ b/solr/solr-ref-guide/src/blob-store-api.adoc
@@ -34,7 +34,7 @@ The BlobHandler is automatically registered in the .system collection. The `solr
 
 If you do not use the `-shards` or `-replicationFactor` options, then defaults of numShards=1 and replicationFactor=3 (or maximum nodes in the cluster) will be used.
 
-You can create the `.system` collection with the <<collections-api.adoc#collections-api,Collections API>>, as in this example:
+You can create the `.system` collection with the <<collections-api.adoc#create,CREATE command>> of the Collections API, as in this example:
 
 [.dynamic-tabs]
 --
@@ -44,8 +44,10 @@ You can create the `.system` collection with the <<collections-api.adoc#collecti
 
 [source,bash]
 ----
-curl http://localhost:8983/solr/admin/collections?action=CREATE&name=.system&replicationFactor=2
+curl http://localhost:8983/solr/admin/collections?action=CREATE&name=.system&replicationFactor=2&numShards=2
 ----
+
+Note that this example will create the .system collection across 2 shards with a replication factor of 2; you may need to customize this for your Solr implementation.
 ====
 
 [example.tab-pane#v2create]
@@ -54,8 +56,10 @@ curl http://localhost:8983/solr/admin/collections?action=CREATE&name=.system&rep
 
 [source,bash]
 ----
-curl -X POST -H 'Content-type: application/json' -d '{"create":{"name":".system", "replicationFactor": 2}}' http://localhost:8983/api/collections
+curl -X POST -H 'Content-type: application/json' -d '{"create": {"name": ".system", "numShards": "2", "replicationFactor": "2"}}' http://localhost:8983/api/collections
 ----
+
+Note that this example will create the .system collection across 2 shards with a replication factor of 2; you may need to customize this for your Solr implementation.
 ====
 --
 
@@ -65,6 +69,12 @@ IMPORTANT: The `bin/solr` script cannot be used to create the `.system` collecti
 
 After the `.system` collection has been created, files can be uploaded to the blob store with a request similar to the following:
 
+[.dynamic-tabs]
+--
+[example.tab-pane#v1upload]
+====
+[.tab-label]*V1 API*
+
 [source,bash]
 ----
 curl -X POST -H 'Content-Type: application/octet-stream' --data-binary @{filename} http://localhost:8983/solr/.system/blob/{blobname}
@@ -76,14 +86,48 @@ For example, to upload a file named "test1.jar" as a blob named "test", you woul
 ----
 curl -X POST -H 'Content-Type: application/octet-stream' --data-binary @test1.jar http://localhost:8983/solr/.system/blob/test
 ----
+====
+
+[example.tab-pane#v2upload]
+====
+[.tab-label]*V2 API*
+
+[source,bash]
+----
+curl -X POST -H 'Content-Type: application/octet-stream' --data-binary @{filename} http://localhost:8983/api/collections/.system/blob/{blobname}
+----
+
+For example, to upload a file named "test1.jar" as a blob named "test", you would make a POST request like:
+
+[source,bash]
+----
+curl -X POST -H 'Content-Type: application/octet-stream' --data-binary @test1.jar http://localhost:8983/api/collections/.system/blob/test
+----
+====
+--
 
 A GET request will return the list of blobs and other details:
 
+[.dynamic-tabs]
+--
+[example.tab-pane#v1getblob]
+====
+[.tab-label]*V1 API*
+
+For all blobs:
+
 [source,bash]
 ----
 curl http://localhost:8983/solr/.system/blob?omitHeader=true
 ----
 
+For a single blob:
+
+[source,bash]
+----
+curl http://localhost:8983/solr/.system/blob/test?omitHeader=true
+----
+
 Output:
 
 [source,json]
@@ -100,19 +144,24 @@ Output:
   }
 }
 ----
+====
+
+[example.tab-pane#v2getblob]
+====
+[.tab-label]*V2 API*
 
-Details on individual blobs can be accessed with a request similar to:
+For all blobs:
 
 [source,bash]
 ----
-curl http://localhost:8983/solr/.system/blob/{blobname}
+curl http://localhost:8983/api/collections/.system/blob?omitHeader=true
 ----
 
-For example, this request will return only the blob named 'test':
+For a single blob:
 
 [source,bash]
 ----
-curl http://localhost:8983/solr/.system/blob/test?omitHeader=true
+curl http://localhost:8983/api/collections/.system/blob/test?omitHeader=true
 ----
 
 Output:
@@ -131,20 +180,49 @@ Output:
   }
 }
 ----
+====
+--
+
+The filestream response writer can retrieve a blob for download, as in:
 
-The filestream response writer can return a particular version of a blob for download, as in:
+[.dynamic-tabs]
+--
+[example.tab-pane#v1retrieveblob]
+====
+[.tab-label]*V1 API*
 
+For a specific version of a blob, include the version in the request:
 [source,bash]
 ----
 curl http://localhost:8983/solr/.system/blob/{blobname}/{version}?wt=filestream > {outputfilename}
 ----
 
-For the latest version of a blob, the \{version} can be omitted,
+For the latest version of a blob, the `\{version}` can be omitted:
 
 [source,bash]
 ----
 curl http://localhost:8983/solr/.system/blob/{blobname}?wt=filestream > {outputfilename}
 ----
+====
+
+[example.tab-pane#v2retrieveblob]
+====
+[.tab-label]*V2 API*
+For a specific version of a blob, include the version in the request:
+
+[source,bash]
+----
+curl http://localhost:8983/api/collections/.system/blob/{blobname}/{version}?wt=filestream > {outputfilename}
+----
+
+For the latest version of a blob, the `\{version}` can be omitted:
+
+[source,bash]
+----
+curl http://localhost:8983/api/collections/.system/blob/{blobname}?wt=filestream > {outputfilename}
+----
+====
+--
 
 == Use a Blob in a Handler or Component
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/b99e07c7/solr/solr-ref-guide/src/config-api.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/config-api.adoc b/solr/solr-ref-guide/src/config-api.adoc
index 48106c7..f6295cc 100644
--- a/solr/solr-ref-guide/src/config-api.adoc
+++ b/solr/solr-ref-guide/src/config-api.adoc
@@ -57,7 +57,8 @@ http://localhost:8983/api/collections/techproducts/config
 ====
 --
 
-The response will be the total Solr configuration resulting from merging settings in `configoverlay.json` with those in `solrconfig.xml` and those configured implicitly (by default) by Solr out of the box.
+The response will be the Solr configuration resulting from merging settings in `configoverlay.json` with those in `solrconfig.xml`.
+
 
 It's possible to restrict the returned config to a top-level section, such as, `query`, `requestHandler` or `updateHandler`. To do this, append the name of the section to the `config` endpoint. For example, to retrieve configuration for all request handlers:
 
@@ -70,7 +71,6 @@ It's possible to restrict the returned config to a top-level section, such as, `
 [source,bash]
 ----
 http://localhost:8983/solr/techproducts/config/requestHandler
-
 ----
 ====
 
@@ -85,7 +85,7 @@ http://localhost:8983/api/collections/techproducts/config/requestHandler
 ====
 --
 
-The output will be details of each request handler defined in `solrconfig.xml`, all  <<implicit-requesthandlers.adoc#implicit-requesthandlers,defined implicitly>> by Solr, and all defined with this Config API stored in `configoverlay.json`.
+The output will be details of each request handler defined in `solrconfig.xml`, all  <<implicit-requesthandlers.adoc#implicit-requesthandlers,defined implicitly>> by Solr, and all defined with this Config API stored in `configoverlay.json`. To see the configuration for implicit request handlers, add `expandParams=true` to the request. See the documentation for the implicit request handlers for examples using this command.
 
 The available top-level sections that can be added as path parameters are: `query`, `requestHandler`, `searchComponent`, `updateHandler`, `queryResponseWriter`, `initParams`, `znodeVersion`, `listener`, `directoryFactory`, `indexConfig`, and `codecFactory`.
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/b99e07c7/solr/solr-ref-guide/src/config-sets.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/config-sets.adoc b/solr/solr-ref-guide/src/config-sets.adoc
index 12c6cd14..b61be18 100644
--- a/solr/solr-ref-guide/src/config-sets.adoc
+++ b/solr/solr-ref-guide/src/config-sets.adoc
@@ -18,7 +18,15 @@
 
 On a multicore Solr instance, you may find that you want to share configuration between a number of different cores. You can achieve this using named configsets, which are essentially shared configuration directories stored under a configurable configset base directory.
 
-To create a configset, simply add a new directory under the configset base directory. The configset will be identified by the name of this directory. Then into this copy the config directory you want to share. The structure should look something like this:
+Configsets are made up of the configuration files used in a Solr installation, including `solrconfig.xml`, the schema, language files, `synonyms.txt`, DIH-related configuration, and others as needed for your implementation.
+
+Solr ships with two example configsets located in `server/solr/configsets`, which can be used as a base for your own. These example configsets are named `_default` and `sample_techproducts_configs`.
+
+== Configsets in Standalone Mode
+
+If you are using Solr in standalone mode, configsets are created on the filesystem.
+
+To create a configset, add a new directory under the configset base directory. The configset will be identified by the name of this directory. Then copy the config directory you want to share into it. The structure should look something like this:
 
 [source,bash]
 ----
@@ -33,25 +41,39 @@ To create a configset, simply add a new directory under the configset base direc
             /solrconfig.xml
 ----
 
-The default base directory is `$SOLR_HOME/configsets`, and it can be configured in `solr.xml`.
+The default base directory is `$SOLR_HOME/configsets`. This path can be configured in `solr.xml` (see <<format-of-solr-xml.adoc#format-of-solr-xml,Format of solr.xml>> for details).
 
 To create a new core using a configset, pass `configSet` as one of the core properties. For example, if you do this via the CoreAdmin API:
 
 [.dynamic-tabs]
 --
 
-[example.tab-pane#v1api]
+[example.tab-pane#v1use-configset]
 ====
 [.tab-label]*V1 API*
 
-[source,text]
+[source,bash]
+----
 curl "http://localhost:8983/solr/admin/cores?action=CREATE&name=mycore&instanceDir=path/to/instance&configSet=configset2"
+----
 ====
 
-[example.tab-pane#v2api]
+[example.tab-pane#v2use-configset]
 ====
 [.tab-label]*V2 API*
-[source,text]
-curl -v -X POST -H 'Content-type: application/json' -d '{"create":[{"name":"mycore", "instanceDir":"path/to/instance", "configSet":"configSet2"}]}' http://localhost:8983/api/cores
+
+[source,bash]
+----
+curl -v -X POST -H 'Content-type: application/json' -d '{
+  "create":[{
+    "name": "mycore",
+    "instanceDir": "path/to/instance",
+    "configSet": "configset2"}]}' \
+    http://localhost:8983/api/cores
+----
 ====
 --
+
+== Configsets in SolrCloud Mode
+
+In SolrCloud mode, you can use the <<configsets-api.adoc#configsets-api,Configsets API>> to manage your configsets.
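As a minimal sketch of that workflow (the ZooKeeper address, configset name, and paths here are illustrative), a configset can also be pushed to ZooKeeper with `bin/solr zk upconfig` and then referenced when creating a collection:

[source,bash]
----
# upload a conf directory to ZooKeeper as a configset named "myconfig"
bin/solr zk upconfig -z localhost:9983 -n myconfig -d /path/to/my/conf

# create a collection that uses the uploaded configset
bin/solr create -c mycollection -n myconfig
----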

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/b99e07c7/solr/solr-ref-guide/src/configsets-api.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/configsets-api.adoc b/solr/solr-ref-guide/src/configsets-api.adoc
index 59b4925..ba42b79 100644
--- a/solr/solr-ref-guide/src/configsets-api.adoc
+++ b/solr/solr-ref-guide/src/configsets-api.adoc
@@ -1,4 +1,4 @@
-= ConfigSets API
+= Configsets API
 :page-toclevels: 1
 // Licensed to the Apache Software Foundation (ASF) under one
 // or more contributor license agreements.  See the NOTICE file
@@ -17,94 +17,155 @@
 // specific language governing permissions and limitations
 // under the License.
 
-The ConfigSets API enables you to create, delete, and otherwise manage ConfigSets.
+The Configsets API enables you to upload new configsets to ZooKeeper, and to create and delete configsets, when Solr is running in SolrCloud mode.
 
-To use a ConfigSet created with this API as the configuration for a collection, use the <<collections-api.adoc#collections-api,Collections API>>.
+Configsets are a collection of configuration files such as `solrconfig.xml`, `synonyms.txt`, the schema, language-specific files, DIH-related configuration, and other collection-level configuration files (everything that normally lives in the `conf` directory). Solr ships with two example configsets (`_default` and `sample_techproducts_configs`) which can be used when creating collections. Using the same concept, you can create your own configsets and make them available when creating collections.
 
-This API can only be used with Solr running in SolrCloud mode. If you are not running Solr in SolrCloud mode but would still like to use shared configurations, please see the section <<config-sets.adoc#config-sets,Config Sets>>.
+This API provides a way to upload configuration files to ZooKeeper and share the same set of configuration files between two or more collections.
 
-== ConfigSets API Entry Points
+Once a configset has been uploaded to ZooKeeper, use the configset name when creating the collection with the <<collections-api.adoc#collections-api,Collections API>> and the collection will use your configuration files.
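For example, a sketch of creating a collection against a previously uploaded configset (the collection and configset names are illustrative):

[source,bash]
----
curl "http://localhost:8983/solr/admin/collections?action=CREATE&name=mycollection&numShards=1&collection.configName=myConfigSet"
----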
 
-The base URL for all API calls is `\http://<hostname>:<port>/solr`.
+Configsets do not have to be shared between collections if they are uploaded with this API, but this API makes it easier to do so if you wish. An alternative to uploading your configsets in advance is to put the configuration files into a directory under `server/solr/configsets` and use the directory name as the `-d` parameter when running `bin/solr create` to create a collection, as sketched below.
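A sketch of that alternative, assuming a configset directory named `myconfigs` already exists under `server/solr/configsets`:

[source,bash]
----
bin/solr create -c mycollection -d myconfigs
----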
 
-* `/admin/configs?action=CREATE`: <<configsets-create,create>> a ConfigSet, based on an existing ConfigSet
-* `/admin/configs?action=DELETE`: <<configsets-delete,delete>> a ConfigSet
-* `/admin/configs?action=LIST`: <<configsets-list,list>> all ConfigSets
-* `/admin/configs?action=UPLOAD`: <<configsets-upload,upload>> a ConfigSet
+NOTE: This API can only be used with Solr running in SolrCloud mode. If you are not running Solr in SolrCloud mode but would still like to use shared configurations, please see the section <<config-sets.adoc#config-sets,Config Sets>>.
 
-[[configsets-create]]
-== Create a ConfigSet
+The API works by passing commands to the `configs` endpoint. The path to the endpoint varies depending on the API being used: the v1 API uses `solr/admin/configs`, while the v2 API uses `api/cluster/configs`. Examples of both types are provided below.
 
-`/admin/configs?action=CREATE&name=_name_&baseConfigSet=_baseConfigSet_`
+[[configsets-list]]
+== List Configsets
 
-Create a ConfigSet, based on an existing ConfigSet.
+The `list` command fetches the names of the configsets that are available for use during collection creation.
 
-=== Create ConfigSet Parameters
+[.dynamic-tabs]
+--
+[example.tab-pane#v1listconfigset]
+====
+[.tab-label]*V1 API*
 
-The following parameters are supported when creating a ConfigSet.
+With the v1 API, the `list` command must be capitalized as `LIST`:
 
-name::
-The ConfigSet to be created. This parameter is required.
+[source,bash]
+----
+http://localhost:8983/solr/admin/configs?action=LIST&omitHeader=true
 
-baseConfigSet::
-The ConfigSet to copy as a base. This parameter is required.
+----
+====
+
+[example.tab-pane#v2listconfigset]
+====
+[.tab-label]*V2 API*
+
+With the v2 API, the `list` command is implied when there is no data sent with the request.
+
+[source,bash]
+----
+http://localhost:8983/api/cluster/configs?omitHeader=true
+----
+====
+--
+
+The output will look like:
+
+[source,json]
+----
+{
+  "configSets": [
+    "_default",
+    "techproducts",
+    "gettingstarted"
+  ]
+}
+----
 
-configSetProp._name_=_value_::
-Any ConfigSet property from base to override.
+[[configsets-upload]]
+== Upload a Configset
 
-=== Create ConfigSet Response
+Upload a configset, which is sent as a zipped file.
 
-The response will include the status of the request. If the status is anything other than "success", an error message will explain why the request failed.
+A configset is uploaded in a "trusted" mode if authentication is enabled and the upload operation is performed as an authenticated request. Without authentication, a configset is uploaded in an "untrusted" mode. Upon creation of a collection using an "untrusted" configset, the following functionality will not work:
 
-=== Create ConfigSet Examples
+* If specified in the configset, the DataImportHandler's ScriptTransformer will not initialize.
+* The XSLT transformer (`tr` parameter) cannot be used at request processing time.
+* If specified in the configset, the StatelessScriptUpdateProcessor will not initialize.
 
-*Input*
+If you use any of these parameters or features, you must have enabled security features in your Solr installation and you must upload the configset as an authenticated user.
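As a sketch, if Basic Authentication is enabled, the upload request shown below can simply be sent with credentials (the user name and password here are placeholders):

[source,bash]
----
curl -u solr:SolrRocks -X POST --header "Content-Type:application/octet-stream" --data-binary @myconfigset.zip "http://localhost:8983/solr/admin/configs?action=UPLOAD&name=myConfigSet"
----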
 
-Create a ConfigSet named 'myConfigSet' based on a 'predefinedTemplate' ConfigSet, overriding the immutable property to false.
+The `upload` command takes one parameter:
+
+name::
+The configset to be created when the upload is complete. This parameter is required.
 
-[source,text]
+The body of the request should be a zip file that contains the configset. The zip file must be created from within the `conf` directory (i.e., `solrconfig.xml` must be the top level entry in the zip file).
+
+Here is an example of how to create a zip file named "myconfigset.zip" and upload it as a configset named "myConfigSet":
+
+[source,bash]
 ----
-http://localhost:8983/solr/admin/configs?action=CREATE&name=myConfigSet&baseConfigSet=predefinedTemplate&configSetProp.immutable=false&wt=xml
+$ (cd server/solr/configsets/sample_techproducts_configs/conf && zip -r - *) > myconfigset.zip
+
+$ curl -X POST --header "Content-Type:application/octet-stream" --data-binary @myconfigset.zip "http://localhost:8983/solr/admin/configs?action=UPLOAD&name=myConfigSet"
 ----
 
-*Output*
+The same can be achieved using a Unix pipe with a single request as follows:
 
-[source,xml]
+[source,bash]
 ----
-<response>
-  <lst name="responseHeader">
-    <int name="status">0</int>
-    <int name="QTime">323</int>
-  </lst>
-</response>
+$ (cd server/solr/configsets/sample_techproducts_configs/conf && zip -r - *) | curl -X POST --header "Content-Type:application/octet-stream" --data-binary @- "http://localhost:8983/solr/admin/configs?action=UPLOAD&name=myConfigSet"
 ----
 
-[[configsets-delete]]
-== Delete a ConfigSet
+NOTE: The `UPLOAD` command does not yet have a v2 equivalent API.
 
-`/admin/configs?action=DELETE&name=_name_`
+[[configsets-create]]
+== Create a Configset
+
+The `create` command creates a new configset based on a configset that has been previously uploaded.
 
-Delete a ConfigSet
+If you have not yet uploaded any configsets, see the <<Upload a Configset>> command above.
 
-=== Delete ConfigSet Parameters
+The following parameters are supported when creating a configset.
 
 name::
-The ConfigSet to be deleted. This parameter is required.
+The configset to be created. This parameter is required.
 
-=== Delete ConfigSet Response
+baseConfigSet::
+The name of the configset to copy as a base. This parameter is required.
 
-The output will include the status of the request. If the status is anything other than "success", an error message will explain why the request failed.
+configSetProp._property_=_value_::
+A configset property from the base configset to override in the copied configset.
 
-=== Delete ConfigSet Examples
+For example, to create a configset named "myConfigSet" based on a previously defined "predefinedTemplate" configset, overriding the immutable property to false:
 
-*Input*
+[.dynamic-tabs]
+--
+[example.tab-pane#v1createconfigset]
+====
+[.tab-label]*V1 API*
 
-Delete ConfigSet 'myConfigSet'
+With the v1 API, the `create` command must be capitalized as `CREATE`:
 
-[source,text]
+[source,bash]
 ----
-http://localhost:8983/solr/admin/configs?action=DELETE&name=myConfigSet&wt=xml
+http://localhost:8983/solr/admin/configs?action=CREATE&name=myConfigSet&baseConfigSet=predefinedTemplate&configSetProp.immutable=false&wt=xml&omitHeader=true
 ----
+====
+
+[example.tab-pane#v2createconfigset]
+====
+[.tab-label]*V2 API*
+
+With the v2 API, the `create` command is provided as part of the JSON data that contains the required parameters:
+
+[source,bash]
+----
+curl -X POST -H 'Content-type: application/json' -d '{
+  "create":{
+    "name": "myConfigSet",
+    "baseConfigSet": "predefinedTemplate",
+    "configSetProp.immutable": "false"}}' \
+    http://localhost:8983/api/cluster/configs?omitHeader=true
+----
+====
+--
 
 *Output*
 
@@ -113,75 +174,56 @@ http://localhost:8983/solr/admin/configs?action=DELETE&name=myConfigSet&wt=xml
 <response>
   <lst name="responseHeader">
     <int name="status">0</int>
-    <int name="QTime">170</int>
+    <int name="QTime">323</int>
   </lst>
 </response>
 ----
 
-[[configsets-list]]
-== List ConfigSets
+[[configsets-delete]]
+== Delete a Configset
 
-`/admin/configs?action=LIST`
+The `delete` command removes a configset. It does not remove any collections that were created with the configset.
 
-Fetch the names of the ConfigSets in the cluster.
+name::
+The configset to be deleted. This parameter is required.
 
-=== List ConfigSet Examples
+To delete a configset named "myConfigSet":
 
-*Input*
+[.dynamic-tabs]
+--
+[example.tab-pane#v1deleteconfigset]
+====
+[.tab-label]*V1 API*
 
-[source,text]
-----
-http://localhost:8983/solr/admin/configs?action=LIST
-----
-
-*Output*
+With the v1 API, the `delete` command must be capitalized as `DELETE`. The name of the configset to delete is provided with the `name` parameter:
 
-[source,json]
+[source,bash]
 ----
-{
-  "responseHeader":{
-    "status":0,
-    "QTime":203},
-  "configSets":["myConfigSet1",
-    "myConfig2"]}
+http://localhost:8983/solr/admin/configs?action=DELETE&name=myConfigSet&omitHeader=true
 ----
+====
 
-[[configsets-upload]]
-== Upload a ConfigSet
-
-`/admin/configs?action=UPLOAD&name=_name_`
-
-Upload a ConfigSet, sent in as a zipped file. Please note that a ConfigSet is uploaded in a "trusted" mode if authentication is enabled and this upload operation is performed as an authenticated request. Without authentication, a ConfigSet is uploaded in an "untrusted" mode. Upon creation of a collection using an "untrusted" ConfigSet, the following functionality would not work:
-
- * DataImportHandler's ScriptTransformer does not initialize, if specified in the ConfigSet.
- * XSLT transformer (tr parameter) cannot be used at request processing time.
- * StatelessScriptUpdateProcessor does not initialize, if specified in the ConfigSet.
+[example.tab-pane#v2deleteconfigset]
+====
+[.tab-label]*V2 API*
 
-=== Upload ConfigSet Parameters
+With the v2 API, the `delete` command is provided as the request method, as in `-X DELETE`. The name of the configset to delete is provided as a path parameter:
 
-name::
-The ConfigSet to be created when the upload is complete. This parameter is required.
-
-The body of the request should contain a zipped config set.
-
-=== Upload ConfigSet Response
-
-The output will include the status of the request. If the status is anything other than "success", an error message will explain why the request failed.
-
-=== Upload ConfigSet Examples
-
-Create a config set named 'myConfigSet' from the zipped file myconfigset.zip. The zip file must be created from within the `conf` directory (i.e., `solrconfig.xml` must be the top level entry in the zip file). Here is an example on how to create the zip file and upload it:
-
-[source,text]
+[source,bash]
 ----
-$ (cd solr/server/solr/configsets/sample_techproducts_configs/conf && zip -r - *) > myconfigset.zip
-
-$ curl -X POST --header "Content-Type:application/octet-stream" --data-binary @myconfigset.zip "http://localhost:8983/solr/admin/configs?action=UPLOAD&name=myConfigSet"
+curl -X DELETE http://localhost:8983/api/cluster/configs/myConfigSet?omitHeader=true
 ----
+====
+--
 
-The same can be achieved using a Unix pipe, without creating an intermediate zip file, as follows:
+*Output*
 
-[source,text]
+[source,xml]
 ----
-$ (cd server/solr/configsets/sample_techproducts_configs/conf && zip -r - *) | curl -X POST --header "Content-Type:application/octet-stream" --data-binary @- "http://localhost:8983/solr/admin/configs?action=UPLOAD&name=myConfigSet"
+<response>
+  <lst name="responseHeader">
+    <int name="status">0</int>
+    <int name="QTime">170</int>
+  </lst>
+</response>
 ----

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/b99e07c7/solr/solr-ref-guide/src/configuring-solrconfig-xml.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/configuring-solrconfig-xml.adoc b/solr/solr-ref-guide/src/configuring-solrconfig-xml.adoc
index 75da8b7..83febaf 100644
--- a/solr/solr-ref-guide/src/configuring-solrconfig-xml.adoc
+++ b/solr/solr-ref-guide/src/configuring-solrconfig-xml.adoc
@@ -81,21 +81,21 @@ The <<config-api.adoc#config-api,Config API>> allows you to use an API to modify
 
 [source,json]
 ----
-{"userProps":{
-    "dih.db.url":"jdbc:oracle:thin:@localhost:1521",
-    "dih.db.user":"username",
-    "dih.db.pass":"password"}}
+{"userProps": {
+    "dih.db.url": "jdbc:oracle:thin:@localhost:1521",
+    "dih.db.user": "username",
+    "dih.db.pass": "password"}}
 ----
 
 For more details, see the section <<config-api.adoc#config-api,Config API>>.
 
 === solrcore.properties
 
-If the configuration directory for a Solr core contains a file named `solrcore.properties` that file can contain any arbitrary user defined property names and values using the Java standard https://en.wikipedia.org/wiki/.properties[properties file format], and those properties can be used as variables in the XML configuration files for that Solr core.
+If the configuration directory for a Solr core contains a file named `solrcore.properties`, that file can contain arbitrary user-defined property names and values using the Java https://en.wikipedia.org/wiki/.properties[properties file format]. Those properties can then be used as variables in other configuration files for that Solr core.
 
 For example, the following `solrcore.properties` file could be created in the `conf/` directory of a collection using one of the example configurations, to override the lockType used.
 
-[source,bash]
+[source,properties]
 ----
 #conf/solrcore.properties
 solr.lock.type=none
@@ -116,15 +116,37 @@ The path and name of the `solrcore.properties` file can be overridden using the
 
 === User-Defined Properties in core.properties
 
-Every Solr core has a `core.properties` file, automatically created when using the APIs. When you create a SolrCloud collection, you can pass through custom parameters to go into each core.properties that will be created, by prefixing the parameter name with "property." as a URL parameter. Example:
+Every Solr core has a `core.properties` file, automatically created when using the APIs. When you create a SolrCloud collection, you can pass through custom parameters that will go into each core's `core.properties` file by prefixing each parameter name with `property.`.
+
+For example, to add a property named "my.custom.prop":
+
+[.dynamic-tabs]
+--
+[example.tab-pane#v1customprop]
+====
+[.tab-label]*V1 API*
 
 [source,bash]
+----
 http://localhost:8983/solr/admin/collections?action=CREATE&name=gettingstarted&numShards=1&property.my.custom.prop=edismax
+----
+====
 
-That would create a `core.properties` file that has at least the following properties (others omitted for brevity):
+[example.tab-pane#v2customprop]
+====
+[.tab-label]*V2 API*
 
 [source,bash]
 ----
+curl -X POST -H 'Content-type: application/json' -d '{"create": {"name": "gettingstarted", "numShards": "1", "property.my.custom.prop": "edismax"}}' http://localhost:8983/api/collections
+----
+====
+--
+
+This will create a `core.properties` file that has at least the following properties (others omitted for brevity):
+
+[source,properties]
+----
 #core.properties
 name=gettingstarted
 my.custom.prop=edismax
@@ -143,7 +165,9 @@ The `my.custom.prop` property can then be used as a variable, such as in `solrco
 
 === Implicit Core Properties
 
-Several attributes of a Solr core are available as "implicit" properties that can be used in variable substitution, independent of where or how they underlying value is initialized. For example: regardless of whether the name for a particular Solr core is explicitly configured in `core.properties` or inferred from the name of the instance directory, the implicit property `solr.core.name` is available for use as a variable in that core's configuration file...
+Several attributes of a Solr core are available as "implicit" properties that can be used in variable substitution, independent of where or how the underlying value is initialized.
+
+For example, regardless of whether the name for a particular Solr core is explicitly configured in `core.properties` or inferred from the name of the instance directory, the implicit property `solr.core.name` is available for use as a variable in that core's configuration file:
 
 [source,xml]
 ----

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/b99e07c7/solr/solr-ref-guide/src/implicit-requesthandlers.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/implicit-requesthandlers.adoc b/solr/solr-ref-guide/src/implicit-requesthandlers.adoc
index b58eee7..622fbd5 100644
--- a/solr/solr-ref-guide/src/implicit-requesthandlers.adoc
+++ b/solr/solr-ref-guide/src/implicit-requesthandlers.adoc
@@ -16,64 +16,352 @@
 // specific language governing permissions and limitations
 // under the License.
 
-Solr ships with many out-of-the-box RequestHandlers, which are called implicit because they are not configured in `solrconfig.xml`.
+Solr ships with many out-of-the-box RequestHandlers, which are called implicit because they do not need to be configured in `solrconfig.xml` before you can use them.
 
 These handlers have pre-defined default parameters, known as _paramsets_, which can be modified if necessary.
 
-== List of Implicitly Available Endpoints
+== Available Implicit Endpoints
 
-// TODO 7.1 - this doesn't look great in the PDF, redesign the presentation
+NOTE: All endpoint paths listed below should be placed after Solr's host and port (if a port is used) to construct a URL.
 
-// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+=== Admin Handlers
 
-[cols="15,20,15,50",options="header"]
+Many of these handlers are used throughout the Admin UI to show information about Solr.
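For instance, a quick sketch of calling one of them directly (the node-level system information handler listed below):

[source,bash]
----
curl "http://localhost:8983/solr/admin/info/system"
----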
+
+[horizontal]
+File:: Returns content of files in `${solr.home}/conf/`. This handler must have a collection name in the path to the endpoint.
++
+[cols="3*.",frame=none,grid=cols,options="header"]
+|===
+|API Endpoint |Class & Javadocs |Paramset
+|`solr/<collection>/admin/file` |{solr-javadocs}/solr-core/org/apache/solr/handler/admin/ShowFileRequestHandler.html[ShowFileRequestHandler] |`_ADMIN_FILE`
+|===
+
+Logging:: Retrieve and modify registered loggers.
++
+[cols="3*.",frame=none,grid=cols,options="header"]
+|===
+|API Endpoints |Class & Javadocs |Paramset
+|v1: `solr/admin/info/logging`
+
+v2: `api/node/logging` |{solr-javadocs}/solr-core/org/apache/solr/handler/admin/LoggingHandler.html[LoggingHandler] |`_ADMIN_LOGGING`
+|===
+
+Luke:: Expose the internal Lucene index. This handler must have a collection name in the path to the endpoint.
++
+*Documentation*: http://wiki.apache.org/solr/LukeRequestHandler
++
+[cols="3*.",frame=none,grid=cols,options="header"]
+|===
+|API Endpoint |Class & Javadocs |Paramset
+|`solr/<collection>/admin/luke` |{solr-javadocs}/solr-core/org/apache/solr/handler/admin/LukeRequestHandler.html[LukeRequestHandler] |`_ADMIN_LUKE`
+|===
+
+
+MBeans:: Provide info about all registered {solr-javadocs}/solr-core/org/apache/solr/core/SolrInfoBean.html[SolrInfoMBeans]. This handler must have a collection name in the path to the endpoint.
++
+*Documentation*: <<mbean-request-handler.adoc#mbean-request-handler,MBean Request Handler>>
++
+[cols="3*.",frame=none,grid=cols,options="header"]
+|===
+|API Endpoint |Class & Javadocs |Paramset
+|`solr/<collection>/admin/mbeans` |{solr-javadocs}/solr-core/org/apache/solr/handler/admin/SolrInfoMBeanHandler.html[SolrInfoMBeanHandler] |`_ADMIN_MBEANS`
+|===
+
+Ping:: Health check. This handler must have a collection name in the path to the endpoint.
++
+*Documentation*: <<ping.adoc#ping,Ping>>
++
+[cols="3*.",frame=none,grid=cols,options="header"]
+|===
+|API Endpoint |Class & Javadocs |Paramset
+|`solr/<collection>/admin/ping` |{solr-javadocs}/solr-core/org/apache/solr/handler/PingRequestHandler.html[PingRequestHandler] |`_ADMIN_PING`
+|===
+
+Plugins:: Return info about all registered plugins. This handler must have a collection name in the path to the endpoint.
++
+[cols="3*.",frame=none,grid=cols,options="header"]
+|===
+|API Endpoint |Class & Javadocs |Paramset
+|`solr/<collection>/admin/plugins` |{solr-javadocs}/solr-core/org/apache/solr/handler/admin/PluginInfoHandler.html[PluginInfoHandler] | None.
+|===
+
+System Properties:: Return JRE system properties.
++
+[cols="3*.",frame=none,grid=cols,options="header"]
+|===
+|API Endpoints |Class & Javadocs |Paramset
+|v1: `solr/admin/info/properties`
+
+v2: `api/node/properties` |{solr-javadocs}/solr-core/org/apache/solr/handler/admin/PropertiesRequestHandler.html[PropertiesRequestHandler] |`_ADMIN_PROPERTIES`
+|===
+
+Segments:: Return information about the Lucene index segments of the last commit generation.
++
+[cols="3*.",frame=none,grid=cols,options="header"]
+|===
+|API Endpoint |Class & Javadocs |Paramset
+|`solr/<collection>/admin/segments` |{solr-javadocs}/solr-core/org/apache/solr/handler/admin/SegmentsInfoRequestHandler.html[SegmentsInfoRequestHandler] |`_ADMIN_SEGMENTS`
+|===
+
+System Settings:: Return server statistics and settings.
++
+*Documentation*: https://wiki.apache.org/solr/SystemInformationRequestHandlers#SystemInfoHandler
++
+[cols="3*.",frame=none,grid=cols,options="header"]
+|===
+|API Endpoints |Class & Javadocs |Paramset
+|v1: `solr/admin/info/system`
+
+v2: `api/node/system` |{solr-javadocs}/solr-core/org/apache/solr/handler/admin/SystemInfoHandler.html[SystemInfoHandler] |`_ADMIN_SYSTEM`
+|===
++
+This endpoint can also take the collection or core name in the path (`solr/<collection>/admin/system` or `solr/<core>/admin/system`) which will include all of the system-level information and additional information about the specific core that served the request.
+
+Threads:: Return info on all JVM threads.
++
+[cols="3*.",frame=none,grid=cols,options="header"]
+|===
+|API Endpoints |Class & Javadocs |Paramset
+|v1: `solr/admin/info/threads`
+
+v2: `api/node/threads` |{solr-javadocs}/solr-core/org/apache/solr/handler/admin/ThreadDumpHandler.html[ThreadDumpHandler] |`_ADMIN_THREADS`
+|===
+
+=== Analysis Handlers
+
+[horizontal]
+Document Analysis:: Return a breakdown of the analysis process of the given document.
++
+*Documentation*: https://wiki.apache.org/solr/AnalysisRequestHandler
++
+[cols="3*.",frame=none,grid=cols,options="header"]
+|===
+|API Endpoint |Class & Javadocs |Paramset
+|`solr/<collection>/analysis/document` |{solr-javadocs}/solr-core/org/apache/solr/handler/DocumentAnalysisRequestHandler.html[DocumentAnalysisRequestHandler] |`_ANALYSIS_DOCUMENT`
+|===
+
+Field Analysis:: Return index- and query-time analysis over the given field(s)/field type(s). This handler drives the <<analysis-screen.adoc#analysis-screen,Analysis screen>> in Solr's Admin UI.
++
+[cols="3*.",frame=none,grid=cols,options="header"]
+|===
+|API Endpoint |Class & Javadocs |Paramset
+|`solr/<collection>/analysis/field` |{solr-javadocs}/solr-core/org/apache/solr/handler/FieldAnalysisRequestHandler.html[FieldAnalysisRequestHandler] |`_ANALYSIS_FIELD`
+|===
+
+=== Handlers for Configuration
+
+[horizontal]
+Config API:: Retrieve and modify Solr configuration.
++
+*Documentation*: <<config-api.adoc#config-api,Config API>>
++
+[cols="3*.",frame=none,grid=cols,options="header"]
+|===
+|API Endpoint |Class & Javadocs |Paramset
+|v1: `solr/<collection>/config`
+
+v2: `api/collections/<collection>/config` |{solr-javadocs}/solr-core/org/apache/solr/handler/SolrConfigHandler.html[SolrConfigHandler] |`_CONFIG`
+|===
+
+Dump:: Echo the request contents back to the client.
++
+[cols="3*.",frame=none,grid=cols,options="header"]
+|===
+|API Endpoint |Class & Javadocs |Paramset
+|`solr/debug/dump` |{solr-javadocs}/solr-core/org/apache/solr/handler/DumpRequestHandler.html[DumpRequestHandler] |`_DEBUG_DUMP`
+|===
+
+Replication:: Replicate indexes for SolrCloud recovery and Master/Slave index distribution. This handler must have a core name in the path to the endpoint.
++
+[cols="3*.",frame=none,grid=cols,options="header"]
 |===
-|Endpoint |Request Handler class |Paramset |Description
-|`/admin/file` |{solr-javadocs}/solr-core/org/apache/solr/handler/admin/ShowFileRequestHandler.html[ShowFileRequestHandler] |`_ADMIN_FILE` |Returns content of files in `${solr.home}` `/conf/`.
-|`/admin/logging` |{solr-javadocs}/solr-core/org/apache/solr/handler/admin/ShowFileRequestHandler.html[LoggingHandler] |`_ADMIN_LOGGING` |Retrieve/modify registered loggers.
-|http://wiki.apache.org/solr/LukeRequestHandler[`/admin/luke`] |{solr-javadocs}/solr-core/org/apache/solr/handler/admin/LukeRequestHandler.html[LukeRequestHandler] |`_ADMIN_LUKE` |Expose the internal lucene index.
-|<<mbean-request-handler.adoc#mbean-request-handler,`/admin/mbeans`>> |{solr-javadocs}/solr-core/org/apache/solr/handler/admin/SolrInfoMBeanHandler.html[SolrInfoMBeanHandler] |`_ADMIN_MBEANS` |Provide info about all registered {solr-javadocs}/solr-core/org/apache/solr/core/SolrInfoBean.html[SolrInfoMBeans].
-|<<ping.adoc#ping,`/admin/ping`>> |{solr-javadocs}/solr-core/org/apache/solr/handler/PingRequestHandler.html[PingRequestHandler] |`_ADMIN_PING` |Health check.
-|`/admin/plugins` |{solr-javadocs}/solr-core/org/apache/solr/handler/admin/PluginInfoHandler.html[PluginInfoHandler] |N/A |Return info about all registered plugins.
-|`/admin/properties` |{solr-javadocs}/solr-core/org/apache/solr/handler/admin/PropertiesRequestHandler.html[PropertiesRequestHandler] |`_ADMIN_PROPERTIES` |Return JRE system properties.
-|`/admin/segments` |{solr-javadocs}/solr-core/org/apache/solr/handler/admin/SegmentsInfoRequestHandler.html[SegmentsInfoRequestHandler] |`_ADMIN_SEGMENTS` |Return info on last commit generation Lucene index segments.
-|https://wiki.apache.org/solr/SystemInformationRequestHandlers#SystemInfoHandler[`/admin/system`] |{solr-javadocs}/solr-core/org/apache/solr/handler/admin/SystemInfoHandler.html[SystemInfoHandler] |`_ADMIN_SYSTEM` |Return server statistics and settings
-|https://wiki.apache.org/solr/SystemInformationRequestHandlers#ThreadDumpHandler[`/admin/threads`] |{solr-javadocs}/solr-core/org/apache/solr/handler/admin/ThreadDumpHandler.html[ThreadDumpHandler] |`_ADMIN_THREADS` |Return info on all JVM threads.
-|https://wiki.apache.org/solr/AnalysisRequestHandler[`/analysis/document`] |{solr-javadocs}/solr-core/org/apache/solr/handler/DocumentAnalysisRequestHandler.html[DocumentAnalysisRequestHandler] |`_ANALYSIS_DOCUMENT` |Return a breakdown of the analysis process of the given document.
-|`/analysis/field` |{solr-javadocs}/solr-core/org/apache/solr/handler/FieldAnalysisRequestHandler.html[FieldAnalysisRequestHandler] |`_ANALYSIS_FIELD` |Return index- and query-time analysis over the given field(s)/field type(s).
-|<<config-api.adoc#config-api,`/config`>> |{solr-javadocs}/solr-core/org/apache/solr/handler/SolrConfigHandler.html[SolrConfigHandler] |`_CONFIG` |Retrieve/modify Solr configuration.
-|`/debug/dump` |{solr-javadocs}/solr-core/org/apache/solr/handler/DumpRequestHandler.html[DumpRequestHandler] |`_DEBUG_DUMP` |Echo the request contents back to the client.
-|<<exporting-result-sets.adoc#exporting-result-sets,`/export`>> |{solr-javadocs}/solr-core/org/apache/solr/handler/ExportHandler.html[ExportHandler] |`_EXPORT` |Export full sorted result sets.
-|<<realtime-get.adoc#realtime-get,`/get`>> |{solr-javadocs}/solr-core/org/apache/solr/handler/RealTimeGetHandler.html[RealTimeGetHandler] |`_GET` |Real-time get: low-latency retrieval of the latest version of a document.
-|<<graph-traversal.adoc#exporting-graphml-to-support-graph-visualization,`/graph`>> |{solr-javadocs}/solr-core/org/apache/solr/handler/GraphHandler.html[GraphHandler] |`_ADMIN_GRAPH` |Return http://graphml.graphdrawing.org/[GraphML] formatted output from a <<graph-traversal.adoc#graph-traversal,`gather` `Nodes` streaming expression>>.
-|<<index-replication.adoc#index-replication,`/replication`>> |{solr-javadocs}/solr-core/org/apache/solr/handler/ReplicationHandler.html[ReplicationHandler] |`_REPLICATION` |Replicate indexes for SolrCloud recovery and Master/Slave index distribution.
-|<<schema-api.adoc#schema-api,`/schema`>> |{solr-javadocs}/solr-core/org/apache/solr/handler/SchemaHandler.html[SchemaHandler] |`_SCHEMA` |Retrieve/modify Solr schema.
-|<<parallel-sql-interface.adoc#sql-request-handler,`/sql`>> |{solr-javadocs}/solr-core/org/apache/solr/handler/SQLHandler.html[SQLHandler] |`_SQL` |Front end of the Parallel SQL interface.
-|<<streaming-expressions.adoc#streaming-requests-and-responses,`/stream`>> |{solr-javadocs}/solr-core/org/apache/solr/handler/StreamHandler.html[StreamHandler] |`_STREAM` |Distributed stream processing.
-|<<the-terms-component.adoc#using-the-terms-component-in-a-request-handler,`/terms`>> |{solr-javadocs}/solr-core/org/apache/solr/handler/component/SearchHandler.html[SearchHandler] |`_TERMS` |Return a field's indexed terms and the number of documents containing each term.
-|<<uploading-data-with-index-handlers.adoc#uploading-data-with-index-handlers,`/update`>> |{solr-javadocs}/solr-core/org/apache/solr/handler/UpdateRequestHandler.html[UpdateRequestHandler] |`_UPDATE` |Add, delete and update indexed documents formatted as SolrXML, CSV, SolrJSON or javabin.
-|<<uploading-data-with-index-handlers.adoc#csv-update-convenience-paths,`/update/csv`>> |{solr-javadocs}/solr-core/org/apache/solr/handler/UpdateRequestHandler.html[UpdateRequestHandler] |`_UPDATE_CSV` |Add and update CSV-formatted documents.
-|<<uploading-data-with-index-handlers.adoc#csv-update-convenience-paths,`/update/json`>> |{solr-javadocs}/solr-core/org/apache/solr/handler/UpdateRequestHandler.html[UpdateRequestHandler] |`_UPDATE_JSON` |Add, delete and update SolrJSON-formatted documents.
-|<<transforming-and-indexing-custom-json.adoc#transforming-and-indexing-custom-json,`/update/json/docs`>> |{solr-javadocs}/solr-core/org/apache/solr/handler/UpdateRequestHandler.html[UpdateRequestHandler] |`_UPDATE_JSON_DOCS` |Add and update custom JSON-formatted documents.
+|API Endpoint |Class & Javadocs |Paramset
+|`solr/<core>/replication` |{solr-javadocs}/solr-core/org/apache/solr/handler/ReplicationHandler.html[ReplicationHandler] |`_REPLICATION`
 |===
 
-== How to View the Configuration
+Schema API:: Retrieve and modify the Solr schema.
++
+*Documentation*: <<schema-api.adoc#schema-api,Schema API>>
++
+[cols="3*.",frame=none,grid=cols,options="header"]
+|===
+|API Endpoint |Class & Javadocs |Paramset
+|v1: `solr/<collection>/schema`, `solr/<core>/schema`
 
-You can see configuration for all request handlers, including the implicit request handlers, via the <<config-api.adoc#config-api,Config API>>. For the `gettingstarted` collection:
+v2: `api/collections/<collection>/schema`, `api/cores/<core>/schema` |{solr-javadocs}/solr-core/org/apache/solr/handler/SchemaHandler.html[SchemaHandler] |`_SCHEMA`
+|===
 
-[source,text]
-curl http://localhost:8983/solr/gettingstarted/config/requestHandler
+=== Query Handlers
 
-To restrict the results to the configuration for a particular request handler, use the `componentName` request parameter. To see just the configuration for the `/export` request handler:
+[horizontal]
+Export:: Export full sorted result sets.
++
+*Documentation*: <<exporting-result-sets.adoc#exporting-result-sets,Exporting Result Sets>>
++
+[cols="3*.",frame=none,grid=cols,options="header"]
+|===
+|API Endpoint |Class & Javadocs |Paramset
+|`solr/<collection>/export` |{solr-javadocs}/solr-core/org/apache/solr/handler/ExportHandler.html[ExportHandler] |`_EXPORT`
+|===
 
-[source,text]
-curl "http://localhost:8983/solr/gettingstarted/config/requestHandler?componentName=/export"
+RealTimeGet:: Low-latency retrieval of the latest version of a document.
++
+*Documentation*: <<realtime-get.adoc#realtime-get,RealTime Get>>
++
+[cols="3*.",frame=none,grid=cols,options="header"]
+|===
+|API Endpoint |Class & Javadocs |Paramset
+|`solr/<collection>/get` |{solr-javadocs}/solr-core/org/apache/solr/handler/RealTimeGetHandler.html[RealTimeGetHandler] |`_GET`
+|===
+
+Graph Traversal:: Return http://graphml.graphdrawing.org/[GraphML] formatted output from a `gatherNodes` streaming expression.
++
+*Documentation*: <<graph-traversal.adoc#graph-traversal,Graph Traversal>>
++
+[cols="3*.",frame=none,grid=cols,options="header"]
+|===
+|API Endpoint |Class & Javadocs |Paramset
+|`solr/<collection>/graph` |{solr-javadocs}/solr-core/org/apache/solr/handler/GraphHandler.html[GraphHandler] |`_ADMIN_GRAPH`
+|===
+
+SQL:: Front end of the Parallel SQL interface.
++
+*Documentation*: <<parallel-sql-interface.adoc#sql-request-handler,SQL Request Handler>>
++
+[cols="3*.",frame=none,grid=cols,options="header"]
+|===
+|API Endpoint |Class & Javadocs |Paramset
+|`solr/<collection>/sql` |{solr-javadocs}/solr-core/org/apache/solr/handler/SQLHandler.html[SQLHandler] |`_SQL`
+|===
+
+Streaming Expressions:: Distributed stream processing.
++
+*Documentation*: <<streaming-expressions.adoc#streaming-requests-and-responses,Streaming Requests and Responses>>
++
+[cols="3*.",frame=none,grid=cols,options="header"]
+|===
+|API Endpoint |Class & Javadocs |Paramset
+|`solr/<collection>/stream` |{solr-javadocs}/solr-core/org/apache/solr/handler/StreamHandler.html[StreamHandler] |`_STREAM`
+|===
+
+Terms:: Return a field's indexed terms and the number of documents containing each term.
++
+*Documentation*: <<the-terms-component.adoc#using-the-terms-component-in-a-request-handler,Using the Terms Component in a Request Handler>>
++
+[cols="3*.",frame=none,grid=cols,options="header"]
+|===
+|API Endpoint |Class & Javadocs |Paramset
+|`solr/<collection>/terms` |{solr-javadocs}/solr-core/org/apache/solr/handler/component/SearchHandler.html[SearchHandler] |`_TERMS`
+|===
+
+=== Update Handlers
+
+[horizontal]
+Update:: Add, delete and update indexed documents formatted as SolrXML, CSV, SolrJSON or javabin.
++
+*Documentation*: <<uploading-data-with-index-handlers.adoc#uploading-data-with-index-handlers,Uploading Data with Index Handlers>>
++
+[cols="3*.",frame=none,grid=cols,options="header"]
+|===
+|API Endpoint |Class & Javadocs |Paramset
+|`solr/<collection>/update` |{solr-javadocs}/solr-core/org/apache/solr/handler/UpdateRequestHandler.html[UpdateRequestHandler] |`_UPDATE`
+|===
+
+CSV Updates:: Add and update CSV-formatted documents.
++
+*Documentation*: <<uploading-data-with-index-handlers.adoc#csv-update-convenience-paths,CSV Update Convenience Paths>>
++
+[cols="3*.",frame=none,grid=cols,options="header"]
+|===
+|API Endpoint |Class & Javadocs |Paramset
+|`solr/<collection>/update/csv` |{solr-javadocs}/solr-core/org/apache/solr/handler/UpdateRequestHandler.html[UpdateRequestHandler] |`_UPDATE_CSV`
+|===
+
+JSON Updates:: Add, delete and update SolrJSON-formatted documents.
++
+*Documentation*: <<uploading-data-with-index-handlers.adoc#json-update-convenience-paths,JSON Update Convenience Paths>>
++
+[cols="3*.",frame=none,grid=cols,options="header"]
+|===
+|API Endpoint |Class & Javadocs |Paramset
+|`solr/<collection>/update/json` |{solr-javadocs}/solr-core/org/apache/solr/handler/UpdateRequestHandler.html[UpdateRequestHandler] |`_UPDATE_JSON`
+|===
+
+Custom JSON Updates:: Add and update custom JSON-formatted documents.
++
+*Documentation*: <<transforming-and-indexing-custom-json.adoc#transforming-and-indexing-custom-json,Transforming and Indexing Custom JSON>>
++
+[cols="3*.",frame=none,grid=cols,options="header"]
+|===
+|API Endpoint |Class & Javadocs |Paramset
+|`solr/<collection>/update/json/docs` |{solr-javadocs}/solr-core/org/apache/solr/handler/UpdateRequestHandler.html[UpdateRequestHandler] |`_UPDATE_JSON_DOCS`
+|===
+
+== How to View Implicit Handler Paramsets
+
+You can see configuration for all request handlers, including the implicit request handlers, via the <<config-api.adoc#config-api,Config API>>.
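For example, to look at just the `/export` handler without expanding its paramset, the `componentName` request parameter can be used:

[source,bash]
----
curl "http://localhost:8983/solr/gettingstarted/config/requestHandler?componentName=/export"
----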
 
 To include the expanded paramset in the response, as well as the effective parameters from merging the paramset parameters with the built-in parameters, use the `expandParams` request param. For the `/export` request handler, you can make a request like this:
 
-[source,text]
-curl "http://localhost:8983/solr/gettingstarted/config/requestHandler?componentName=/export&expandParams=true"
 
-== How to Edit the Configuration
+[.dynamic-tabs]
+--
+[example.tab-pane#v1expandparams]
+====
+[.tab-label]*V1 API*
+
+[source,bash]
+----
+http://localhost:8983/solr/gettingstarted/config/requestHandler?componentName=/export&expandParams=true
+----
+====
+
+[example.tab-pane#v2expandparams]
+====
+[.tab-label]*V2 API*
+
+[source,bash]
+----
+http://localhost:8983/api/collections/gettingstarted/config/requestHandler?componentName=/export&expandParams=true
+----
+====
+--
+
+The response will look similar to:
+
+[source,json]
+----
+{
+  "config": {
+    "requestHandler": {
+      "/export": {
+        "class": "solr.ExportHandler",
+        "useParams": "_EXPORT",
+        "components": ["query"],
+        "defaults": {
+          "wt": "json"
+        },
+        "invariants": {
+          "rq": "{!xport}",
+          "distrib": false
+        },
+        "name": "/export",
+        "_useParamsExpanded_": {
+          "_EXPORT": "[NOT AVAILABLE]"
+        },
+        "_effectiveParams_": {
+          "distrib": "false",
+          "omitHeader": "true",
+          "wt": "json",
+          "rq": "{!xport}"
+        }
+      }
+    }
+  }
+}
+----
+
+== How to Edit Implicit Handler Paramsets
 
-Because implicit request handlers are not present in `solrconfig.xml`, configuration of their associated `default`, `invariant` and `appends` parameters may be edited via<<request-parameters-api.adoc#request-parameters-api, Request Parameters API>> using the paramset listed in the above table. However, other parameters, including SearchHandler components, may not be modified. The invariants and appends specified in the implicit configuration cannot be overridden.
+Because implicit request handlers are not present in `solrconfig.xml`, configuration of their associated `default`, `invariant` and `appends` parameters may be edited via the <<request-parameters-api.adoc#request-parameters-api, Request Parameters API>> using the paramset listed in the above table. However, other parameters, including SearchHandler components, may not be modified. The invariants and appends specified in the implicit configuration cannot be overridden.
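As a rough sketch of such an edit (assuming the `techproducts` collection and the `_ADMIN_SYSTEM` paramset from the tables above; see the Request Parameters API page for the authoritative syntax):

[source,bash]
----
curl -X POST -H 'Content-type: application/json' -d '{
  "set": {
    "_ADMIN_SYSTEM": {
      "omitHeader": "true"}}}' \
  http://localhost:8983/solr/techproducts/config/params
----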

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/b99e07c7/solr/solr-ref-guide/src/requestdispatcher-in-solrconfig.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/requestdispatcher-in-solrconfig.adoc b/solr/solr-ref-guide/src/requestdispatcher-in-solrconfig.adoc
index e60f52f..c2008fa 100644
--- a/solr/solr-ref-guide/src/requestdispatcher-in-solrconfig.adoc
+++ b/solr/solr-ref-guide/src/requestdispatcher-in-solrconfig.adoc
@@ -68,7 +68,7 @@ This attribute can be used to indicate that the original `HttpServletRequest` ob
                 addHttpRequestToContext="false" />
 ----
 
-The below command is an example of how to enable RemoteStreaming and BodyStreaming through the <<config-api.adoc#creating-and-updating-common-properties,Config API>>:
+The below command is an example of how to enable RemoteStreaming and BodyStreaming through the <<config-api.adoc#commands-for-common-properties,Config API>>:
 
 [.dynamic-tabs]
 --
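A sketch of that command in its v1 form, assuming the `techproducts` collection and the `set-property` names the Config API uses for these two settings:

[source,bash]
----
curl -H 'Content-type: application/json' -d '{
  "set-property": {
    "requestDispatcher.requestParsers.enableRemoteStreaming": true,
    "requestDispatcher.requestParsers.enableStreamBody": true}}' \
  http://localhost:8983/solr/techproducts/config
----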


[39/40] lucene-solr:jira/solr-11833: SOLR-11914: Deprecated some SolrParams methods. * toSolrParams(nl) moved to a NamedList method, which is more natural.

Posted by ab...@apache.org.
SOLR-11914: Deprecated some SolrParams methods.
* toSolrParams(nl) moved to a NamedList method, which is more natural.


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/1409ab8f
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/1409ab8f
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/1409ab8f

Branch: refs/heads/jira/solr-11833
Commit: 1409ab8f84ab0949b1da095f03dc94d3b74db5cf
Parents: e167e91
Author: David Smiley <ds...@apache.org>
Authored: Mon Apr 23 13:26:49 2018 -0400
Committer: David Smiley <ds...@apache.org>
Committed: Mon Apr 23 13:26:49 2018 -0400

----------------------------------------------------------------------
 solr/CHANGES.txt                                |   4 +
 .../carrot2/CarrotClusteringEngine.java         |   2 +-
 ...anguageIdentifierUpdateProcessorFactory.java |   8 +-
 ...OpenNLPLangDetectUpdateProcessorFactory.java |   8 +-
 ...anguageIdentifierUpdateProcessorFactory.java |   8 +-
 .../apache/solr/core/HdfsDirectoryFactory.java  |  13 +-
 .../apache/solr/core/MMapDirectoryFactory.java  |   2 +-
 .../solr/core/NRTCachingDirectoryFactory.java   |   2 +-
 .../src/java/org/apache/solr/core/SolrCore.java |  24 +++-
 .../apache/solr/handler/CdcrRequestHandler.java |   8 +-
 .../apache/solr/handler/RequestHandlerBase.java |   4 +-
 .../solr/handler/UpdateRequestHandler.java      |   2 +-
 .../solr/handler/admin/CollectionsHandler.java  | 142 ++++++++++---------
 .../solr/handler/admin/ConfigSetsHandler.java   |  18 +--
 .../handler/admin/MetricsCollectorHandler.java  |   2 +-
 .../component/QueryElevationComponent.java      |   2 +-
 .../solr/highlight/HighlightingPluginBase.java  |   2 +-
 .../solr/request/LocalSolrQueryRequest.java     |  10 +-
 .../solr/response/XSLTResponseWriter.java       |   2 +-
 .../org/apache/solr/schema/IndexSchema.java     |   2 +-
 .../solr/schema/ManagedIndexSchemaFactory.java  |   2 +-
 .../solr/spelling/DirectSolrSpellChecker.java   |   2 +-
 .../ClassificationUpdateProcessorFactory.java   |   2 +-
 ...oreCommitOptimizeUpdateProcessorFactory.java |   2 +-
 .../processor/LogUpdateProcessorFactory.java    |   2 +-
 .../processor/RegexpBoostProcessorFactory.java  |   2 +-
 .../SignatureUpdateProcessorFactory.java        |   2 +-
 .../processor/URLClassifyProcessorFactory.java  |   2 +-
 .../org/apache/solr/BasicFunctionalityTest.java |   2 +-
 .../solr/handler/admin/TestCollectionAPIs.java  |  23 +++
 .../solrj/io/graph/GatherNodesStream.java       |  20 +--
 .../client/solrj/io/stream/FacetStream.java     |   6 +-
 .../solr/client/solrj/io/stream/SqlStream.java  |   3 +-
 .../solrj/io/stream/TimeSeriesStream.java       |   4 +-
 .../request/JavaBinUpdateRequestCodec.java      |   6 +-
 .../java/org/apache/solr/common/MapWriter.java  |   9 +-
 .../apache/solr/common/params/SolrParams.java   |  22 ++-
 .../org/apache/solr/common/util/NamedList.java  |  30 ++++
 .../solr/common/params/SolrParamTest.java       |  38 +----
 39 files changed, 260 insertions(+), 184 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/1409ab8f/solr/CHANGES.txt
----------------------------------------------------------------------
diff --git a/solr/CHANGES.txt b/solr/CHANGES.txt
index ff0ea2c..f131c07 100644
--- a/solr/CHANGES.txt
+++ b/solr/CHANGES.txt
@@ -249,6 +249,10 @@ Other Changes
 
 * SOLR-12252: Fix minor compiler and intellij warnings in autoscaling policy framework. (shalin)
 
+* SOLR-11914: The following SolrParams methods are now deprecated: toSolrParams (use NamedList.toSolrParams instead),
+  toMap, toMultiMap, toFilteredSolrParams, getAll.  The latter ones have no direct replacement but are easy to
+  implement yourself as-needed. (David Smiley)
+
 ==================  7.3.1 ==================
 
 Consult the LUCENE_CHANGES.txt file for additional, low level, changes in this release.
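As an illustration of the migration this entry describes (a sketch, not code from the patch; the parameter name is made up):

[source,java]
----
import org.apache.solr.common.params.SolrParams;
import org.apache.solr.common.util.NamedList;

public class ToSolrParamsMigration {
  public static void main(String[] args) {
    NamedList<Object> nl = new NamedList<>();
    nl.add("maxChunkSize", "512");

    // Before (now deprecated): SolrParams params = SolrParams.toSolrParams(nl);
    // After: the conversion lives on NamedList itself
    SolrParams params = nl.toSolrParams();
    System.out.println(params.getInt("maxChunkSize", 0));  // prints 512
  }
}
----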

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/1409ab8f/solr/contrib/clustering/src/java/org/apache/solr/handler/clustering/carrot2/CarrotClusteringEngine.java
----------------------------------------------------------------------
diff --git a/solr/contrib/clustering/src/java/org/apache/solr/handler/clustering/carrot2/CarrotClusteringEngine.java b/solr/contrib/clustering/src/java/org/apache/solr/handler/clustering/carrot2/CarrotClusteringEngine.java
index 33cbb64..4c28916 100644
--- a/solr/contrib/clustering/src/java/org/apache/solr/handler/clustering/carrot2/CarrotClusteringEngine.java
+++ b/solr/contrib/clustering/src/java/org/apache/solr/handler/clustering/carrot2/CarrotClusteringEngine.java
@@ -123,7 +123,7 @@ public class CarrotClusteringEngine extends SearchClusteringEngine {
     this.core = core;
 
     String result = super.init(config, core);
-    final SolrParams initParams = SolrParams.toSolrParams(config);
+    final SolrParams initParams = config.toSolrParams();
 
     // Initialization attributes for Carrot2 controller.
     HashMap<String, Object> initAttributes = new HashMap<>();

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/1409ab8f/solr/contrib/langid/src/java/org/apache/solr/update/processor/LangDetectLanguageIdentifierUpdateProcessorFactory.java
----------------------------------------------------------------------
diff --git a/solr/contrib/langid/src/java/org/apache/solr/update/processor/LangDetectLanguageIdentifierUpdateProcessorFactory.java b/solr/contrib/langid/src/java/org/apache/solr/update/processor/LangDetectLanguageIdentifierUpdateProcessorFactory.java
index 59663ce..a140807 100644
--- a/solr/contrib/langid/src/java/org/apache/solr/update/processor/LangDetectLanguageIdentifierUpdateProcessorFactory.java
+++ b/solr/contrib/langid/src/java/org/apache/solr/update/processor/LangDetectLanguageIdentifierUpdateProcessorFactory.java
@@ -82,17 +82,17 @@ public class LangDetectLanguageIdentifierUpdateProcessorFactory extends
       Object o;
       o = args.get("defaults");
       if (o != null && o instanceof NamedList) {
-        defaults = SolrParams.toSolrParams((NamedList) o);
+        defaults = ((NamedList) o).toSolrParams();
       } else {
-        defaults = SolrParams.toSolrParams(args);
+        defaults = args.toSolrParams();
       }
       o = args.get("appends");
       if (o != null && o instanceof NamedList) {
-        appends = SolrParams.toSolrParams((NamedList) o);
+        appends = ((NamedList) o).toSolrParams();
       }
       o = args.get("invariants");
       if (o != null && o instanceof NamedList) {
-        invariants = SolrParams.toSolrParams((NamedList) o);
+        invariants = ((NamedList) o).toSolrParams();
       }
     }
   }

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/1409ab8f/solr/contrib/langid/src/java/org/apache/solr/update/processor/OpenNLPLangDetectUpdateProcessorFactory.java
----------------------------------------------------------------------
diff --git a/solr/contrib/langid/src/java/org/apache/solr/update/processor/OpenNLPLangDetectUpdateProcessorFactory.java b/solr/contrib/langid/src/java/org/apache/solr/update/processor/OpenNLPLangDetectUpdateProcessorFactory.java
index dfbdcbd..ffe11aa 100644
--- a/solr/contrib/langid/src/java/org/apache/solr/update/processor/OpenNLPLangDetectUpdateProcessorFactory.java
+++ b/solr/contrib/langid/src/java/org/apache/solr/update/processor/OpenNLPLangDetectUpdateProcessorFactory.java
@@ -65,17 +65,17 @@ public class OpenNLPLangDetectUpdateProcessorFactory extends UpdateRequestProces
       Object o;
       o = args.get("defaults");
       if (o != null && o instanceof NamedList) {
-        defaults = SolrParams.toSolrParams((NamedList) o);
+        defaults = ((NamedList) o).toSolrParams();
       } else {
-        defaults = SolrParams.toSolrParams(args);
+        defaults = args.toSolrParams();
       }
       o = args.get("appends");
       if (o != null && o instanceof NamedList) {
-        appends = SolrParams.toSolrParams((NamedList) o);
+        appends = ((NamedList) o).toSolrParams();
       }
       o = args.get("invariants");
       if (o != null && o instanceof NamedList) {
-        invariants = SolrParams.toSolrParams((NamedList) o);
+        invariants = ((NamedList) o).toSolrParams();
       }
 
       // Look for model filename in invariants, then in args, then defaults

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/1409ab8f/solr/contrib/langid/src/java/org/apache/solr/update/processor/TikaLanguageIdentifierUpdateProcessorFactory.java
----------------------------------------------------------------------
diff --git a/solr/contrib/langid/src/java/org/apache/solr/update/processor/TikaLanguageIdentifierUpdateProcessorFactory.java b/solr/contrib/langid/src/java/org/apache/solr/update/processor/TikaLanguageIdentifierUpdateProcessorFactory.java
index 5d5acd1..838311b 100644
--- a/solr/contrib/langid/src/java/org/apache/solr/update/processor/TikaLanguageIdentifierUpdateProcessorFactory.java
+++ b/solr/contrib/langid/src/java/org/apache/solr/update/processor/TikaLanguageIdentifierUpdateProcessorFactory.java
@@ -65,17 +65,17 @@ public class TikaLanguageIdentifierUpdateProcessorFactory extends
       Object o;
       o = args.get("defaults");
       if (o != null && o instanceof NamedList) {
-        defaults = SolrParams.toSolrParams((NamedList) o);
+        defaults = ((NamedList) o).toSolrParams();
       } else {
-        defaults = SolrParams.toSolrParams(args);
+        defaults = args.toSolrParams();
       }
       o = args.get("appends");
       if (o != null && o instanceof NamedList) {
-        appends = SolrParams.toSolrParams((NamedList) o);
+        appends = ((NamedList) o).toSolrParams();
       }
       o = args.get("invariants");
       if (o != null && o instanceof NamedList) {
-        invariants = SolrParams.toSolrParams((NamedList) o);
+        invariants = ((NamedList) o).toSolrParams();
       }
     }
   }

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/1409ab8f/solr/core/src/java/org/apache/solr/core/HdfsDirectoryFactory.java
----------------------------------------------------------------------
diff --git a/solr/core/src/java/org/apache/solr/core/HdfsDirectoryFactory.java b/solr/core/src/java/org/apache/solr/core/HdfsDirectoryFactory.java
index e4b06b7..d17d3d6 100644
--- a/solr/core/src/java/org/apache/solr/core/HdfsDirectoryFactory.java
+++ b/solr/core/src/java/org/apache/solr/core/HdfsDirectoryFactory.java
@@ -16,8 +16,6 @@
  */
 package org.apache.solr.core;
 
-import static org.apache.hadoop.fs.CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHENTICATION;
-
 import java.io.FileNotFoundException;
 import java.io.IOException;
 import java.lang.invoke.MethodHandles;
@@ -31,6 +29,10 @@ import java.util.Set;
 import java.util.concurrent.ExecutionException;
 import java.util.concurrent.TimeUnit;
 
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.cache.CacheBuilder;
+import com.google.common.cache.RemovalListener;
+import com.google.common.cache.RemovalNotification;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileContext;
 import org.apache.hadoop.fs.FileStatus;
@@ -67,10 +69,7 @@ import org.apache.solr.util.plugin.SolrCoreAware;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import com.google.common.annotations.VisibleForTesting;
-import com.google.common.cache.CacheBuilder;
-import com.google.common.cache.RemovalListener;
-import com.google.common.cache.RemovalNotification;
+import static org.apache.hadoop.fs.CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHENTICATION;
 
 public class HdfsDirectoryFactory extends CachingDirectoryFactory implements SolrCoreAware, SolrMetricProducer {
   private static final Logger LOG = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
@@ -151,7 +150,7 @@ public class HdfsDirectoryFactory extends CachingDirectoryFactory implements Sol
   @Override
   public void init(NamedList args) {
     super.init(args);
-    params = SolrParams.toSolrParams(args);
+    params = args.toSolrParams();
     this.hdfsDataDir = getConfig(HDFS_HOME, null);
     if (this.hdfsDataDir != null && this.hdfsDataDir.length() == 0) {
       this.hdfsDataDir = null;

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/1409ab8f/solr/core/src/java/org/apache/solr/core/MMapDirectoryFactory.java
----------------------------------------------------------------------
diff --git a/solr/core/src/java/org/apache/solr/core/MMapDirectoryFactory.java b/solr/core/src/java/org/apache/solr/core/MMapDirectoryFactory.java
index e9fbce7..0c1875b 100644
--- a/solr/core/src/java/org/apache/solr/core/MMapDirectoryFactory.java
+++ b/solr/core/src/java/org/apache/solr/core/MMapDirectoryFactory.java
@@ -49,7 +49,7 @@ public class MMapDirectoryFactory extends StandardDirectoryFactory {
   @Override
   public void init(NamedList args) {
     super.init(args);
-    SolrParams params = SolrParams.toSolrParams( args );
+    SolrParams params = args.toSolrParams();
     maxChunk = params.getInt("maxChunkSize", MMapDirectory.DEFAULT_MAX_CHUNK_SIZE);
     if (maxChunk <= 0){
       throw new IllegalArgumentException("maxChunk must be greater than 0");

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/1409ab8f/solr/core/src/java/org/apache/solr/core/NRTCachingDirectoryFactory.java
----------------------------------------------------------------------
diff --git a/solr/core/src/java/org/apache/solr/core/NRTCachingDirectoryFactory.java b/solr/core/src/java/org/apache/solr/core/NRTCachingDirectoryFactory.java
index 4ecc44c..789ffdb 100644
--- a/solr/core/src/java/org/apache/solr/core/NRTCachingDirectoryFactory.java
+++ b/solr/core/src/java/org/apache/solr/core/NRTCachingDirectoryFactory.java
@@ -38,7 +38,7 @@ public class NRTCachingDirectoryFactory extends StandardDirectoryFactory {
   @Override
   public void init(NamedList args) {
     super.init(args);
-    SolrParams params = SolrParams.toSolrParams(args);
+    SolrParams params = args.toSolrParams();
     maxMergeSizeMB = params.getDouble("maxMergeSizeMB", DEFAULT_MAX_MERGE_SIZE_MB);
     if (maxMergeSizeMB <= 0){
       throw new IllegalArgumentException("maxMergeSizeMB must be greater than 0");

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/1409ab8f/solr/core/src/java/org/apache/solr/core/SolrCore.java
----------------------------------------------------------------------
diff --git a/solr/core/src/java/org/apache/solr/core/SolrCore.java b/solr/core/src/java/org/apache/solr/core/SolrCore.java
index 83dd2df..7cf264b 100644
--- a/solr/core/src/java/org/apache/solr/core/SolrCore.java
+++ b/solr/core/src/java/org/apache/solr/core/SolrCore.java
@@ -38,6 +38,7 @@ import java.util.Collections;
 import java.util.Date;
 import java.util.HashMap;
 import java.util.HashSet;
+import java.util.Iterator;
 import java.util.LinkedHashMap;
 import java.util.LinkedList;
 import java.util.List;
@@ -59,6 +60,7 @@ import java.util.concurrent.locks.ReentrantLock;
 import com.codahale.metrics.Counter;
 import com.codahale.metrics.MetricRegistry;
 import com.codahale.metrics.Timer;
+import com.google.common.collect.Iterators;
 import com.google.common.collect.MapMaker;
 import org.apache.commons.io.FileUtils;
 import org.apache.lucene.analysis.util.ResourceLoader;
@@ -2540,7 +2542,27 @@ public final class SolrCore implements SolrInfoBean, SolrMetricProducer, Closeab
     if (lpList == null) {
       toLog.add("params", "{" + req.getParamString() + "}");
     } else if (lpList.length() > 0) {
-      toLog.add("params", "{" + params.toFilteredSolrParams(Arrays.asList(lpList.split(","))).toString() + "}");
+
+      // Filter params by those in LOG_PARAMS_LIST so that we can then call toString
+      HashSet<String> lpSet = new HashSet<>(Arrays.asList(lpList.split(",")));
+      SolrParams filteredParams = new SolrParams() {
+        @Override
+        public Iterator<String> getParameterNamesIterator() {
+          return Iterators.filter(params.getParameterNamesIterator(), lpSet::contains);
+        }
+
+        @Override
+        public String get(String param) { // assume param is in lpSet
+          return params.get(param);
+        } //assume in lpSet
+
+        @Override
+        public String[] getParams(String param) { // assume param is in lpSet
+          return params.getParams(param);
+        } // assume in lpSet
+      };
+
+      toLog.add("params", "{" + filteredParams + "}");
     }
   }
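
The anonymous SolrParams above gives SolrCore a lazy, filtered view for request logging: only names found in LOG_PARAMS_LIST are exposed, and no values are copied. A hedged, self-contained sketch of the same technique (the parameter names and the allowed set here are made up for illustration):

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Iterator;
    import java.util.Set;

    import com.google.common.collect.Iterators;
    import org.apache.solr.common.params.ModifiableSolrParams;
    import org.apache.solr.common.params.SolrParams;

    public class FilteredParamsSketch {
      public static void main(String[] args) {
        ModifiableSolrParams all = new ModifiableSolrParams();
        all.set("q", "*:*");
        all.set("distrib", "false");

        Set<String> logged = new HashSet<>(Arrays.asList("q"));
        SolrParams filtered = new SolrParams() {
          @Override
          public Iterator<String> getParameterNamesIterator() {
            return Iterators.filter(all.getParameterNamesIterator(), logged::contains);
          }
          @Override
          public String get(String param) {
            return all.get(param); // only reached for names the iterator exposed
          }
          @Override
          public String[] getParams(String param) {
            return all.getParams(param);
          }
        };

        // Renders only q=*:*; distrib stays out of the log line.
        System.out.println("{" + filtered + "}");
      }
    }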
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/1409ab8f/solr/core/src/java/org/apache/solr/handler/CdcrRequestHandler.java
----------------------------------------------------------------------
diff --git a/solr/core/src/java/org/apache/solr/handler/CdcrRequestHandler.java b/solr/core/src/java/org/apache/solr/handler/CdcrRequestHandler.java
index 430237e..1453841 100644
--- a/solr/core/src/java/org/apache/solr/handler/CdcrRequestHandler.java
+++ b/solr/core/src/java/org/apache/solr/handler/CdcrRequestHandler.java
@@ -134,19 +134,19 @@ public class CdcrRequestHandler extends RequestHandlerBase implements SolrCoreAw
       // Configuration of the Update Log Synchronizer
       Object updateLogSynchonizerParam = args.get(CdcrParams.UPDATE_LOG_SYNCHRONIZER_PARAM);
       if (updateLogSynchonizerParam != null && updateLogSynchonizerParam instanceof NamedList) {
-        updateLogSynchronizerConfiguration = SolrParams.toSolrParams((NamedList) updateLogSynchonizerParam);
+        updateLogSynchronizerConfiguration = ((NamedList) updateLogSynchonizerParam).toSolrParams();
       }
 
       // Configuration of the Replicator
       Object replicatorParam = args.get(CdcrParams.REPLICATOR_PARAM);
       if (replicatorParam != null && replicatorParam instanceof NamedList) {
-        replicatorConfiguration = SolrParams.toSolrParams((NamedList) replicatorParam);
+        replicatorConfiguration = ((NamedList) replicatorParam).toSolrParams();
       }
 
       // Configuration of the Buffer
       Object bufferParam = args.get(CdcrParams.BUFFER_PARAM);
       if (bufferParam != null && bufferParam instanceof NamedList) {
-        bufferConfiguration = SolrParams.toSolrParams((NamedList) bufferParam);
+        bufferConfiguration = ((NamedList) bufferParam).toSolrParams();
       }
 
       // Configuration of the Replicas
@@ -154,7 +154,7 @@ public class CdcrRequestHandler extends RequestHandlerBase implements SolrCoreAw
       List replicas = args.getAll(CdcrParams.REPLICA_PARAM);
       for (Object replica : replicas) {
         if (replica != null && replica instanceof NamedList) {
-          SolrParams params = SolrParams.toSolrParams((NamedList) replica);
+          SolrParams params = ((NamedList) replica).toSolrParams();
           if (!replicasConfiguration.containsKey(params.get(CdcrParams.SOURCE_COLLECTION_PARAM))) {
             replicasConfiguration.put(params.get(CdcrParams.SOURCE_COLLECTION_PARAM), new ArrayList<>());
           }

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/1409ab8f/solr/core/src/java/org/apache/solr/handler/RequestHandlerBase.java
----------------------------------------------------------------------
diff --git a/solr/core/src/java/org/apache/solr/handler/RequestHandlerBase.java b/solr/core/src/java/org/apache/solr/handler/RequestHandlerBase.java
index d4cfd99..28b91e7 100644
--- a/solr/core/src/java/org/apache/solr/handler/RequestHandlerBase.java
+++ b/solr/core/src/java/org/apache/solr/handler/RequestHandlerBase.java
@@ -124,7 +124,7 @@ public abstract class RequestHandlerBase implements SolrRequestHandler, SolrInfo
    * @see #handleRequest(org.apache.solr.request.SolrQueryRequest, org.apache.solr.response.SolrQueryResponse)
    * @see #handleRequestBody(org.apache.solr.request.SolrQueryRequest, org.apache.solr.response.SolrQueryResponse)
    * @see org.apache.solr.util.SolrPluginUtils#setDefaults(org.apache.solr.request.SolrQueryRequest, org.apache.solr.common.params.SolrParams, org.apache.solr.common.params.SolrParams, org.apache.solr.common.params.SolrParams)
-   * @see SolrParams#toSolrParams(org.apache.solr.common.util.NamedList)
+   * @see NamedList#toSolrParams()
    *
    * See also the example solrconfig.xml located in the Solr codebase (example/solr/conf).
    */
@@ -166,7 +166,7 @@ public abstract class RequestHandlerBase implements SolrRequestHandler, SolrInfo
   public static SolrParams getSolrParamsFromNamedList(NamedList args, String key) {
     Object o = args.get(key);
     if (o != null && o instanceof NamedList) {
-      return  SolrParams.toSolrParams((NamedList) o);
+      return ((NamedList) o).toSolrParams();
     }
     return null;
   }

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/1409ab8f/solr/core/src/java/org/apache/solr/handler/UpdateRequestHandler.java
----------------------------------------------------------------------
diff --git a/solr/core/src/java/org/apache/solr/handler/UpdateRequestHandler.java b/solr/core/src/java/org/apache/solr/handler/UpdateRequestHandler.java
index fd7a754..3c7ffda 100644
--- a/solr/core/src/java/org/apache/solr/handler/UpdateRequestHandler.java
+++ b/solr/core/src/java/org/apache/solr/handler/UpdateRequestHandler.java
@@ -136,7 +136,7 @@ public class UpdateRequestHandler extends ContentStreamHandlerBase implements Pe
   protected Map<String,ContentStreamLoader> createDefaultLoaders(NamedList args) {
     SolrParams p = null;
     if(args!=null) {
-      p = SolrParams.toSolrParams(args);
+      p = args.toSolrParams();
     }
     Map<String,ContentStreamLoader> registry = new HashMap<>();
     registry.put("application/xml", new XMLLoader().init(p) );

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/1409ab8f/solr/core/src/java/org/apache/solr/handler/admin/CollectionsHandler.java
----------------------------------------------------------------------
diff --git a/solr/core/src/java/org/apache/solr/handler/admin/CollectionsHandler.java b/solr/core/src/java/org/apache/solr/handler/admin/CollectionsHandler.java
index c02271e..ff47c8b 100644
--- a/solr/core/src/java/org/apache/solr/handler/admin/CollectionsHandler.java
+++ b/solr/core/src/java/org/apache/solr/handler/admin/CollectionsHandler.java
@@ -22,6 +22,7 @@ import java.net.URI;
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Collection;
+import java.util.Collections;
 import java.util.Iterator;
 import java.util.LinkedHashMap;
 import java.util.List;
@@ -30,7 +31,6 @@ import java.util.Map;
 import java.util.Optional;
 import java.util.Set;
 import java.util.concurrent.TimeUnit;
-import java.util.function.BiConsumer;
 import java.util.stream.Collectors;
 
 import com.google.common.collect.ImmutableSet;
@@ -449,9 +449,9 @@ public class CollectionsHandler extends RequestHandlerBase implements Permission
      * as well as specific replicas= options
      */
     CREATE_OP(CREATE, (req, rsp, h) -> {
-      Map<String, Object> props = req.getParams().required().getAll(null, NAME);
+      Map<String, Object> props = copy(req.getParams().required(), null, NAME);
       props.put("fromApi", "true");
-      req.getParams().getAll(props,
+      copy(req.getParams(), props,
           REPLICATION_FACTOR,
           COLL_CONF,
           NUM_SLICES,
@@ -491,9 +491,9 @@ public class CollectionsHandler extends RequestHandlerBase implements Permission
       return copyPropertiesWithPrefix(req.getParams(), props, "router.");
 
     }),
-    DELETE_OP(DELETE, (req, rsp, h) -> req.getParams().required().getAll(null, NAME)),
+    DELETE_OP(DELETE, (req, rsp, h) -> copy(req.getParams().required(), null, NAME)),
 
-    RELOAD_OP(RELOAD, (req, rsp, h) -> req.getParams().required().getAll(null, NAME)),
+    RELOAD_OP(RELOAD, (req, rsp, h) -> copy(req.getParams().required(), null, NAME)),
 
     SYNCSHARD_OP(SYNCSHARD, (req, rsp, h) -> {
       String collection = req.getParams().required().get("collection");
@@ -522,35 +522,37 @@ public class CollectionsHandler extends RequestHandlerBase implements Permission
       String alias = req.getParams().get(NAME);
       SolrIdentifierValidator.validateAliasName(alias);
       String collections = req.getParams().get("collections");
-      Map<String, Object> result = req.getParams().getAll(null, REQUIRED_ROUTER_PARAMS);
-      req.getParams().getAll(result, OPTIONAL_ROUTER_PARAMS);
+      Map<String, Object> result = copy(req.getParams(), null, REQUIRED_ROUTER_PARAMS);
+      copy(req.getParams(), result, OPTIONAL_ROUTER_PARAMS);
       if (collections != null) {
         if (result.size() > 1) { // (NAME should be there, and if it's not we will fail below)
           throw new SolrException(BAD_REQUEST, "Collections cannot be specified when creating a time routed alias.");
         }
         // regular alias creation...
-        return req.getParams().required().getAll(null, NAME, "collections");
+        return copy(req.getParams().required(), null, NAME, "collections");
       }
 
       // Ok so we are creating a time routed alias from here
 
       // for validation....
-      req.getParams().required().getAll(null, REQUIRED_ROUTER_PARAMS);
+      copy(req.getParams().required(), null, REQUIRED_ROUTER_PARAMS);
       ModifiableSolrParams createCollParams = new ModifiableSolrParams(); // without prefix
 
       // add to result params that start with "create-collection.".
       //   Additionally, save these without the prefix to createCollParams
-      forEach(req.getParams(), (p, v) -> {
-          if (p.startsWith(CREATE_COLLECTION_PREFIX)) {
-            // This is what SolrParams#getAll(Map, Collection)} does
-            if (v.length == 1) {
-              result.put(p, v[0]);
-            } else {
-              result.put(p, v);
-            }
-            createCollParams.set(p.substring(CREATE_COLLECTION_PREFIX.length()), v);
+      for (Map.Entry<String, String[]> entry : req.getParams()) {
+        final String p = entry.getKey();
+        if (p.startsWith(CREATE_COLLECTION_PREFIX)) {
+          // This is what SolrParams#getAll(Map, Collection) does
+          final String[] v = entry.getValue();
+          if (v.length == 1) {
+            result.put(p, v[0]);
+          } else {
+            result.put(p, v);
           }
-        });
+          createCollParams.set(p.substring(CREATE_COLLECTION_PREFIX.length()), v);
+        }
+      }
 
       // Verify that the create-collection prefix'ed params appear to be valid.
       if (createCollParams.get(NAME) != null) {
@@ -568,13 +570,13 @@ public class CollectionsHandler extends RequestHandlerBase implements Permission
       return result;
     }),
 
-    DELETEALIAS_OP(DELETEALIAS, (req, rsp, h) -> req.getParams().required().getAll(null, NAME)),
+    DELETEALIAS_OP(DELETEALIAS, (req, rsp, h) -> copy(req.getParams().required(), null, NAME)),
 
     /**
      * Change properties for an alias (use CREATEALIAS_OP to change the actual value of the alias)
      */
     ALIASPROP_OP(ALIASPROP, (req, rsp, h) -> {
-      Map<String, Object> params = req.getParams().required().getAll(null, NAME);
+      Map<String, Object> params = copy(req.getParams().required(), null, NAME);
 
       // Note: success/no-op in the event of no properties supplied is intentional. Keeps code simple and one less case
       // for api-callers to check for.
@@ -621,7 +623,7 @@ public class CollectionsHandler extends RequestHandlerBase implements Permission
             "Only one of 'ranges' or 'split.key' should be specified");
       }
 
-      Map<String, Object> map = req.getParams().getAll(null,
+      Map<String, Object> map = copy(req.getParams(), null,
           COLLECTION_PROP,
           SHARD_ID_PROP,
           "split.key",
@@ -630,10 +632,10 @@ public class CollectionsHandler extends RequestHandlerBase implements Permission
       return copyPropertiesWithPrefix(req.getParams(), map, COLL_PROP_PREFIX);
     }),
     DELETESHARD_OP(DELETESHARD, (req, rsp, h) -> {
-      Map<String, Object> map = req.getParams().required().getAll(null,
+      Map<String, Object> map = copy(req.getParams().required(), null,
           COLLECTION_PROP,
           SHARD_ID_PROP);
-      req.getParams().getAll(map,
+      copy(req.getParams(), map,
           DELETE_INDEX,
           DELETE_DATA_DIR,
           DELETE_INSTANCE_DIR);
@@ -644,24 +646,24 @@ public class CollectionsHandler extends RequestHandlerBase implements Permission
       return null;
     }),
     CREATESHARD_OP(CREATESHARD, (req, rsp, h) -> {
-      Map<String, Object> map = req.getParams().required().getAll(null,
+      Map<String, Object> map = copy(req.getParams().required(), null,
           COLLECTION_PROP,
           SHARD_ID_PROP);
       ClusterState clusterState = h.coreContainer.getZkController().getClusterState();
       final String newShardName = SolrIdentifierValidator.validateShardName(req.getParams().get(SHARD_ID_PROP));
       if (!ImplicitDocRouter.NAME.equals(((Map) clusterState.getCollection(req.getParams().get(COLLECTION_PROP)).get(DOC_ROUTER)).get(NAME)))
         throw new SolrException(ErrorCode.BAD_REQUEST, "shards can be added only to 'implicit' collections");
-      req.getParams().getAll(map,
+      copy(req.getParams(), map,
           REPLICATION_FACTOR,
           CREATE_NODE_SET,
           WAIT_FOR_FINAL_STATE);
       return copyPropertiesWithPrefix(req.getParams(), map, COLL_PROP_PREFIX);
     }),
     DELETEREPLICA_OP(DELETEREPLICA, (req, rsp, h) -> {
-      Map<String, Object> map = req.getParams().required().getAll(null,
+      Map<String, Object> map = copy(req.getParams().required(), null,
           COLLECTION_PROP);
 
-      return req.getParams().getAll(map,
+      return copy(req.getParams(), map,
           DELETE_INDEX,
           DELETE_DATA_DIR,
           DELETE_INSTANCE_DIR,
@@ -670,17 +672,17 @@ public class CollectionsHandler extends RequestHandlerBase implements Permission
           ONLY_IF_DOWN);
     }),
     MIGRATE_OP(MIGRATE, (req, rsp, h) -> {
-      Map<String, Object> map = req.getParams().required().getAll(null, COLLECTION_PROP, "split.key", "target.collection");
-      return req.getParams().getAll(map, "forward.timeout");
+      Map<String, Object> map = copy(req.getParams().required(), null, COLLECTION_PROP, "split.key", "target.collection");
+      return copy(req.getParams(), map, "forward.timeout");
     }),
     ADDROLE_OP(ADDROLE, (req, rsp, h) -> {
-      Map<String, Object> map = req.getParams().required().getAll(null, "role", "node");
+      Map<String, Object> map = copy(req.getParams().required(), null, "role", "node");
       if (!KNOWN_ROLES.contains(map.get("role")))
         throw new SolrException(ErrorCode.BAD_REQUEST, "Unknown role. Supported roles are ," + KNOWN_ROLES);
       return map;
     }),
     REMOVEROLE_OP(REMOVEROLE, (req, rsp, h) -> {
-      Map<String, Object> map = req.getParams().required().getAll(null, "role", "node");
+      Map<String, Object> map = copy(req.getParams().required(), null, "role", "node");
       if (!KNOWN_ROLES.contains(map.get("role")))
         throw new SolrException(ErrorCode.BAD_REQUEST, "Unknown role. Supported roles are ," + KNOWN_ROLES);
       return map;
@@ -776,7 +778,7 @@ public class CollectionsHandler extends RequestHandlerBase implements Permission
       }
     }),
     ADDREPLICA_OP(ADDREPLICA, (req, rsp, h) -> {
-      Map<String, Object> props = req.getParams().getAll(null,
+      Map<String, Object> props = copy(req.getParams(), null,
           COLLECTION_PROP,
           "node",
           SHARD_ID_PROP,
@@ -809,7 +811,7 @@ public class CollectionsHandler extends RequestHandlerBase implements Permission
      * Can return status per specific collection/shard or per all collections.
      */
     CLUSTERSTATUS_OP(CLUSTERSTATUS, (req, rsp, h) -> {
-      Map<String, Object> all = req.getParams().getAll(null,
+      Map<String, Object> all = copy(req.getParams(), null,
           COLLECTION_PROP,
           SHARD_ID_PROP,
           _ROUTE_);
@@ -818,16 +820,16 @@ public class CollectionsHandler extends RequestHandlerBase implements Permission
       return null;
     }),
     UTILIZENODE_OP(UTILIZENODE, (req, rsp, h) -> {
-      return req.getParams().required().getAll(null, AutoScalingParams.NODE);
+      return copy(req.getParams().required(), null, AutoScalingParams.NODE);
     }),
     ADDREPLICAPROP_OP(ADDREPLICAPROP, (req, rsp, h) -> {
-      Map<String, Object> map = req.getParams().required().getAll(null,
+      Map<String, Object> map = copy(req.getParams().required(), null,
           COLLECTION_PROP,
           PROPERTY_PROP,
           SHARD_ID_PROP,
           REPLICA_PROP,
           PROPERTY_VALUE_PROP);
-      req.getParams().getAll(map, SHARD_UNIQUE);
+      copy(req.getParams(), map, SHARD_UNIQUE);
       String property = (String) map.get(PROPERTY_PROP);
       if (!property.startsWith(COLL_PROP_PREFIX)) {
         property = COLL_PROP_PREFIX + property;
@@ -848,15 +850,15 @@ public class CollectionsHandler extends RequestHandlerBase implements Permission
       return map;
     }),
     DELETEREPLICAPROP_OP(DELETEREPLICAPROP, (req, rsp, h) -> {
-      Map<String, Object> map = req.getParams().required().getAll(null,
+      Map<String, Object> map = copy(req.getParams().required(), null,
           COLLECTION_PROP,
           PROPERTY_PROP,
           SHARD_ID_PROP,
           REPLICA_PROP);
-      return req.getParams().getAll(map, PROPERTY_PROP);
+      return copy(req.getParams(), map, PROPERTY_PROP);
     }),
     BALANCESHARDUNIQUE_OP(BALANCESHARDUNIQUE, (req, rsp, h) -> {
-      Map<String, Object> map = req.getParams().required().getAll(null,
+      Map<String, Object> map = copy(req.getParams().required(), null,
           COLLECTION_PROP,
           PROPERTY_PROP);
       Boolean shardUnique = Boolean.parseBoolean(req.getParams().get(SHARD_UNIQUE));
@@ -871,24 +873,24 @@ public class CollectionsHandler extends RequestHandlerBase implements Permission
             " Property: " + prop + " shardUnique: " + Boolean.toString(shardUnique));
       }
 
-      return req.getParams().getAll(map, ONLY_ACTIVE_NODES, SHARD_UNIQUE);
+      return copy(req.getParams(), map, ONLY_ACTIVE_NODES, SHARD_UNIQUE);
     }),
     REBALANCELEADERS_OP(REBALANCELEADERS, (req, rsp, h) -> {
       new RebalanceLeaders(req, rsp, h).execute();
       return null;
     }),
     MODIFYCOLLECTION_OP(MODIFYCOLLECTION, (req, rsp, h) -> {
-      Map<String, Object> m = req.getParams().getAll(null, MODIFIABLE_COLL_PROPS);
+      Map<String, Object> m = copy(req.getParams(), null, MODIFIABLE_COLL_PROPS);
       if (m.isEmpty()) throw new SolrException(ErrorCode.BAD_REQUEST,
           formatString("no supported values provided rule, snitch, maxShardsPerNode, replicationFactor, collection.configName"));
-      req.getParams().required().getAll(m, COLLECTION_PROP);
+      copy(req.getParams().required(), m, COLLECTION_PROP);
       addMapObject(m, RULE);
       addMapObject(m, SNITCH);
       for (String prop : MODIFIABLE_COLL_PROPS) DocCollection.verifyProp(m, prop);
       verifyRuleParams(h.coreContainer, m);
       return m;
     }),
-    MIGRATESTATEFORMAT_OP(MIGRATESTATEFORMAT, (req, rsp, h) -> req.getParams().required().getAll(null, COLLECTION_PROP)),
+    MIGRATESTATEFORMAT_OP(MIGRATESTATEFORMAT, (req, rsp, h) -> copy(req.getParams().required(), null, COLLECTION_PROP)),
 
     BACKUP_OP(BACKUP, (req, rsp, h) -> {
       req.getParams().required().check(NAME, COLLECTION_PROP);
@@ -929,7 +931,7 @@ public class CollectionsHandler extends RequestHandlerBase implements Permission
         throw new SolrException(ErrorCode.BAD_REQUEST, "Unknown index backup strategy " + strategy);
       }
 
-      Map<String, Object> params = req.getParams().getAll(null, NAME, COLLECTION_PROP, CoreAdminParams.COMMIT_NAME);
+      Map<String, Object> params = copy(req.getParams(), null, NAME, COLLECTION_PROP, CoreAdminParams.COMMIT_NAME);
       params.put(CoreAdminParams.BACKUP_LOCATION, location);
       params.put(CollectionAdminParams.INDEX_BACKUP_STRATEGY, strategy);
       return params;
@@ -977,10 +979,10 @@ public class CollectionsHandler extends RequestHandlerBase implements Permission
         );
       }
 
-      Map<String, Object> params = req.getParams().getAll(null, NAME, COLLECTION_PROP);
+      Map<String, Object> params = copy(req.getParams(), null, NAME, COLLECTION_PROP);
       params.put(CoreAdminParams.BACKUP_LOCATION, location);
       // from CREATE_OP:
-      req.getParams().getAll(params, COLL_CONF, REPLICATION_FACTOR, MAX_SHARDS_PER_NODE, STATE_FORMAT,
+      copy(req.getParams(), params, COLL_CONF, REPLICATION_FACTOR, MAX_SHARDS_PER_NODE, STATE_FORMAT,
           AUTO_ADD_REPLICAS, CREATE_NODE_SET, CREATE_NODE_SET_SHUFFLE);
       copyPropertiesWithPrefix(req.getParams(), params, COLL_PROP_PREFIX);
       return params;
@@ -1002,7 +1004,7 @@ public class CollectionsHandler extends RequestHandlerBase implements Permission
                 + collectionName + "', no action taken.");
       }
 
-      Map<String, Object> params = req.getParams().getAll(null, COLLECTION_PROP, CoreAdminParams.COMMIT_NAME);
+      Map<String, Object> params = copy(req.getParams(), null, COLLECTION_PROP, CoreAdminParams.COMMIT_NAME);
       return params;
     }),
     DELETESNAPSHOT_OP(DELETESNAPSHOT, (req, rsp, h) -> {
@@ -1014,7 +1016,7 @@ public class CollectionsHandler extends RequestHandlerBase implements Permission
         throw new SolrException(ErrorCode.BAD_REQUEST, "Collection '" + collectionName + "' does not exist, no action taken.");
       }
 
-      Map<String, Object> params = req.getParams().getAll(null, COLLECTION_PROP, CoreAdminParams.COMMIT_NAME);
+      Map<String, Object> params = copy(req.getParams(), null, COLLECTION_PROP, CoreAdminParams.COMMIT_NAME);
       return params;
     }),
     LISTSNAPSHOTS_OP(LISTSNAPSHOTS, (req, rsp, h) -> {
@@ -1037,7 +1039,7 @@ public class CollectionsHandler extends RequestHandlerBase implements Permission
       return null;
     }),
     REPLACENODE_OP(REPLACENODE, (req, rsp, h) -> {
-      return req.getParams().getAll(null,
+      return copy(req.getParams(), null,
           "source", //legacy
           "target",//legacy
           WAIT_FOR_FINAL_STATE,
@@ -1045,10 +1047,10 @@ public class CollectionsHandler extends RequestHandlerBase implements Permission
           CollectionParams.TARGET_NODE);
     }),
     MOVEREPLICA_OP(MOVEREPLICA, (req, rsp, h) -> {
-      Map<String, Object> map = req.getParams().required().getAll(null,
+      Map<String, Object> map = copy(req.getParams().required(), null,
           COLLECTION_PROP);
 
-      return req.getParams().getAll(map,
+      return copy(req.getParams(), map,
           CollectionParams.FROM_NODE,
           CollectionParams.SOURCE_NODE,
           CollectionParams.TARGET_NODE,
@@ -1057,7 +1059,7 @@ public class CollectionsHandler extends RequestHandlerBase implements Permission
           "replica",
           "shard");
     }),
-    DELETENODE_OP(DELETENODE, (req, rsp, h) -> req.getParams().required().getAll(null, "node"));
+    DELETENODE_OP(DELETENODE, (req, rsp, h) -> copy(req.getParams().required(), null, "node"));
 
     /**
      * Places all prefixed properties in the sink map (or a new map) using the prefix as the key and a map of
@@ -1318,18 +1320,28 @@ public class CollectionsHandler extends RequestHandlerBase implements Permission
     return Boolean.TRUE;
   }
 
-  /**
-   * Calls the consumer for each parameter and with all values.
-   * This may be more convenient than using the iterator.
-   */
-  //TODO put on SolrParams, or maybe SolrParams should implement Iterable<Map.Entry<String,String[]>
-  private static void forEach(SolrParams params, BiConsumer<String, String[]> consumer) {
-    //TODO do we add a predicate for the parameter as a filter? It would avoid calling getParams
-    final Iterator<String> iterator = params.getParameterNamesIterator();
-    while (iterator.hasNext()) {
-      String param = iterator.next();
-      String[] values = params.getParams(param);
-      consumer.accept(param, values);
+  // These "copy" methods were once SolrParams.getAll but were moved here as there is no universal way that
+  //  a SolrParams can be represented in a Map; there are various choices.
+
+  /**Copy all params to the given map or if the given map is null create a new one */
+  static Map<String, Object> copy(SolrParams source, Map<String, Object> sink, Collection<String> paramNames) {
+    if (sink == null) sink = new LinkedHashMap<>();
+    for (String param : paramNames) {
+      String[] v = source.getParams(param);
+      if (v != null && v.length > 0) {
+        if (v.length == 1) {
+          sink.put(param, v[0]);
+        } else {
+          sink.put(param, v);
+        }
+      }
     }
+    return sink;
   }
+
+  /**Copy all params to the given map or if the given map is null create a new one */
+  static Map<String, Object> copy(SolrParams source, Map<String, Object> sink, String... paramNames){
+    return copy(source, sink, paramNames == null ? Collections.emptyList() : Arrays.asList(paramNames));
+  }
+
 }
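
For reference, a short sketch of how the new copy helpers behave; it mirrors the testCopyParamsToMap test added further down in this commit, and it assumes code in the same package as CollectionsHandler since copy is package-private. It is a fragment, with values chosen only for illustration.

    ModifiableSolrParams p = new ModifiableSolrParams();
    p.add("name", "mycoll");
    p.add("shards", "shard1");
    p.add("shards", "shard2");

    Map<String, Object> m = CollectionsHandler.copy(p, null, "name", "shards");
    // m.get("name")   -> "mycoll"               (a single value is stored as a String)
    // m.get("shards") -> {"shard1", "shard2"}   (multiple values are stored as a String[])

    // Wrapping the source in required() keeps the old validation behaviour:
    // copying an absent required parameter throws a SolrException with a 400 code.
    CollectionsHandler.copy(p.required(), null, "numShards"); // -> SolrException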

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/1409ab8f/solr/core/src/java/org/apache/solr/handler/admin/ConfigSetsHandler.java
----------------------------------------------------------------------
diff --git a/solr/core/src/java/org/apache/solr/handler/admin/ConfigSetsHandler.java b/solr/core/src/java/org/apache/solr/handler/admin/ConfigSetsHandler.java
index 34313d0..49de07b 100644
--- a/solr/core/src/java/org/apache/solr/handler/admin/ConfigSetsHandler.java
+++ b/solr/core/src/java/org/apache/solr/handler/admin/ConfigSetsHandler.java
@@ -55,17 +55,17 @@ import org.apache.solr.security.AuthorizationContext;
 import org.apache.solr.security.PermissionNameProvider;
 import org.apache.zookeeper.CreateMode;
 import org.apache.zookeeper.KeeperException;
-import static org.apache.solr.cloud.OverseerConfigSetMessageHandler.BASE_CONFIGSET;
-import static org.apache.solr.cloud.OverseerConfigSetMessageHandler.CONFIGSETS_ACTION_PREFIX;
-import static org.apache.solr.cloud.OverseerConfigSetMessageHandler.PROPERTY_PREFIX;
-import static org.apache.solr.common.params.CommonParams.NAME;
-import static org.apache.solr.common.params.ConfigSetParams.ConfigSetAction.*;
-
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-
 import static org.apache.solr.cloud.Overseer.QUEUE_OPERATION;
+import static org.apache.solr.cloud.OverseerConfigSetMessageHandler.BASE_CONFIGSET;
+import static org.apache.solr.cloud.OverseerConfigSetMessageHandler.CONFIGSETS_ACTION_PREFIX;
+import static org.apache.solr.cloud.OverseerConfigSetMessageHandler.PROPERTY_PREFIX;
+import static org.apache.solr.common.params.CommonParams.NAME;
+import static org.apache.solr.common.params.ConfigSetParams.ConfigSetAction.CREATE;
+import static org.apache.solr.common.params.ConfigSetParams.ConfigSetAction.DELETE;
+import static org.apache.solr.common.params.ConfigSetParams.ConfigSetAction.LIST;
 
 /**
  * A {@link org.apache.solr.request.SolrRequestHandler} for ConfigSets API requests.
@@ -258,14 +258,14 @@ public class ConfigSetsHandler extends RequestHandlerBase implements PermissionN
     CREATE_OP(CREATE) {
       @Override
       Map<String, Object> call(SolrQueryRequest req, SolrQueryResponse rsp, ConfigSetsHandler h) throws Exception {
-        Map<String, Object> props = req.getParams().required().getAll(null, NAME, BASE_CONFIGSET);
+        Map<String, Object> props = CollectionsHandler.copy(req.getParams().required(), null, NAME, BASE_CONFIGSET);
         return copyPropertiesWithPrefix(req.getParams(), props, PROPERTY_PREFIX + ".");
       }
     },
     DELETE_OP(DELETE) {
       @Override
       Map<String, Object> call(SolrQueryRequest req, SolrQueryResponse rsp, ConfigSetsHandler h) throws Exception {
-        return req.getParams().required().getAll(null, NAME);
+        return CollectionsHandler.copy(req.getParams().required(), null, NAME);
       }
     },
     LIST_OP(LIST) {

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/1409ab8f/solr/core/src/java/org/apache/solr/handler/admin/MetricsCollectorHandler.java
----------------------------------------------------------------------
diff --git a/solr/core/src/java/org/apache/solr/handler/admin/MetricsCollectorHandler.java b/solr/core/src/java/org/apache/solr/handler/admin/MetricsCollectorHandler.java
index 3d8b6e0..7de3ac2 100644
--- a/solr/core/src/java/org/apache/solr/handler/admin/MetricsCollectorHandler.java
+++ b/solr/core/src/java/org/apache/solr/handler/admin/MetricsCollectorHandler.java
@@ -96,7 +96,7 @@ public class MetricsCollectorHandler extends RequestHandlerBase {
   public void init(NamedList initArgs) {
     super.init(initArgs);
     if (initArgs != null) {
-      params = SolrParams.toSolrParams(initArgs);
+      params = initArgs.toSolrParams();
     } else {
       params = new ModifiableSolrParams();
     }

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/1409ab8f/solr/core/src/java/org/apache/solr/handler/component/QueryElevationComponent.java
----------------------------------------------------------------------
diff --git a/solr/core/src/java/org/apache/solr/handler/component/QueryElevationComponent.java b/solr/core/src/java/org/apache/solr/handler/component/QueryElevationComponent.java
index 6511c67..d7b8474 100644
--- a/solr/core/src/java/org/apache/solr/handler/component/QueryElevationComponent.java
+++ b/solr/core/src/java/org/apache/solr/handler/component/QueryElevationComponent.java
@@ -167,7 +167,7 @@ public class QueryElevationComponent extends SearchComponent implements SolrCore
 
   @Override
   public void init(NamedList args) {
-    this.initArgs = SolrParams.toSolrParams(args);
+    this.initArgs = args.toSolrParams();
   }
 
   @Override

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/1409ab8f/solr/core/src/java/org/apache/solr/highlight/HighlightingPluginBase.java
----------------------------------------------------------------------
diff --git a/solr/core/src/java/org/apache/solr/highlight/HighlightingPluginBase.java b/solr/core/src/java/org/apache/solr/highlight/HighlightingPluginBase.java
index bed4a1d..5cbf123 100644
--- a/solr/core/src/java/org/apache/solr/highlight/HighlightingPluginBase.java
+++ b/solr/core/src/java/org/apache/solr/highlight/HighlightingPluginBase.java
@@ -44,7 +44,7 @@ public abstract class HighlightingPluginBase implements SolrInfoBean, SolrMetric
     if( args != null ) {
       Object o = args.get("defaults");
       if (o != null && o instanceof NamedList ) {
-        defaults = SolrParams.toSolrParams((NamedList)o);
+        defaults = ((NamedList) o).toSolrParams();
       }
     }
   }

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/1409ab8f/solr/core/src/java/org/apache/solr/request/LocalSolrQueryRequest.java
----------------------------------------------------------------------
diff --git a/solr/core/src/java/org/apache/solr/request/LocalSolrQueryRequest.java b/solr/core/src/java/org/apache/solr/request/LocalSolrQueryRequest.java
index 3421126..889877a 100644
--- a/solr/core/src/java/org/apache/solr/request/LocalSolrQueryRequest.java
+++ b/solr/core/src/java/org/apache/solr/request/LocalSolrQueryRequest.java
@@ -16,16 +16,16 @@
  */
 package org.apache.solr.request;
 
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.Map;
+
 import org.apache.solr.common.params.CommonParams;
 import org.apache.solr.common.params.MultiMapSolrParams;
 import org.apache.solr.common.params.SolrParams;
 import org.apache.solr.common.util.NamedList;
 import org.apache.solr.core.SolrCore;
 
-import java.util.Map;
-import java.util.HashMap;
-import java.util.Iterator;
-
 // With the addition of SolrParams, this class isn't needed for much anymore... it's currently
 // retained more for backward compatibility.
 
@@ -56,7 +56,7 @@ public class LocalSolrQueryRequest extends SolrQueryRequestBase {
   }
 
   public LocalSolrQueryRequest(SolrCore core, NamedList args) {
-    super(core, SolrParams.toSolrParams(args));
+    super(core, args.toSolrParams());
   }
 
   public LocalSolrQueryRequest(SolrCore core, Map<String,String[]> args) {

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/1409ab8f/solr/core/src/java/org/apache/solr/response/XSLTResponseWriter.java
----------------------------------------------------------------------
diff --git a/solr/core/src/java/org/apache/solr/response/XSLTResponseWriter.java b/solr/core/src/java/org/apache/solr/response/XSLTResponseWriter.java
index 7b6cf00..d9cb939 100644
--- a/solr/core/src/java/org/apache/solr/response/XSLTResponseWriter.java
+++ b/solr/core/src/java/org/apache/solr/response/XSLTResponseWriter.java
@@ -58,7 +58,7 @@ public class XSLTResponseWriter implements QueryResponseWriter {
   
   @Override
   public void init(NamedList n) {
-      final SolrParams p = SolrParams.toSolrParams(n);
+    final SolrParams p = n.toSolrParams();
       xsltCacheLifetimeSeconds = p.getInt(XSLT_CACHE_PARAM,XSLT_CACHE_DEFAULT);
       log.info("xsltCacheLifetimeSeconds=" + xsltCacheLifetimeSeconds);
   }

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/1409ab8f/solr/core/src/java/org/apache/solr/schema/IndexSchema.java
----------------------------------------------------------------------
diff --git a/solr/core/src/java/org/apache/solr/schema/IndexSchema.java b/solr/core/src/java/org/apache/solr/schema/IndexSchema.java
index 7fb4f79..336d3da 100644
--- a/solr/core/src/java/org/apache/solr/schema/IndexSchema.java
+++ b/solr/core/src/java/org/apache/solr/schema/IndexSchema.java
@@ -971,7 +971,7 @@ public class IndexSchema {
         // configure a factory, get a similarity back
         final NamedList<Object> namedList = DOMUtil.childNodesToNamedList(node);
         namedList.add(SimilarityFactory.CLASS_NAME, classArg);
-        SolrParams params = SolrParams.toSolrParams(namedList);
+        SolrParams params = namedList.toSolrParams();
         similarityFactory = (SimilarityFactory)obj;
         similarityFactory.init(params);
       } else {

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/1409ab8f/solr/core/src/java/org/apache/solr/schema/ManagedIndexSchemaFactory.java
----------------------------------------------------------------------
diff --git a/solr/core/src/java/org/apache/solr/schema/ManagedIndexSchemaFactory.java b/solr/core/src/java/org/apache/solr/schema/ManagedIndexSchemaFactory.java
index d4a10bd..b9f9645 100644
--- a/solr/core/src/java/org/apache/solr/schema/ManagedIndexSchemaFactory.java
+++ b/solr/core/src/java/org/apache/solr/schema/ManagedIndexSchemaFactory.java
@@ -67,7 +67,7 @@ public class ManagedIndexSchemaFactory extends IndexSchemaFactory implements Sol
 
   @Override
   public void init(NamedList args) {
-    SolrParams params = SolrParams.toSolrParams(args);
+    SolrParams params = args.toSolrParams();
     isMutable = params.getBool("mutable", true);
     args.remove("mutable");
     managedSchemaResourceName = params.get(MANAGED_SCHEMA_RESOURCE_NAME, DEFAULT_MANAGED_SCHEMA_RESOURCE_NAME);

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/1409ab8f/solr/core/src/java/org/apache/solr/spelling/DirectSolrSpellChecker.java
----------------------------------------------------------------------
diff --git a/solr/core/src/java/org/apache/solr/spelling/DirectSolrSpellChecker.java b/solr/core/src/java/org/apache/solr/spelling/DirectSolrSpellChecker.java
index a1f8df8..9188f54 100644
--- a/solr/core/src/java/org/apache/solr/spelling/DirectSolrSpellChecker.java
+++ b/solr/core/src/java/org/apache/solr/spelling/DirectSolrSpellChecker.java
@@ -94,7 +94,7 @@ public class DirectSolrSpellChecker extends SolrSpellChecker {
   @Override
   public String init(NamedList config, SolrCore core) {
 
-    SolrParams params = SolrParams.toSolrParams(config);
+    SolrParams params = config.toSolrParams();
 
     LOG.info("init: " + config);
     String name = super.init(config, core);

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/1409ab8f/solr/core/src/java/org/apache/solr/update/processor/ClassificationUpdateProcessorFactory.java
----------------------------------------------------------------------
diff --git a/solr/core/src/java/org/apache/solr/update/processor/ClassificationUpdateProcessorFactory.java b/solr/core/src/java/org/apache/solr/update/processor/ClassificationUpdateProcessorFactory.java
index 0cde7e9..d252d69 100644
--- a/solr/core/src/java/org/apache/solr/update/processor/ClassificationUpdateProcessorFactory.java
+++ b/solr/core/src/java/org/apache/solr/update/processor/ClassificationUpdateProcessorFactory.java
@@ -65,7 +65,7 @@ public class ClassificationUpdateProcessorFactory extends UpdateRequestProcessor
   @Override
   public void init(final NamedList args) {
     if (args != null) {
-      params = SolrParams.toSolrParams(args);
+      params = args.toSolrParams();
       classificationParams = new ClassificationUpdateProcessorParams();
 
       String fieldNames = params.get(INPUT_FIELDS_PARAM);// must be a comma separated list of fields

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/1409ab8f/solr/core/src/java/org/apache/solr/update/processor/IgnoreCommitOptimizeUpdateProcessorFactory.java
----------------------------------------------------------------------
diff --git a/solr/core/src/java/org/apache/solr/update/processor/IgnoreCommitOptimizeUpdateProcessorFactory.java b/solr/core/src/java/org/apache/solr/update/processor/IgnoreCommitOptimizeUpdateProcessorFactory.java
index 6559cc7..315d6cd 100644
--- a/solr/core/src/java/org/apache/solr/update/processor/IgnoreCommitOptimizeUpdateProcessorFactory.java
+++ b/solr/core/src/java/org/apache/solr/update/processor/IgnoreCommitOptimizeUpdateProcessorFactory.java
@@ -50,7 +50,7 @@ public class IgnoreCommitOptimizeUpdateProcessorFactory extends UpdateRequestPro
 
   @Override
   public void init(final NamedList args) {
-    SolrParams params = (args != null) ? SolrParams.toSolrParams(args) : null;
+    SolrParams params = (args != null) ? args.toSolrParams() : null;
     if (params == null) {
       errorCode = ErrorCode.FORBIDDEN; // default is 403 error
       responseMsg = DEFAULT_RESPONSE_MSG;

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/1409ab8f/solr/core/src/java/org/apache/solr/update/processor/LogUpdateProcessorFactory.java
----------------------------------------------------------------------
diff --git a/solr/core/src/java/org/apache/solr/update/processor/LogUpdateProcessorFactory.java b/solr/core/src/java/org/apache/solr/update/processor/LogUpdateProcessorFactory.java
index 66ec93f..06057f2 100644
--- a/solr/core/src/java/org/apache/solr/update/processor/LogUpdateProcessorFactory.java
+++ b/solr/core/src/java/org/apache/solr/update/processor/LogUpdateProcessorFactory.java
@@ -54,7 +54,7 @@ public class LogUpdateProcessorFactory extends UpdateRequestProcessorFactory imp
   @Override
   public void init( final NamedList args ) {
     if( args != null ) {
-      SolrParams params = SolrParams.toSolrParams( args );
+      SolrParams params = args.toSolrParams();
       maxNumToLog = params.getInt( "maxNumToLog", maxNumToLog );
       slowUpdateThresholdMillis = params.getInt("slowUpdateThresholdMillis", slowUpdateThresholdMillis);
     }

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/1409ab8f/solr/core/src/java/org/apache/solr/update/processor/RegexpBoostProcessorFactory.java
----------------------------------------------------------------------
diff --git a/solr/core/src/java/org/apache/solr/update/processor/RegexpBoostProcessorFactory.java b/solr/core/src/java/org/apache/solr/update/processor/RegexpBoostProcessorFactory.java
index d39023a..263111d 100644
--- a/solr/core/src/java/org/apache/solr/update/processor/RegexpBoostProcessorFactory.java
+++ b/solr/core/src/java/org/apache/solr/update/processor/RegexpBoostProcessorFactory.java
@@ -39,7 +39,7 @@ public class RegexpBoostProcessorFactory extends UpdateRequestProcessorFactory {
     @Override
     public void init(@SuppressWarnings("rawtypes") final NamedList args) {
         if (args != null) {
-            this.params = SolrParams.toSolrParams(args);
+          this.params = args.toSolrParams();
         }
     }
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/1409ab8f/solr/core/src/java/org/apache/solr/update/processor/SignatureUpdateProcessorFactory.java
----------------------------------------------------------------------
diff --git a/solr/core/src/java/org/apache/solr/update/processor/SignatureUpdateProcessorFactory.java b/solr/core/src/java/org/apache/solr/update/processor/SignatureUpdateProcessorFactory.java
index 40ef398..7257fd7 100644
--- a/solr/core/src/java/org/apache/solr/update/processor/SignatureUpdateProcessorFactory.java
+++ b/solr/core/src/java/org/apache/solr/update/processor/SignatureUpdateProcessorFactory.java
@@ -52,7 +52,7 @@ public class SignatureUpdateProcessorFactory
   @Override
   public void init(final NamedList args) {
     if (args != null) {
-      SolrParams params = SolrParams.toSolrParams(args);
+      SolrParams params = args.toSolrParams();
       boolean enabled = params.getBool("enabled", true);
       this.enabled = enabled;
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/1409ab8f/solr/core/src/java/org/apache/solr/update/processor/URLClassifyProcessorFactory.java
----------------------------------------------------------------------
diff --git a/solr/core/src/java/org/apache/solr/update/processor/URLClassifyProcessorFactory.java b/solr/core/src/java/org/apache/solr/update/processor/URLClassifyProcessorFactory.java
index ec85e9b..418eba3 100644
--- a/solr/core/src/java/org/apache/solr/update/processor/URLClassifyProcessorFactory.java
+++ b/solr/core/src/java/org/apache/solr/update/processor/URLClassifyProcessorFactory.java
@@ -32,7 +32,7 @@ public class URLClassifyProcessorFactory extends UpdateRequestProcessorFactory {
   @Override
   public void init(@SuppressWarnings("rawtypes") final NamedList args) {
     if (args != null) {
-      this.params = SolrParams.toSolrParams(args);
+      this.params = args.toSolrParams();
     }
   }
   

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/1409ab8f/solr/core/src/test/org/apache/solr/BasicFunctionalityTest.java
----------------------------------------------------------------------
diff --git a/solr/core/src/test/org/apache/solr/BasicFunctionalityTest.java b/solr/core/src/test/org/apache/solr/BasicFunctionalityTest.java
index 2321188..ed1f663 100644
--- a/solr/core/src/test/org/apache/solr/BasicFunctionalityTest.java
+++ b/solr/core/src/test/org/apache/solr/BasicFunctionalityTest.java
@@ -661,7 +661,7 @@ public class BasicFunctionalityTest extends SolrTestCaseJ4 {
     more.add("s", "ccc");
     more.add("ss","YYY");
     more.add("xx","XXX");
-    p = SolrParams.wrapAppended(p, SolrParams.toSolrParams(more));
+    p = SolrParams.wrapAppended(p, more.toSolrParams());
     assertEquals(3, p.getParams("s").length);
     assertEquals("bbb", p.getParams("s")[0]);
     assertEquals("aaa", p.getParams("s")[1]);

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/1409ab8f/solr/core/src/test/org/apache/solr/handler/admin/TestCollectionAPIs.java
----------------------------------------------------------------------
diff --git a/solr/core/src/test/org/apache/solr/handler/admin/TestCollectionAPIs.java b/solr/core/src/test/org/apache/solr/handler/admin/TestCollectionAPIs.java
index c08328c..3601347 100644
--- a/solr/core/src/test/org/apache/solr/handler/admin/TestCollectionAPIs.java
+++ b/solr/core/src/test/org/apache/solr/handler/admin/TestCollectionAPIs.java
@@ -29,8 +29,10 @@ import org.apache.solr.SolrTestCaseJ4;
 import org.apache.solr.api.Api;
 import org.apache.solr.api.ApiBag;
 import org.apache.solr.client.solrj.SolrRequest;
+import org.apache.solr.common.SolrException;
 import org.apache.solr.common.cloud.ZkNodeProps;
 import org.apache.solr.common.params.CollectionParams;
+import org.apache.solr.common.params.ModifiableSolrParams;
 import org.apache.solr.common.params.MultiMapSolrParams;
 import org.apache.solr.common.params.SolrParams;
 import org.apache.solr.common.util.CommandOperation;
@@ -42,6 +44,7 @@ import org.apache.solr.request.LocalSolrQueryRequest;
 import org.apache.solr.request.SolrQueryRequest;
 import org.apache.solr.response.SolrQueryResponse;
 import org.apache.solr.servlet.SolrRequestParsers;
+import org.junit.Test;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -56,6 +59,26 @@ import static org.apache.solr.common.util.Utils.fromJSONString;
 public class TestCollectionAPIs extends SolrTestCaseJ4 {
   private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
 
+  @Test
+  public void testCopyParamsToMap() {
+    ModifiableSolrParams params = new ModifiableSolrParams();
+    params.add("x", "X1");
+    params.add("x", "X2");
+    params.add("y", "Y");
+    Map<String, Object> m = CollectionsHandler.copy(params, null, "x", "y");
+    String[] x = (String[]) m.get("x");
+    assertEquals(2, x.length);
+    assertEquals("X1", x[0]);
+    assertEquals("X2", x[1]);
+    assertEquals("Y", m.get("y"));
+    try {
+      CollectionsHandler.copy(params.required(), null, "z");
+      fail("Error expected");
+    } catch (SolrException e) {
+      assertEquals(e.code(), SolrException.ErrorCode.BAD_REQUEST.code);
+
+    }
+  }
 
   public void testCommands() throws Exception {
     MockCollectionsHandler collectionsHandler = new MockCollectionsHandler();

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/1409ab8f/solr/solrj/src/java/org/apache/solr/client/solrj/io/graph/GatherNodesStream.java
----------------------------------------------------------------------
diff --git a/solr/solrj/src/java/org/apache/solr/client/solrj/io/graph/GatherNodesStream.java b/solr/solrj/src/java/org/apache/solr/client/solrj/io/graph/GatherNodesStream.java
index 9f4efac..c0fd054 100644
--- a/solr/solrj/src/java/org/apache/solr/client/solrj/io/graph/GatherNodesStream.java
+++ b/solr/solrj/src/java/org/apache/solr/client/solrj/io/graph/GatherNodesStream.java
@@ -18,37 +18,38 @@
 package org.apache.solr.client.solrj.io.graph;
 
 import java.io.IOException;
+import java.util.ArrayList;
 import java.util.HashMap;
+import java.util.HashSet;
 import java.util.Iterator;
 import java.util.List;
 import java.util.Locale;
 import java.util.Map;
 import java.util.Set;
-import java.util.HashSet;
-import java.util.ArrayList;
 import java.util.concurrent.Callable;
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Future;
 import java.util.stream.Collectors;
 
-import org.apache.solr.client.solrj.io.eq.MultipleFieldEqualitor;
-import org.apache.solr.client.solrj.io.stream.*;
-import org.apache.solr.client.solrj.io.stream.metrics.*;
 import org.apache.solr.client.solrj.io.Tuple;
 import org.apache.solr.client.solrj.io.comp.StreamComparator;
 import org.apache.solr.client.solrj.io.eq.FieldEqualitor;
+import org.apache.solr.client.solrj.io.eq.MultipleFieldEqualitor;
+import org.apache.solr.client.solrj.io.stream.CloudSolrStream;
+import org.apache.solr.client.solrj.io.stream.StreamContext;
+import org.apache.solr.client.solrj.io.stream.TupleStream;
+import org.apache.solr.client.solrj.io.stream.UniqueStream;
 import org.apache.solr.client.solrj.io.stream.expr.Explanation;
+import org.apache.solr.client.solrj.io.stream.expr.Explanation.ExpressionType;
 import org.apache.solr.client.solrj.io.stream.expr.Expressible;
 import org.apache.solr.client.solrj.io.stream.expr.StreamExplanation;
 import org.apache.solr.client.solrj.io.stream.expr.StreamExpression;
 import org.apache.solr.client.solrj.io.stream.expr.StreamExpressionNamedParameter;
 import org.apache.solr.client.solrj.io.stream.expr.StreamExpressionValue;
 import org.apache.solr.client.solrj.io.stream.expr.StreamFactory;
-import org.apache.solr.client.solrj.io.stream.expr.Explanation.ExpressionType;
+import org.apache.solr.client.solrj.io.stream.metrics.Metric;
 import org.apache.solr.common.params.ModifiableSolrParams;
-import org.apache.solr.common.params.SolrParams;
 import org.apache.solr.common.util.ExecutorUtil;
-import org.apache.solr.common.util.NamedList;
 import org.apache.solr.common.util.SolrjNamedThreadFactory;
 
 import static org.apache.solr.common.params.CommonParams.SORT;
@@ -451,7 +452,8 @@ public class GatherNodesStream extends TupleStream implements Expressible {
         }
       }
       
-      ModifiableSolrParams joinSParams = new ModifiableSolrParams(SolrParams.toMultiMap(new NamedList(queryParams)));
+      ModifiableSolrParams joinSParams = new ModifiableSolrParams();
+      queryParams.forEach(joinSParams::add);
       joinSParams.set("fl", buf.toString());
       joinSParams.set("qt", "/export");
       joinSParams.set(SORT, gather + " asc,"+traverseTo +" asc");

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/1409ab8f/solr/solrj/src/java/org/apache/solr/client/solrj/io/stream/FacetStream.java
----------------------------------------------------------------------
diff --git a/solr/solrj/src/java/org/apache/solr/client/solrj/io/stream/FacetStream.java b/solr/solrj/src/java/org/apache/solr/client/solrj/io/stream/FacetStream.java
index b191085..4010ff42 100644
--- a/solr/solrj/src/java/org/apache/solr/client/solrj/io/stream/FacetStream.java
+++ b/solr/solrj/src/java/org/apache/solr/client/solrj/io/stream/FacetStream.java
@@ -18,13 +18,14 @@ package org.apache.solr.client.solrj.io.stream;
 
 import java.io.IOException;
 import java.util.ArrayList;
+import java.util.Arrays;
 import java.util.Collections;
 import java.util.HashMap;
 import java.util.List;
 import java.util.Locale;
 import java.util.Map;
-import java.util.Optional;
 import java.util.Map.Entry;
+import java.util.Optional;
 import java.util.stream.Collectors;
 
 import org.apache.solr.client.solrj.impl.CloudSolrClient;
@@ -303,9 +304,8 @@ public class FacetStream extends TupleStream implements Expressible  {
     
     child.setImplementingClass("Solr/Lucene");
     child.setExpressionType(ExpressionType.DATASTORE);
-    ModifiableSolrParams tmpParams = new ModifiableSolrParams(SolrParams.toMultiMap(params.toNamedList()));
 
-    child.setExpression(tmpParams.getMap().entrySet().stream().map(e -> String.format(Locale.ROOT, "%s=%s", e.getKey(), e.getValue())).collect(Collectors.joining(",")));
+    child.setExpression(params.stream().map(e -> String.format(Locale.ROOT, "%s=%s", e.getKey(), Arrays.toString(e.getValue()))).collect(Collectors.joining(",")));
     
     explanation.addChild(child);
     

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/1409ab8f/solr/solrj/src/java/org/apache/solr/client/solrj/io/stream/SqlStream.java
----------------------------------------------------------------------
diff --git a/solr/solrj/src/java/org/apache/solr/client/solrj/io/stream/SqlStream.java b/solr/solrj/src/java/org/apache/solr/client/solrj/io/stream/SqlStream.java
index 100d722..2fb2aa6 100644
--- a/solr/solrj/src/java/org/apache/solr/client/solrj/io/stream/SqlStream.java
+++ b/solr/solrj/src/java/org/apache/solr/client/solrj/io/stream/SqlStream.java
@@ -124,8 +124,7 @@ public class SqlStream extends TupleStream implements Expressible {
 
     // parameters
 
-    ModifiableSolrParams mParams = new ModifiableSolrParams(SolrParams.toMultiMap(params.toNamedList()));
-    for (Entry<String, String[]> param : mParams.getMap().entrySet()) {
+    for (Entry<String, String[]> param : params) {
       String value = String.join(",", param.getValue());
 
       // SOLR-8409: This is a special case where the params contain a " character
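
Both this hunk and the FacetStream one above lean on the fact that SolrParams is now Iterable<Map.Entry<String,String[]>> and exposes a stream(), so the old round-trip through toNamedList()/toMultiMap is no longer needed. A small sketch of the two styles, with illustrative parameter values:

    import java.util.Arrays;
    import java.util.Locale;
    import java.util.Map;
    import java.util.stream.Collectors;

    import org.apache.solr.common.params.ModifiableSolrParams;

    public class ParamsIterationSketch {
      public static void main(String[] args) {
        ModifiableSolrParams params = new ModifiableSolrParams();
        params.add("fq", "inStock:true");
        params.add("fq", "popularity:[5 TO *]");
        params.set("rows", "10");

        // Direct iteration, as SqlStream does in the hunk above:
        for (Map.Entry<String, String[]> e : params) {
          System.out.println(e.getKey() + " -> " + String.join(",", e.getValue()));
        }

        // Stream form, as FacetStream (above) and TimeSeriesStream (below) use for explanations:
        String joined = params.stream()
            .map(e -> String.format(Locale.ROOT, "%s=%s", e.getKey(), Arrays.toString(e.getValue())))
            .collect(Collectors.joining(","));
        System.out.println(joined);
      }
    }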

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/1409ab8f/solr/solrj/src/java/org/apache/solr/client/solrj/io/stream/TimeSeriesStream.java
----------------------------------------------------------------------
diff --git a/solr/solrj/src/java/org/apache/solr/client/solrj/io/stream/TimeSeriesStream.java b/solr/solrj/src/java/org/apache/solr/client/solrj/io/stream/TimeSeriesStream.java
index 610a6df..cb743e9 100644
--- a/solr/solrj/src/java/org/apache/solr/client/solrj/io/stream/TimeSeriesStream.java
+++ b/solr/solrj/src/java/org/apache/solr/client/solrj/io/stream/TimeSeriesStream.java
@@ -21,6 +21,7 @@ import java.time.LocalDateTime;
 import java.time.ZoneOffset;
 import java.time.format.DateTimeFormatter;
 import java.util.ArrayList;
+import java.util.Arrays;
 import java.util.HashMap;
 import java.util.List;
 import java.util.Locale;
@@ -256,9 +257,8 @@ public class TimeSeriesStream extends TupleStream implements Expressible  {
 
     child.setImplementingClass("Solr/Lucene");
     child.setExpressionType(ExpressionType.DATASTORE);
-    ModifiableSolrParams tmpParams = new ModifiableSolrParams(SolrParams.toMultiMap(params.toNamedList()));
 
-    child.setExpression(tmpParams.getMap().entrySet().stream().map(e -> String.format(Locale.ROOT, "%s=%s", e.getKey(), e.getValue())).collect(Collectors.joining(",")));
+    child.setExpression(params.stream().map(e -> String.format(Locale.ROOT, "%s=%s", e.getKey(), Arrays.toString(e.getValue()))).collect(Collectors.joining(",")));
 
     explanation.addChild(child);
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/1409ab8f/solr/solrj/src/java/org/apache/solr/client/solrj/request/JavaBinUpdateRequestCodec.java
----------------------------------------------------------------------
diff --git a/solr/solrj/src/java/org/apache/solr/client/solrj/request/JavaBinUpdateRequestCodec.java b/solr/solrj/src/java/org/apache/solr/client/solrj/request/JavaBinUpdateRequestCodec.java
index 5759a6c..dde6dba 100644
--- a/solr/solrj/src/java/org/apache/solr/client/solrj/request/JavaBinUpdateRequestCodec.java
+++ b/solr/solrj/src/java/org/apache/solr/client/solrj/request/JavaBinUpdateRequestCodec.java
@@ -146,7 +146,7 @@ public class JavaBinUpdateRequestCodec {
 
       private List readOuterMostDocIterator(DataInputInputStream fis) throws IOException {
         NamedList params = (NamedList) namedList[0].get("params");
-        updateRequest.setParams(new ModifiableSolrParams(SolrParams.toSolrParams(params)));
+        updateRequest.setParams(new ModifiableSolrParams(params.toSolrParams()));
         if (handler == null) return super.readIterator(fis);
         Integer commitWithin = null;
         Boolean overwrite = null;
@@ -165,7 +165,7 @@ public class JavaBinUpdateRequestCodec {
             sdoc = listToSolrInputDocument((List<NamedList>) o);
           } else if (o instanceof NamedList)  {
             UpdateRequest req = new UpdateRequest();
-            req.setParams(new ModifiableSolrParams(SolrParams.toSolrParams((NamedList) o)));
+            req.setParams(new ModifiableSolrParams(((NamedList) o).toSolrParams()));
             handler.update(null, req, null, null);
           } else if (o instanceof Map.Entry){
             sdoc = (SolrInputDocument) ((Map.Entry) o).getKey();
@@ -200,7 +200,7 @@ public class JavaBinUpdateRequestCodec {
     if(updateRequest.getParams()==null) {
       NamedList params = (NamedList) namedList[0].get("params");
       if(params!=null) {
-        updateRequest.setParams(new ModifiableSolrParams(SolrParams.toSolrParams(params)));
+        updateRequest.setParams(new ModifiableSolrParams(params.toSolrParams()));
       }
     }
     delById = (List<String>) namedList[0].get("delById");

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/1409ab8f/solr/solrj/src/java/org/apache/solr/common/MapWriter.java
----------------------------------------------------------------------
diff --git a/solr/solrj/src/java/org/apache/solr/common/MapWriter.java b/solr/solrj/src/java/org/apache/solr/common/MapWriter.java
index 71e084c..1fba397 100644
--- a/solr/solrj/src/java/org/apache/solr/common/MapWriter.java
+++ b/solr/solrj/src/java/org/apache/solr/common/MapWriter.java
@@ -29,13 +29,14 @@ import org.apache.solr.common.util.Utils;
 /**
  * Use this class to push all entries of a Map into an output.
  * This avoids creating map instances and is supposed to be memory efficient.
- * If the entries are primitives, unnecessary boxing is also avoided
+ * If the entries are primitives, unnecessary boxing is also avoided.
  */
 public interface MapWriter extends MapSerializable {
 
   default String jsonStr(){
     return Utils.toJSONString(this);
   }
+
   @Override
   default Map toMap(Map<String, Object> map) {
     try {
@@ -64,6 +65,8 @@ public interface MapWriter extends MapSerializable {
             v = map;
           }
           map.put(k, v);
+          // note: It'd be nice to assert that there is no previous value at 'k' but it's possible the passed in
+          // map is already populated and the intention is to overwrite.
           return this;
         }
 
@@ -77,7 +80,9 @@ public interface MapWriter extends MapSerializable {
   void writeMap(EntryWriter ew) throws IOException;
 
   /**
-   * An interface to push one entry at a time to the output
+   * An interface to push one entry at a time to the output.
+   * The order of the keys is not defined, but we assume they are distinct -- don't call {@code put} more than once
+   * for the same key.
    */
   interface EntryWriter {
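
To illustrate the contract spelled out in the javadoc above, here is a hedged sketch of a MapWriter implementation; the NodeStats class and its fields are hypothetical and not part of this patch.

import java.io.IOException;
import org.apache.solr.common.MapWriter;

// Hypothetical value object used only to illustrate the interface.
class NodeStats implements MapWriter {
  private final String host = "localhost";
  private final int cores = 4;

  @Override
  public void writeMap(EntryWriter ew) throws IOException {
    // Each distinct key is pushed exactly once, per the contract above.
    ew.put("host", host);
    ew.put("cores", cores);
  }
}

The default jsonStr() shown earlier in this file can then serialize such an object without materializing an intermediate Map.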
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/1409ab8f/solr/solrj/src/java/org/apache/solr/common/params/SolrParams.java
----------------------------------------------------------------------
diff --git a/solr/solrj/src/java/org/apache/solr/common/params/SolrParams.java b/solr/solrj/src/java/org/apache/solr/common/params/SolrParams.java
index b78c652..08022b2 100644
--- a/solr/solrj/src/java/org/apache/solr/common/params/SolrParams.java
+++ b/solr/solrj/src/java/org/apache/solr/common/params/SolrParams.java
@@ -457,6 +457,7 @@ public abstract class SolrParams implements Serializable, MapWriter, Iterable<Ma
   }
 
   /** Create a Map&lt;String,String&gt; from a NamedList given no keys are repeated */
+  @Deprecated // Doesn't belong here (no SolrParams).  Just remove.
   public static Map<String,String> toMap(NamedList params) {
     HashMap<String,String> map = new HashMap<>();
     for (int i=0; i<params.size(); i++) {
@@ -466,6 +467,7 @@ public abstract class SolrParams implements Serializable, MapWriter, Iterable<Ma
   }
 
   /** Create a Map&lt;String,String[]&gt; from a NamedList */
+  @Deprecated // Doesn't belong here (no SolrParams).  Just remove.
   public static Map<String,String[]> toMultiMap(NamedList params) {
     HashMap<String,String[]> map = new HashMap<>();
     for (int i=0; i<params.size(); i++) {
@@ -487,14 +489,19 @@ public abstract class SolrParams implements Serializable, MapWriter, Iterable<Ma
     return map;
   }
 
-  /** Create SolrParams from NamedList. */
+  /**
+   * Create SolrParams from NamedList.
+   * @deprecated Use {@link NamedList#toSolrParams()}.
+   */
+  @Deprecated //move to NamedList to allow easier flow
   public static SolrParams toSolrParams(NamedList params) {
-    // always use MultiMap for easier processing further down the chain
-    return new MultiMapSolrParams(toMultiMap(params));
+    return params.toSolrParams();
   }
 
-  /** Create filtered SolrParams. */
+  @Deprecated
   public SolrParams toFilteredSolrParams(List<String> names) {
+    // TODO do this better somehow via a view that filters?  See SolrCore.preDecorateResponse.
+    //   ... and/or add some optional predicates to iterator()?
     NamedList<String> nl = new NamedList<>();
     for (Iterator<String> it = getParameterNamesIterator(); it.hasNext();) {
       final String name = it.next();
@@ -505,7 +512,7 @@ public abstract class SolrParams implements Serializable, MapWriter, Iterable<Ma
         }
       }
     }
-    return toSolrParams(nl);
+    return nl.toSolrParams();
   }
 
   /**
@@ -528,6 +535,10 @@ public abstract class SolrParams implements Serializable, MapWriter, Iterable<Ma
     return result;
   }
 
+  // Deprecated because there isn't a universal way to deal with multi-values (always
+  //  String[] or only for > 1 or always 1st value).  And what to do with nulls or empty string.
+  //  And SolrParams now implements MapWriter.toMap(Map) (a default method).  So what do we do?
+  @Deprecated
   public Map<String, Object> getAll(Map<String, Object> sink, Collection<String> params) {
     if (sink == null) sink = new LinkedHashMap<>();
     for (String param : params) {
@@ -547,6 +558,7 @@ public abstract class SolrParams implements Serializable, MapWriter, Iterable<Ma
   /**Copy all params to the given map or if the given map is null
    * create a new one
    */
+  @Deprecated
   public Map<String, Object> getAll(Map<String, Object> sink, String... params){
     return getAll(sink, params == null ? Collections.emptyList() : Arrays.asList(params));
   }
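
Since the deprecation notes above point at MapWriter as the replacement, here is a hedged sketch of copying params through the inherited default toMap(Map); exactly how multi-valued parameters are represented in the resulting map is determined by SolrParams#writeMap and is not asserted here.

import java.util.LinkedHashMap;
import java.util.Map;
import org.apache.solr.common.params.ModifiableSolrParams;

public class ParamsToMapSketch {
  public static void main(String[] args) {
    ModifiableSolrParams params = new ModifiableSolrParams();
    params.set("q", "*:*");
    params.add("fq", "a", "b");

    // SolrParams implements MapWriter, so the inherited default toMap(Map)
    // offers a non-deprecated way to copy entries into a map.
    Map<String, Object> copy = params.toMap(new LinkedHashMap<>());
    System.out.println(copy);
  }
}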

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/1409ab8f/solr/solrj/src/java/org/apache/solr/common/util/NamedList.java
----------------------------------------------------------------------
diff --git a/solr/solrj/src/java/org/apache/solr/common/util/NamedList.java b/solr/solrj/src/java/org/apache/solr/common/util/NamedList.java
index d34d8e7..1650602 100644
--- a/solr/solrj/src/java/org/apache/solr/common/util/NamedList.java
+++ b/solr/solrj/src/java/org/apache/solr/common/util/NamedList.java
@@ -21,6 +21,7 @@ import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Collection;
 import java.util.Collections;
+import java.util.HashMap;
 import java.util.Iterator;
 import java.util.LinkedHashMap;
 import java.util.List;
@@ -29,6 +30,8 @@ import java.util.Set;
 import java.util.function.BiConsumer;
 
 import org.apache.solr.common.SolrException;
+import org.apache.solr.common.params.MultiMapSolrParams;
+import org.apache.solr.common.params.SolrParams;
 
 /**
  * A simple container class for modeling an ordered list of name/value pairs.
@@ -514,6 +517,33 @@ public class NamedList<T> implements Cloneable, Serializable, Iterable<Map.Entry
     }
     return result;
   }
+  /**
+   * Create SolrParams from NamedList.  Values must be {@code String[]} or {@code List}
+   * (with toString()-appropriate entries), or otherwise have a toString()-appropriate value.
+   * Nulls are retained as such in arrays/lists but otherwise will NPE.
+   */
+  public SolrParams toSolrParams() {
+    HashMap<String,String[]> map = new HashMap<>();
+    for (int i=0; i<this.size(); i++) {
+      String name = this.getName(i);
+      Object val = this.getVal(i);
+      if (val instanceof String[]) {
+        MultiMapSolrParams.addParam(name, (String[]) val, map);
+      } else if (val instanceof List) {
+        List l = (List) val;
+        String[] s = new String[l.size()];
+        for (int j = 0; j < l.size(); j++) {
+          s[j] = l.get(j) == null ? null : l.get(j).toString();
+        }
+        MultiMapSolrParams.addParam(name, s, map);
+      } else {
+        //TODO: we NPE if val is null; yet we support val members above. A bug?
+        MultiMapSolrParams.addParam(name, val.toString(), map);
+      }
+    }
+    // always use MultiMap for easier processing further down the chain
+    return new MultiMapSolrParams(map);
+  }
 
   /**
    * 
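
A hedged usage sketch of the NamedList#toSolrParams conversion added above; the keys and values are illustrative only.

import java.util.Arrays;
import org.apache.solr.common.params.SolrParams;
import org.apache.solr.common.util.NamedList;

public class ToSolrParamsSketch {
  public static void main(String[] args) {
    NamedList<Object> nl = new NamedList<>();
    nl.add("q", "*:*");                               // plain value, converted via toString()
    nl.add("fq", new String[] {"type:a", "type:b"});  // String[] kept as a multi-valued param
    nl.add("fl", Arrays.asList("id", "name"));        // List entries converted individually

    SolrParams params = nl.toSolrParams();            // replaces SolrParams.toSolrParams(nl)
    System.out.println(Arrays.toString(params.getParams("fq")));
  }
}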

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/1409ab8f/solr/solrj/src/test/org/apache/solr/common/params/SolrParamTest.java
----------------------------------------------------------------------
diff --git a/solr/solrj/src/test/org/apache/solr/common/params/SolrParamTest.java b/solr/solrj/src/test/org/apache/solr/common/params/SolrParamTest.java
index a2cb9bd..f079828 100644
--- a/solr/solrj/src/test/org/apache/solr/common/params/SolrParamTest.java
+++ b/solr/solrj/src/test/org/apache/solr/common/params/SolrParamTest.java
@@ -16,16 +16,14 @@
  */
 package org.apache.solr.common.params;
 
-import java.util.Arrays;
-import java.util.HashMap;
-import java.util.Map;
-import java.util.List;
 import java.util.ArrayList;
+import java.util.HashMap;
 import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
 
 import org.apache.lucene.util.LuceneTestCase;
 import org.apache.solr.common.SolrException;
-import org.apache.solr.common.util.NamedList;
 
 /**
  */
@@ -127,36 +125,6 @@ public class SolrParamTest extends LuceneTestCase {
 
   }
 
-  public void testMultiValues() {
-    NamedList nl = new NamedList();
-    nl.add("x", "X1");
-    nl.add("x", "X2");
-    nl.add("x", new String[]{"X3", "X4"});
-    Map<String, String[]> m = SolrParams.toMultiMap(nl);
-    String[] r = m.get("x");
-    assertTrue(Arrays.asList(r).containsAll(Arrays.asList(new String[]{"X1", "X2", "X3", "X4"})));
-  }
-
-  public void testGetAll() {
-    ModifiableSolrParams params = new ModifiableSolrParams();
-    params.add("x", "X1");
-    params.add("x", "X2");
-    params.add("y", "Y");
-    Map<String, Object> m = params.getAll(null, "x", "y");
-    String[] x = (String[]) m.get("x");
-    assertEquals(2, x.length);
-    assertEquals("X1", x[0]);
-    assertEquals("X2", x[1]);
-    assertEquals("Y", m.get("y"));
-    try {
-      params.required().getAll(null, "z");
-      fail("Error expected");
-    } catch (SolrException e) {
-      assertEquals(e.code(), SolrException.ErrorCode.BAD_REQUEST.code);
-
-    }
-  }
-
   public void testModParamAddParams() {
 
     ModifiableSolrParams aaa = new ModifiableSolrParams();


[13/40] lucene-solr:jira/solr-11833: SOLR-12142: EmbeddedSolrServer should use req.getContentWriter

Posted by ab...@apache.org.
SOLR-12142: EmbeddedSolrServer should use req.getContentWriter


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/1c8ab330
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/1c8ab330
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/1c8ab330

Branch: refs/heads/jira/solr-11833
Commit: 1c8ab330d66557a289dd5398576726a43964c9e8
Parents: d09c765
Author: noble <no...@apache.org>
Authored: Thu Apr 19 13:37:31 2018 +1000
Committer: noble <no...@apache.org>
Committed: Thu Apr 19 13:37:31 2018 +1000

----------------------------------------------------------------------
 solr/CHANGES.txt                                |  2 ++
 .../solrj/embedded/EmbeddedSolrServer.java      | 24 ++++++++++++++++----
 2 files changed, 22 insertions(+), 4 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/1c8ab330/solr/CHANGES.txt
----------------------------------------------------------------------
diff --git a/solr/CHANGES.txt b/solr/CHANGES.txt
index e771990..298abad 100644
--- a/solr/CHANGES.txt
+++ b/solr/CHANGES.txt
@@ -230,6 +230,8 @@ Other Changes
 * SOLR-12134: ref-guide 'bare-bones html' validation is now part of 'ant documentation' and validates
   javadoc links locally. (hossman)
 
+* SOLR-12142: EmbeddedSolrServer should use req.getContentWriter (noble)
+
 ==================  7.3.1 ==================
 
 Consult the LUCENE_CHANGES.txt file for additional, low level, changes in this release.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/1c8ab330/solr/core/src/java/org/apache/solr/client/solrj/embedded/EmbeddedSolrServer.java
----------------------------------------------------------------------
diff --git a/solr/core/src/java/org/apache/solr/client/solrj/embedded/EmbeddedSolrServer.java b/solr/core/src/java/org/apache/solr/client/solrj/embedded/EmbeddedSolrServer.java
index 0c7ea25..90eb0d1 100644
--- a/solr/core/src/java/org/apache/solr/client/solrj/embedded/EmbeddedSolrServer.java
+++ b/solr/core/src/java/org/apache/solr/client/solrj/embedded/EmbeddedSolrServer.java
@@ -16,23 +16,27 @@
  */
 package org.apache.solr.client.solrj.embedded;
 
+import java.io.ByteArrayInputStream;
 import java.io.IOException;
 import java.io.InputStream;
 import java.nio.file.Path;
+import java.util.Collections;
 
 import com.google.common.base.Strings;
-
 import org.apache.commons.io.output.ByteArrayOutputStream;
 import org.apache.solr.client.solrj.SolrClient;
 import org.apache.solr.client.solrj.SolrRequest;
 import org.apache.solr.client.solrj.SolrServerException;
 import org.apache.solr.client.solrj.StreamingResponseCallback;
+import org.apache.solr.client.solrj.impl.BinaryRequestWriter;
+import org.apache.solr.client.solrj.impl.BinaryRequestWriter.BAOS;
 import org.apache.solr.common.SolrDocument;
 import org.apache.solr.common.SolrDocumentList;
 import org.apache.solr.common.SolrException;
 import org.apache.solr.common.params.CommonParams;
 import org.apache.solr.common.params.ModifiableSolrParams;
 import org.apache.solr.common.params.SolrParams;
+import org.apache.solr.common.util.ContentStreamBase;
 import org.apache.solr.common.util.JavaBinCodec;
 import org.apache.solr.common.util.NamedList;
 import org.apache.solr.core.CoreContainer;
@@ -126,7 +130,19 @@ public class EmbeddedSolrServer extends SolrClient {
     SolrRequestHandler handler = coreContainer.getRequestHandler(path);
     if (handler != null) {
       try {
-        SolrQueryRequest req = _parser.buildRequestFrom(null, request.getParams(), request.getContentStreams());
+        SolrQueryRequest req = _parser.buildRequestFrom(null, request.getParams(), Collections.singleton(new ContentStreamBase() {
+          @Override
+          public InputStream getStream() throws IOException {
+            BAOS baos = new BAOS();
+            new BinaryRequestWriter().write(request, baos);
+            return new ByteArrayInputStream(baos.getbuf());
+          }
+          @Override
+          public String getContentType() {
+            return CommonParams.JAVABIN_MIME;
+
+          }
+        }));
         req.getContext().put(PATH, path);
         SolrQueryResponse resp = new SolrQueryResponse();
         handler.handleRequest(req, resp);
@@ -201,10 +217,10 @@ public class EmbeddedSolrServer extends SolrClient {
               };
 
 
-          try(ByteArrayOutputStream out = new ByteArrayOutputStream()) {
+          try (ByteArrayOutputStream out = new ByteArrayOutputStream()) {
             createJavaBinCodec(callback, resolver).setWritableDocFields(resolver).marshal(rsp.getValues(), out);
 
-            try(InputStream in = out.toInputStream()){
+            try (InputStream in = out.toInputStream()) {
               return (NamedList<Object>) new JavaBinCodec(resolver).unmarshal(in);
             }
           }
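
For readers unfamiliar with the call path, a hedged client-side sketch follows; the solr home path and core name are placeholders, and the point is only that the request body now reaches the handler through the request's content writer (javabin) rather than through getContentStreams().

import java.nio.file.Paths;
import org.apache.solr.client.solrj.embedded.EmbeddedSolrServer;
import org.apache.solr.client.solrj.request.UpdateRequest;
import org.apache.solr.common.SolrInputDocument;

public class EmbeddedUpdateSketch {
  public static void main(String[] args) throws Exception {
    // "/path/to/solr/home" and "techproducts" are placeholders.
    try (EmbeddedSolrServer server =
             new EmbeddedSolrServer(Paths.get("/path/to/solr/home"), "techproducts")) {
      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", "1");

      UpdateRequest req = new UpdateRequest();
      req.add(doc);
      req.process(server, "techproducts");   // body serialized via the request's content writer
      server.commit("techproducts");
    }
  }
}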


[14/40] lucene-solr:jira/solr-11833: LUCENE-8258: Tighten rejection of travel planes that are too close to an edge. Note: this may cause failures in some cases; haven't seen it, but if that happens, the logic will need to change instead of just the cutoff.

Posted by ab...@apache.org.
LUCENE-8258: Tighten rejection of travel planes that are too close to an edge.  Note: this may cause failures in some cases; haven't seen it, but if that happens, the logic will need to change instead of just the cutoff.


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/a033759f
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/a033759f
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/a033759f

Branch: refs/heads/jira/solr-11833
Commit: a033759f127cec8137351a47dc4f6703941eab01
Parents: 1c8ab33
Author: Karl Wright <Da...@gmail.com>
Authored: Thu Apr 19 08:46:54 2018 -0400
Committer: Karl Wright <Da...@gmail.com>
Committed: Thu Apr 19 08:46:54 2018 -0400

----------------------------------------------------------------------
 .../spatial3d/geom/GeoComplexPolygon.java       | 27 +++++++++++---------
 .../lucene/spatial3d/geom/GeoPolygonTest.java   | 20 +++++++++++++++
 2 files changed, 35 insertions(+), 12 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/a033759f/lucene/spatial3d/src/java/org/apache/lucene/spatial3d/geom/GeoComplexPolygon.java
----------------------------------------------------------------------
diff --git a/lucene/spatial3d/src/java/org/apache/lucene/spatial3d/geom/GeoComplexPolygon.java b/lucene/spatial3d/src/java/org/apache/lucene/spatial3d/geom/GeoComplexPolygon.java
index e925d31..744646a 100644
--- a/lucene/spatial3d/src/java/org/apache/lucene/spatial3d/geom/GeoComplexPolygon.java
+++ b/lucene/spatial3d/src/java/org/apache/lucene/spatial3d/geom/GeoComplexPolygon.java
@@ -59,6 +59,8 @@ class GeoComplexPolygon extends GeoBasePolygon {
   private final GeoPoint[] edgePoints;
   private final Edge[] shapeStartEdges;
   
+  private final static double NEAR_EDGE_CUTOFF = -10.0 * Vector.MINIMUM_RESOLUTION;
+  
   /**
    * Create a complex polygon from multiple lists of points, and a single point which is known to be in or out of
    * set.
@@ -81,37 +83,37 @@ class GeoComplexPolygon extends GeoBasePolygon {
     this.testPointFixedZPlane = new Plane(0.0, 0.0, 1.0, -testPoint.z);
     
     Plane fixedYAbovePlane = new Plane(testPointFixedYPlane, true);
-    if (fixedYAbovePlane.D - planetModel.getMaximumYValue() > 0.0 || planetModel.getMinimumYValue() - fixedYAbovePlane.D > 0.0) {
+    if (fixedYAbovePlane.D - planetModel.getMaximumYValue() > NEAR_EDGE_CUTOFF || planetModel.getMinimumYValue() - fixedYAbovePlane.D > NEAR_EDGE_CUTOFF) {
         fixedYAbovePlane = null;
     }
     this.testPointFixedYAbovePlane = fixedYAbovePlane;
     
     Plane fixedYBelowPlane = new Plane(testPointFixedYPlane, false);
-    if (fixedYBelowPlane.D - planetModel.getMaximumYValue() > 0.0 ||  planetModel.getMinimumYValue() - fixedYBelowPlane.D > 0.0) {
+    if (fixedYBelowPlane.D - planetModel.getMaximumYValue() > NEAR_EDGE_CUTOFF ||  planetModel.getMinimumYValue() - fixedYBelowPlane.D > NEAR_EDGE_CUTOFF) {
         fixedYBelowPlane = null;
     }
     this.testPointFixedYBelowPlane = fixedYBelowPlane;
     
     Plane fixedXAbovePlane = new Plane(testPointFixedXPlane, true);
-    if (fixedXAbovePlane.D - planetModel.getMaximumXValue() > 0.0 || planetModel.getMinimumXValue() - fixedXAbovePlane.D > 0.0) {
+    if (fixedXAbovePlane.D - planetModel.getMaximumXValue() > NEAR_EDGE_CUTOFF || planetModel.getMinimumXValue() - fixedXAbovePlane.D > NEAR_EDGE_CUTOFF) {
         fixedXAbovePlane = null;
     }
     this.testPointFixedXAbovePlane = fixedXAbovePlane;
     
     Plane fixedXBelowPlane = new Plane(testPointFixedXPlane, false);
-    if (fixedXBelowPlane.D - planetModel.getMaximumXValue() > 0.0 || planetModel.getMinimumXValue() - fixedXBelowPlane.D > 0.0) {
+    if (fixedXBelowPlane.D - planetModel.getMaximumXValue() > NEAR_EDGE_CUTOFF || planetModel.getMinimumXValue() - fixedXBelowPlane.D > NEAR_EDGE_CUTOFF) {
         fixedXBelowPlane = null;
     }
     this.testPointFixedXBelowPlane = fixedXBelowPlane;
     
     Plane fixedZAbovePlane = new Plane(testPointFixedZPlane, true);
-    if (fixedZAbovePlane.D - planetModel.getMaximumZValue() > 0.0 ||planetModel.getMinimumZValue() - fixedZAbovePlane.D > 0.0) {
+    if (fixedZAbovePlane.D - planetModel.getMaximumZValue() > NEAR_EDGE_CUTOFF ||planetModel.getMinimumZValue() - fixedZAbovePlane.D > NEAR_EDGE_CUTOFF) {
         fixedZAbovePlane = null;
     }
     this.testPointFixedZAbovePlane = fixedZAbovePlane;
     
     Plane fixedZBelowPlane = new Plane(testPointFixedZPlane, false);
-    if (fixedZBelowPlane.D - planetModel.getMaximumZValue() > 0.0 || planetModel.getMinimumZValue() - fixedZBelowPlane.D > 0.0) {
+    if (fixedZBelowPlane.D - planetModel.getMaximumZValue() > NEAR_EDGE_CUTOFF || planetModel.getMinimumZValue() - fixedZBelowPlane.D > NEAR_EDGE_CUTOFF) {
         fixedZBelowPlane = null;
     }
     this.testPointFixedZBelowPlane = fixedZBelowPlane;
@@ -234,32 +236,32 @@ class GeoComplexPolygon extends GeoBasePolygon {
       final Plane travelPlaneFixedZ = new Plane(0.0, 0.0, 1.0, -z);
 
       Plane fixedYAbovePlane = new Plane(travelPlaneFixedY, true);
-      if (fixedYAbovePlane.D - planetModel.getMaximumYValue() > 0.0 || planetModel.getMinimumYValue() - fixedYAbovePlane.D > 0.0) {
+      if (fixedYAbovePlane.D - planetModel.getMaximumYValue() > NEAR_EDGE_CUTOFF || planetModel.getMinimumYValue() - fixedYAbovePlane.D > NEAR_EDGE_CUTOFF) {
           fixedYAbovePlane = null;
       }
       
       Plane fixedYBelowPlane = new Plane(travelPlaneFixedY, false);
-      if (fixedYBelowPlane.D - planetModel.getMaximumYValue() > 0.0 || planetModel.getMinimumYValue() - fixedYBelowPlane.D > 0.0) {
+      if (fixedYBelowPlane.D - planetModel.getMaximumYValue() > NEAR_EDGE_CUTOFF || planetModel.getMinimumYValue() - fixedYBelowPlane.D > NEAR_EDGE_CUTOFF) {
           fixedYBelowPlane = null;
       }
       
       Plane fixedXAbovePlane = new Plane(travelPlaneFixedX, true);
-      if (fixedXAbovePlane.D - planetModel.getMaximumXValue() > 0.0 || planetModel.getMinimumXValue() - fixedXAbovePlane.D > 0.0) {
+      if (fixedXAbovePlane.D - planetModel.getMaximumXValue() > NEAR_EDGE_CUTOFF || planetModel.getMinimumXValue() - fixedXAbovePlane.D > NEAR_EDGE_CUTOFF) {
           fixedXAbovePlane = null;
       }
       
       Plane fixedXBelowPlane = new Plane(travelPlaneFixedX, false);
-      if (fixedXBelowPlane.D - planetModel.getMaximumXValue() > 0.0 || planetModel.getMinimumXValue() - fixedXBelowPlane.D > 0.0) {
+      if (fixedXBelowPlane.D - planetModel.getMaximumXValue() > NEAR_EDGE_CUTOFF || planetModel.getMinimumXValue() - fixedXBelowPlane.D > NEAR_EDGE_CUTOFF) {
           fixedXBelowPlane = null;
       }
       
       Plane fixedZAbovePlane = new Plane(travelPlaneFixedZ, true);
-      if (fixedZAbovePlane.D - planetModel.getMaximumZValue() > 0.0 || planetModel.getMinimumZValue() - fixedZAbovePlane.D > 0.0) {
+      if (fixedZAbovePlane.D - planetModel.getMaximumZValue() > NEAR_EDGE_CUTOFF || planetModel.getMinimumZValue() - fixedZAbovePlane.D > NEAR_EDGE_CUTOFF) {
           fixedZAbovePlane = null;
       }
       
       Plane fixedZBelowPlane = new Plane(travelPlaneFixedZ, false);
-      if (fixedZBelowPlane.D - planetModel.getMaximumZValue() > 0.0 || planetModel.getMinimumZValue() - fixedZBelowPlane.D > 0.0) {
+      if (fixedZBelowPlane.D - planetModel.getMaximumZValue() > NEAR_EDGE_CUTOFF || planetModel.getMinimumZValue() - fixedZBelowPlane.D > NEAR_EDGE_CUTOFF) {
           fixedZBelowPlane = null;
       }
 
@@ -1248,6 +1250,7 @@ class GeoComplexPolygon extends GeoBasePolygon {
         final GeoPoint insideInsidePoint = pickProximate(insideInsidePoints);
         
         // Get the outside-outside intersection point
+        //System.out.println("Computing outside-outside intersection");
         final GeoPoint[] outsideOutsidePoints = testPointOutsidePlane.findIntersections(planetModel, travelOutsidePlane);  //these don't add anything: , checkPointCutoffPlane, testPointCutoffPlane);
         final GeoPoint outsideOutsidePoint = pickProximate(outsideOutsidePoints);
         

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/a033759f/lucene/spatial3d/src/test/org/apache/lucene/spatial3d/geom/GeoPolygonTest.java
----------------------------------------------------------------------
diff --git a/lucene/spatial3d/src/test/org/apache/lucene/spatial3d/geom/GeoPolygonTest.java b/lucene/spatial3d/src/test/org/apache/lucene/spatial3d/geom/GeoPolygonTest.java
index d841cbd..09ae776 100755
--- a/lucene/spatial3d/src/test/org/apache/lucene/spatial3d/geom/GeoPolygonTest.java
+++ b/lucene/spatial3d/src/test/org/apache/lucene/spatial3d/geom/GeoPolygonTest.java
@@ -1606,5 +1606,25 @@ shape:
     final GeoPoint point = new GeoPoint(PlanetModel.WGS84, Geo3DUtil.fromDegrees(-9.638811778842766E-12), Geo3DUtil.fromDegrees(-179.99999999999997));
     assertTrue(polygon.isWithin(point) == largePolygon.isWithin(point));
   }
+
+  @Test
+  public void testLUCENE8258() {
+    //POLYGON((0.004541088101890366 2.457524007073783E-4,0.003771467014711204 0.0011493732122651466,0.003975546116981415 0.002208372357731988,0.0010780690991920934 0.0014120274287707404,0.0 2.8E-322,7.486881020702663E-4 -3.4191957123300967E-4,-8.981008225032098E-4 -0.0032334745041058812,0.004541088101890366 2.457524007073783E-4))
+    final List<GeoPoint> points = new ArrayList<>();
+    points.add(new GeoPoint(PlanetModel.SPHERE, Geo3DUtil.fromDegrees(2.457524007073783E-4), Geo3DUtil.fromDegrees(0.004541088101890366)));
+    points.add(new GeoPoint(PlanetModel.SPHERE, Geo3DUtil.fromDegrees(0.0011493732122651466), Geo3DUtil.fromDegrees(0.003771467014711204)));
+    points.add(new GeoPoint(PlanetModel.SPHERE, Geo3DUtil.fromDegrees(0.002208372357731988), Geo3DUtil.fromDegrees(0.003975546116981415)));
+    points.add(new GeoPoint(PlanetModel.SPHERE, Geo3DUtil.fromDegrees(0.0014120274287707404), Geo3DUtil.fromDegrees(0.0010780690991920934)));
+    points.add(new GeoPoint(PlanetModel.SPHERE, Geo3DUtil.fromDegrees(2.8E-322), Geo3DUtil.fromDegrees(0.0)));
+    points.add(new GeoPoint(PlanetModel.SPHERE, Geo3DUtil.fromDegrees(-3.4191957123300967E-4), Geo3DUtil.fromDegrees(7.486881020702663E-4)));
+    points.add(new GeoPoint(PlanetModel.SPHERE, Geo3DUtil.fromDegrees(-0.0032334745041058812), Geo3DUtil.fromDegrees(-8.981008225032098E-4)));
+    final GeoPolygonFactory.PolygonDescription description = new GeoPolygonFactory.PolygonDescription(points);
+    final GeoPolygon polygon = GeoPolygonFactory.makeGeoPolygon(PlanetModel.SPHERE, description);
+    final GeoPolygon largePolygon = GeoPolygonFactory.makeLargeGeoPolygon(PlanetModel.SPHERE, Collections.singletonList(description));
+
+    //POINT(1.413E-321 2.104316138623836E-4)
+    final GeoPoint point = new GeoPoint(PlanetModel.SPHERE, Geo3DUtil.fromDegrees(2.104316138623836E-4), Geo3DUtil.fromDegrees(1.413E-321));
+    assertTrue(polygon.isWithin(point) == largePolygon.isWithin(point));
+  }
   
 }
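
To summarize the repeated hunks in GeoComplexPolygon above, the rejection test can be read as the predicate below; this helper is a sketch for explanation only and does not exist in the class.

// Sketch only: restates the inline checks above as a standalone predicate.
final class NearEdgeCheckSketch {
  /**
   * True when an above/below plane sits too close to the planet's edge to be usable.
   * In the patch, nearEdgeCutoff is NEAR_EDGE_CUTOFF = -10.0 * Vector.MINIMUM_RESOLUTION,
   * a small negative value, so slightly more planes are rejected than with the old 0.0 bound.
   */
  static boolean tooCloseToEdge(double planeD, double minValue, double maxValue, double nearEdgeCutoff) {
    return planeD - maxValue > nearEdgeCutoff || minValue - planeD > nearEdgeCutoff;
  }
}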


[25/40] lucene-solr:jira/solr-11833: SOLR-12159: Add memset Stream Evaluator

Posted by ab...@apache.org.
SOLR-12159: Add memset Stream Evaluator


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/f0d1e117
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/f0d1e117
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/f0d1e117

Branch: refs/heads/jira/solr-11833
Commit: f0d1e11796419d45051f4384f47cf83b0fb8044b
Parents: a4b335c
Author: Joel Bernstein <jb...@apache.org>
Authored: Fri Apr 20 11:11:28 2018 -0400
Committer: Joel Bernstein <jb...@apache.org>
Committed: Fri Apr 20 11:11:48 2018 -0400

----------------------------------------------------------------------
 .../org/apache/solr/client/solrj/io/Lang.java   |   1 +
 .../client/solrj/io/eval/MemsetEvaluator.java   | 167 +++++++++++++++++++
 .../solr/client/solrj/io/stream/LetStream.java  |  11 +-
 .../apache/solr/client/solrj/io/TestLang.java   |   2 +-
 .../solrj/io/stream/MathExpressionTest.java     | 106 ++++++++++++
 5 files changed, 284 insertions(+), 3 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/f0d1e117/solr/solrj/src/java/org/apache/solr/client/solrj/io/Lang.java
----------------------------------------------------------------------
diff --git a/solr/solrj/src/java/org/apache/solr/client/solrj/io/Lang.java b/solr/solrj/src/java/org/apache/solr/client/solrj/io/Lang.java
index fdbb875..067bc84 100644
--- a/solr/solrj/src/java/org/apache/solr/client/solrj/io/Lang.java
+++ b/solr/solrj/src/java/org/apache/solr/client/solrj/io/Lang.java
@@ -234,6 +234,7 @@ public class Lang {
         .withFunctionName("matrixMult", MatrixMultiplyEvaluator.class)
         .withFunctionName("bicubicSpline", BicubicSplineEvaluator.class)
         .withFunctionName("valueAt", ValueAtEvaluator.class)
+        .withFunctionName("memset", MemsetEvaluator.class)
 
             // Boolean Stream Evaluators
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/f0d1e117/solr/solrj/src/java/org/apache/solr/client/solrj/io/eval/MemsetEvaluator.java
----------------------------------------------------------------------
diff --git a/solr/solrj/src/java/org/apache/solr/client/solrj/io/eval/MemsetEvaluator.java b/solr/solrj/src/java/org/apache/solr/client/solrj/io/eval/MemsetEvaluator.java
new file mode 100644
index 0000000..e8ad940
--- /dev/null
+++ b/solr/solrj/src/java/org/apache/solr/client/solrj/io/eval/MemsetEvaluator.java
@@ -0,0 +1,167 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.client.solrj.io.eval;
+
+import java.io.IOException;
+import java.io.UncheckedIOException;
+import java.util.ArrayList;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Locale;
+
+import org.apache.solr.client.solrj.io.Tuple;
+import org.apache.solr.client.solrj.io.stream.StreamContext;
+import org.apache.solr.client.solrj.io.stream.TupleStream;
+import org.apache.solr.client.solrj.io.stream.expr.Expressible;
+import org.apache.solr.client.solrj.io.stream.expr.StreamExpression;
+import org.apache.solr.client.solrj.io.stream.expr.StreamExpressionNamedParameter;
+import org.apache.solr.client.solrj.io.stream.expr.StreamExpressionValue;
+import org.apache.solr.client.solrj.io.stream.expr.StreamFactory;
+
+
+/**
+ * The MemsetEvaluator reads a TupleStream and copies the values from specific
+ * fields into arrays that are bound to variable names in a map. The LetStream looks specifically
+ * for the MemsetEvaluator and makes the variables visible to other functions.
+ **/
+
+public class MemsetEvaluator extends RecursiveEvaluator {
+  protected static final long serialVersionUID = 1L;
+
+  private TupleStream in;
+  private String[] cols;
+  private String[] vars;
+  private int size = -1;
+
+  public MemsetEvaluator(StreamExpression expression, StreamFactory factory) throws IOException {
+    super(expression, factory);
+
+    /*
+    * Instantiate and validate all the parameters
+    */
+
+    List<StreamExpression> streamExpressions = factory.getExpressionOperandsRepresentingTypes(expression, Expressible.class, TupleStream.class);
+    StreamExpressionNamedParameter colsExpression = factory.getNamedOperand(expression, "cols");
+    StreamExpressionNamedParameter varsExpression = factory.getNamedOperand(expression, "vars");
+    StreamExpressionNamedParameter sizeExpression = factory.getNamedOperand(expression, "size");
+
+    if(1 != streamExpressions.size()){
+      throw new IOException(String.format(Locale.ROOT,"Invalid expression %s - expecting a single stream but found %d",expression, streamExpressions.size()));
+    }
+
+    if(null == colsExpression || !(colsExpression.getParameter() instanceof StreamExpressionValue)){
+      throw new IOException(String.format(Locale.ROOT,"Invalid expression %s - expecting single 'cols' parameter listing fields to sort over but didn't find one",expression));
+    }
+
+    if(null == varsExpression || !(varsExpression.getParameter() instanceof StreamExpressionValue)){
+      throw new IOException(String.format(Locale.ROOT,"Invalid expression %s - expecting single 'vars' parameter listing fields to sort over but didn't find one",expression));
+    }
+
+    if(null != sizeExpression) {
+      StreamExpressionValue sizeExpressionValue = (StreamExpressionValue)sizeExpression.getParameter();
+      String sizeString = sizeExpressionValue.getValue();
+      size = Integer.parseInt(sizeString);
+    }
+
+    in = factory.constructStream(streamExpressions.get(0));
+
+    StreamExpressionValue colsExpressionValue = (StreamExpressionValue)colsExpression.getParameter();
+    StreamExpressionValue varsExpressionValue = (StreamExpressionValue)varsExpression.getParameter();
+    String colsString = colsExpressionValue.getValue();
+    String varsString = varsExpressionValue.getValue();
+
+    vars = varsString.split(",");
+    cols = colsString.split(",");
+
+    if(cols.length != vars.length) {
+      throw new IOException("The cols and vars lists must be the same size");
+    }
+
+    for(int i=0; i<cols.length; i++) {
+      cols[i]  = cols[i].trim();
+      vars[i]  = vars[i].trim();
+    }
+  }
+
+  public MemsetEvaluator(StreamExpression expression, StreamFactory factory, List<String> ignoredNamedParameters) throws IOException {
+    super(expression, factory, ignoredNamedParameters);
+  }
+
+  public void setStreamContext(StreamContext streamContext) {
+    this.streamContext = streamContext;
+  }
+
+  @Override
+  public Object evaluate(Tuple tuple) throws IOException {
+
+    /*
+    * Read all the tuples from the underlying stream and
+    * load specific fields into arrays. Then return
+    * a map with the variables names bound to the arrays.
+    */
+
+    try {
+      in.setStreamContext(streamContext);
+      in.open();
+      Map<String, List<Number>> arrays = new HashMap();
+
+      //Initialize the variables
+      for(String var : vars) {
+        if(size > -1) {
+          arrays.put(var, new ArrayList(size));
+        } else {
+          arrays.put(var, new ArrayList());
+        }
+      }
+
+      int count = 0;
+
+      while (true) {
+        Tuple t = in.read();
+        if (t.EOF) {
+          break;
+        }
+
+        if(size == -1 || count < size) {
+          for (int i = 0; i < cols.length; i++) {
+            String col = cols[i];
+            String var = vars[i];
+            List<Number> array = arrays.get(var);
+            Number number = (Number) t.get(col);
+            array.add(number);
+          }
+        }
+        ++count;
+      }
+
+      return arrays;
+    } catch (UncheckedIOException e) {
+      throw e.getCause();
+    } finally {
+      in.close();
+    }
+  }
+
+  @Override
+  public Object doWork(Object... values) throws IOException {
+    // Nothing to do here
+    throw new IOException("This call should never occur");
+  }
+}
+
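
Before the LetStream and test changes below, here is a hedged client-side sketch of how memset is invoked, distilled from the MathExpressionTest cases further down; the Solr URL and collection name are placeholders.

import org.apache.solr.client.solrj.io.Tuple;
import org.apache.solr.client.solrj.io.stream.SolrStream;
import org.apache.solr.client.solrj.io.stream.StreamContext;
import org.apache.solr.client.solrj.io.stream.TupleStream;
import org.apache.solr.common.params.ModifiableSolrParams;

public class MemsetUsageSketch {
  public static void main(String[] args) throws Exception {
    String expr = "let(echo=\"b, c\","
        + "        a=memset(list(tuple(field1=val(1), field2=val(10)),"
        + "                      tuple(field1=val(2), field2=val(20))),"
        + "                 cols=\"field1, field2\", vars=\"f1, f2\"),"
        + "        b=add(f1), c=add(f2))";

    ModifiableSolrParams params = new ModifiableSolrParams();
    params.set("expr", expr);
    params.set("qt", "/stream");

    // Placeholder base URL and collection.
    TupleStream stream = new SolrStream("http://localhost:8983/solr/collection1", params);
    stream.setStreamContext(new StreamContext());
    try {
      stream.open();
      Tuple t = stream.read();   // expect b=3, c=30 for the sample tuples above
      System.out.println(t.get("b") + " " + t.get("c"));
    } finally {
      stream.close();
    }
  }
}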

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/f0d1e117/solr/solrj/src/java/org/apache/solr/client/solrj/io/stream/LetStream.java
----------------------------------------------------------------------
diff --git a/solr/solrj/src/java/org/apache/solr/client/solrj/io/stream/LetStream.java b/solr/solrj/src/java/org/apache/solr/client/solrj/io/stream/LetStream.java
index 8bb12a5..e88eaf6 100644
--- a/solr/solrj/src/java/org/apache/solr/client/solrj/io/stream/LetStream.java
+++ b/solr/solrj/src/java/org/apache/solr/client/solrj/io/stream/LetStream.java
@@ -27,6 +27,7 @@ import java.util.HashSet;
 
 import org.apache.solr.client.solrj.io.Tuple;
 import org.apache.solr.client.solrj.io.comp.StreamComparator;
+import org.apache.solr.client.solrj.io.eval.MemsetEvaluator;
 import org.apache.solr.client.solrj.io.eval.StreamEvaluator;
 import org.apache.solr.client.solrj.io.stream.expr.Explanation;
 import org.apache.solr.client.solrj.io.stream.expr.Explanation.ExpressionType;
@@ -183,12 +184,18 @@ public class LetStream extends TupleStream implements Expressible {
         }
       } else {
         //Add the data from the StreamContext to a tuple.
-        //Let the evaluator work from this tuple.
+        //Let the evaluator work from this tuple.
         //This will allow columns to be created from tuples already in the StreamContext.
         Tuple eTuple = new Tuple(lets);
         StreamEvaluator evaluator = (StreamEvaluator)o;
+        evaluator.setStreamContext(streamContext);
         Object eo = evaluator.evaluate(eTuple);
-        lets.put(name, eo);
+        if(evaluator instanceof MemsetEvaluator) {
+          Map mem = (Map)eo;
+          lets.putAll(mem);
+        } else {
+          lets.put(name, eo);
+        }
       }
     }
     stream.open();

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/f0d1e117/solr/solrj/src/test/org/apache/solr/client/solrj/io/TestLang.java
----------------------------------------------------------------------
diff --git a/solr/solrj/src/test/org/apache/solr/client/solrj/io/TestLang.java b/solr/solrj/src/test/org/apache/solr/client/solrj/io/TestLang.java
index 87f5c46..a98db51 100644
--- a/solr/solrj/src/test/org/apache/solr/client/solrj/io/TestLang.java
+++ b/solr/solrj/src/test/org/apache/solr/client/solrj/io/TestLang.java
@@ -68,7 +68,7 @@ public class TestLang extends LuceneTestCase {
        TemporalEvaluatorEpoch.FUNCTION_NAME, TemporalEvaluatorWeek.FUNCTION_NAME, TemporalEvaluatorQuarter.FUNCTION_NAME,
        TemporalEvaluatorDayOfQuarter.FUNCTION_NAME, "abs", "add", "div", "mult", "sub", "log", "pow",
       "mod", "ceil", "floor", "sin", "asin", "sinh", "cos", "acos", "cosh", "tan", "atan", "tanh", "round", "sqrt",
-      "cbrt", "coalesce", "uuid", "if", "convert", "valueAt"};
+      "cbrt", "coalesce", "uuid", "if", "convert", "valueAt", "memset"};
 
   @Test
   public void testLang() {

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/f0d1e117/solr/solrj/src/test/org/apache/solr/client/solrj/io/stream/MathExpressionTest.java
----------------------------------------------------------------------
diff --git a/solr/solrj/src/test/org/apache/solr/client/solrj/io/stream/MathExpressionTest.java b/solr/solrj/src/test/org/apache/solr/client/solrj/io/stream/MathExpressionTest.java
index 07570a9..0cf4884 100644
--- a/solr/solrj/src/test/org/apache/solr/client/solrj/io/stream/MathExpressionTest.java
+++ b/solr/solrj/src/test/org/apache/solr/client/solrj/io/stream/MathExpressionTest.java
@@ -14,6 +14,7 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
+
 package org.apache.solr.client.solrj.io.stream;
 
 import java.io.IOException;
@@ -206,6 +207,111 @@ public class MathExpressionTest extends SolrCloudTestCase {
   }
 
   @Test
+  public void testMemset() throws Exception {
+    String expr = "let(echo=\"b, c\"," +
+        "              a=memset(list(tuple(field1=val(1), field2=val(10)), tuple(field1=val(2), field2=val(20))), " +
+        "                       cols=\"field1, field2\", " +
+        "                       vars=\"f1, f2\")," +
+        "              b=add(f1)," +
+        "              c=add(f2))";
+    ModifiableSolrParams paramsLoc = new ModifiableSolrParams();
+    paramsLoc.set("expr", expr);
+    paramsLoc.set("qt", "/stream");
+
+    String url = cluster.getJettySolrRunners().get(0).getBaseUrl().toString() + "/" + COLLECTIONORALIAS;
+    TupleStream solrStream = new SolrStream(url, paramsLoc);
+
+    StreamContext context = new StreamContext();
+    solrStream.setStreamContext(context);
+    List<Tuple> tuples = getTuples(solrStream);
+    assertEquals(tuples.size(),  1);
+    Number f1 = (Number)tuples.get(0).get("b");
+    assertEquals(f1.doubleValue(), 3, 0.0);
+
+    Number f2 = (Number)tuples.get(0).get("c");
+    assertEquals(f2.doubleValue(), 30, 0.0);
+  }
+
+  @Test
+  public void testMemsetSize() throws Exception {
+    String expr = "let(echo=\"b, c\"," +
+        "              a=memset(list(tuple(field1=val(1), field2=val(10)), tuple(field1=val(2), field2=val(20))), " +
+        "                       cols=\"field1, field2\", " +
+        "                       vars=\"f1, f2\"," +
+        "                       size=1)," +
+        "              b=add(f1)," +
+        "              c=add(f2))";
+    ModifiableSolrParams paramsLoc = new ModifiableSolrParams();
+    paramsLoc.set("expr", expr);
+    paramsLoc.set("qt", "/stream");
+
+    String url = cluster.getJettySolrRunners().get(0).getBaseUrl().toString() + "/" + COLLECTIONORALIAS;
+    TupleStream solrStream = new SolrStream(url, paramsLoc);
+
+    StreamContext context = new StreamContext();
+    solrStream.setStreamContext(context);
+    List<Tuple> tuples = getTuples(solrStream);
+    assertEquals(tuples.size(),  1);
+    Number f1 = (Number)tuples.get(0).get("b");
+    assertEquals(f1.doubleValue(), 1, 0.0);
+
+    Number f2 = (Number)tuples.get(0).get("c");
+    assertEquals(f2.doubleValue(), 10, 0.0);
+  }
+
+  @Test
+  public void testMemsetTimeSeries() throws Exception {
+    UpdateRequest updateRequest = new UpdateRequest();
+
+    int i=0;
+    while(i<50) {
+      updateRequest.add(id, "id_"+(++i),"test_dt", getDateString("2016", "5", "1"), "price_f", "400.00");
+    }
+
+    while(i<100) {
+      updateRequest.add(id, "id_"+(++i),"test_dt", getDateString("2015", "5", "1"), "price_f", "300.0");
+    }
+
+    while(i<150) {
+      updateRequest.add(id, "id_"+(++i),"test_dt", getDateString("2014", "5", "1"), "price_f", "500.0");
+    }
+
+    while(i<250) {
+      updateRequest.add(id, "id_"+(++i),"test_dt", getDateString("2013", "5", "1"), "price_f", "100.00");
+    }
+
+    updateRequest.commit(cluster.getSolrClient(), COLLECTIONORALIAS);
+
+    String expr = "memset(timeseries("+COLLECTIONORALIAS+", " +
+        "                            q=\"*:*\", " +
+        "                            start=\"2013-01-01T01:00:00.000Z\", " +
+        "                            end=\"2016-12-01T01:00:00.000Z\", " +
+        "                            gap=\"+1YEAR\", " +
+        "                            field=\"test_dt\", " +
+        "                            count(*)), " +
+        "                 cols=\"count(*)\"," +
+        "                 vars=\"a\")";
+
+    ModifiableSolrParams paramsLoc = new ModifiableSolrParams();
+    paramsLoc.set("expr", expr);
+    paramsLoc.set("qt", "/stream");
+
+    String url = cluster.getJettySolrRunners().get(0).getBaseUrl().toString()+"/"+COLLECTIONORALIAS;
+    TupleStream solrStream = new SolrStream(url, paramsLoc);
+
+    StreamContext context = new StreamContext();
+    solrStream.setStreamContext(context);
+    List<Tuple> tuples = getTuples(solrStream);
+    assertTrue(tuples.size() == 1);
+    Map<String, List<Number>> mem = (Map)tuples.get(0).get("return-value");
+    List<Number> array = mem.get("a");
+    assertEquals(array.get(0).intValue(), 100);
+    assertEquals(array.get(1).intValue(), 50);
+    assertEquals(array.get(2).intValue(), 50);
+    assertEquals(array.get(3).intValue(), 50);
+  }
+  
+  @Test
   public void testHist() throws Exception {
     String expr = "hist(sequence(100, 0, 1), 10)";
     ModifiableSolrParams paramsLoc = new ModifiableSolrParams();


[06/40] lucene-solr:jira/solr-11833: SOLR-12187: ZkStateReader.Notification thread should only catch Exception

Posted by ab...@apache.org.
SOLR-12187: ZkStateReader.Notification thread should only catch Exception


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/1d244144
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/1d244144
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/1d244144

Branch: refs/heads/jira/solr-11833
Commit: 1d2441441be5f5d87103ceeec6d852f8f2f6ba85
Parents: 8c60be4
Author: Cao Manh Dat <da...@apache.org>
Authored: Wed Apr 18 08:40:06 2018 +0700
Committer: Cao Manh Dat <da...@apache.org>
Committed: Wed Apr 18 08:40:06 2018 +0700

----------------------------------------------------------------------
 .../src/test/org/apache/solr/cloud/DeleteReplicaTest.java | 10 ++++++----
 .../java/org/apache/solr/common/cloud/ZkStateReader.java  |  4 ++--
 2 files changed, 8 insertions(+), 6 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/1d244144/solr/core/src/test/org/apache/solr/cloud/DeleteReplicaTest.java
----------------------------------------------------------------------
diff --git a/solr/core/src/test/org/apache/solr/cloud/DeleteReplicaTest.java b/solr/core/src/test/org/apache/solr/cloud/DeleteReplicaTest.java
index 8c11713..08e9a37 100644
--- a/solr/core/src/test/org/apache/solr/cloud/DeleteReplicaTest.java
+++ b/solr/core/src/test/org/apache/solr/cloud/DeleteReplicaTest.java
@@ -28,6 +28,7 @@ import java.util.concurrent.TimeUnit;
 import java.util.concurrent.TimeoutException;
 import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.atomic.AtomicInteger;
+import java.util.function.Supplier;
 
 import org.apache.solr.client.solrj.embedded.JettySolrRunner;
 import org.apache.solr.client.solrj.request.CollectionAdminRequest;
@@ -309,16 +310,17 @@ public class DeleteReplicaTest extends SolrCloudTestCase {
       ZkContainer.testing_beforeRegisterInZk = null;
     }
 
-    while (true) {
+    TimeOut timeOut = new TimeOut(30, TimeUnit.SECONDS, TimeSource.NANO_TIME);
+    timeOut.waitFor("Timeout adding replica to shard", () -> {
       try {
         CollectionAdminRequest.addReplicaToShard(collectionName, "shard1")
             .process(cluster.getSolrClient());
-        break;
+        return true;
       } catch (Exception e) {
         // expected, when the node is not fully started
-        Thread.sleep(500);
+        return false;
       }
-    }
+    });
     waitForState("Expected 1x2 collections", collectionName, clusterShape(1, 2));
 
     String leaderJettyNodeName = leaderJetty.getNodeName();

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/1d244144/solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java
----------------------------------------------------------------------
diff --git a/solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java b/solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java
index a73e4c1..cfae849 100644
--- a/solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java
+++ b/solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java
@@ -1626,8 +1626,8 @@ public class ZkStateReader implements Closeable {
           if (watcher.onStateChanged(liveNodes, collectionState)) {
             removeCollectionStateWatcher(collection, watcher);
           }
-        } catch (Throwable throwable) {
-          LOG.warn("Error on calling watcher", throwable);
+        } catch (Exception exception) {
+          LOG.warn("Error on calling watcher", exception);
         }
       }
     }
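
The DeleteReplicaTest change above replaces a hand-rolled retry loop with TimeOut.waitFor; a hedged sketch of that idiom follows, assuming the usual org.apache.solr.util.TimeOut and org.apache.solr.common.util.TimeSource imports already present in the test, with doRetriableWork() as a placeholder operation.

import java.util.concurrent.TimeUnit;
import org.apache.solr.common.util.TimeSource;
import org.apache.solr.util.TimeOut;

class WaitForSketch {
  void waitUntilReady() throws Exception {
    TimeOut timeOut = new TimeOut(30, TimeUnit.SECONDS, TimeSource.NANO_TIME);
    timeOut.waitFor("Timed out waiting for the operation to succeed", () -> {
      try {
        doRetriableWork();
        return true;    // success: stop waiting
      } catch (Exception e) {
        return false;   // not ready yet: waitFor polls again until the timeout
      }
    });
  }

  // Placeholder for whatever call may fail until the cluster is ready.
  void doRetriableWork() throws Exception {
  }
}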


[16/40] lucene-solr:jira/solr-11833: SOLR-12163: Updated and expanded ZK ensemble docs

Posted by ab...@apache.org.
SOLR-12163: Updated and expanded ZK ensemble docs


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/42da6f79
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/42da6f79
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/42da6f79

Branch: refs/heads/jira/solr-11833
Commit: 42da6f795d8cd68891845f20201a902f7da4c579
Parents: aab2c77
Author: Cassandra Targett <ct...@apache.org>
Authored: Thu Apr 19 09:50:09 2018 -0500
Committer: Cassandra Targett <ct...@apache.org>
Committed: Thu Apr 19 09:50:09 2018 -0500

----------------------------------------------------------------------
 ...tting-up-an-external-zookeeper-ensemble.adoc | 335 ++++++++++++++-----
 1 file changed, 252 insertions(+), 83 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/42da6f79/solr/solr-ref-guide/src/setting-up-an-external-zookeeper-ensemble.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/setting-up-an-external-zookeeper-ensemble.adoc b/solr/solr-ref-guide/src/setting-up-an-external-zookeeper-ensemble.adoc
index d3934bfc..d46b7f9 100644
--- a/solr/solr-ref-guide/src/setting-up-an-external-zookeeper-ensemble.adoc
+++ b/solr/solr-ref-guide/src/setting-up-an-external-zookeeper-ensemble.adoc
@@ -16,25 +16,31 @@
 // specific language governing permissions and limitations
 // under the License.
 
-Although Solr comes bundled with http://zookeeper.apache.org[Apache ZooKeeper], you should consider yourself discouraged from using this internal ZooKeeper in production.
+Although Solr comes bundled with http://zookeeper.apache.org[Apache ZooKeeper], you are strongly encouraged to use an external ZooKeeper setup in production.
 
-Shutting down a redundant Solr instance will also shut down its ZooKeeper server, which might not be quite so redundant. Because a ZooKeeper ensemble must have a quorum of more than half its servers running at any given time, this can be a problem.
+While using Solr's embedded ZooKeeper instance is fine for getting started, you shouldn't use this in production because it does not provide any failover: if the Solr instance that hosts ZooKeeper shuts down, ZooKeeper is also shut down.
+Any shards or Solr instances that rely on it will not be able to communicate with it or each other.
 
-The solution to this problem is to set up an external ZooKeeper ensemble. Fortunately, while this process can seem intimidating due to the number of powerful options, setting up a simple ensemble is actually quite straightforward, as described below.
+The solution to this problem is to set up an external ZooKeeper _ensemble_, which is a number of servers running ZooKeeper that communicate with each other to coordinate the activities of the cluster.
+
+== How Many ZooKeeper Nodes?
+
+The first question to answer is the number of ZooKeeper nodes you will run in your ensemble.
+
+When planning how many ZooKeeper nodes to configure, keep in mind that the main principle for a ZooKeeper ensemble is maintaining a majority of servers to serve requests. This majority is called a _quorum_.
 
-.How Many ZooKeepers?
 [quote,ZooKeeper Administrator's Guide,http://zookeeper.apache.org/doc/r{ivy-zookeeper-version}/zookeeperAdmin.html]
 ____
-"For a ZooKeeper service to be active, there must be a majority of non-failing machines that can communicate with each other. *To create a deployment that can tolerate the failure of F machines, you should count on deploying 2xF+1 machines*. Thus, a deployment that consists of three machines can handle one failure, and a deployment of five machines can handle two failures. Note that a deployment of six machines can only handle two failures since three machines is not a majority.
-
-For this reason, ZooKeeper deployments are usually made up of an odd number of machines."
+"For a ZooKeeper service to be active, there must be a majority of non-failing machines that can communicate with each other. *To create a deployment that can tolerate the failure of F machines, you should count on deploying 2xF+1 machines*.
 ____
 
-When planning how many ZooKeeper nodes to configure, keep in mind that the main principle for a ZooKeeper ensemble is maintaining a majority of servers to serve requests. This majority is also called a _quorum_.
+To properly maintain a quorum, it's highly recommended to have an odd number of ZooKeeper servers in your ensemble, so a majority is maintained.
 
-It is generally recommended to have an odd number of ZooKeeper servers in your ensemble, so a majority is maintained.
+To explain why, think about this scenario: If you have two ZooKeeper nodes and one goes down, only 50% of the ensemble's servers are available. Since this is not a majority, ZooKeeper will no longer serve requests.
 
-For example, if you only have two ZooKeeper nodes and one goes down, 50% of available servers is not a majority, so ZooKeeper will no longer serve requests. However, if you have three ZooKeeper nodes and one goes down, you have 66% of available servers available, and ZooKeeper will continue normally while you repair the one down node. If you have 5 nodes, you could continue operating with two down nodes if necessary.
+However, if you have three ZooKeeper nodes and one goes down, you have 66% of your servers available and ZooKeeper will continue normally while you repair the one down node. If you have 5 nodes, you could continue operating with two down nodes if necessary.
+
+It's not generally recommended to go above 5 nodes. While it may seem that more nodes provide greater fault-tolerance and availability, in practice it becomes less efficient because of the amount of inter-node coordination that occurs. Unless you have a truly massive Solr cluster (on the scale of 1,000s of nodes), try to stay at 3 nodes as a general rule, or maybe 5 if you have a larger cluster.
 
 More information on ZooKeeper clusters is available from the ZooKeeper documentation at http://zookeeper.apache.org/doc/r{ivy-zookeeper-version}/zookeeperAdmin.html#sc_zkMulitServerSetup.
 
@@ -42,22 +48,37 @@ More information on ZooKeeper clusters is available from the ZooKeeper documenta
 
 The first step in setting up Apache ZooKeeper is, of course, to download the software. It's available from http://zookeeper.apache.org/releases.html.
 
-[IMPORTANT]
-====
-When using stand-alone ZooKeeper, you need to take care to keep your version of ZooKeeper updated with the latest version distributed with Solr. Since you are using it as a stand-alone application, it does not get upgraded when you upgrade Solr.
-
 Solr currently uses Apache ZooKeeper v{ivy-zookeeper-version}.
+
+[WARNING]
+====
+When using an external ZooKeeper ensemble, you will need to keep your local installation up-to-date with the latest version distributed with Solr. Since it is a stand-alone application in this scenario, it does not get upgraded as part of a standard Solr upgrade.
 ====
 
-== Setting Up a Single ZooKeeper
+== ZooKeeper Installation
 
-=== Create the Instance
-Creating the instance is a simple matter of extracting the files into a specific target directory. The actual directory itself doesn't matter, as long as you know where it is, and where you'd like to have ZooKeeper store its internal data.
+Installation consists of extracting the files into a specific target directory where you'd like to have ZooKeeper store its internal data. The actual directory itself doesn't matter, as long as you know where it is.
 
-=== Configure the Instance
-The next step is to configure your ZooKeeper instance. To do that, create the following file: `<ZOOKEEPER_HOME>/conf/zoo.cfg`. To this file, add the following information:
+The command to unpack the ZooKeeper package is:
 
-[source,bash]
+[source,bash,subs="attributes"]
+tar xvf zookeeper-{ivy-zookeeper-version}.tar.gz
+
+This location is the `<ZOOKEEPER_HOME>` for ZooKeeper on this server.
+
+Installing and unpacking ZooKeeper must be repeated on each server where ZooKeeper will be run.
+
+== Configuration for a ZooKeeper Ensemble
+
+After installation, we'll first take a look at the basic configuration for ZooKeeper, then specific parameters for configuring each node to be part of an ensemble.
+
+=== Initial Configuration
+
+To configure your ZooKeeper instance, create a file named `<ZOOKEEPER_HOME>/conf/zoo.cfg`. A sample configuration file is included in your ZooKeeper installation, as `conf/zoo_sample.cfg`. You can edit and rename that file instead of creating it new if you prefer.
+
+The file should have the following information to start:
+
+[source,properties]
 ----
 tickTime=2000
 dataDir=/var/lib/zookeeper
@@ -66,122 +87,270 @@ clientPort=2181
 
 The parameters are as follows:
 
-`tickTime`:: Part of what ZooKeeper does is to determine which servers are up and running at any given time, and the minimum session time out is defined as two "ticks". The `tickTime` parameter specifies, in miliseconds, how long each tick should be.
+`tickTime`:: Part of what ZooKeeper does is determine which servers are up and running at any given time, and the minimum session time out is defined as two "ticks". The `tickTime` parameter specifies in milliseconds how long each tick should be.
 
-`dataDir`:: This is the directory in which ZooKeeper will store data about the cluster. This directory should start out empty.
+`dataDir`:: This is the directory in which ZooKeeper will store data about the cluster. This directory must be empty before starting ZooKeeper for the first time.
 
 `clientPort`:: This is the port on which Solr will access ZooKeeper.
 
-Once this file is in place, you're ready to start the ZooKeeper instance.
+These are the basic parameters that need to be in use on each ZooKeeper node, so this file must be copied to or created on each node.
 
-=== Run the Instance
+Next we'll customize this configuration to work within an ensemble.
 
-To run the instance, you can simply use the `ZOOKEEPER_HOME/bin/zkServer.sh` script provided, as with this command: `zkServer.sh start`
+=== Ensemble Configuration
 
-Again, ZooKeeper provides a great deal of power through additional configurations, but delving into them is beyond the scope of this tutorial. For more information, see the ZooKeeper http://zookeeper.apache.org/doc/r{ivy-zookeeper-version}/zookeeperStarted.html[Getting Started] page. For this example, however, the defaults are fine.
+To complete configuration for an ensemble, we need to set additional parameters so each node knows who it is in the ensemble and where every other node is.
 
-=== Point Solr at the Instance
+Each of the examples below assumes you are installing ZooKeeper on different servers with different hostnames.
 
-Pointing Solr at the ZooKeeper instance you've created is a simple matter of using the `-z` parameter when using the bin/solr script. For example, in order to point the Solr instance to the ZooKeeper you've started on port 2181, this is what you'd need to do:
+Once complete, your `zoo.cfg` file might look like this:
 
-Starting `cloud` example with ZooKeeper already running at port 2181 (with all other defaults):
-
-[source,bash]
+[source,properties]
 ----
-bin/solr start -e cloud -z localhost:2181 -noprompt
+tickTime=2000
+dataDir=/var/lib/zookeeper
+clientPort=2181
+
+initLimit=5
+syncLimit=2
+server.1=zoo1:2888:3888
+server.2=zoo2:2888:3888
+server.3=zoo3:2888:3888
+
+autopurge.snapRetainCount=3
+autopurge.purgeInterval=1
 ----
 
-Add a node pointing to an existing ZooKeeper at port 2181:
+We've added these parameters to the three we had already:
+
+`initLimit`:: Amount of time, in ticks, to allow followers to connect and sync to a leader. In this case, you have 5 ticks, each of which is 2000 milliseconds long, so the server will wait as long as 10 seconds to connect and sync with the leader.
 
+`syncLimit`:: Amount of time, in ticks, to allow followers to sync with ZooKeeper. If followers fall too far behind a leader, they will be dropped.
+
+`server._X_`:: These are the server IDs (the `_X_` part), hostnames (or IP addresses), and ports for all servers in the ensemble. The IDs differentiate each node of the ensemble, and allow each node to know where each of the other nodes is located. The ports can be any ports you choose; ZooKeeper's default ports are `2888:3888`.
++
+Since we've assigned server IDs to specific hosts/ports, we must also define which server in the list this node is. We do this with a `myid` file stored in the data directory (defined by the `dataDir` parameter). The content of the `myid` file is only the server ID.
++
+In the case of the configuration example above, you would create the file `/var/lib/zookeeper/myid` with the content "1" (without quotes), as in this example:
++
 [source,bash]
-----
-bin/solr start -cloud -s <path to solr home for new node> -p 8987 -z localhost:2181
-----
+1
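
A quick way to create this file, assuming the `dataDir` shown above (`/var/lib/zookeeper`), is a command like:

[source,bash]
echo "1" > /var/lib/zookeeper/myid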
 
-NOTE: When you are not using an example to start solr, make sure you upload the configuration set to ZooKeeper before creating the collection.
+`autopurge.snapRetainCount`:: The number of snapshots and corresponding transaction logs to retain when purging old snapshots and transaction logs.
++
+ZooKeeper automatically keeps a transaction log and writes to it as changes are made. A snapshot of the current state is taken periodically, and this snapshot supersedes transaction logs older than the snapshot. However, ZooKeeper never cleans up either the old snapshots or the old transaction logs; over time they will silently fill available disk space on each server.
++
+To avoid this, set the `autopurge.snapRetainCount` and `autopurge.purgeInterval` parameters to enable an automatic cleanup (purge) to occur at regular intervals. The `autopurge.snapRetainCount` parameter will keep the set number of snapshots and transaction logs when a cleanup occurs. This parameter can be configured higher than `3`, but cannot be set lower than `3`.
 
-=== Shut Down ZooKeeper
+`autopurge.purgeInterval`:: The time in hours between purge tasks. The default for this parameter is `0`, so it must be set to `1` or higher to enable automatic cleanup of snapshots and transaction logs. Setting it as high as `24`, for a once-a-day purge, is acceptable if preferred.
 
-To shut down ZooKeeper, use the zkServer script with the "stop" command: `zkServer.sh stop`.
+We'll repeat this configuration on each node.
+
+On the second node, update the `<ZOOKEEPER_HOME>/conf/zoo.cfg` file so it matches the content on node 1 (particularly the server hosts and ports):
+
+[source,properties]
+----
+tickTime=2000
+dataDir=/var/lib/zookeeper
+clientPort=2181
 
-== Setting up a ZooKeeper Ensemble
+initLimit=5
+syncLimit=2
+server.1=zoo1:2888:3888
+server.2=zoo2:2888:3888
+server.3=zoo3:2888:3888
 
-With an external ZooKeeper ensemble, you need to set things up just a little more carefully as compared to the Getting Started example.
+autopurge.snapRetainCount=3
+autopurge.purgeInterval=1
+----
 
-The difference is that rather than simply starting up the servers, you need to configure them to know about and talk to each other first. So your original `zoo.cfg` file might look like this:
+On the second node, create a `myid` file with the content "2" and put it in the `/var/lib/zookeeper` directory:
 
 [source,bash]
+2
+
+On the third node, update the `<ZOOKEEPER_HOME>/conf/zoo.cfg` file so it matches the content on nodes 1 and 2 (particularly the server hosts and ports):
+
+[source,properties]
 ----
-dataDir=/var/lib/zookeeperdata/1
+tickTime=2000
+dataDir=/var/lib/zookeeper
 clientPort=2181
+
 initLimit=5
 syncLimit=2
-server.1=localhost:2888:3888
-server.2=localhost:2889:3889
-server.3=localhost:2890:3890
+server.1=zoo1:2888:3888
+server.2=zoo2:2888:3888
+server.3=zoo3:2888:3888
+
+autopurge.snapRetainCount=3
+autopurge.purgeInterval=1
 ----
 
-Here you see three new parameters:
+And create the `myid` file in the `/var/lib/zookeeper` directory:
 
-initLimit:: Amount of time, in ticks, to allow followers to connect and sync to a leader. In this case, you have 5 ticks, each of which is 2000 milliseconds long, so the server will wait as long as 10 seconds to connect and sync with the leader.
+[source,bash]
+3
 
-syncLimit:: Amount of time, in ticks, to allow followers to sync with ZooKeeper. If followers fall too far behind a leader, they will be dropped.
+Repeat this for servers 4 and 5 if you are creating a 5-node ensemble (a rare case).
 
-server.X:: These are the IDs and locations of all servers in the ensemble, the ports on which they communicate with each other. The server ID must additionally stored in the `<dataDir>/myid` file and be located in the `dataDir` of each ZooKeeper instance. The ID identifies each server, so in the case of this first instance, you would create the file `/var/lib/zookeeperdata/1/myid` with the content "1".
 
-Now, whereas with Solr you need to create entirely new directories to run multiple instances, all you need for a new ZooKeeper instance, even if it's on the same machine for testing purposes, is a new configuration file. To complete the example you'll create two more configuration files.
+=== ZooKeeper Environment Configuration
 
-The `<ZOOKEEPER_HOME>/conf/zoo2.cfg` file should have the content:
+To ease troubleshooting in case of problems with the ensemble later, it's recommended to run ZooKeeper with logging enabled and with proper JVM garbage collection (GC) settings.
 
-[source,bash]
+. Create a file named `zookeeper-env.sh` and put it in the `ZOOKEEPER_HOME/conf` directory (the same place you put `zoo.cfg`). This file will need to exist on each server of the ensemble.
+
+. Add the following settings to the file:
++
+[source,properties]
 ----
-tickTime=2000
-dataDir=/var/lib/zookeeperdata/2
-clientPort=2182
-initLimit=5
-syncLimit=2
-server.1=localhost:2888:3888
-server.2=localhost:2889:3889
-server.3=localhost:2890:3890
+ZOO_LOG_DIR="/path/for/log/files"
+ZOO_LOG4J_PROP="INFO,ROLLINGFILE"
+
+SERVER_JVMFLAGS="-Xms2048m -Xmx2048m -verbose:gc -XX:+PrintHeapAtGC -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -Xloggc:$ZOO_LOG_DIR/zookeeper_gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=9 -XX:GCLogFileSize=20M"
 ----
++
+The property `ZOO_LOG_DIR` defines the location on the server where ZooKeeper will print its logs. `ZOO_LOG4J_PROP` sets the logging level and log appenders.
++
+With `SERVER_JVMFLAGS`, we've defined several parameters for garbage collection and logging GC-related events. One of the system parameters is `-Xloggc:$ZOO_LOG_DIR/zookeeper_gc.log`, which will put the garbage collection logs in the same directory we've defined for ZooKeeper logs, in a file named `zookeeper_gc.log`.
+
+. Review the default settings in `ZOOKEEPER_HOME/conf/log4j.properties`, especially the `log4j.appender.ROLLINGFILE.MaxFileSize` parameter. This sets the size at which log files will be rolled over, and by default it is 10MB (a sample of the relevant settings is shown after this list).
 
-You'll also need to create `<ZOOKEEPER_HOME>/conf/zoo3.cfg`:
+. Copy `zookeeper-env.sh` and any changes to `log4j.properties` to each server in the ensemble.
+
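For reference, the `ROLLINGFILE` appender settings in a stock ZooKeeper `log4j.properties` look roughly like the following (exact property names and defaults may vary between ZooKeeper versions):

[source,properties]
----
log4j.appender.ROLLINGFILE=org.apache.log4j.RollingFileAppender
log4j.appender.ROLLINGFILE.Threshold=${zookeeper.log.threshold}
log4j.appender.ROLLINGFILE.File=${zookeeper.log.dir}/${zookeeper.log.file}
log4j.appender.ROLLINGFILE.MaxFileSize=10MB
----
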
+NOTE: The above instructions are for Linux servers only. The default `zkServer.sh` script includes support for a `zookeeper-env.sh` file but the Windows version of the script, `zkServer.cmd`, does not. To make the same configuration on a Windows server, the changes would need to be made directly in `zkServer.cmd`.
+
+At this point, you are ready to start your ZooKeeper ensemble.
+
+=== More Information about ZooKeeper
+
+ZooKeeper provides a great deal of power through additional configurations, but delving into them is beyond the scope of Solr's documentation. For more information, see the http://zookeeper.apache.org/doc/r{ivy-zookeeper-version}[ZooKeeper documentation].
+
+== Starting and Stopping ZooKeeper
+
+=== Start ZooKeeper
+
+To start the ensemble, use the `ZOOKEEPER_HOME/bin/zkServer.sh` or `zkServer.cmd` script, as with this command:
+
+.Linux OS
+[source,bash]
+zkServer.sh start
+
+.Windows OS
+[source,text]
+zkServer.cmd start
+
+This command needs to be run on each server that will run ZooKeeper.
+
+TIP: You should see the ZooKeeper logs in the directory you configured for them (`ZOO_LOG_DIR`). However, immediately after startup, you may not see the `zookeeper_gc.log` yet, as it likely will not appear until garbage collection has happened the first time.
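
To verify that each node has joined the ensemble, you can also ask each one for its status with the same script (Linux command shown); one node should report itself as the leader and the others as followers:

[source,bash]
zkServer.sh status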
+
+=== Shut Down ZooKeeper
+
+To shut down ZooKeeper, use the same `zkServer.sh` or `zkServer.cmd` script on each server with the "stop" command:
+
+.Linux OS
+[source,bash]
+zkServer.sh stop
+
+.Windows OS
+[source,text]
+zkServer.cmd stop
+
+== Solr Configuration
+
+When starting Solr, you must provide an address for ZooKeeper or Solr won't know how to use it. This can be done in two ways: by defining the _connect string_, a list of servers where ZooKeeper is running, at every startup on every node of the Solr cluster, or by editing Solr's include file as a permanent system parameter. Both approaches are described below.
+
+When referring to the location of ZooKeeper within Solr, it's best to use the addresses of all the servers in the ensemble. If one happens to be down, Solr will automatically be able to send its request to another server in the list.
+
+=== Using a chroot
+
+If your ensemble is or will be shared among other systems besides Solr, you should consider defining application-specific _znodes_, or a hierarchical namespace that will only include Solr's files.
+
+Once you create a znode for each application, you add its name, also called a _chroot_, to the end of your connect string whenever you tell Solr where to access ZooKeeper.
+
+Creating a chroot is done with a `bin/solr` command:
+
+[source,text]
+bin/solr zk mkroot /solr -z zk1:2181,zk2:2181,zk3:2181
+
+See the section <<solr-control-script-reference.adoc#create-a-znode-supports-chroot,Create a znode>> for more examples of this command.
+
+Once the znode is created, it behaves in a similar way to a directory on a filesystem: the data stored by Solr in ZooKeeper is nested beneath the main data directory and won't be mixed with data from another system or process that uses the same ZooKeeper ensemble.
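
To verify the znode was created, you can list the contents of the ZooKeeper root with the same script, using the example hosts above:

[source,text]
bin/solr zk ls / -z zk1:2181,zk2:2181,zk3:2181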
+
+=== Using the -z Parameter with bin/solr
+
+Pointing Solr at the ZooKeeper ensemble you've created is a simple matter of using the `-z` parameter when starting Solr with the `bin/solr` script.
+
+For example, to point the Solr instance to the ZooKeeper you've started on port 2181 on three servers, this is what you'd need to do:
 
 [source,bash]
 ----
-tickTime=2000
-dataDir=/var/lib/zookeeperdata/3
-clientPort=2183
-initLimit=5
-syncLimit=2
-server.1=localhost:2888:3888
-server.2=localhost:2889:3889
-server.3=localhost:2890:3890
+bin/solr start -e cloud -z zk1:2181,zk2:2181,zk3:2181/solr
 ----
 
-Finally, create your `myid` files in each of the `dataDir` directories so that each server knows which instance it is. The id in the `myid` file on each machine must match the "server.X" definition. So, the ZooKeeper instance (or machine) named "server.1" in the above example, must have a `myid` file containing the value "1". The `myid` file can be any integer between 1 and 255, and must match the server IDs assigned in the `zoo.cfg` file.
+=== Updating Solr's Include Files
 
-To start the servers, you can simply explicitly reference the configuration files:
+If you update Solr's include file (`solr.in.sh` or `solr.in.cmd`), which overrides defaults used with `bin/solr`, you will not have to use the `-z` parameter with `bin/solr` commands.
 
-[source,bash]
+
+[.dynamic-tabs]
+--
+[example.tab-pane#linux1]
+====
+[.tab-label]*Linux: solr.in.sh*
+
+The section to look for will be commented out:
+
+[source,properties]
 ----
-cd <ZOOKEEPER_HOME>
-bin/zkServer.sh start zoo.cfg
-bin/zkServer.sh start zoo2.cfg
-bin/zkServer.sh start zoo3.cfg
+# Set the ZooKeeper connection string if using an external ZooKeeper ensemble
+# e.g. host1:2181,host2:2181/chroot
+# Leave empty if not using SolrCloud
+#ZK_HOST=""
 ----
 
-Once these servers are running, you can reference them from Solr just as you did before:
+Remove the comment marks at the start of the line and enter the ZooKeeper connect string:
 
-[source,bash]
+[source,properties]
 ----
-bin/solr start -e cloud -z localhost:2181,localhost:2182,localhost:2183 -noprompt
+# Set the ZooKeeper connection string if using an external ZooKeeper ensemble
+# e.g. host1:2181,host2:2181/chroot
+# Leave empty if not using SolrCloud
+ZK_HOST="zk1:2181,zk2:2181,zk3:2181/solr"
 ----
+====
+
+[example.tab-pane#zkwindows]
+====
+[.tab-label]*Windows: solr.in.cmd*
+
+The section to look for will be commented out:
+
+[source,bat]
+----
+REM Set the ZooKeeper connection string if using an external ZooKeeper ensemble
+REM e.g. host1:2181,host2:2181/chroot
+REM Leave empty if not using SolrCloud
+REM set ZK_HOST=
+----
+
+Remove the comment marks at the start of the line and enter the ZooKeeper connect string:
+
+[source,bat]
+----
+REM Set the ZooKeeper connection string if using an external ZooKeeper ensemble
+REM e.g. host1:2181,host2:2181/chroot
+REM Leave empty if not using SolrCloud
+set ZK_HOST=zk1:2181,zk2:2181,zk3:2181/solr
+----
+====
+--
+
+Now you will not have to enter the connection string when starting Solr.
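
For example, with `ZK_HOST` defined in the include file, a node can be started in SolrCloud mode with a command as simple as:

[source,bash]
bin/solr start -c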
 
 == Securing the ZooKeeper Connection
 
 You may also want to secure the communication between ZooKeeper and Solr.
 
-To setup ACL protection of znodes, see <<zookeeper-access-control.adoc#zookeeper-access-control,ZooKeeper Access Control>>.
-
-For more information on getting the most power from your ZooKeeper installation, check out the http://zookeeper.apache.org/doc/r{ivy-zookeeper-version}/zookeeperAdmin.html[ZooKeeper Administrator's Guide].
+To setup ACL protection of znodes, see the section <<zookeeper-access-control.adoc#zookeeper-access-control,ZooKeeper Access Control>>.


[02/40] lucene-solr:jira/solr-11833: SOLR-12187: Replica should watch clusterstate and unload itself if its entry is removed

Posted by ab...@apache.org.
SOLR-12187: Replica should watch clusterstate and unload itself if its entry is removed


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/09db13f4
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/09db13f4
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/09db13f4

Branch: refs/heads/jira/solr-11833
Commit: 09db13f4f459a391896db2a90b2830f9b1fd898d
Parents: f7f12a5
Author: Cao Manh Dat <da...@apache.org>
Authored: Tue Apr 17 20:16:31 2018 +0700
Committer: Cao Manh Dat <da...@apache.org>
Committed: Tue Apr 17 20:16:31 2018 +0700

----------------------------------------------------------------------
 solr/CHANGES.txt                                |   2 +
 .../org/apache/solr/cloud/ZkController.java     | 136 ++++++++++++++-----
 .../java/org/apache/solr/core/ZkContainer.java  |  16 ---
 .../solr/handler/admin/CollectionsHandler.java  |  41 +-----
 .../apache/solr/cloud/DeleteReplicaTest.java    |  84 ++++++++++--
 .../org/apache/solr/cloud/ForceLeaderTest.java  |  75 ----------
 .../org/apache/solr/cloud/MoveReplicaTest.java  |  17 ---
 .../apache/solr/common/cloud/ZkStateReader.java |   8 +-
 8 files changed, 186 insertions(+), 193 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/09db13f4/solr/CHANGES.txt
----------------------------------------------------------------------
diff --git a/solr/CHANGES.txt b/solr/CHANGES.txt
index e010366..1107c56 100644
--- a/solr/CHANGES.txt
+++ b/solr/CHANGES.txt
@@ -164,6 +164,8 @@ Bug Fixes
 
 * SOLR-10169: PeerSync will hit an NPE on no response errors when looking for fingerprint. (Erick Erickson)
 
+* SOLR-12187: Replica should watch clusterstate and unload itself if its entry is removed (Cao Manh Dat)
+
 Optimizations
 ----------------------
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/09db13f4/solr/core/src/java/org/apache/solr/cloud/ZkController.java
----------------------------------------------------------------------
diff --git a/solr/core/src/java/org/apache/solr/cloud/ZkController.java b/solr/core/src/java/org/apache/solr/cloud/ZkController.java
index 872a8b9..8cd02b6 100644
--- a/solr/core/src/java/org/apache/solr/cloud/ZkController.java
+++ b/solr/core/src/java/org/apache/solr/cloud/ZkController.java
@@ -38,6 +38,7 @@ import java.util.HashSet;
 import java.util.List;
 import java.util.Locale;
 import java.util.Map;
+import java.util.Objects;
 import java.util.Optional;
 import java.util.Set;
 import java.util.concurrent.Callable;
@@ -65,6 +66,7 @@ import org.apache.solr.common.SolrException;
 import org.apache.solr.common.SolrException.ErrorCode;
 import org.apache.solr.common.cloud.BeforeReconnect;
 import org.apache.solr.common.cloud.ClusterState;
+import org.apache.solr.common.cloud.CollectionStateWatcher;
 import org.apache.solr.common.cloud.DefaultConnectionStrategy;
 import org.apache.solr.common.cloud.DefaultZkACLProvider;
 import org.apache.solr.common.cloud.DefaultZkCredentialsProvider;
@@ -1033,42 +1035,39 @@ public class ZkController {
     try {
       // pre register has published our down state
       final String baseUrl = getBaseUrl();
-      
       final CloudDescriptor cloudDesc = desc.getCloudDescriptor();
       final String collection = cloudDesc.getCollectionName();
-      
-      final String coreZkNodeName = desc.getCloudDescriptor().getCoreNodeName();
+      final String shardId = cloudDesc.getShardId();
+      final String coreZkNodeName = cloudDesc.getCoreNodeName();
       assert coreZkNodeName != null : "we should have a coreNodeName by now";
 
+      // check replica's existence in clusterstate first
+      try {
+        zkStateReader.waitForState(collection, Overseer.isLegacy(zkStateReader) ? 60000 : 100,
+            TimeUnit.MILLISECONDS, (liveNodes, collectionState) -> getReplicaOrNull(collectionState, shardId, coreZkNodeName) != null);
+      } catch (TimeoutException e) {
+        throw new SolrException(ErrorCode.SERVER_ERROR, "Error registering SolrCore, timeout waiting for replica present in clusterstate");
+      }
+      Replica replica = getReplicaOrNull(zkStateReader.getClusterState().getCollectionOrNull(collection), shardId, coreZkNodeName);
+      if (replica == null) {
+        throw new SolrException(ErrorCode.SERVER_ERROR, "Error registering SolrCore, replica is removed from clusterstate");
+      }
+
       ZkShardTerms shardTerms = getShardTerms(collection, cloudDesc.getShardId());
 
       // This flag is used for testing rolling updates and should be removed in SOLR-11812
       boolean isRunningInNewLIR = "new".equals(desc.getCoreProperty("lirVersion", "new"));
-      if (isRunningInNewLIR && cloudDesc.getReplicaType() != Type.PULL) {
+      if (isRunningInNewLIR && replica.getType() != Type.PULL) {
         shardTerms.registerTerm(coreZkNodeName);
       }
-      String shardId = cloudDesc.getShardId();
-      Map<String,Object> props = new HashMap<>();
-      // we only put a subset of props into the leader node
-      props.put(ZkStateReader.BASE_URL_PROP, baseUrl);
-      props.put(ZkStateReader.CORE_NAME_PROP, coreName);
-      props.put(ZkStateReader.NODE_NAME_PROP, getNodeName());
-      
+
       log.debug("Register replica - core:{} address:{} collection:{} shard:{}",
-          coreName, baseUrl, cloudDesc.getCollectionName(), shardId);
-      
-      ZkNodeProps leaderProps = new ZkNodeProps(props);
+          coreName, baseUrl, collection, shardId);
 
       try {
         // If we're a preferred leader, insert ourselves at the head of the queue
-        boolean joinAtHead = false;
-        final DocCollection docCollection = zkStateReader.getClusterState().getCollectionOrNull(collection);
-        Replica replica = (docCollection == null) ? null : docCollection.getReplica(coreZkNodeName);
-        if (replica != null) {
-          joinAtHead = replica.getBool(SliceMutator.PREFERRED_LEADER_PROP, false);
-        }
-        //TODO WHy would replica be null?
-        if (replica == null || replica.getType() != Type.PULL) {
+        boolean joinAtHead = replica.getBool(SliceMutator.PREFERRED_LEADER_PROP, false);
+        if (replica.getType() != Type.PULL) {
           joinElection(desc, afterExpiration, joinAtHead);
         } else if (replica.getType() == Type.PULL) {
           if (joinAtHead) {
@@ -1093,9 +1092,8 @@ public class ZkController {
       String ourUrl = ZkCoreNodeProps.getCoreUrl(baseUrl, coreName);
       log.debug("We are " + ourUrl + " and leader is " + leaderUrl);
       boolean isLeader = leaderUrl.equals(ourUrl);
-      Replica.Type replicaType =  zkStateReader.getClusterState().getCollection(collection).getReplica(coreZkNodeName).getType();
-      assert !(isLeader && replicaType == Type.PULL): "Pull replica became leader!";
-      
+      assert !(isLeader && replica.getType() == Type.PULL) : "Pull replica became leader!";
+
       try (SolrCore core = cc.getCore(desc.getName())) {
         
         // recover from local transaction log and wait for it to complete before
@@ -1105,7 +1103,7 @@ public class ZkController {
         // leader election perhaps?
         
         UpdateLog ulog = core.getUpdateHandler().getUpdateLog();
-        boolean isTlogReplicaAndNotLeader = replicaType == Replica.Type.TLOG && !isLeader;
+        boolean isTlogReplicaAndNotLeader = replica.getType() == Replica.Type.TLOG && !isLeader;
         if (isTlogReplicaAndNotLeader) {
           String commitVersion = ReplicateFromLeader.getCommitVersion(core);
           if (commitVersion != null) {
@@ -1138,23 +1136,40 @@ public class ZkController {
           publish(desc, Replica.State.ACTIVE);
         }
 
-        if (isRunningInNewLIR && replicaType != Type.PULL) {
+        if (isRunningInNewLIR && replica.getType() != Type.PULL) {
+          // the watcher is added to a set, so multiple calls of this method will leave only one watcher
           shardTerms.addListener(new RecoveringCoreTermWatcher(core.getCoreDescriptor(), getCoreContainer()));
         }
         core.getCoreDescriptor().getCloudDescriptor().setHasRegistered(true);
+      } catch (Exception e) {
+        unregister(coreName, desc, false);
+        throw e;
       }
       
       // make sure we have an update cluster state right away
       zkStateReader.forceUpdateCollection(collection);
+      // the watcher is added to a set, so multiple calls of this method will leave only one watcher
+      zkStateReader.registerCollectionStateWatcher(cloudDesc.getCollectionName(),
+          new UnloadCoreOnDeletedWatcher(coreZkNodeName, shardId, desc.getName()));
       return shardId;
-    } catch (Exception e) {
-      unregister(coreName, desc, false);
-      throw e;
     } finally {
       MDCLoggingContext.clear();
     }
   }
 
+  private Replica getReplicaOrNull(DocCollection docCollection, String shard, String coreNodeName) {
+    if (docCollection == null) return null;
+
+    Slice slice = docCollection.getSlice(shard);
+    if (slice == null) return null;
+
+    Replica replica = slice.getReplica(coreNodeName);
+    if (replica == null) return null;
+    if (!getNodeName().equals(replica.getNodeName())) return null;
+
+    return replica;
+  }
+
   public void startReplicationFromLeader(String coreName, boolean switchTransactionLog) throws InterruptedException {
     log.info("{} starting background replication from leader", coreName);
     ReplicateFromLeader replicateFromLeader = new ReplicateFromLeader(cc, coreName);
@@ -1359,11 +1374,7 @@ public class ZkController {
   }
 
   public void publish(final CoreDescriptor cd, final Replica.State state) throws Exception {
-    publish(cd, state, true);
-  }
-
-  public void publish(final CoreDescriptor cd, final Replica.State state, boolean updateLastState) throws Exception {
-    publish(cd, state, updateLastState, false);
+    publish(cd, state, true, false);
   }
 
   /**
@@ -1430,6 +1441,9 @@ public class ZkController {
       props.put(ZkStateReader.SHARD_ID_PROP, cd.getCloudDescriptor().getShardId());
       props.put(ZkStateReader.COLLECTION_PROP, collection);
       props.put(ZkStateReader.REPLICA_TYPE, cd.getCloudDescriptor().getReplicaType().toString());
+      if (!Overseer.isLegacy(zkStateReader)) {
+        props.put(ZkStateReader.FORCE_SET_STATE_PROP, "false");
+      }
       if (numShards != null) {
         props.put(ZkStateReader.NUM_SHARDS_PROP, numShards.toString());
       }
@@ -1521,7 +1535,6 @@ public class ZkController {
       }
     }
     CloudDescriptor cloudDescriptor = cd.getCloudDescriptor();
-    zkStateReader.unregisterCore(cloudDescriptor.getCollectionName());
     if (removeCoreFromZk) {
       ZkNodeProps m = new ZkNodeProps(Overseer.QUEUE_OPERATION,
           OverseerAction.DELETECORE.toLower(), ZkStateReader.CORE_NAME_PROP, coreName,
@@ -1653,7 +1666,6 @@ public class ZkController {
               "Collection {} not visible yet, but flagging it so a watch is registered when it becomes visible" :
               "Registering watch for collection {}",
           collectionName);
-      zkStateReader.registerCore(collectionName);
     } catch (KeeperException e) {
       log.error("", e);
       throw new ZooKeeperException(SolrException.ErrorCode.SERVER_ERROR, "", e);
@@ -2707,6 +2719,56 @@ public class ZkController {
     };
   }
 
+  private class UnloadCoreOnDeletedWatcher implements CollectionStateWatcher {
+    String coreNodeName;
+    String shard;
+    String coreName;
+
+    public UnloadCoreOnDeletedWatcher(String coreNodeName, String shard, String coreName) {
+      this.coreNodeName = coreNodeName;
+      this.shard = shard;
+      this.coreName = coreName;
+    }
+
+    @Override
+    // synchronized due to SOLR-11535
+    public synchronized boolean onStateChanged(Set<String> liveNodes, DocCollection collectionState) {
+      if (getCoreContainer().getCoreDescriptor(coreName) == null) return true;
+
+      boolean replicaRemoved = getReplicaOrNull(collectionState, shard, coreNodeName) == null;
+      if (replicaRemoved) {
+        try {
+          log.info("Replica {} removed from clusterstate, remove it.", coreName);
+          getCoreContainer().unload(coreName, true, true, true);
+        } catch (SolrException e) {
+          if (!e.getMessage().contains("Cannot unload non-existent core")) {
+            // no need to log if the core was already unloaded
+            log.warn("Failed to unregister core:{}", coreName, e);
+          }
+        } catch (Exception e) {
+          log.warn("Failed to unregister core:{}", coreName, e);
+        }
+      }
+      return replicaRemoved;
+    }
+
+    @Override
+    public boolean equals(Object o) {
+      if (this == o) return true;
+      if (o == null || getClass() != o.getClass()) return false;
+      UnloadCoreOnDeletedWatcher that = (UnloadCoreOnDeletedWatcher) o;
+      return Objects.equals(coreNodeName, that.coreNodeName) &&
+          Objects.equals(shard, that.shard) &&
+          Objects.equals(coreName, that.coreName);
+    }
+
+    @Override
+    public int hashCode() {
+
+      return Objects.hash(coreNodeName, shard, coreName);
+    }
+  }
+
   /**
    * Thrown during leader initiated recovery process if current node is not leader
    */

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/09db13f4/solr/core/src/java/org/apache/solr/core/ZkContainer.java
----------------------------------------------------------------------
diff --git a/solr/core/src/java/org/apache/solr/core/ZkContainer.java b/solr/core/src/java/org/apache/solr/core/ZkContainer.java
index f89367f..34e5764 100644
--- a/solr/core/src/java/org/apache/solr/core/ZkContainer.java
+++ b/solr/core/src/java/org/apache/solr/core/ZkContainer.java
@@ -222,22 +222,6 @@ public class ZkContainer {
   public ZkController getZkController() {
     return zkController;
   }
-  
-  public void publishCoresAsDown(List<SolrCore> cores) {
-    
-    for (SolrCore core : cores) {
-      try {
-        zkController.publish(core.getCoreDescriptor(), Replica.State.DOWN);
-      } catch (KeeperException e) {
-        ZkContainer.log.error("", e);
-      } catch (InterruptedException e) {
-        Thread.interrupted();
-        ZkContainer.log.error("", e);
-      } catch (Exception e) {
-        ZkContainer.log.error("", e);
-      }
-    }
-  }
 
   public void close() {
     

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/09db13f4/solr/core/src/java/org/apache/solr/handler/admin/CollectionsHandler.java
----------------------------------------------------------------------
diff --git a/solr/core/src/java/org/apache/solr/handler/admin/CollectionsHandler.java b/solr/core/src/java/org/apache/solr/handler/admin/CollectionsHandler.java
index 5f4bc01..c02271e 100644
--- a/solr/core/src/java/org/apache/solr/handler/admin/CollectionsHandler.java
+++ b/solr/core/src/java/org/apache/solr/handler/admin/CollectionsHandler.java
@@ -40,7 +40,6 @@ import org.apache.solr.api.Api;
 import org.apache.solr.client.solrj.SolrResponse;
 import org.apache.solr.client.solrj.impl.HttpSolrClient;
 import org.apache.solr.client.solrj.impl.HttpSolrClient.Builder;
-import org.apache.solr.client.solrj.request.CoreAdminRequest;
 import org.apache.solr.client.solrj.request.CoreAdminRequest.RequestSyncShard;
 import org.apache.solr.client.solrj.response.RequestStatusState;
 import org.apache.solr.client.solrj.util.SolrIdentifierValidator;
@@ -282,7 +281,7 @@ public class CollectionsHandler extends RequestHandlerBase implements Permission
    * In SOLR-11739 we change the way the async IDs are checked to decide if one has
    * already been used or not. For backward compatibility, we continue to check in the
    * old way (meaning, in all the queues) for now. This extra check should be removed
-   * in Solr 9 
+   * in Solr 9
    */
   private static final boolean CHECK_ASYNC_ID_BACK_COMPAT_LOCATIONS = true;
 
@@ -306,7 +305,7 @@ public class CollectionsHandler extends RequestHandlerBase implements Permission
        }
 
        NamedList<String> r = new NamedList<>();
-       
+
        if (CHECK_ASYNC_ID_BACK_COMPAT_LOCATIONS && (
            coreContainer.getZkController().getOverseerCompletedMap().contains(asyncId) ||
            coreContainer.getZkController().getOverseerFailureMap().contains(asyncId) ||
@@ -1162,26 +1161,15 @@ public class CollectionsHandler extends RequestHandlerBase implements Permission
 
       // Wait till we have an active leader
       boolean success = false;
-      for (int i = 0; i < 10; i++) {
-        ZkCoreNodeProps zombieLeaderProps = getZombieLeader(zkController, collectionName, sliceId);
-        if (zombieLeaderProps != null) {
-          log.warn("A replica {} on node {} won the leader election, but not exist in clusterstate, " +
-                  "remove it and waiting for another round of election",
-              zombieLeaderProps.getCoreName(), zombieLeaderProps.getNodeName());
-          try (HttpSolrClient solrClient = new HttpSolrClient.Builder(zombieLeaderProps.getBaseUrl()).build()) {
-            CoreAdminRequest.unloadCore(zombieLeaderProps.getCoreName(), solrClient);
-          }
-          // waiting for another election round
-          i = 0;
-        }
-        clusterState = zkController.getClusterState();
+      for (int i = 0; i < 9; i++) {
+        Thread.sleep(5000);
+        clusterState = handler.coreContainer.getZkController().getClusterState();
         collection = clusterState.getCollection(collectionName);
         slice = collection.getSlice(sliceId);
         if (slice.getLeader() != null && slice.getLeader().getState() == State.ACTIVE) {
           success = true;
           break;
         }
-        Thread.sleep(5000);
         log.warn("Force leader attempt {}. Waiting 5 secs for an active leader. State of the slice: {}", (i + 1), slice);
       }
 
@@ -1198,25 +1186,6 @@ public class CollectionsHandler extends RequestHandlerBase implements Permission
     }
   }
 
-  /**
-   * Zombie leader is a replica won the election but does not exist in clusterstate
-   * @return null if the zombie leader does not exist
-   */
-  private static ZkCoreNodeProps getZombieLeader(ZkController zkController, String collection, String shardId) {
-    try {
-      ZkCoreNodeProps leaderProps = zkController.getLeaderProps(collection, shardId, 1000);
-      DocCollection docCollection = zkController.getClusterState().getCollection(collection);
-      Replica replica = docCollection.getReplica(leaderProps.getNodeProps().getStr(ZkStateReader.CORE_NODE_NAME_PROP));
-      if (replica == null) return leaderProps;
-      if (!replica.getNodeName().equals(leaderProps.getNodeName())) {
-        return leaderProps;
-      }
-      return null;
-    } catch (Exception e) {
-      return null;
-    }
-  }
-
   public static void waitForActiveCollection(String collectionName, CoreContainer cc, SolrResponse createCollResponse)
       throws KeeperException, InterruptedException {
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/09db13f4/solr/core/src/test/org/apache/solr/cloud/DeleteReplicaTest.java
----------------------------------------------------------------------
diff --git a/solr/core/src/test/org/apache/solr/cloud/DeleteReplicaTest.java b/solr/core/src/test/org/apache/solr/cloud/DeleteReplicaTest.java
index d9dbba0..8c11713 100644
--- a/solr/core/src/test/org/apache/solr/cloud/DeleteReplicaTest.java
+++ b/solr/core/src/test/org/apache/solr/cloud/DeleteReplicaTest.java
@@ -22,6 +22,7 @@ import java.nio.file.Path;
 import java.nio.file.Paths;
 import java.util.EnumSet;
 import java.util.List;
+import java.util.Set;
 import java.util.concurrent.Semaphore;
 import java.util.concurrent.TimeUnit;
 import java.util.concurrent.TimeoutException;
@@ -34,11 +35,13 @@ import org.apache.solr.client.solrj.request.CoreStatus;
 import org.apache.solr.cloud.overseer.OverseerAction;
 import org.apache.solr.common.SolrException;
 import org.apache.solr.common.SolrInputDocument;
+import org.apache.solr.common.cloud.CollectionStateWatcher;
 import org.apache.solr.common.cloud.DocCollection;
 import org.apache.solr.common.cloud.Replica;
 import org.apache.solr.common.cloud.Slice;
 import org.apache.solr.common.cloud.ZkNodeProps;
 import org.apache.solr.common.cloud.ZkStateReader;
+import org.apache.solr.common.cloud.ZkStateReaderAccessor;
 import org.apache.solr.common.util.TimeSource;
 import org.apache.solr.common.util.Utils;
 import org.apache.solr.core.ZkContainer;
@@ -86,12 +89,17 @@ public class DeleteReplicaTest extends SolrCloudTestCase {
     assertTrue("Unexpected error message: " + e.getMessage(), e.getMessage().contains("state is 'active'"));
     assertTrue("Data directory for " + replica.getName() + " should not have been deleted", Files.exists(dataDir));
 
+    JettySolrRunner replicaJetty = cluster.getReplicaJetty(replica);
+    ZkStateReaderAccessor accessor = new ZkStateReaderAccessor(replicaJetty.getCoreContainer().getZkController().getZkStateReader());
+    Set<CollectionStateWatcher> watchers = accessor.getStateWatchers(collectionName);
     CollectionAdminRequest.deleteReplica(collectionName, shard.getName(), replica.getName())
         .process(cluster.getSolrClient());
     waitForState("Expected replica " + replica.getName() + " to have been removed", collectionName, (n, c) -> {
       Slice testShard = c.getSlice(shard.getName());
       return testShard.getReplica(replica.getName()) == null;
     });
+    // the core no longer watches the collection state since it was removed
+    assertEquals(watchers.size() - 1, accessor.getStateWatchers(collectionName).size());
 
     assertFalse("Data directory for " + replica.getName() + " should have been removed", Files.exists(dataDir));
 
@@ -165,8 +173,63 @@ public class DeleteReplicaTest extends SolrCloudTestCase {
   }
 
   @Test
+  public void deleteReplicaFromClusterState() throws Exception {
+    deleteReplicaFromClusterState("true");
+    deleteReplicaFromClusterState("false");
+    CollectionAdminRequest.setClusterProperty(ZkStateReader.LEGACY_CLOUD, null).process(cluster.getSolrClient());
+  }
+
+  public void deleteReplicaFromClusterState(String legacyCloud) throws Exception {
+    CollectionAdminRequest.setClusterProperty(ZkStateReader.LEGACY_CLOUD, legacyCloud).process(cluster.getSolrClient());
+    final String collectionName = "deleteFromClusterState_"+legacyCloud;
+    CollectionAdminRequest.createCollection(collectionName, "conf", 1, 3)
+        .process(cluster.getSolrClient());
+    cluster.getSolrClient().add(collectionName, new SolrInputDocument("id", "1"));
+    cluster.getSolrClient().add(collectionName, new SolrInputDocument("id", "2"));
+    cluster.getSolrClient().commit(collectionName);
+
+    Slice shard = getCollectionState(collectionName).getSlice("shard1");
+    Replica replica = getRandomReplica(shard);
+    JettySolrRunner replicaJetty = cluster.getReplicaJetty(replica);
+    ZkStateReaderAccessor accessor = new ZkStateReaderAccessor(replicaJetty.getCoreContainer().getZkController().getZkStateReader());
+    Set<CollectionStateWatcher> watchers = accessor.getStateWatchers(collectionName);
+
+    ZkNodeProps m = new ZkNodeProps(
+        Overseer.QUEUE_OPERATION, OverseerAction.DELETECORE.toLower(),
+        ZkStateReader.CORE_NAME_PROP, replica.getCoreName(),
+        ZkStateReader.NODE_NAME_PROP, replica.getNodeName(),
+        ZkStateReader.COLLECTION_PROP, collectionName,
+        ZkStateReader.CORE_NODE_NAME_PROP, replica.getName(),
+        ZkStateReader.BASE_URL_PROP, replica.getBaseUrl());
+    Overseer.getStateUpdateQueue(cluster.getZkClient()).offer(Utils.toJSON(m));
+
+    waitForState("Timeout waiting for replica get deleted", collectionName,
+        (liveNodes, collectionState) -> collectionState.getSlice("shard1").getReplicas().size() == 2);
+
+    TimeOut timeOut = new TimeOut(60, TimeUnit.SECONDS, TimeSource.NANO_TIME);
+    timeOut.waitFor("Waiting for replica get unloaded", () ->
+        replicaJetty.getCoreContainer().getCoreDescriptor(replica.getCoreName()) == null
+    );
+    // the core no longer watches the collection state since it was removed
+    timeOut = new TimeOut(60, TimeUnit.SECONDS, TimeSource.NANO_TIME);
+    timeOut.waitFor("Waiting for watcher get removed", () ->
+        watchers.size() - 1 == accessor.getStateWatchers(collectionName).size()
+    );
+
+    CollectionAdminRequest.deleteCollection(collectionName).process(cluster.getSolrClient());
+  }
+
+  @Test
+  @Slow
   public void raceConditionOnDeleteAndRegisterReplica() throws Exception {
-    final String collectionName = "raceDeleteReplica";
+    raceConditionOnDeleteAndRegisterReplica("true");
+    raceConditionOnDeleteAndRegisterReplica("false");
+    CollectionAdminRequest.setClusterProperty(ZkStateReader.LEGACY_CLOUD, null).process(cluster.getSolrClient());
+  }
+
+  public void raceConditionOnDeleteAndRegisterReplica(String legacyCloud) throws Exception {
+    CollectionAdminRequest.setClusterProperty(ZkStateReader.LEGACY_CLOUD, legacyCloud).process(cluster.getSolrClient());
+    final String collectionName = "raceDeleteReplica_"+legacyCloud;
     CollectionAdminRequest.createCollection(collectionName, "conf", 1, 2)
         .process(cluster.getSolrClient());
     waitForState("Expected 1x2 collections", collectionName, clusterShape(1, 2));
@@ -246,15 +309,16 @@ public class DeleteReplicaTest extends SolrCloudTestCase {
       ZkContainer.testing_beforeRegisterInZk = null;
     }
 
-
-    waitForState("Timeout for replica:"+replica1.getName()+" register itself as DOWN after failed to register", collectionName, (liveNodes, collectionState) -> {
-      Slice shard = collectionState.getSlice("shard1");
-      Replica replica = shard.getReplica(replica1.getName());
-      return replica != null && replica.getState() == DOWN;
-    });
-
-    CollectionAdminRequest.addReplicaToShard(collectionName, "shard1")
-        .process(cluster.getSolrClient());
+    while (true) {
+      try {
+        CollectionAdminRequest.addReplicaToShard(collectionName, "shard1")
+            .process(cluster.getSolrClient());
+        break;
+      } catch (Exception e) {
+        // expected, when the node is not fully started
+        Thread.sleep(500);
+      }
+    }
     waitForState("Expected 1x2 collections", collectionName, clusterShape(1, 2));
 
     String leaderJettyNodeName = leaderJetty.getNodeName();

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/09db13f4/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
----------------------------------------------------------------------
diff --git a/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java b/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
index beaeb24..013434c 100644
--- a/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
+++ b/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
@@ -63,81 +63,6 @@ public class ForceLeaderTest extends HttpPartitionTest {
   }
 
   /**
-   * Tests that FORCELEADER can get an active leader even in the case there are a replica won the election but not present in clusterstate
-   */
-  @Test
-  @Slow
-  public void testZombieLeader() throws Exception {
-    String testCollectionName = "forceleader_zombie_leader_collection";
-    createCollection(testCollectionName, "conf1", 1, 3, 1);
-    cloudClient.setDefaultCollection(testCollectionName);
-    try {
-      List<Replica> notLeaders = ensureAllReplicasAreActive(testCollectionName, SHARD1, 1, 3, maxWaitSecsToSeeAllActive);
-      assertEquals("Expected 2 replicas for collection " + testCollectionName
-          + " but found " + notLeaders.size() + "; clusterState: "
-          + printClusterStateInfo(testCollectionName), 2, notLeaders.size());
-      List<JettySolrRunner> notLeaderJetties = notLeaders.stream().map(rep -> getJettyOnPort(getReplicaPort(rep)))
-          .collect(Collectors.toList());
-
-      Replica leader = cloudClient.getZkStateReader().getLeaderRetry(testCollectionName, SHARD1);
-      JettySolrRunner leaderJetty = getJettyOnPort(getReplicaPort(leader));
-
-      // remove leader from clusterstate
-      ZkNodeProps m = new ZkNodeProps(
-          Overseer.QUEUE_OPERATION, OverseerAction.DELETECORE.toLower(),
-          ZkStateReader.CORE_NAME_PROP, leader.getCoreName(),
-          ZkStateReader.NODE_NAME_PROP, leader.getNodeName(),
-          ZkStateReader.COLLECTION_PROP, testCollectionName,
-          ZkStateReader.CORE_NODE_NAME_PROP, leader.getName(),
-          ZkStateReader.BASE_URL_PROP, leader.getBaseUrl());
-      Overseer.getStateUpdateQueue(cloudClient.getZkStateReader().getZkClient()).offer(Utils.toJSON(m));
-
-      boolean restartOtherReplicas = random().nextBoolean();
-      log.info("Starting test with restartOtherReplicas:{}", restartOtherReplicas);
-      if (restartOtherReplicas) {
-        for (JettySolrRunner notLeaderJetty : notLeaderJetties) {
-          notLeaderJetty.stop();
-        }
-      }
-      cloudClient.waitForState(testCollectionName, 30, TimeUnit.SECONDS,
-          (liveNodes, collectionState) -> collectionState.getReplicas().size() == 2);
-
-      if (restartOtherReplicas) {
-        for (JettySolrRunner notLeaderJetty : notLeaderJetties) {
-          notLeaderJetty.start();
-        }
-      }
-
-      log.info("Before forcing leader: " + cloudClient.getZkStateReader().getClusterState()
-          .getCollection(testCollectionName).getSlice(SHARD1));
-      doForceLeader(cloudClient, testCollectionName, SHARD1);
-
-      // By now we have an active leader. Wait for recoveries to begin
-      waitForRecoveriesToFinish(testCollectionName, cloudClient.getZkStateReader(), true);
-      ClusterState clusterState = cloudClient.getZkStateReader().getClusterState();
-      log.info("After forcing leader: " + clusterState.getCollection(testCollectionName).getSlice(SHARD1));
-
-      assertNull("Expected zombie leader get deleted", leaderJetty.getCoreContainer().getCore(leader.getCoreName()));
-      Replica newLeader = clusterState.getCollectionOrNull(testCollectionName).getSlice(SHARD1).getLeader();
-      assertNotNull(newLeader);
-      assertEquals(State.ACTIVE, newLeader.getState());
-
-      int numActiveReplicas = getNumberOfActiveReplicas(clusterState, testCollectionName, SHARD1);
-      assertEquals(2, numActiveReplicas);
-
-      // Assert that indexing works again
-      sendDoc(1);
-      cloudClient.commit();
-
-      assertDocsExistInAllReplicas(notLeaders, testCollectionName, 1, 1);
-    } finally {
-      log.info("Cleaning up after the test.");
-      // try to clean up
-      attemptCollectionDelete(cloudClient, testCollectionName);
-    }
-  }
-
-  /**
    * Tests that FORCELEADER can get an active leader even only replicas with term lower than leader's term are live
    */
   @Test

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/09db13f4/solr/core/src/test/org/apache/solr/cloud/MoveReplicaTest.java
----------------------------------------------------------------------
diff --git a/solr/core/src/test/org/apache/solr/cloud/MoveReplicaTest.java b/solr/core/src/test/org/apache/solr/cloud/MoveReplicaTest.java
index 0879063..652a2e2 100644
--- a/solr/core/src/test/org/apache/solr/cloud/MoveReplicaTest.java
+++ b/solr/core/src/test/org/apache/solr/cloud/MoveReplicaTest.java
@@ -60,9 +60,6 @@ import org.slf4j.LoggerFactory;
 public class MoveReplicaTest extends SolrCloudTestCase {
   private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
 
-  private static ZkStateReaderAccessor accessor;
-  private static int overseerLeaderIndex;
-
   // used by MoveReplicaHDFSTest
   protected boolean inPlaceMove = true;
 
@@ -78,14 +75,12 @@ public class MoveReplicaTest extends SolrCloudTestCase {
       JettySolrRunner jetty = cluster.getJettySolrRunner(i);
       if (jetty.getNodeName().equals(overseerLeader)) {
         overseerJetty = jetty;
-        overseerLeaderIndex = i;
         break;
       }
     }
     if (overseerJetty == null) {
       fail("no overseer leader!");
     }
-    accessor = new ZkStateReaderAccessor(overseerJetty.getCoreContainer().getZkController().getZkStateReader());
   }
 
   protected String getSolrXml() {
@@ -137,8 +132,6 @@ public class MoveReplicaTest extends SolrCloudTestCase {
       }
     }
 
-    Set<CollectionStateWatcher> watchers = new HashSet<>(accessor.getStateWatchers(coll));
-
     int sourceNumCores = getNumOfCores(cloudClient, replica.getNodeName(), coll);
     int targetNumCores = getNumOfCores(cloudClient, targetNode, coll);
 
@@ -201,9 +194,6 @@ public class MoveReplicaTest extends SolrCloudTestCase {
 
     assertEquals(100, cluster.getSolrClient().query(coll, new SolrQuery("*:*")).getResults().getNumFound());
 
-    Set<CollectionStateWatcher> newWatchers = new HashSet<>(accessor.getStateWatchers(coll));
-    assertEquals(watchers, newWatchers);
-
     moveReplica = createMoveReplicaRequest(coll, replica, targetNode, shardId);
     moveReplica.setInPlaceMove(inPlaceMove);
     moveReplica.process(cloudClient);
@@ -243,8 +233,6 @@ public class MoveReplicaTest extends SolrCloudTestCase {
       }
     }
     assertTrue("replica never fully recovered", recovered);
-    newWatchers = new HashSet<>(accessor.getStateWatchers(coll));
-    assertEquals(watchers, newWatchers);
 
     assertEquals(100, cluster.getSolrClient().query(coll, new SolrQuery("*:*")).getResults().getNumFound());
   }
@@ -258,8 +246,6 @@ public class MoveReplicaTest extends SolrCloudTestCase {
 
     CloudSolrClient cloudClient = cluster.getSolrClient();
 
-    Set<CollectionStateWatcher> watchers = new HashSet<>(accessor.getStateWatchers(coll));
-
     CollectionAdminRequest.Create create = CollectionAdminRequest.createCollection(coll, "conf1", 2, REPLICATION);
     create.setAutoAddReplicas(false);
     cloudClient.request(create);
@@ -303,9 +289,6 @@ public class MoveReplicaTest extends SolrCloudTestCase {
     }
     assertFalse(success);
 
-    Set<CollectionStateWatcher> newWatchers = new HashSet<>(accessor.getStateWatchers(coll));
-    assertEquals(watchers, newWatchers);
-
     log.info("--- current collection state: " + cloudClient.getZkStateReader().getClusterState().getCollection(coll));
     assertEquals(100, cluster.getSolrClient().query(coll, new SolrQuery("*:*")).getResults().getNumFound());
   }

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/09db13f4/solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java
----------------------------------------------------------------------
diff --git a/solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java b/solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java
index b0b591a..7d5401d 100644
--- a/solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java
+++ b/solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java
@@ -1572,8 +1572,12 @@ public class ZkStateReader implements Closeable {
         return v;
       });
       for (CollectionStateWatcher watcher : watchers) {
-        if (watcher.onStateChanged(liveNodes, collectionState)) {
-          removeCollectionStateWatcher(collection, watcher);
+        try {
+          if (watcher.onStateChanged(liveNodes, collectionState)) {
+            removeCollectionStateWatcher(collection, watcher);
+          }
+        } catch (Throwable throwable) {
+          LOG.warn("Error on calling watcher", throwable);
         }
       }
     }


[09/40] lucene-solr:jira/solr-11833: [TEST] Ensure IW doesn't autoflush since test relies on it producing a single segment

Posted by ab...@apache.org.
[TEST] Ensure IW doesn't autoflush since test relies on it producing a single segment


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/dd39128e
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/dd39128e
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/dd39128e

Branch: refs/heads/jira/solr-11833
Commit: dd39128eaeeaae3ab607d27b1e6707409ca436e7
Parents: dbdedf3
Author: Simon Willnauer <si...@apache.org>
Authored: Wed Apr 18 17:45:55 2018 +0200
Committer: Simon Willnauer <si...@apache.org>
Committed: Wed Apr 18 17:46:45 2018 +0200

----------------------------------------------------------------------
 .../test/org/apache/lucene/index/TestPendingSoftDeletes.java    | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/dd39128e/lucene/core/src/test/org/apache/lucene/index/TestPendingSoftDeletes.java
----------------------------------------------------------------------
diff --git a/lucene/core/src/test/org/apache/lucene/index/TestPendingSoftDeletes.java b/lucene/core/src/test/org/apache/lucene/index/TestPendingSoftDeletes.java
index eac4388..9878b16 100644
--- a/lucene/core/src/test/org/apache/lucene/index/TestPendingSoftDeletes.java
+++ b/lucene/core/src/test/org/apache/lucene/index/TestPendingSoftDeletes.java
@@ -152,7 +152,10 @@ public class TestPendingSoftDeletes extends TestPendingDeletes {
 
   public void testUpdateAppliedOnlyOnce() throws IOException {
     Directory dir = newDirectory();
-    IndexWriter writer = new IndexWriter(dir, newIndexWriterConfig().setSoftDeletesField("_soft_deletes"));
+    IndexWriter writer = new IndexWriter(dir, newIndexWriterConfig()
+        .setSoftDeletesField("_soft_deletes")
+        .setMaxBufferedDocs(3) // make sure we write one segment
+        .setRAMBufferSizeMB(IndexWriterConfig.DISABLE_AUTO_FLUSH));
     Document doc = new Document();
     doc.add(new StringField("id", "1", Field.Store.YES));
     writer.softUpdateDocument(new Term("id", "1"), doc,


[21/40] lucene-solr:jira/solr-11833: Add suppresscodec to avoid OOM on nightly runs.

Posted by ab...@apache.org.
Add suppresscodec to avoid OOM on nightly runs.


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/cf05e17a
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/cf05e17a
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/cf05e17a

Branch: refs/heads/jira/solr-11833
Commit: cf05e17adcf99817d37b9e20939db55df443931f
Parents: 48e071f
Author: Dawid Weiss <dw...@apache.org>
Authored: Fri Apr 20 11:32:38 2018 +0200
Committer: Dawid Weiss <dw...@apache.org>
Committed: Fri Apr 20 11:32:38 2018 +0200

----------------------------------------------------------------------
 .../test/org/apache/lucene/search/TestInetAddressRangeQueries.java | 2 ++
 1 file changed, 2 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/cf05e17a/lucene/misc/src/test/org/apache/lucene/search/TestInetAddressRangeQueries.java
----------------------------------------------------------------------
diff --git a/lucene/misc/src/test/org/apache/lucene/search/TestInetAddressRangeQueries.java b/lucene/misc/src/test/org/apache/lucene/search/TestInetAddressRangeQueries.java
index 907ba55..8f0c4ca 100644
--- a/lucene/misc/src/test/org/apache/lucene/search/TestInetAddressRangeQueries.java
+++ b/lucene/misc/src/test/org/apache/lucene/search/TestInetAddressRangeQueries.java
@@ -23,10 +23,12 @@ import java.util.Arrays;
 import org.apache.lucene.document.InetAddressPoint;
 import org.apache.lucene.document.InetAddressRange;
 import org.apache.lucene.util.StringHelper;
+import org.apache.lucene.util.LuceneTestCase.SuppressCodecs; 
 
 /**
  * Random testing for {@link InetAddressRange}
  */
+@SuppressCodecs({"Direct", "Memory"})
 public class TestInetAddressRangeQueries extends BaseRangeFieldQueryTestCase {
   private static final String FIELD_NAME = "ipRangeField";
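
The same pattern applies to any memory-heavy test; a hypothetical sketch (class and method names invented for illustration) of keeping the randomized codec selection away from the heap-hungry "Direct" and "Memory" postings formats:

import org.apache.lucene.util.LuceneTestCase;
import org.apache.lucene.util.LuceneTestCase.SuppressCodecs;

// Excludes the "Direct" and "Memory" postings formats, which keep term data on the
// heap and can push large nightly runs into OutOfMemoryError.
@SuppressCodecs({"Direct", "Memory"})
public class SomeMemoryHeavyTest extends LuceneTestCase {
  public void testManyTermsAndRanges() throws Exception {
    // index and query a large volume of data here
  }
}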
 


[23/40] lucene-solr:jira/solr-11833: SOLR-11252: Fix minor compiler and intellij warnings in autoscaling policy framework

Posted by ab...@apache.org.
SOLR-11252: Fix minor compiler and intellij warnings in autoscaling policy framework


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/86b34fe0
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/86b34fe0
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/86b34fe0

Branch: refs/heads/jira/solr-11833
Commit: 86b34fe0fd0b1facb203406a4dab63ce76827b75
Parents: 4eead83
Author: Shalin Shekhar Mangar <sh...@apache.org>
Authored: Fri Apr 20 20:08:37 2018 +0530
Committer: Shalin Shekhar Mangar <sh...@apache.org>
Committed: Fri Apr 20 20:08:37 2018 +0530

----------------------------------------------------------------------
 solr/CHANGES.txt                                |  2 ++
 .../client/solrj/cloud/autoscaling/Clause.java  | 21 ++++++------
 .../autoscaling/DelegatingCloudManager.java     |  2 +-
 .../client/solrj/cloud/autoscaling/Operand.java |  2 +-
 .../client/solrj/cloud/autoscaling/Policy.java  | 34 +++++++++++---------
 .../solrj/cloud/autoscaling/ReplicaCount.java   |  6 ++++
 .../solrj/cloud/autoscaling/Suggestion.java     |  4 +--
 .../solrj/cloud/autoscaling/Violation.java      |  2 +-
 .../solrj/impl/SolrClientNodeStateProvider.java |  4 +--
 .../solrj/cloud/autoscaling/TestPolicy.java     | 22 ++++++-------
 10 files changed, 56 insertions(+), 43 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/86b34fe0/solr/CHANGES.txt
----------------------------------------------------------------------
diff --git a/solr/CHANGES.txt b/solr/CHANGES.txt
index 516a0d7..f5808ec 100644
--- a/solr/CHANGES.txt
+++ b/solr/CHANGES.txt
@@ -239,6 +239,8 @@ Other Changes
 
 * SOLR-12142: EmbeddedSolrServer should use req.getContentWriter (noble)
 
+* SOLR-11252: Fix minor compiler and intellij warnings in autoscaling policy framework. (shalin)
+
 ==================  7.3.1 ==================
 
 Consult the LUCENE_CHANGES.txt file for additional, low level, changes in this release.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/86b34fe0/solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/Clause.java
----------------------------------------------------------------------
diff --git a/solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/Clause.java b/solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/Clause.java
index 92854fd..c739588 100644
--- a/solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/Clause.java
+++ b/solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/Clause.java
@@ -46,13 +46,17 @@ import static org.apache.solr.common.params.CoreAdminParams.COLLECTION;
 import static org.apache.solr.common.params.CoreAdminParams.REPLICA;
 import static org.apache.solr.common.params.CoreAdminParams.SHARD;
 
-// a set of conditions in a policy
+/**
+ * Represents a set of conditions in the policy
+ */
 public class Clause implements MapWriter, Comparable<Clause> {
+  private static final Set<String> IGNORE_TAGS = new HashSet<>(Arrays.asList(REPLICA, COLLECTION, SHARD, "strict", "type"));
+
   final Map<String, Object> original;
   Condition collection, shard, replica, tag, globalTag;
   final Replica.Type type;
 
-  boolean strict = true;
+  boolean strict;
 
   public Clause(Map<String, Object> m) {
     this.original = Utils.getDeepCopy(m, 10);
@@ -76,7 +80,7 @@ public class Clause implements MapWriter, Comparable<Clause> {
       }
       this.replica = parse(REPLICA, m);
       if (replica.op == WILDCARD) throw new RuntimeException("replica val cannot be null" + Utils.toJSONString(m));
-      m.forEach((s, o) -> parseCondition(s, o));
+      m.forEach(this::parseCondition);
     }
     if (tag == null)
       throw new RuntimeException("Invalid op, must have one and only one tag other than collection, shard,replica " + Utils.toJSONString(m));
@@ -193,7 +197,7 @@ public class Clause implements MapWriter, Comparable<Clause> {
               .opposite(isReplicaZero() && this == tag)
               .delta(Clause.parseDouble(name, this.val), Clause.parseDouble(name, val));
         } else {
-          return 0l;
+          return 0L;
         }
       } else return op
           .opposite(isReplicaZero() && this == tag)
@@ -290,13 +294,14 @@ public class Clause implements MapWriter, Comparable<Clause> {
           if (!shard.isPass(shardName)) break;
           Map<String, ReplicaCount> tagVsCount = collMap.computeIfAbsent(shardName, s -> new HashMap<>());
           Object tagVal = row.getVal(tag.name);
-          tagVsCount.computeIfAbsent(tag.isPass(tagVal) ? String.valueOf(tagVal) : "", s -> new ReplicaCount());
-          if (tag.isPass(tagVal)) {
+          boolean pass = tag.isPass(tagVal);
+          tagVsCount.computeIfAbsent(pass ? String.valueOf(tagVal) : "", s -> new ReplicaCount());
+          if (pass) {
             tagVsCount.get(String.valueOf(tagVal)).increment(shards.getValue());
           }
-          }
         }
       }
+    }
     return collVsShardVsTagVsCount;
   }
 
@@ -318,8 +323,6 @@ public class Clause implements MapWriter, Comparable<Clause> {
     NOT_APPLICABLE, FAIL, PASS
   }
 
-  private static final Set<String> IGNORE_TAGS = new HashSet<>(Arrays.asList(REPLICA, COLLECTION, SHARD, "strict", "type"));
-
   public static String parseString(Object val) {
     return val == null ? null : String.valueOf(val);
   }

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/86b34fe0/solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/DelegatingCloudManager.java
----------------------------------------------------------------------
diff --git a/solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/DelegatingCloudManager.java b/solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/DelegatingCloudManager.java
index 22b2a51..8f3b08b 100644
--- a/solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/DelegatingCloudManager.java
+++ b/solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/DelegatingCloudManager.java
@@ -33,7 +33,7 @@ import org.apache.solr.common.util.TimeSource;
  * Base class for overriding some behavior of {@link SolrCloudManager}.
  */
 public class DelegatingCloudManager implements SolrCloudManager {
-  private final SolrCloudManager delegate;
+  protected final SolrCloudManager delegate;
   private ObjectCache objectCache = new ObjectCache();
   private TimeSource timeSource = TimeSource.NANO_TIME;
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/86b34fe0/solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/Operand.java
----------------------------------------------------------------------
diff --git a/solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/Operand.java b/solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/Operand.java
index 33decf0..11df06f 100644
--- a/solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/Operand.java
+++ b/solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/Operand.java
@@ -124,7 +124,7 @@ public enum Operand {
       Long actualL = ((Number) actual).longValue();
       return _delta(expectedL, actualL);
     } else {
-      return 0l;
+      return 0L;
     }
 
   }

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/86b34fe0/solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/Policy.java
----------------------------------------------------------------------
diff --git a/solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/Policy.java b/solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/Policy.java
index 9496b0f..cbdb2a7 100644
--- a/solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/Policy.java
+++ b/solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/Policy.java
@@ -77,6 +77,12 @@ public class Policy implements MapWriter {
       Arrays.asList(
           new Preference((Map<String, Object>) Utils.fromJSONString("{minimize : cores, precision:1}")),
           new Preference((Map<String, Object>) Utils.fromJSONString("{maximize : freedisk}"))));
+
+  /**
+   * These parameters are always fetched for all nodes regardless of whether they are used in preferences or not
+   */
+  private static final List<String> DEFAULT_PARAMS_OF_INTEREST = Arrays.asList(ImplicitSnitch.DISK, ImplicitSnitch.CORES);
+
   final Map<String, List<Clause>> policies;
   final List<Clause> clusterPolicy;
   final List<Preference> clusterPreferences;
@@ -87,6 +93,7 @@ public class Policy implements MapWriter {
     this(Collections.emptyMap());
   }
 
+  @SuppressWarnings("unchecked")
   public Policy(Map<String, Object> jsonMap) {
     int[] idx = new int[1];
     List<Preference> initialClusterPreferences = ((List<Map<String, Object>>) jsonMap.getOrDefault(CLUSTER_PREFERENCES, emptyList())).stream()
@@ -100,9 +107,7 @@ public class Policy implements MapWriter {
       initialClusterPreferences.addAll(DEFAULT_PREFERENCES);
     }
     this.clusterPreferences = Collections.unmodifiableList(initialClusterPreferences);
-    final SortedSet<String> paramsOfInterest = new TreeSet<>();
-    paramsOfInterest.add(ImplicitSnitch.DISK);//always get freedisk anyway.
-    paramsOfInterest.add(ImplicitSnitch.CORES);//always get cores anyway.
+    final SortedSet<String> paramsOfInterest = new TreeSet<>(DEFAULT_PARAMS_OF_INTEREST);
     clusterPreferences.forEach(preference -> paramsOfInterest.add(preference.name.toString()));
     List<String> newParams = new ArrayList<>(paramsOfInterest);
     clusterPolicy = ((List<Map<String, Object>>) jsonMap.getOrDefault(CLUSTER_POLICY, emptyList())).stream()
@@ -149,9 +154,7 @@ public class Policy implements MapWriter {
       paramsOfInterest.add(p.name.toString());
     });
     List<String> newParams = new ArrayList<>(paramsOfInterest);
-    policy.forEach(c -> {
-      c.addTags(newParams);
-    });
+    policy.forEach(c -> c.addTags(newParams));
     policies.values().forEach(clauses -> clauses.forEach(c -> c.addTags(newParams)));
     return newParams;
   }
@@ -212,8 +215,7 @@ public class Policy implements MapWriter {
 
     if (!getPolicies().equals(policy.getPolicies())) return false;
     if (!getClusterPolicy().equals(policy.getClusterPolicy())) return false;
-    if (!getClusterPreferences().equals(policy.getClusterPreferences())) return false;
-    return true;
+    return getClusterPreferences().equals(policy.getClusterPreferences());
   }
 
   /*This stores the logical state of the system, given a policy and
@@ -332,8 +334,7 @@ public class Policy implements MapWriter {
     @Override
     public void writeMap(EntryWriter ew) throws IOException {
       ew.put("znodeVersion", znodeVersion);
-      for (int i = 0; i < matrix.size(); i++) {
-        Row row = matrix.get(i);
+      for (Row row : matrix) {
         ew.put(row.node, row);
       }
     }
@@ -363,15 +364,16 @@ public class Policy implements MapWriter {
       ArrayList<Row> tmpMatrix = new ArrayList<>(matrix);
       for (Preference p : clusterPreferences) {
         try {
-          Collections.sort(tmpMatrix, (r1, r2) -> p.compare(r1, r2, false));
+          tmpMatrix.sort((r1, r2) -> p.compare(r1, r2, false));
         } catch (Exception e) {
           LOG.error("Exception! prefs = {}, matrix = {}", clusterPreferences, matrix);
           throw e;
         }
         p.setApproxVal(tmpMatrix);
       }
-      //approximate values are set now. Let's do recursive sorting
-      Collections.sort(matrix, (Row r1, Row r2) -> {
+      // the tmpMatrix was needed only to set the approximate values, now we sort the real matrix
+      // recursing through each preference
+      matrix.sort((Row r1, Row r2) -> {
         int result = clusterPreferences.get(0).compare(r1, r2, true);
         if (result == 0) result = clusterPreferences.get(0).compare(r1, r2, false);
         return result;
@@ -465,10 +467,10 @@ public class Policy implements MapWriter {
   private static final Map<CollectionAction, Supplier<Suggester>> ops = new HashMap<>();
 
   static {
-    ops.put(CollectionAction.ADDREPLICA, () -> new AddReplicaSuggester());
+    ops.put(CollectionAction.ADDREPLICA, AddReplicaSuggester::new);
     ops.put(CollectionAction.DELETEREPLICA, () -> new UnsupportedSuggester(CollectionAction.DELETEREPLICA));
-    ops.put(CollectionAction.MOVEREPLICA, () -> new MoveReplicaSuggester());
-    ops.put(CollectionAction.SPLITSHARD, () -> new SplitShardSuggester());
+    ops.put(CollectionAction.MOVEREPLICA, MoveReplicaSuggester::new);
+    ops.put(CollectionAction.SPLITSHARD, SplitShardSuggester::new);
     ops.put(CollectionAction.MERGESHARDS, () -> new UnsupportedSuggester(CollectionAction.MERGESHARDS));
   }
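
As a reminder of what these clauses and preferences are parsed from, a small sketch of constructing a Policy the way TestPolicy does (noggit-style JSON with single quotes; the clause values are illustrative and the config keys are the usual autoscaling ones):

import java.util.Map;

import org.apache.solr.client.solrj.cloud.autoscaling.Policy;
import org.apache.solr.common.util.Utils;

public class PolicySketch {
  @SuppressWarnings("unchecked")
  public static void main(String[] args) {
    String json = "{" +
        " 'cluster-preferences': [{minimize: cores, precision: 1}, {maximize: freedisk}]," +
        " 'cluster-policy': [{nodeRole: 'overseer', replica: 0, strict: false}]" +
        "}";
    // The constructor reads cluster-policy clauses and cluster-preferences,
    // falling back to the DEFAULT_PREFERENCES above when none are given.
    Policy policy = new Policy((Map<String, Object>) Utils.fromJSONString(json));
    System.out.println(Utils.toJSONString(policy));
  }
}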
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/86b34fe0/solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/ReplicaCount.java
----------------------------------------------------------------------
diff --git a/solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/ReplicaCount.java b/solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/ReplicaCount.java
index acb8c68..0fe53f4 100644
--- a/solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/ReplicaCount.java
+++ b/solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/ReplicaCount.java
@@ -22,6 +22,7 @@ import java.util.List;
 
 import org.apache.solr.common.MapWriter;
 import org.apache.solr.common.cloud.Replica;
+import org.apache.solr.common.util.Utils;
 
 class ReplicaCount extends Number implements MapWriter {
   long nrt, tlog, pull;
@@ -89,4 +90,9 @@ class ReplicaCount extends Number implements MapWriter {
       }
     }
   }
+
+  @Override
+  public String toString() {
+    return Utils.toJSONString(this);
+  }
 }

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/86b34fe0/solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/Suggestion.java
----------------------------------------------------------------------
diff --git a/solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/Suggestion.java b/solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/Suggestion.java
index 0c9013e..a4eed4b 100644
--- a/solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/Suggestion.java
+++ b/solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/Suggestion.java
@@ -134,8 +134,8 @@ public class Suggestion {
       @Override
       public int compareViolation(Violation v1, Violation v2) {
         return Long.compare(
-            v1.getViolatingReplicas().stream().mapToLong(v -> v.delta == null? 0 :v.delta).max().orElse(0l),
-            v2.getViolatingReplicas().stream().mapToLong(v3 -> v3.delta == null? 0 : v3.delta).max().orElse(0l));
+            v1.getViolatingReplicas().stream().mapToLong(v -> v.delta == null? 0 :v.delta).max().orElse(0L),
+            v2.getViolatingReplicas().stream().mapToLong(v3 -> v3.delta == null? 0 : v3.delta).max().orElse(0L));
       }
 
       @Override

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/86b34fe0/solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/Violation.java
----------------------------------------------------------------------
diff --git a/solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/Violation.java b/solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/Violation.java
index bb5aa6f..76bd7d5 100644
--- a/solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/Violation.java
+++ b/solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/Violation.java
@@ -81,7 +81,7 @@ public class Violation implements MapWriter {
     @Override
     public void writeMap(EntryWriter ew) throws IOException {
       ew.put("replica", replicaInfo);
-      ew.putIfNotNull("delta",delta );
+      ew.putIfNotNull("delta", delta);
     }
   }
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/86b34fe0/solr/solrj/src/java/org/apache/solr/client/solrj/impl/SolrClientNodeStateProvider.java
----------------------------------------------------------------------
diff --git a/solr/solrj/src/java/org/apache/solr/client/solrj/impl/SolrClientNodeStateProvider.java b/solr/solrj/src/java/org/apache/solr/client/solrj/impl/SolrClientNodeStateProvider.java
index 03809a2..5fe9058 100644
--- a/solr/solrj/src/java/org/apache/solr/client/solrj/impl/SolrClientNodeStateProvider.java
+++ b/solr/solrj/src/java/org/apache/solr/client/solrj/impl/SolrClientNodeStateProvider.java
@@ -135,7 +135,7 @@ public class SolrClientNodeStateProvider implements NodeStateProvider, MapWriter
           Pair<String, ReplicaInfo> p = keyVsReplica.get(k);
           Suggestion.ConditionType validator = Suggestion.getTagType(p.first());
           if (validator != null) o = validator.convertVal(o);
-          if (p != null) p.second().getVariables().put(p.first(), o);
+          if (p.second() != null) p.second().getVariables().put(p.first(), o);
         });
 
       }
@@ -145,7 +145,7 @@ public class SolrClientNodeStateProvider implements NodeStateProvider, MapWriter
 
   static void fetchMetrics(String solrNode, ClientSnitchCtx ctx, Map<String, Object> metricsKeyVsTag) {
     ModifiableSolrParams params = new ModifiableSolrParams();
-    params.add("key", metricsKeyVsTag.keySet().toArray(new String[metricsKeyVsTag.size()]));
+    params.add("key", metricsKeyVsTag.keySet().toArray(new String[0]));
     try {
       SimpleSolrResponse rsp = ctx.invoke(solrNode, CommonParams.METRICS_PATH, params);
       metricsKeyVsTag.forEach((key, tag) -> {

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/86b34fe0/solr/solrj/src/test/org/apache/solr/client/solrj/cloud/autoscaling/TestPolicy.java
----------------------------------------------------------------------
diff --git a/solr/solrj/src/test/org/apache/solr/client/solrj/cloud/autoscaling/TestPolicy.java b/solr/solrj/src/test/org/apache/solr/client/solrj/cloud/autoscaling/TestPolicy.java
index 2b97b71..a53b60c 100644
--- a/solr/solrj/src/test/org/apache/solr/client/solrj/cloud/autoscaling/TestPolicy.java
+++ b/solr/solrj/src/test/org/apache/solr/client/solrj/cloud/autoscaling/TestPolicy.java
@@ -155,7 +155,7 @@ public class TestPolicy extends SolrTestCaseJ4 {
   public void testValidate() {
     expectError("replica", -1, "must be greater than");
     expectError("replica", "hello", "not a valid number");
-    assertEquals(1l, Clause.validate("replica", "1", true));
+    assertEquals(1L, Clause.validate("replica", "1", true));
     assertEquals("c", Clause.validate("collection", "c", true));
     assertEquals("s", Clause.validate("shard", "s", true));
     assertEquals("overseer", Clause.validate("nodeRole", "overseer", true));
@@ -176,7 +176,7 @@ public class TestPolicy extends SolrTestCaseJ4 {
     expectError("ip_1", "-1", "must be greater than");
     expectError("ip_1", -1, "must be greater than");
 
-    assertEquals(1l, Clause.validate("ip_1", "1", true));
+    assertEquals(1L, Clause.validate("ip_1", "1", true));
 
     expectError("heapUsage", "-1", "must be greater than");
     expectError("heapUsage", -1, "must be greater than");
@@ -474,7 +474,7 @@ public class TestPolicy extends SolrTestCaseJ4 {
       public DistribStateManager getDistribStateManager() {
         return new DelegatingDistribStateManager(null) {
           @Override
-          public AutoScalingConfig getAutoScalingConfig() throws InterruptedException, IOException {
+          public AutoScalingConfig getAutoScalingConfig() {
             return asc;
           }
         };
@@ -868,7 +868,7 @@ public class TestPolicy extends SolrTestCaseJ4 {
     assertEquals("sysprop.rack", clauses.get(0).tag.getName());
   }
 
-  public void testRules() throws IOException {
+  public void testRules() {
     String rules = "{" +
         "cluster-policy:[" +
         "{nodeRole:'overseer',replica : 0 , strict:false}," +
@@ -1020,7 +1020,7 @@ public class TestPolicy extends SolrTestCaseJ4 {
   private DistribStateManager delegatingDistribStateManager(AutoScalingConfig config) {
     return new DelegatingDistribStateManager(null) {
       @Override
-      public AutoScalingConfig getAutoScalingConfig() throws InterruptedException, IOException {
+      public AutoScalingConfig getAutoScalingConfig() {
         return config;
       }
     };
@@ -1248,7 +1248,7 @@ public class TestPolicy extends SolrTestCaseJ4 {
       }
 
       @Override
-      public void close() throws IOException {
+      public void close() {
 
       }
 
@@ -1291,12 +1291,12 @@ public class TestPolicy extends SolrTestCaseJ4 {
       }
 
       @Override
-      public SolrResponse request(SolrRequest req) throws IOException {
+      public SolrResponse request(SolrRequest req) {
         return null;
       }
 
       @Override
-      public byte[] httpRequest(String url, SolrRequest.METHOD method, Map<String, String> headers, String payload, int timeout, boolean followRedirects) throws IOException {
+      public byte[] httpRequest(String url, SolrRequest.METHOD method, Map<String, String> headers, String payload, int timeout, boolean followRedirects) {
         return new byte[0];
       }
     };
@@ -1505,7 +1505,7 @@ public class TestPolicy extends SolrTestCaseJ4 {
     assertFalse(l.isEmpty());
 
     Map m = l.get(0).toMap(new LinkedHashMap<>());
-    assertEquals(1l, Utils.getObjectByPath(m, true, "violation/violation/delta"));
+    assertEquals(1L, Utils.getObjectByPath(m, true, "violation/violation/delta"));
     assertEquals("POST", Utils.getObjectByPath(m, true, "operation/method"));
     assertEquals("/c/mycoll1", Utils.getObjectByPath(m, true, "operation/path"));
     assertNotNull(Utils.getObjectByPath(m, false, "operation/command/move-replica"));
@@ -1579,7 +1579,7 @@ public class TestPolicy extends SolrTestCaseJ4 {
     AutoScalingConfig cfg = new AutoScalingConfig((Map<String, Object>) Utils.fromJSONString(autoScalingjson));
     List<Violation> violations = cfg.getPolicy().createSession(cloudManagerWithData(dataproviderdata)).getViolations();
     assertFalse(violations.isEmpty());
-    assertEquals(2l, violations.get(0).replicaCountDelta.longValue());
+    assertEquals(2L, violations.get(0).replicaCountDelta.longValue());
 
     List<Suggester.SuggestionInfo> l = PolicyHelper.getSuggestions(cfg,
         cloudManagerWithData(dataproviderdata));
@@ -1794,7 +1794,7 @@ public class TestPolicy extends SolrTestCaseJ4 {
           }
 
           @Override
-          public DocCollection getCollection(String name) throws IOException {
+          public DocCollection getCollection(String name) {
             return new DocCollection(name, Collections.emptyMap(), Collections.emptyMap(), DocRouter.DEFAULT) {
               @Override
               public Replica getLeader(String sliceName) {


[19/40] lucene-solr:jira/solr-11833: LUCENE-8258: A better fix to avoid out-of-world plane intersections for traversal planes.

Posted by ab...@apache.org.
LUCENE-8258: A better fix to avoid out-of-world plane intersections for traversal planes.


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/493bdec3
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/493bdec3
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/493bdec3

Branch: refs/heads/jira/solr-11833
Commit: 493bdec3a7e6b684efd72b68304f3a8c0ca7601e
Parents: a033759
Author: Karl Wright <Da...@gmail.com>
Authored: Fri Apr 20 03:30:09 2018 -0400
Committer: Karl Wright <Da...@gmail.com>
Committed: Fri Apr 20 03:30:09 2018 -0400

----------------------------------------------------------------------
 .../spatial3d/geom/GeoComplexPolygon.java       | 322 ++++++++++---------
 1 file changed, 176 insertions(+), 146 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/493bdec3/lucene/spatial3d/src/java/org/apache/lucene/spatial3d/geom/GeoComplexPolygon.java
----------------------------------------------------------------------
diff --git a/lucene/spatial3d/src/java/org/apache/lucene/spatial3d/geom/GeoComplexPolygon.java b/lucene/spatial3d/src/java/org/apache/lucene/spatial3d/geom/GeoComplexPolygon.java
index 744646a..2dbcd58 100644
--- a/lucene/spatial3d/src/java/org/apache/lucene/spatial3d/geom/GeoComplexPolygon.java
+++ b/lucene/spatial3d/src/java/org/apache/lucene/spatial3d/geom/GeoComplexPolygon.java
@@ -59,7 +59,7 @@ class GeoComplexPolygon extends GeoBasePolygon {
   private final GeoPoint[] edgePoints;
   private final Edge[] shapeStartEdges;
   
-  private final static double NEAR_EDGE_CUTOFF = -10.0 * Vector.MINIMUM_RESOLUTION;
+  private final static double NEAR_EDGE_CUTOFF = 0.0;
   
   /**
    * Create a complex polygon from multiple lists of points, and a single point which is known to be in or out of
@@ -282,171 +282,201 @@ class GeoComplexPolygon extends GeoBasePolygon {
       GeoPoint intersectionPoint = null;
 
       if (testPointFixedYAbovePlane != null && testPointFixedYBelowPlane != null && fixedXAbovePlane != null && fixedXBelowPlane != null) {
-        final GeoPoint[] XIntersectionsY = travelPlaneFixedX.findIntersections(planetModel, testPointFixedYPlane);
-        for (final GeoPoint p : XIntersectionsY) {
-          // Travel would be in YZ plane (fixed x) then in XZ (fixed y)
-          // We compute distance we need to travel as a placeholder for the number of intersections we might encounter.
-          //final double newDistance = p.arcDistance(testPoint) + p.arcDistance(thePoint);
-          final double tpDelta1 = testPoint.x - p.x;
-          final double tpDelta2 = testPoint.z - p.z;
-          final double cpDelta1 = y - p.y;
-          final double cpDelta2 = z - p.z;
-          final double newDistance = tpDelta1 * tpDelta1 + tpDelta2 * tpDelta2 + cpDelta1 * cpDelta1 + cpDelta2 * cpDelta2;
-          //final double newDistance = (testPoint.x - p.x) * (testPoint.x - p.x) + (testPoint.z - p.z) * (testPoint.z - p.z)  + (thePoint.y - p.y) * (thePoint.y - p.y) + (thePoint.z - p.z) * (thePoint.z - p.z);
-          //final double newDistance = Math.abs(testPoint.x - p.x) + Math.abs(thePoint.y - p.y);
-          if (newDistance < bestDistance) {
-            bestDistance = newDistance;
-            firstLegValue = testPoint.y;
-            secondLegValue = x;
-            firstLegPlane = testPointFixedYPlane;
-            firstLegAbovePlane = testPointFixedYAbovePlane;
-            firstLegBelowPlane = testPointFixedYBelowPlane;
-            secondLegPlane = travelPlaneFixedX;
-            secondLegAbovePlane = fixedXAbovePlane;
-            secondLegBelowPlane = fixedXBelowPlane;
-            firstLegTree = yTree;
-            secondLegTree = xTree;
-            intersectionPoint = p;
+        //check if planes intersect inside the world
+        final double checkAbove = 4.0 * (fixedXAbovePlane.D * fixedXAbovePlane.D * planetModel.inverseAbSquared + testPointFixedYAbovePlane.D * testPointFixedYAbovePlane.D * planetModel.inverseAbSquared - 1.0);
+        final double checkBelow = 4.0 * (fixedXBelowPlane.D * fixedXBelowPlane.D * planetModel.inverseAbSquared + testPointFixedYBelowPlane.D * testPointFixedYBelowPlane.D * planetModel.inverseAbSquared - 1.0);
+        if (checkAbove < Vector.MINIMUM_RESOLUTION_SQUARED && checkBelow < Vector.MINIMUM_RESOLUTION_SQUARED) {
+          final GeoPoint[] XIntersectionsY = travelPlaneFixedX.findIntersections(planetModel, testPointFixedYPlane);
+          for (final GeoPoint p : XIntersectionsY) {
+            // Travel would be in YZ plane (fixed x) then in XZ (fixed y)
+            // We compute distance we need to travel as a placeholder for the number of intersections we might encounter.
+            //final double newDistance = p.arcDistance(testPoint) + p.arcDistance(thePoint);
+            final double tpDelta1 = testPoint.x - p.x;
+            final double tpDelta2 = testPoint.z - p.z;
+            final double cpDelta1 = y - p.y;
+            final double cpDelta2 = z - p.z;
+            final double newDistance = tpDelta1 * tpDelta1 + tpDelta2 * tpDelta2 + cpDelta1 * cpDelta1 + cpDelta2 * cpDelta2;
+            //final double newDistance = (testPoint.x - p.x) * (testPoint.x - p.x) + (testPoint.z - p.z) * (testPoint.z - p.z)  + (thePoint.y - p.y) * (thePoint.y - p.y) + (thePoint.z - p.z) * (thePoint.z - p.z);
+            //final double newDistance = Math.abs(testPoint.x - p.x) + Math.abs(thePoint.y - p.y);
+            if (newDistance < bestDistance) {
+              bestDistance = newDistance;
+              firstLegValue = testPoint.y;
+              secondLegValue = x;
+              firstLegPlane = testPointFixedYPlane;
+              firstLegAbovePlane = testPointFixedYAbovePlane;
+              firstLegBelowPlane = testPointFixedYBelowPlane;
+              secondLegPlane = travelPlaneFixedX;
+              secondLegAbovePlane = fixedXAbovePlane;
+              secondLegBelowPlane = fixedXBelowPlane;
+              firstLegTree = yTree;
+              secondLegTree = xTree;
+              intersectionPoint = p;
+            }
           }
         }
       }
       if (testPointFixedZAbovePlane != null && testPointFixedZBelowPlane != null && fixedXAbovePlane != null && fixedXBelowPlane != null) {
-        final GeoPoint[] XIntersectionsZ = travelPlaneFixedX.findIntersections(planetModel, testPointFixedZPlane);
-        for (final GeoPoint p : XIntersectionsZ) {
-          // Travel would be in YZ plane (fixed x) then in XY (fixed z)
-          //final double newDistance = p.arcDistance(testPoint) + p.arcDistance(thePoint);
-          final double tpDelta1 = testPoint.x - p.x;
-          final double tpDelta2 = testPoint.y - p.y;
-          final double cpDelta1 = y - p.y;
-          final double cpDelta2 = z - p.z;
-          final double newDistance = tpDelta1 * tpDelta1 + tpDelta2 * tpDelta2 + cpDelta1 * cpDelta1 + cpDelta2 * cpDelta2;
-          //final double newDistance = (testPoint.x - p.x) * (testPoint.x - p.x) + (testPoint.y - p.y) * (testPoint.y - p.y)  + (thePoint.y - p.y) * (thePoint.y - p.y) + (thePoint.z - p.z) * (thePoint.z - p.z);
-          //final double newDistance = Math.abs(testPoint.x - p.x) + Math.abs(thePoint.z - p.z);
-          if (newDistance < bestDistance) {
-            bestDistance = newDistance;
-            firstLegValue = testPoint.z;
-            secondLegValue = x;
-            firstLegPlane = testPointFixedZPlane;
-            firstLegAbovePlane = testPointFixedZAbovePlane;
-            firstLegBelowPlane = testPointFixedZBelowPlane;
-            secondLegPlane = travelPlaneFixedX;
-            secondLegAbovePlane = fixedXAbovePlane;
-            secondLegBelowPlane = fixedXBelowPlane;
-            firstLegTree = zTree;
-            secondLegTree = xTree;
-            intersectionPoint = p;
+        //check if planes intersect inside the world
+        final double checkAbove = 4.0 * (fixedXAbovePlane.D * fixedXAbovePlane.D * planetModel.inverseAbSquared + testPointFixedZAbovePlane.D * testPointFixedZAbovePlane.D * planetModel.inverseCSquared - 1.0);
+        final double checkBelow = 4.0 * (fixedXBelowPlane.D * fixedXBelowPlane.D * planetModel.inverseAbSquared + testPointFixedZBelowPlane.D * testPointFixedZBelowPlane.D * planetModel.inverseCSquared - 1.0);
+        if (checkAbove < Vector.MINIMUM_RESOLUTION_SQUARED && checkBelow < Vector.MINIMUM_RESOLUTION_SQUARED) {
+          final GeoPoint[] XIntersectionsZ = travelPlaneFixedX.findIntersections(planetModel, testPointFixedZPlane);
+          for (final GeoPoint p : XIntersectionsZ) {
+            // Travel would be in YZ plane (fixed x) then in XY (fixed z)
+            //final double newDistance = p.arcDistance(testPoint) + p.arcDistance(thePoint);
+            final double tpDelta1 = testPoint.x - p.x;
+            final double tpDelta2 = testPoint.y - p.y;
+            final double cpDelta1 = y - p.y;
+            final double cpDelta2 = z - p.z;
+            final double newDistance = tpDelta1 * tpDelta1 + tpDelta2 * tpDelta2 + cpDelta1 * cpDelta1 + cpDelta2 * cpDelta2;
+            //final double newDistance = (testPoint.x - p.x) * (testPoint.x - p.x) + (testPoint.y - p.y) * (testPoint.y - p.y)  + (thePoint.y - p.y) * (thePoint.y - p.y) + (thePoint.z - p.z) * (thePoint.z - p.z);
+            //final double newDistance = Math.abs(testPoint.x - p.x) + Math.abs(thePoint.z - p.z);
+            if (newDistance < bestDistance) {
+              bestDistance = newDistance;
+              firstLegValue = testPoint.z;
+              secondLegValue = x;
+              firstLegPlane = testPointFixedZPlane;
+              firstLegAbovePlane = testPointFixedZAbovePlane;
+              firstLegBelowPlane = testPointFixedZBelowPlane;
+              secondLegPlane = travelPlaneFixedX;
+              secondLegAbovePlane = fixedXAbovePlane;
+              secondLegBelowPlane = fixedXBelowPlane;
+              firstLegTree = zTree;
+              secondLegTree = xTree;
+              intersectionPoint = p;
+            }
           }
         }
       }
       if (testPointFixedXAbovePlane != null && testPointFixedXBelowPlane != null && fixedYAbovePlane != null && fixedYBelowPlane != null) {
-        final GeoPoint[] YIntersectionsX = travelPlaneFixedY.findIntersections(planetModel, testPointFixedXPlane);
-        for (final GeoPoint p : YIntersectionsX) {
-          // Travel would be in XZ plane (fixed y) then in YZ (fixed x)
-          //final double newDistance = p.arcDistance(testPoint) + p.arcDistance(thePoint);
-          final double tpDelta1 = testPoint.y - p.y;
-          final double tpDelta2 = testPoint.z - p.z;
-          final double cpDelta1 = x - p.x;
-          final double cpDelta2 = z - p.z;
-          final double newDistance = tpDelta1 * tpDelta1 + tpDelta2 * tpDelta2 + cpDelta1 * cpDelta1 + cpDelta2 * cpDelta2;
-          //final double newDistance = (testPoint.y - p.y) * (testPoint.y - p.y) + (testPoint.z - p.z) * (testPoint.z - p.z)  + (thePoint.x - p.x) * (thePoint.x - p.x) + (thePoint.z - p.z) * (thePoint.z - p.z);
-          //final double newDistance = Math.abs(testPoint.y - p.y) + Math.abs(thePoint.x - p.x);
-          if (newDistance < bestDistance) {
-            bestDistance = newDistance;
-            firstLegValue = testPoint.x;
-            secondLegValue = y;
-            firstLegPlane = testPointFixedXPlane;
-            firstLegAbovePlane = testPointFixedXAbovePlane;
-            firstLegBelowPlane = testPointFixedXBelowPlane;
-            secondLegPlane = travelPlaneFixedY;
-            secondLegAbovePlane = fixedYAbovePlane;
-            secondLegBelowPlane = fixedYBelowPlane;
-            firstLegTree = xTree;
-            secondLegTree = yTree;
-            intersectionPoint = p;
+        //check if planes intersect inside the world
+        final double checkAbove = 4.0 * (testPointFixedXAbovePlane.D * testPointFixedXAbovePlane.D * planetModel.inverseAbSquared + fixedYAbovePlane.D * fixedYAbovePlane.D * planetModel.inverseAbSquared - 1.0);
+        final double checkBelow = 4.0 * (testPointFixedXBelowPlane.D * testPointFixedXBelowPlane.D * planetModel.inverseAbSquared + fixedYBelowPlane.D * fixedYBelowPlane.D * planetModel.inverseAbSquared - 1.0);
+        if (checkAbove < Vector.MINIMUM_RESOLUTION_SQUARED && checkBelow < Vector.MINIMUM_RESOLUTION_SQUARED) {
+          final GeoPoint[] YIntersectionsX = travelPlaneFixedY.findIntersections(planetModel, testPointFixedXPlane);
+          for (final GeoPoint p : YIntersectionsX) {
+            // Travel would be in XZ plane (fixed y) then in YZ (fixed x)
+            //final double newDistance = p.arcDistance(testPoint) + p.arcDistance(thePoint);
+            final double tpDelta1 = testPoint.y - p.y;
+            final double tpDelta2 = testPoint.z - p.z;
+            final double cpDelta1 = x - p.x;
+            final double cpDelta2 = z - p.z;
+            final double newDistance = tpDelta1 * tpDelta1 + tpDelta2 * tpDelta2 + cpDelta1 * cpDelta1 + cpDelta2 * cpDelta2;
+            //final double newDistance = (testPoint.y - p.y) * (testPoint.y - p.y) + (testPoint.z - p.z) * (testPoint.z - p.z)  + (thePoint.x - p.x) * (thePoint.x - p.x) + (thePoint.z - p.z) * (thePoint.z - p.z);
+            //final double newDistance = Math.abs(testPoint.y - p.y) + Math.abs(thePoint.x - p.x);
+            if (newDistance < bestDistance) {
+              bestDistance = newDistance;
+              firstLegValue = testPoint.x;
+              secondLegValue = y;
+              firstLegPlane = testPointFixedXPlane;
+              firstLegAbovePlane = testPointFixedXAbovePlane;
+              firstLegBelowPlane = testPointFixedXBelowPlane;
+              secondLegPlane = travelPlaneFixedY;
+              secondLegAbovePlane = fixedYAbovePlane;
+              secondLegBelowPlane = fixedYBelowPlane;
+              firstLegTree = xTree;
+              secondLegTree = yTree;
+              intersectionPoint = p;
+            }
           }
         }
       }
       if (testPointFixedZAbovePlane != null && testPointFixedZBelowPlane != null && fixedYAbovePlane != null && fixedYBelowPlane != null) {
-        final GeoPoint[] YIntersectionsZ = travelPlaneFixedY.findIntersections(planetModel, testPointFixedZPlane);
-        for (final GeoPoint p : YIntersectionsZ) {
-          // Travel would be in XZ plane (fixed y) then in XY (fixed z)
-          //final double newDistance = p.arcDistance(testPoint) + p.arcDistance(thePoint);
-          final double tpDelta1 = testPoint.x - p.x;
-          final double tpDelta2 = testPoint.y - p.y;
-          final double cpDelta1 = x - p.x;
-          final double cpDelta2 = z - p.z;
-          final double newDistance = tpDelta1 * tpDelta1 + tpDelta2 * tpDelta2 + cpDelta1 * cpDelta1 + cpDelta2 * cpDelta2;
-          //final double newDistance = (testPoint.x - p.x) * (testPoint.x - p.x) + (testPoint.y - p.y) * (testPoint.y - p.y)  + (thePoint.x - p.x) * (thePoint.x - p.x) + (thePoint.z - p.z) * (thePoint.z - p.z);
-          //final double newDistance = Math.abs(testPoint.y - p.y) + Math.abs(thePoint.z - p.z);
-          if (newDistance < bestDistance) {
-            bestDistance = newDistance;
-            firstLegValue = testPoint.z;
-            secondLegValue = y;
-            firstLegPlane = testPointFixedZPlane;
-            firstLegAbovePlane = testPointFixedZAbovePlane;
-            firstLegBelowPlane = testPointFixedZBelowPlane;
-            secondLegPlane = travelPlaneFixedY;
-            secondLegAbovePlane = fixedYAbovePlane;
-            secondLegBelowPlane = fixedYBelowPlane;
-            firstLegTree = zTree;
-            secondLegTree = yTree;
-            intersectionPoint = p;
+        //check if planes intersect inside the world
+        final double checkAbove = 4.0 * (testPointFixedZAbovePlane.D * testPointFixedZAbovePlane.D * planetModel.inverseCSquared + fixedYAbovePlane.D * fixedYAbovePlane.D * planetModel.inverseAbSquared - 1.0);
+        final double checkBelow = 4.0 * (testPointFixedZBelowPlane.D * testPointFixedZBelowPlane.D * planetModel.inverseCSquared + fixedYBelowPlane.D * fixedYBelowPlane.D * planetModel.inverseAbSquared - 1.0);
+        if (checkAbove < Vector.MINIMUM_RESOLUTION_SQUARED && checkBelow < Vector.MINIMUM_RESOLUTION_SQUARED) {
+          final GeoPoint[] YIntersectionsZ = travelPlaneFixedY.findIntersections(planetModel, testPointFixedZPlane);
+          for (final GeoPoint p : YIntersectionsZ) {
+            // Travel would be in XZ plane (fixed y) then in XY (fixed z)
+            //final double newDistance = p.arcDistance(testPoint) + p.arcDistance(thePoint);
+            final double tpDelta1 = testPoint.x - p.x;
+            final double tpDelta2 = testPoint.y - p.y;
+            final double cpDelta1 = x - p.x;
+            final double cpDelta2 = z - p.z;
+            final double newDistance = tpDelta1 * tpDelta1 + tpDelta2 * tpDelta2 + cpDelta1 * cpDelta1 + cpDelta2 * cpDelta2;
+            //final double newDistance = (testPoint.x - p.x) * (testPoint.x - p.x) + (testPoint.y - p.y) * (testPoint.y - p.y)  + (thePoint.x - p.x) * (thePoint.x - p.x) + (thePoint.z - p.z) * (thePoint.z - p.z);
+            //final double newDistance = Math.abs(testPoint.y - p.y) + Math.abs(thePoint.z - p.z);
+            if (newDistance < bestDistance) {
+              bestDistance = newDistance;
+              firstLegValue = testPoint.z;
+              secondLegValue = y;
+              firstLegPlane = testPointFixedZPlane;
+              firstLegAbovePlane = testPointFixedZAbovePlane;
+              firstLegBelowPlane = testPointFixedZBelowPlane;
+              secondLegPlane = travelPlaneFixedY;
+              secondLegAbovePlane = fixedYAbovePlane;
+              secondLegBelowPlane = fixedYBelowPlane;
+              firstLegTree = zTree;
+              secondLegTree = yTree;
+              intersectionPoint = p;
+            }
           }
         }
       }
       if (testPointFixedXAbovePlane != null && testPointFixedXBelowPlane != null && fixedZAbovePlane != null && fixedZBelowPlane != null) {
-        final GeoPoint[] ZIntersectionsX = travelPlaneFixedZ.findIntersections(planetModel, testPointFixedXPlane);
-        for (final GeoPoint p : ZIntersectionsX) {
-          // Travel would be in XY plane (fixed z) then in YZ (fixed x)
-          //final double newDistance = p.arcDistance(testPoint) + p.arcDistance(thePoint);
-          final double tpDelta1 = testPoint.y - p.y;
-          final double tpDelta2 = testPoint.z - p.z;
-          final double cpDelta1 = y - p.y;
-          final double cpDelta2 = x - p.x;
-          final double newDistance = tpDelta1 * tpDelta1 + tpDelta2 * tpDelta2 + cpDelta1 * cpDelta1 + cpDelta2 * cpDelta2;
-          //final double newDistance = (testPoint.y - p.y) * (testPoint.y - p.y) + (testPoint.z - p.z) * (testPoint.z - p.z)  + (thePoint.y - p.y) * (thePoint.y - p.y) + (thePoint.x - p.x) * (thePoint.x - p.x);
-          //final double newDistance = Math.abs(testPoint.z - p.z) + Math.abs(thePoint.x - p.x);
-          if (newDistance < bestDistance) {
-            bestDistance = newDistance;
-            firstLegValue = testPoint.x;
-            secondLegValue = z;
-            firstLegPlane = testPointFixedXPlane;
-            firstLegAbovePlane = testPointFixedXAbovePlane;
-            firstLegBelowPlane = testPointFixedXBelowPlane;
-            secondLegPlane = travelPlaneFixedZ;
-            secondLegAbovePlane = fixedZAbovePlane;
-            secondLegBelowPlane = fixedZBelowPlane;
-            firstLegTree = xTree;
-            secondLegTree = zTree;
-            intersectionPoint = p;
+        //check if planes intersect inside the world
+        final double checkAbove = 4.0 * (testPointFixedXAbovePlane.D * testPointFixedXAbovePlane.D * planetModel.inverseAbSquared + fixedZAbovePlane.D * fixedZAbovePlane.D * planetModel.inverseCSquared - 1.0);
+        final double checkBelow = 4.0 * (testPointFixedXBelowPlane.D * testPointFixedXBelowPlane.D * planetModel.inverseAbSquared + fixedZBelowPlane.D * fixedZBelowPlane.D * planetModel.inverseCSquared - 1.0);
+        if (checkAbove < Vector.MINIMUM_RESOLUTION_SQUARED && checkBelow < Vector.MINIMUM_RESOLUTION_SQUARED) {
+          final GeoPoint[] ZIntersectionsX = travelPlaneFixedZ.findIntersections(planetModel, testPointFixedXPlane);
+          for (final GeoPoint p : ZIntersectionsX) {
+            // Travel would be in XY plane (fixed z) then in YZ (fixed x)
+            //final double newDistance = p.arcDistance(testPoint) + p.arcDistance(thePoint);
+            final double tpDelta1 = testPoint.y - p.y;
+            final double tpDelta2 = testPoint.z - p.z;
+            final double cpDelta1 = y - p.y;
+            final double cpDelta2 = x - p.x;
+            final double newDistance = tpDelta1 * tpDelta1 + tpDelta2 * tpDelta2 + cpDelta1 * cpDelta1 + cpDelta2 * cpDelta2;
+            //final double newDistance = (testPoint.y - p.y) * (testPoint.y - p.y) + (testPoint.z - p.z) * (testPoint.z - p.z)  + (thePoint.y - p.y) * (thePoint.y - p.y) + (thePoint.x - p.x) * (thePoint.x - p.x);
+            //final double newDistance = Math.abs(testPoint.z - p.z) + Math.abs(thePoint.x - p.x);
+            if (newDistance < bestDistance) {
+              bestDistance = newDistance;
+              firstLegValue = testPoint.x;
+              secondLegValue = z;
+              firstLegPlane = testPointFixedXPlane;
+              firstLegAbovePlane = testPointFixedXAbovePlane;
+              firstLegBelowPlane = testPointFixedXBelowPlane;
+              secondLegPlane = travelPlaneFixedZ;
+              secondLegAbovePlane = fixedZAbovePlane;
+              secondLegBelowPlane = fixedZBelowPlane;
+              firstLegTree = xTree;
+              secondLegTree = zTree;
+              intersectionPoint = p;
+            }
           }
         }
       }
       if (testPointFixedYAbovePlane != null && testPointFixedYBelowPlane != null && fixedZAbovePlane != null && fixedZBelowPlane != null) {
-        final GeoPoint[] ZIntersectionsY = travelPlaneFixedZ.findIntersections(planetModel, testPointFixedYPlane);
-        for (final GeoPoint p : ZIntersectionsY) {
-          // Travel would be in XY plane (fixed z) then in XZ (fixed y)
-          //final double newDistance = p.arcDistance(testPoint) + p.arcDistance(thePoint);
-          final double tpDelta1 = testPoint.x - p.x;
-          final double tpDelta2 = testPoint.z - p.z;
-          final double cpDelta1 = y - p.y;
-          final double cpDelta2 = x - p.x;
-          final double newDistance = tpDelta1 * tpDelta1 + tpDelta2 * tpDelta2 + cpDelta1 * cpDelta1 + cpDelta2 * cpDelta2;
-          //final double newDistance = (testPoint.x - p.x) * (testPoint.x - p.x) + (testPoint.z - p.z) * (testPoint.z - p.z)  + (thePoint.y - p.y) * (thePoint.y - p.y) + (thePoint.x - p.x) * (thePoint.x - p.x);
-          //final double newDistance = Math.abs(testPoint.z - p.z) + Math.abs(thePoint.y - p.y);
-          if (newDistance < bestDistance) {
-            bestDistance = newDistance;
-            firstLegValue = testPoint.y;
-            secondLegValue = z;
-            firstLegPlane = testPointFixedYPlane;
-            firstLegAbovePlane = testPointFixedYAbovePlane;
-            firstLegBelowPlane = testPointFixedYBelowPlane;
-            secondLegPlane = travelPlaneFixedZ;
-            secondLegAbovePlane = fixedZAbovePlane;
-            secondLegBelowPlane = fixedZBelowPlane;
-            firstLegTree = yTree;
-            secondLegTree = zTree;
-            intersectionPoint = p;
+        //check if planes intersect inside the world
+        final double checkAbove = 4.0 * (testPointFixedYAbovePlane.D * testPointFixedYAbovePlane.D * planetModel.inverseAbSquared + fixedZAbovePlane.D * fixedZAbovePlane.D * planetModel.inverseCSquared - 1.0);
+        final double checkBelow = 4.0 * (testPointFixedYBelowPlane.D * testPointFixedYBelowPlane.D * planetModel.inverseAbSquared + fixedZBelowPlane.D * fixedZBelowPlane.D * planetModel.inverseCSquared - 1.0);
+        if (checkAbove < Vector.MINIMUM_RESOLUTION_SQUARED && checkBelow < Vector.MINIMUM_RESOLUTION_SQUARED) {
+          final GeoPoint[] ZIntersectionsY = travelPlaneFixedZ.findIntersections(planetModel, testPointFixedYPlane);
+          for (final GeoPoint p : ZIntersectionsY) {
+            // Travel would be in XY plane (fixed z) then in XZ (fixed y)
+            //final double newDistance = p.arcDistance(testPoint) + p.arcDistance(thePoint);
+            final double tpDelta1 = testPoint.x - p.x;
+            final double tpDelta2 = testPoint.z - p.z;
+            final double cpDelta1 = y - p.y;
+            final double cpDelta2 = x - p.x;
+            final double newDistance = tpDelta1 * tpDelta1 + tpDelta2 * tpDelta2 + cpDelta1 * cpDelta1 + cpDelta2 * cpDelta2;
+            //final double newDistance = (testPoint.x - p.x) * (testPoint.x - p.x) + (testPoint.z - p.z) * (testPoint.z - p.z)  + (thePoint.y - p.y) * (thePoint.y - p.y) + (thePoint.x - p.x) * (thePoint.x - p.x);
+            //final double newDistance = Math.abs(testPoint.z - p.z) + Math.abs(thePoint.y - p.y);
+            if (newDistance < bestDistance) {
+              bestDistance = newDistance;
+              firstLegValue = testPoint.y;
+              secondLegValue = z;
+              firstLegPlane = testPointFixedYPlane;
+              firstLegAbovePlane = testPointFixedYAbovePlane;
+              firstLegBelowPlane = testPointFixedYBelowPlane;
+              secondLegPlane = travelPlaneFixedZ;
+              secondLegAbovePlane = fixedZAbovePlane;
+              secondLegBelowPlane = fixedZBelowPlane;
+              firstLegTree = yTree;
+              secondLegTree = zTree;
+              intersectionPoint = p;
+            }
           }
         }
       }
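
One way to read the guards added throughout this method (an interpretation for the reader, not wording from the commit): each travel or test-point plane fixes one coordinate at an offset D, and two such planes can only intersect inside the world when the offsets jointly fit within the ellipsoid cross-section, roughly

    \frac{D_1^2}{s_1^2} + \frac{D_2^2}{s_2^2} \le 1 + \epsilon

where s_i is the planet semi-axis for that plane's fixed coordinate (planetModel.inverseAbSquared supplies 1/s^2 for x or y, planetModel.inverseCSquared for z) and \epsilon stands in for the tolerance. The code scales the left-hand side minus one by 4.0 and requires the result to stay below Vector.MINIMUM_RESOLUTION_SQUARED for both the above and below planes before searching for intersection points, so plane pairs that could only meet outside the world are skipped.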


[31/40] lucene-solr:jira/solr-11833: SOLR-9304: Fix Solr's HTTP handling to respect '-Dsolr.ssl.checkPeerName=false' aka SOLR_SSL_CHECK_PEER_NAME

Posted by ab...@apache.org.
SOLR-9304: Fix Solr's HTTP handling to respect '-Dsolr.ssl.checkPeerName=false' aka SOLR_SSL_CHECK_PEER_NAME


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/4e0e8e97
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/4e0e8e97
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/4e0e8e97

Branch: refs/heads/jira/solr-11833
Commit: 4e0e8e979b66abdf0778fc0ea86ae5ef5d8f2f91
Parents: 8f296d0
Author: Chris Hostetter <ho...@apache.org>
Authored: Sun Apr 22 13:38:37 2018 -0700
Committer: Chris Hostetter <ho...@apache.org>
Committed: Sun Apr 22 13:38:37 2018 -0700

----------------------------------------------------------------------
 solr/CHANGES.txt                                |   5 +-
 solr/bin/solr                                   |   4 +
 solr/bin/solr.cmd                               |   3 +
 solr/bin/solr.in.cmd                            |  12 ++-
 solr/bin/solr.in.sh                             |  16 ++-
 .../solr/cloud/TestMiniSolrCloudClusterSSL.java |  59 ++++++++++
 solr/solr-ref-guide/src/enabling-ssl.adoc       |  21 +++-
 .../solr/client/solrj/impl/HttpClientUtil.java  |  59 +++++++++-
 .../client/solrj/impl/HttpClientUtilTest.java   | 108 +++++++++++++++++++
 .../org/apache/solr/util/SSLTestConfig.java     |  89 ++++++++-------
 ...estConfig.hostname-and-ip-missmatch.keystore | Bin 0 -> 2246 bytes
 .../resources/SSLTestConfig.testing.keystore    | Bin 2208 -> 2207 bytes
 .../src/resources/create-keystores.sh           |  37 +++++++
 13 files changed, 362 insertions(+), 51 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/4e0e8e97/solr/CHANGES.txt
----------------------------------------------------------------------
diff --git a/solr/CHANGES.txt b/solr/CHANGES.txt
index efa6000..a9e63f3 100644
--- a/solr/CHANGES.txt
+++ b/solr/CHANGES.txt
@@ -176,7 +176,10 @@ Bug Fixes
 * SOLR-6286: TestReplicationHandler.doTestReplicateAfterCoreReload(): stop checking for identical
   commits before/after master core reload; and make non-nightly mode test 10 docs instead of 0.
   (shalin, hossman, Mark Miller, Steve Rowe)
- 
+
+* SOLR-9304: Fix Solr's HTTP handling to respect '-Dsolr.ssl.checkPeerName=false' aka SOLR_SSL_CHECK_PEER_NAME
+  (Shawn Heisey, Carlton Findley, Robby Pond, hossman)
+
 Optimizations
 ----------------------
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/4e0e8e97/solr/bin/solr
----------------------------------------------------------------------
diff --git a/solr/bin/solr b/solr/bin/solr
index 3cda782..68d1140 100755
--- a/solr/bin/solr
+++ b/solr/bin/solr
@@ -224,6 +224,10 @@ if [ "$SOLR_SSL_ENABLED" == "true" ]; then
     fi
   fi
 
+  if [ -n "$SOLR_SSL_CHECK_PEER_NAME" ]; then
+    SOLR_SSL_OPTS+=" -Dsolr.ssl.checkPeerName=$SOLR_SSL_CHECK_PEER_NAME"
+  fi
+
   if [ -n "$SOLR_SSL_CLIENT_TRUST_STORE" ]; then
     SOLR_SSL_OPTS+=" -Djavax.net.ssl.trustStore=$SOLR_SSL_CLIENT_TRUST_STORE"
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/4e0e8e97/solr/bin/solr.cmd
----------------------------------------------------------------------
diff --git a/solr/bin/solr.cmd b/solr/bin/solr.cmd
index e9f6c45..7235a4c 100644
--- a/solr/bin/solr.cmd
+++ b/solr/bin/solr.cmd
@@ -111,6 +111,9 @@ IF "%SOLR_SSL_ENABLED%"=="true" (
      set "SOLR_SSL_OPTS=!SOLR_SSL_OPTS! -Djavax.net.ssl.trustStoreType=%SOLR_SSL_TRUST_STORE_TYPE%"
     )
   )
+  IF DEFINED SOLR_SSL_CHECK_PEER_NAME (
+   set "SOLR_SSL_OPTS=!SOLR_SSL_OPTS! -Dsolr.ssl.checkPeerName=%SOLR_SSL_CHECK_PEER_NAME%"
+  )
 ) ELSE (
   set SOLR_SSL_OPTS=
 )

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/4e0e8e97/solr/bin/solr.in.cmd
----------------------------------------------------------------------
diff --git a/solr/bin/solr.in.cmd b/solr/bin/solr.in.cmd
index a1771ad..86ad708 100644
--- a/solr/bin/solr.in.cmd
+++ b/solr/bin/solr.in.cmd
@@ -103,20 +103,26 @@ REM Uncomment to set SSL-related system properties
 REM Be sure to update the paths to the correct keystore for your environment
 REM set SOLR_SSL_KEY_STORE=etc/solr-ssl.keystore.jks
 REM set SOLR_SSL_KEY_STORE_PASSWORD=secret
-REM set SOLR_SSL_KEY_STORE_TYPE=JKS
 REM set SOLR_SSL_TRUST_STORE=etc/solr-ssl.keystore.jks
 REM set SOLR_SSL_TRUST_STORE_PASSWORD=secret
-REM set SOLR_SSL_TRUST_STORE_TYPE=JKS
+REM Require clients to authenticate
 REM set SOLR_SSL_NEED_CLIENT_AUTH=false
+REM Enable clients to authenticate (but not require)
 REM set SOLR_SSL_WANT_CLIENT_AUTH=false
+REM SSL Certificates contain host/ip "peer name" information that is validated by default. Setting
+REM this to false can be useful to disable these checks when re-using a certificate on many hosts
+REM set SOLR_SSL_CHECK_PEER_NAME=true
+REM Override Key/Trust Store types if necessary
+REM set SOLR_SSL_KEY_STORE_TYPE=JKS
+REM set SOLR_SSL_TRUST_STORE_TYPE=JKS
 
 REM Uncomment if you want to override previously defined SSL values for HTTP client
 REM otherwise keep them commented and the above values will automatically be set for HTTP clients
 REM set SOLR_SSL_CLIENT_KEY_STORE=
 REM set SOLR_SSL_CLIENT_KEY_STORE_PASSWORD=
-REM set SOLR_SSL_CLIENT_KEY_STORE_TYPE=
 REM set SOLR_SSL_CLIENT_TRUST_STORE=
 REM set SOLR_SSL_CLIENT_TRUST_STORE_PASSWORD=
+REM set SOLR_SSL_CLIENT_KEY_STORE_TYPE=
 REM set SOLR_SSL_CLIENT_TRUST_STORE_TYPE=
 
 REM Sets path of Hadoop credential provider (hadoop.security.credential.provider.path property) and

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/4e0e8e97/solr/bin/solr.in.sh
----------------------------------------------------------------------
diff --git a/solr/bin/solr.in.sh b/solr/bin/solr.in.sh
index 7cf6a84..9b15bea 100644
--- a/solr/bin/solr.in.sh
+++ b/solr/bin/solr.in.sh
@@ -118,22 +118,28 @@
 #SOLR_SSL_ENABLED=true
 # Uncomment to set SSL-related system properties
 # Be sure to update the paths to the correct keystore for your environment
-#SOLR_SSL_KEY_STORE=/home/shalin/work/oss/shalin-lusolr/solr/server/etc/solr-ssl.keystore.jks
+#SOLR_SSL_KEY_STORE=etc/solr-ssl.keystore.jks
 #SOLR_SSL_KEY_STORE_PASSWORD=secret
-#SOLR_SSL_KEY_STORE_TYPE=JKS
-#SOLR_SSL_TRUST_STORE=/home/shalin/work/oss/shalin-lusolr/solr/server/etc/solr-ssl.keystore.jks
+#SOLR_SSL_TRUST_STORE=etc/solr-ssl.keystore.jks
 #SOLR_SSL_TRUST_STORE_PASSWORD=secret
-#SOLR_SSL_TRUST_STORE_TYPE=JKS
+# Require clients to authenticate
 #SOLR_SSL_NEED_CLIENT_AUTH=false
+# Enable clients to authenticate (but not require)
 #SOLR_SSL_WANT_CLIENT_AUTH=false
+# SSL Certificates contain host/ip "peer name" information that is validated by default. Setting
+# this to false can be useful to disable these checks when re-using a certificate on many hosts
+#SOLR_SSL_CHECK_PEER_NAME=true
+# Override Key/Trust Store types if necessary
+#SOLR_SSL_KEY_STORE_TYPE=JKS
+#SOLR_SSL_TRUST_STORE_TYPE=JKS
 
 # Uncomment if you want to override previously defined SSL values for HTTP client
 # otherwise keep them commented and the above values will automatically be set for HTTP clients
 #SOLR_SSL_CLIENT_KEY_STORE=
 #SOLR_SSL_CLIENT_KEY_STORE_PASSWORD=
-#SOLR_SSL_CLIENT_KEY_STORE_TYPE=
 #SOLR_SSL_CLIENT_TRUST_STORE=
 #SOLR_SSL_CLIENT_TRUST_STORE_PASSWORD=
+#SOLR_SSL_CLIENT_KEY_STORE_TYPE=
 #SOLR_SSL_CLIENT_TRUST_STORE_TYPE=
 
 # Sets path of Hadoop credential provider (hadoop.security.credential.provider.path property) and

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/4e0e8e97/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudClusterSSL.java
----------------------------------------------------------------------
diff --git a/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudClusterSSL.java b/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudClusterSSL.java
index 98f952a..7a6606a 100644
--- a/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudClusterSSL.java
+++ b/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudClusterSSL.java
@@ -17,6 +17,7 @@
 package org.apache.solr.cloud;
 
 import javax.net.ssl.SSLContext;
+import javax.net.ssl.SSLException;
 import java.io.IOException;
 import java.lang.invoke.MethodHandles;
 import java.util.List;
@@ -32,6 +33,8 @@ import org.apache.http.impl.client.CloseableHttpClient;
 import org.apache.http.impl.client.HttpClientBuilder;
 import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
 import org.apache.lucene.util.Constants;
+import org.apache.lucene.util.TestRuleRestoreSystemProperties;
+
 import org.apache.solr.SolrTestCaseJ4;
 import org.apache.solr.client.solrj.SolrServerException;
 import org.apache.solr.client.solrj.embedded.JettyConfig;
@@ -46,6 +49,9 @@ import org.apache.solr.common.params.CoreAdminParams.CoreAdminAction;
 import org.apache.solr.util.SSLTestConfig;
 import org.junit.After;
 import org.junit.Before;
+import org.junit.Rule;
+import org.junit.rules.TestRule;
+
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -72,6 +78,10 @@ public class TestMiniSolrCloudClusterSSL extends SolrTestCaseJ4 {
   public static final int NUM_SERVERS = 3;
   public static final String CONF_NAME = MethodHandles.lookup().lookupClass().getName();
   
+  @Rule
+  public TestRule syspropRestore = new TestRuleRestoreSystemProperties
+    (HttpClientUtil.SYS_PROP_CHECK_PEER_NAME);
+  
   @Before
   public void before() {
     // undo the randomization of our super class
@@ -120,6 +130,13 @@ public class TestMiniSolrCloudClusterSSL extends SolrTestCaseJ4 {
     checkClusterWithNodeReplacement(sslConfig);
   }
   
+  public void testSslWithCheckPeerName() throws Exception {
+    final SSLTestConfig sslConfig = new SSLTestConfig(true, false, true);
+    HttpClientUtil.setSchemaRegistryProvider(sslConfig.buildClientSchemaRegistryProvider());
+    System.setProperty(ZkStateReader.URL_SCHEME, "https");
+    checkClusterWithNodeReplacement(sslConfig);
+  }
+  
   /**
    * Constructs a cluster with the specified sslConfigs, runs {@link #checkClusterWithCollectionCreations}, 
    * then verifies that if we modify the default SSLContext (mimicing <code>javax.net.ssl.*</code> 
@@ -142,6 +159,8 @@ public class TestMiniSolrCloudClusterSSL extends SolrTestCaseJ4 {
       // our test config doesn't use SSL, and reset HttpClientUtil to it's defaults so it picks up our
       // SSLContext that way.
       SSLContext.setDefault( sslConfig.isSSLMode() ? sslConfig.buildClientSSLContext() : DEFAULT_SSL_CONTEXT);
+      System.setProperty(HttpClientUtil.SYS_PROP_CHECK_PEER_NAME,
+                         Boolean.toString(sslConfig.getCheckPeerName()));
       HttpClientUtil.resetHttpClientBuilder();
       
       // recheck that we can communicate with all the jetty instances in our cluster
@@ -151,6 +170,46 @@ public class TestMiniSolrCloudClusterSSL extends SolrTestCaseJ4 {
     }
   }
 
+  /** Sanity check that our test scaffolding for validating SSL peer names fails when it should */
+  public void testSslWithInvalidPeerName() throws Exception {
+    // NOTE: first initialize the cluster w/o peer name checks, which means our server will use
+    // certs with a bogus hostname/ip and clients shouldn't care...
+    final SSLTestConfig sslConfig = new SSLTestConfig(true, false, false);
+    HttpClientUtil.setSchemaRegistryProvider(sslConfig.buildClientSchemaRegistryProvider());
+    System.setProperty(ZkStateReader.URL_SCHEME, "https");
+    final JettyConfig config = JettyConfig.builder().withSSLConfig(sslConfig).build();
+    final MiniSolrCloudCluster cluster = new MiniSolrCloudCluster(NUM_SERVERS, createTempDir(), config);
+    try {
+      checkClusterWithCollectionCreations(cluster, sslConfig);
+      
+      // now initialize a client that still uses the existing SSLContext/Provider, so it will accept
+      // our existing certificate, but *does* care about validating the peer name
+      System.setProperty(HttpClientUtil.SYS_PROP_CHECK_PEER_NAME, "true");
+      HttpClientUtil.resetHttpClientBuilder();
+
+      // and validate we get failures when trying to talk to our cluster...
+      final List<JettySolrRunner> jettys = cluster.getJettySolrRunners();
+      for (JettySolrRunner jetty : jettys) {
+        final String baseURL = jetty.getBaseUrl().toString();
+        // verify new solr clients validate peer name and can't talk to this server
+        Exception ex = expectThrows(SolrServerException.class, () -> {
+            try (HttpSolrClient client = getRandomizedHttpSolrClient(baseURL)) {
+              CoreAdminRequest req = new CoreAdminRequest();
+              req.setAction( CoreAdminAction.STATUS );
+              client.request(req);
+            }
+          });
+        assertTrue("Expected an root cause SSL Exception, got: " + ex.toString(),
+                   ex.getCause() instanceof SSLException);
+      }
+    } finally {
+      cluster.shutdown();
+    }
+
+
+    
+  }
+
   /**
    * General purpose cluster sanity check...
    * <ol>

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/4e0e8e97/solr/solr-ref-guide/src/enabling-ssl.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/enabling-ssl.adoc b/solr/solr-ref-guide/src/enabling-ssl.adoc
index b641bfd..35bc1d8 100644
--- a/solr/solr-ref-guide/src/enabling-ssl.adoc
+++ b/solr/solr-ref-guide/src/enabling-ssl.adoc
@@ -77,6 +77,11 @@ NOTE: If you setup Solr as a service on Linux using the steps outlined in <<taki
 .bin/solr.in.sh example SOLR_SSL_* configuration
 [source,bash]
 ----
+# Enables HTTPS. It is implictly true if you set SOLR_SSL_KEY_STORE. Use this config
+# to enable https module with custom jetty configuration.
+SOLR_SSL_ENABLED=true
+# Uncomment to set SSL-related system properties
+# Be sure to update the paths to the correct keystore for your environment
 SOLR_SSL_KEY_STORE=etc/solr-ssl.keystore.jks
 SOLR_SSL_KEY_STORE_PASSWORD=secret
 SOLR_SSL_TRUST_STORE=etc/solr-ssl.keystore.jks
@@ -85,7 +90,10 @@ SOLR_SSL_TRUST_STORE_PASSWORD=secret
 SOLR_SSL_NEED_CLIENT_AUTH=false
 # Enable clients to authenticate (but not require)
 SOLR_SSL_WANT_CLIENT_AUTH=false
-# Define Key Store type if necessary
+# SSL Certificates contain host/ip "peer name" information that is validated by default. Setting
+# this to false can be useful to disable these checks when re-using a certificate on many hosts
+SOLR_SSL_CHECK_PEER_NAME=true
+# Override Key/Trust Store types if necessary
 SOLR_SSL_KEY_STORE_TYPE=JKS
 SOLR_SSL_TRUST_STORE_TYPE=JKS
 ----
@@ -100,6 +108,11 @@ Similarly, when you start Solr on Windows, the `bin\solr.cmd` script includes th
 .bin\solr.in.cmd example SOLR_SSL_* configuration
 [source,text]
 ----
+REM Enables HTTPS. It is implictly true if you set SOLR_SSL_KEY_STORE. Use this config
+REM to enable https module with custom jetty configuration.
+set SOLR_SSL_ENABLED=true
+REM Uncomment to set SSL-related system properties
+REM Be sure to update the paths to the correct keystore for your environment
 set SOLR_SSL_KEY_STORE=etc/solr-ssl.keystore.jks
 set SOLR_SSL_KEY_STORE_PASSWORD=secret
 set SOLR_SSL_TRUST_STORE=etc/solr-ssl.keystore.jks
@@ -108,6 +121,12 @@ REM Require clients to authenticate
 set SOLR_SSL_NEED_CLIENT_AUTH=false
 REM Enable clients to authenticate (but not require)
 set SOLR_SSL_WANT_CLIENT_AUTH=false
+REM SSL Certificates contain host/ip "peer name" information that is validated by default. Setting
+REM this to false can be useful to disable these checks when re-using a certificate on many hosts
+set SOLR_SSL_CHECK_PEER_NAME=true
+REM Override Key/Trust Store types if necessary
+set SOLR_SSL_KEY_STORE_TYPE=JKS
+set SOLR_SSL_TRUST_STORE_TYPE=JKS
 ----
 
 === Run Single Node Solr using SSL

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/4e0e8e97/solr/solrj/src/java/org/apache/solr/client/solrj/impl/HttpClientUtil.java
----------------------------------------------------------------------
diff --git a/solr/solrj/src/java/org/apache/solr/client/solrj/impl/HttpClientUtil.java b/solr/solrj/src/java/org/apache/solr/client/solrj/impl/HttpClientUtil.java
index d064a06..e08f85f 100644
--- a/solr/solrj/src/java/org/apache/solr/client/solrj/impl/HttpClientUtil.java
+++ b/solr/solrj/src/java/org/apache/solr/client/solrj/impl/HttpClientUtil.java
@@ -47,6 +47,7 @@ import org.apache.http.config.RegistryBuilder;
 import org.apache.http.conn.ConnectionKeepAliveStrategy;
 import org.apache.http.conn.socket.ConnectionSocketFactory;
 import org.apache.http.conn.socket.PlainConnectionSocketFactory;
+import org.apache.http.conn.ssl.NoopHostnameVerifier;
 import org.apache.http.conn.ssl.SSLConnectionSocketFactory;
 import org.apache.http.entity.HttpEntityWrapper;
 import org.apache.http.impl.client.BasicCredentialsProvider;
@@ -56,6 +57,7 @@ import org.apache.http.impl.client.HttpClientBuilder;
 import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
 import org.apache.http.protocol.HttpContext;
 import org.apache.http.protocol.HttpRequestExecutor;
+import org.apache.http.ssl.SSLContexts;
 import org.apache.solr.common.params.ModifiableSolrParams;
 import org.apache.solr.common.params.SolrParams;
 import org.apache.solr.common.util.ObjectReleaseTracker;
@@ -93,7 +95,16 @@ public class HttpClientUtil {
   public static final String PROP_BASIC_AUTH_USER = "httpBasicAuthUser";
   // Basic auth password 
   public static final String PROP_BASIC_AUTH_PASS = "httpBasicAuthPassword";
-  
+
+  /**
+   * System property consulted to determine if the default {@link SchemaRegistryProvider} 
+   * will require hostname validation of SSL Certificates.  The default behavior is to enforce 
+   * peer name validation.
+   * <p>
+   * This property will have no effect if {@link #setSchemaRegistryProvider} is used to override
+   * the default {@link SchemaRegistryProvider} 
+   * </p>
+   */
   public static final String SYS_PROP_CHECK_PEER_NAME = "solr.ssl.checkPeerName";
   
   // * NOTE* The following params configure the default request config and this
@@ -181,6 +192,9 @@ public class HttpClientUtil {
     httpClientBuilder = newHttpClientBuilder;
   }
 
+  /**
+   * @see #SYS_PROP_CHECK_PEER_NAME
+   */
   public static void setSchemaRegistryProvider(SchemaRegistryProvider newRegistryProvider) {
     schemaRegistryProvider = newRegistryProvider;
   }
@@ -188,7 +202,10 @@ public class HttpClientUtil {
   public static SolrHttpClientBuilder getHttpClientBuilder() {
     return httpClientBuilder;
   }
-  
+
+  /**
+   * @see #SYS_PROP_CHECK_PEER_NAME
+   */
   public static SchemaRegistryProvider getSchemaRegisteryProvider() {
     return schemaRegistryProvider;
   }
@@ -205,9 +222,22 @@ public class HttpClientUtil {
       // except that we explicitly use SSLConnectionSocketFactory.getSystemSocketFactory()
       // to pick up the system level default SSLContext (where javax.net.ssl.* properties
       // related to keystore & truststore are specified)
-      RegistryBuilder<ConnectionSocketFactory> builder = RegistryBuilder.<ConnectionSocketFactory>create();
+      RegistryBuilder<ConnectionSocketFactory> builder = RegistryBuilder.<ConnectionSocketFactory> create();
       builder.register("http", PlainConnectionSocketFactory.getSocketFactory());
-      builder.register("https", SSLConnectionSocketFactory.getSystemSocketFactory());
+
+      // logic to turn off peer host check
+      SSLConnectionSocketFactory sslConnectionSocketFactory = null;
+      boolean sslCheckPeerName = toBooleanDefaultIfNull(
+          toBooleanObject(System.getProperty(HttpClientUtil.SYS_PROP_CHECK_PEER_NAME)), true);
+      if (sslCheckPeerName) {
+        sslConnectionSocketFactory = SSLConnectionSocketFactory.getSystemSocketFactory();
+      } else {
+        sslConnectionSocketFactory = new SSLConnectionSocketFactory(SSLContexts.createSystemDefault(),
+                                                                    NoopHostnameVerifier.INSTANCE);
+        logger.debug(HttpClientUtil.SYS_PROP_CHECK_PEER_NAME + "is false, hostname checks disabled.");
+      }
+      builder.register("https", sslConnectionSocketFactory);
+
       return builder.build();
     }
   }
@@ -459,5 +489,26 @@ public class HttpClientUtil {
     cookiePolicy = policyName;
   }
 
+  /**
+   * @lucene.internal
+   */
+  static boolean toBooleanDefaultIfNull(Boolean bool, boolean valueIfNull) {
+    if (bool == null) {
+      return valueIfNull;
+    }
+    return bool.booleanValue() ? true : false;
+  }
 
+  /**
+   * @lucene.internal
+   */
+  static Boolean toBooleanObject(String str) {
+    if ("true".equalsIgnoreCase(str)) {
+      return Boolean.TRUE;
+    } else if ("false".equalsIgnoreCase(str)) {
+      return Boolean.FALSE;
+    }
+    // no match
+    return null;
+  }
 }
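
For reference, a minimal client-side sketch of how the new solr.ssl.checkPeerName
property is consumed (the class name here is hypothetical and not part of this patch):
setting the property to false and rebuilding the default HTTP client state makes
HttpClientUtil's default SchemaRegistryProvider register a NoopHostnameVerifier for
the "https" scheme, as exercised by the new HttpClientUtilTest below.

    import org.apache.solr.client.solrj.impl.HttpClientUtil;

    public class DisablePeerNameCheckSketch {
      public static void main(String[] args) throws Exception {
        // Assumption: the javax.net.ssl.* keystore/truststore properties are already
        // configured for this JVM; this sketch only toggles the peer-name check.
        System.setProperty(HttpClientUtil.SYS_PROP_CHECK_PEER_NAME, "false");

        // Rebuild the default client state so the property is re-read; the default
        // SchemaRegistryProvider then wires a NoopHostnameVerifier for "https".
        HttpClientUtil.resetHttpClientBuilder();

        // Sanity check that the https scheme can still be looked up after the reset.
        System.out.println(
            HttpClientUtil.getSchemaRegisteryProvider().getSchemaRegistry().lookup("https"));
      }
    }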

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/4e0e8e97/solr/solrj/src/test/org/apache/solr/client/solrj/impl/HttpClientUtilTest.java
----------------------------------------------------------------------
diff --git a/solr/solrj/src/test/org/apache/solr/client/solrj/impl/HttpClientUtilTest.java b/solr/solrj/src/test/org/apache/solr/client/solrj/impl/HttpClientUtilTest.java
new file mode 100644
index 0000000..ce2f8b7
--- /dev/null
+++ b/solr/solrj/src/test/org/apache/solr/client/solrj/impl/HttpClientUtilTest.java
@@ -0,0 +1,108 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.client.solrj.impl;
+
+import javax.net.ssl.HostnameVerifier;
+import java.io.IOException;
+
+import org.apache.solr.client.solrj.impl.HttpClientUtil.SchemaRegistryProvider;
+
+import org.apache.commons.lang.reflect.FieldUtils;
+import org.apache.http.conn.socket.ConnectionSocketFactory;
+import org.apache.http.conn.ssl.DefaultHostnameVerifier;
+import org.apache.http.conn.ssl.NoopHostnameVerifier;
+import org.apache.http.conn.ssl.SSLConnectionSocketFactory;
+import org.apache.lucene.util.LuceneTestCase;
+import org.apache.lucene.util.TestRuleRestoreSystemProperties;
+
+import org.junit.After;
+import org.junit.Rule;
+import org.junit.rules.TestRule;
+import org.junit.Test;
+
+public class HttpClientUtilTest extends LuceneTestCase {
+
+  @Rule
+  public TestRule syspropRestore = new TestRuleRestoreSystemProperties
+    (HttpClientUtil.SYS_PROP_CHECK_PEER_NAME);
+  
+  @After
+  public void resetHttpClientBuilder() {
+    HttpClientUtil.resetHttpClientBuilder();
+  }
+    
+  public void testSSLSystemProperties() throws IOException {
+    
+    assertNotNull("HTTPS scheme could not be created using system defaults",
+                  HttpClientUtil.getSchemaRegisteryProvider().getSchemaRegistry().lookup("https"));
+
+    assertSSLHostnameVerifier(DefaultHostnameVerifier.class, HttpClientUtil.getSchemaRegisteryProvider());
+
+    System.setProperty(HttpClientUtil.SYS_PROP_CHECK_PEER_NAME, "true");
+    resetHttpClientBuilder();
+    assertSSLHostnameVerifier(DefaultHostnameVerifier.class, HttpClientUtil.getSchemaRegisteryProvider());
+
+    System.setProperty(HttpClientUtil.SYS_PROP_CHECK_PEER_NAME, "");
+    resetHttpClientBuilder();
+    assertSSLHostnameVerifier(DefaultHostnameVerifier.class, HttpClientUtil.getSchemaRegisteryProvider());
+    
+    System.setProperty(HttpClientUtil.SYS_PROP_CHECK_PEER_NAME, "false");
+    resetHttpClientBuilder();
+    assertSSLHostnameVerifier(NoopHostnameVerifier.class, HttpClientUtil.getSchemaRegisteryProvider());
+  }
+
+  private void assertSSLHostnameVerifier(Class<? extends HostnameVerifier> expected,
+                                         SchemaRegistryProvider provider) {
+    ConnectionSocketFactory socketFactory = provider.getSchemaRegistry().lookup("https");
+    assertNotNull("unable to lookup https", socketFactory);
+    assertTrue("socketFactory is not an SSLConnectionSocketFactory: " + socketFactory.getClass(),
+               socketFactory instanceof SSLConnectionSocketFactory);
+    SSLConnectionSocketFactory sslSocketFactory = (SSLConnectionSocketFactory) socketFactory;
+    try {
+      Object hostnameVerifier = FieldUtils.readField(sslSocketFactory, "hostnameVerifier", true);
+      assertNotNull("sslSocketFactory has null hostnameVerifier", hostnameVerifier);
+      assertEquals("sslSocketFactory does not have expected hostnameVerifier impl",
+                   expected, hostnameVerifier.getClass());
+    } catch (IllegalAccessException e) {
+      throw new AssertionError("Unexpected access error reading hostnameVerifier field", e);
+    }
+  }
+  
+  @Test
+  public void testToBooleanDefaultIfNull() throws Exception {
+    assertFalse(HttpClientUtil.toBooleanDefaultIfNull(Boolean.FALSE, true));
+    assertTrue(HttpClientUtil.toBooleanDefaultIfNull(Boolean.TRUE, false));
+    assertFalse(HttpClientUtil.toBooleanDefaultIfNull(null, false));
+    assertTrue(HttpClientUtil.toBooleanDefaultIfNull(null, true));
+  }
+
+  @Test
+  public void testToBooleanObject() throws Exception {
+    assertEquals(Boolean.TRUE, HttpClientUtil.toBooleanObject("true"));
+    assertEquals(Boolean.TRUE, HttpClientUtil.toBooleanObject("TRUE"));
+    assertEquals(Boolean.TRUE, HttpClientUtil.toBooleanObject("tRuE"));
+
+    assertEquals(Boolean.FALSE, HttpClientUtil.toBooleanObject("false"));
+    assertEquals(Boolean.FALSE, HttpClientUtil.toBooleanObject("FALSE"));
+    assertEquals(Boolean.FALSE, HttpClientUtil.toBooleanObject("fALSE"));
+
+    assertEquals(null, HttpClientUtil.toBooleanObject("t"));
+    assertEquals(null, HttpClientUtil.toBooleanObject("f"));
+    assertEquals(null, HttpClientUtil.toBooleanObject("foo"));
+    assertEquals(null, HttpClientUtil.toBooleanObject(null));
+  }
+}

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/4e0e8e97/solr/test-framework/src/java/org/apache/solr/util/SSLTestConfig.java
----------------------------------------------------------------------
diff --git a/solr/test-framework/src/java/org/apache/solr/util/SSLTestConfig.java b/solr/test-framework/src/java/org/apache/solr/util/SSLTestConfig.java
index 8268fcd..3b03f6e 100644
--- a/solr/test-framework/src/java/org/apache/solr/util/SSLTestConfig.java
+++ b/solr/test-framework/src/java/org/apache/solr/util/SSLTestConfig.java
@@ -16,8 +16,6 @@
  */
 package org.apache.solr.util;
 
-import javax.net.ssl.SSLContext;
-import java.io.IOException;
 import java.security.KeyManagementException;
 import java.security.KeyStore;
 import java.security.KeyStoreException;
@@ -27,15 +25,17 @@ import java.security.SecureRandomSpi;
 import java.security.UnrecoverableKeyException;
 import java.util.Random;
 
+import javax.net.ssl.SSLContext;
+
 import org.apache.http.config.Registry;
 import org.apache.http.config.RegistryBuilder;
 import org.apache.http.conn.socket.ConnectionSocketFactory;
 import org.apache.http.conn.socket.PlainConnectionSocketFactory;
+import org.apache.http.conn.ssl.NoopHostnameVerifier;
 import org.apache.http.conn.ssl.SSLConnectionSocketFactory;
-import org.apache.http.conn.ssl.SSLContextBuilder;
-import org.apache.http.conn.ssl.SSLContexts;
-import org.apache.http.conn.ssl.SSLSocketFactory;
 import org.apache.http.conn.ssl.TrustSelfSignedStrategy;
+import org.apache.http.ssl.SSLContextBuilder;
+import org.apache.http.ssl.SSLContexts;
 import org.apache.solr.client.solrj.embedded.SSLConfig;
 import org.apache.solr.client.solrj.impl.HttpClientUtil;
 import org.apache.solr.client.solrj.impl.HttpClientUtil.SchemaRegistryProvider;
@@ -49,9 +49,11 @@ import org.eclipse.jetty.util.ssl.SslContextFactory;
  */
 public class SSLTestConfig extends SSLConfig {
 
-  private static final String TEST_KEYSTORE_RESOURCE = "SSLTestConfig.testing.keystore";
+  private static final String TEST_KEYSTORE_BOGUSHOST_RESOURCE = "SSLTestConfig.hostname-and-ip-missmatch.keystore";
+  private static final String TEST_KEYSTORE_LOCALHOST_RESOURCE = "SSLTestConfig.testing.keystore";
   private static final String TEST_KEYSTORE_PASSWORD = "secret";
 
+  private final boolean checkPeerName;
   private final Resource keyStore;
   private final Resource trustStore;
   
@@ -59,44 +61,59 @@ public class SSLTestConfig extends SSLConfig {
   public SSLTestConfig() {
     this(false, false);
   }
+  
+  /**
+   * Create an SSLTestConfig based on a few caller specified options, 
+   * implicitly assuming <code>checkPeerName=false</code>.  
+   * <p>
+   * As needed, keystore/truststore information will be pulled from a hardcoded resource 
+   * file provided by the solr test-framework
+   * </p>
+   *
+   * @param useSSL - whether SSL should be required.
+   * @param clientAuth - whether client authentication should be required.
+   */
+  public SSLTestConfig(boolean useSSL, boolean clientAuth) {
+    this(useSSL, clientAuth, false);
+  }
 
-  /** 
+  // NOTE: if any javadocs below change, update create-keystores.sh
+  /**
    * Create an SSLTestConfig based on a few caller specified options.  As needed, 
-   * keystore/truststore information will be pulled from a hardocded resource file provided 
-   * by the solr test-framework.
+   * keystore/truststore information will be pulled from a hardcoded resource files provided 
+   * by the solr test-framework based on the value of <code>checkPeerName</code>:
+   * <ul>
+   * <li><code>true</code> - A keystore resource file will be used that specifies 
+   *     a CN of <code>localhost</code> and a SAN IP of <code>127.0.0.1</code>, to 
+   *     ensure that all connections should be valid regardless of what machine runs the tests.</li> 
+   * <li><code>false</code> - A keystore resource file will be used that specifies 
+   *     a bogus hostname in the CN and reserved IP as the SAN, since no (valid) tests using this 
+   *     SSLTestConfig should care what CN/SAN are.</li> 
+   * </ul>
    *
-   * @param useSSL - wether SSL should be required.
+   * @param useSSL - whether SSL should be required.
    * @param clientAuth - whether client authentication should be required.
+   * @param checkPeerName - whether the client should validate the 'peer name' of the SSL Certificate (and which testing Cert should be used)
+   * @see HttpClientUtil#SYS_PROP_CHECK_PEER_NAME
    */
-  public SSLTestConfig(boolean useSSL, boolean clientAuth) {
+  public SSLTestConfig(boolean useSSL, boolean clientAuth, boolean checkPeerName) {
     super(useSSL, clientAuth, null, TEST_KEYSTORE_PASSWORD, null, TEST_KEYSTORE_PASSWORD);
-    trustStore = keyStore = Resource.newClassPathResource(TEST_KEYSTORE_RESOURCE);
+    this.checkPeerName = checkPeerName;
+    
+    final String resourceName = checkPeerName
+      ? TEST_KEYSTORE_LOCALHOST_RESOURCE : TEST_KEYSTORE_BOGUSHOST_RESOURCE;
+    trustStore = keyStore = Resource.newClassPathResource(resourceName);
     if (null == keyStore || ! keyStore.exists() ) {
       throw new IllegalStateException("Unable to locate keystore resource file in classpath: "
-                                      + TEST_KEYSTORE_RESOURCE);
+                                      + resourceName);
     }
   }
 
-  /**
-   * Helper utility for building resources from arbitrary user input paths/urls
-   * if input is null, returns null; otherwise attempts to build Resource and verifies that Resource exists.
-   */
-  private static final Resource tryNewResource(String userInput, String type) {
-    if (null == userInput) {
-      return null;
-    }
-    Resource result;
-    try {
-      result = Resource.newResource(userInput);
-    } catch (IOException e) {
-      throw new IllegalArgumentException("Can't build " + type + " Resource: " + e.getMessage(), e);
-    }
-    if (! result.exists()) {
-      throw new IllegalArgumentException(type + " Resource does not exist " + result.getName());
-    }
-    return result;
+  /** If true, then servers hostname/ip should be validated against the SSL Cert metadata */
+  public boolean getCheckPeerName() {
+    return checkPeerName;
   }
-
+  
   /** 
    * NOTE: This method is meaningless in SSLTestConfig.
    * @return null
@@ -175,7 +192,7 @@ public class SSLTestConfig extends SSLConfig {
     
     SSLContextBuilder builder = SSLContexts.custom();
     builder.setSecureRandom(NotSecurePsuedoRandom.INSTANCE);
-    
+
     builder.loadKeyMaterial(buildKeyStore(keyStore, getKeyStorePassword()), getKeyStorePassword().toCharArray());
 
     if (isClientAuthMode()) {
@@ -229,11 +246,9 @@ public class SSLTestConfig extends SSLConfig {
     }
     SSLConnectionSocketFactory sslConnectionFactory;
     try {
-      boolean sslCheckPeerName = toBooleanDefaultIfNull(toBooleanObject(System.getProperty(HttpClientUtil.SYS_PROP_CHECK_PEER_NAME)), true);
       SSLContext sslContext = buildClientSSLContext();
-      if (sslCheckPeerName == false) {
-        sslConnectionFactory = new SSLConnectionSocketFactory
-          (sslContext, SSLSocketFactory.ALLOW_ALL_HOSTNAME_VERIFIER);
+      if (checkPeerName == false) {
+        sslConnectionFactory = new SSLConnectionSocketFactory(sslContext, NoopHostnameVerifier.INSTANCE);
       } else {
         sslConnectionFactory = new SSLConnectionSocketFactory(sslContext);
       }

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/4e0e8e97/solr/test-framework/src/resources/SSLTestConfig.hostname-and-ip-missmatch.keystore
----------------------------------------------------------------------
diff --git a/solr/test-framework/src/resources/SSLTestConfig.hostname-and-ip-missmatch.keystore b/solr/test-framework/src/resources/SSLTestConfig.hostname-and-ip-missmatch.keystore
new file mode 100644
index 0000000..691a3be
Binary files /dev/null and b/solr/test-framework/src/resources/SSLTestConfig.hostname-and-ip-missmatch.keystore differ

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/4e0e8e97/solr/test-framework/src/resources/SSLTestConfig.testing.keystore
----------------------------------------------------------------------
diff --git a/solr/test-framework/src/resources/SSLTestConfig.testing.keystore b/solr/test-framework/src/resources/SSLTestConfig.testing.keystore
index bcc6ec0..4fdb494 100644
Binary files a/solr/test-framework/src/resources/SSLTestConfig.testing.keystore and b/solr/test-framework/src/resources/SSLTestConfig.testing.keystore differ

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/4e0e8e97/solr/test-framework/src/resources/create-keystores.sh
----------------------------------------------------------------------
diff --git a/solr/test-framework/src/resources/create-keystores.sh b/solr/test-framework/src/resources/create-keystores.sh
new file mode 100755
index 0000000..0b43f28
--- /dev/null
+++ b/solr/test-framework/src/resources/create-keystores.sh
@@ -0,0 +1,37 @@
+#!/bin/bash -ex
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+############
+ 
+# This script shows how the keystore files used for solr tests were generated.
+#
+# Running this script should only be necessary if the keystore files need to be
+# replaced, which shouldn't be required until sometime around the year 4751.
+
+# NOTE: if anything below changes, sanity check SSLTestConfig constructor javadocs
+
+echo "### remove old keystores"
+rm -f SSLTestConfig.testing.keystore SSLTestConfig.hostname-and-ip-missmatch.keystore
+
+echo "### create 'localhost' keystore and keys"
+keytool -keystore SSLTestConfig.testing.keystore -storepass "secret" -alias solrtest -keypass "secret" -genkey -keyalg RSA -dname "cn=localhost, ou=SolrTest, o=lucene.apache.org, c=US" -ext "san=dns:localhost,ip:127.0.0.1" -validity 999999
+
+# See https://tools.ietf.org/html/rfc5737
+echo "### create 'Bogus Host' keystore and keys"
+keytool -keystore SSLTestConfig.hostname-and-ip-missmatch.keystore -storepass "secret" -alias solrtest -keypass "secret" -genkey -keyalg RSA -dname "cn=bogus.hostname.tld, ou=SolrTest, o=lucene.apache.org, c=US" -ext "san=dns:bogus.hostname.tld,ip:192.0.2.0" -validity 999999
+
+
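
A standalone sketch (not part of the patch; the file name and password are taken from
the script above and from SSLTestConfig) for inspecting one of the generated keystores
with plain JDK APIs:

    import java.io.FileInputStream;
    import java.security.KeyStore;
    import java.util.Collections;

    public class InspectTestKeystoreSketch {
      public static void main(String[] args) throws Exception {
        KeyStore ks = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream("SSLTestConfig.testing.keystore")) {
          // Store password "secret" matches TEST_KEYSTORE_PASSWORD / create-keystores.sh.
          ks.load(in, "secret".toCharArray());
        }
        // List the aliases and certificate types stored in the test keystore.
        for (String alias : Collections.list(ks.aliases())) {
          System.out.println(alias + " -> " + ks.getCertificate(alias).getType());
        }
      }
    }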


[03/40] lucene-solr:jira/solr-11833: LUCENE-8253: Don't create ReadersAndUpdates for foreign segments

Posted by ab...@apache.org.
LUCENE-8253: Don't create ReadersAndUpdates for foreign segments

IndexWriter#numDeletesToMerge was creating a ReadersAndUpdates
for every incoming SegmentCommitInfo even if that info wasn't private
to the IndexWriter. This is an illegal use of the API, but since it's
transitively public via MergePolicy#findMerges we have to be conservative
about registering ReadersAndUpdates. In IndexWriter#numDeletesToMerge we
can only use existing ones. This means that for soft deletes we need to
react earlier in order to produce accurate numbers.

This change partially rolls back the changes in LUCENE-8253. Instead of
registering the readers once they are pulled via IndexWriter#numDeletesToMerge,
we now check on flush whether segments are fully deleted, which is very unlikely
and can be done lazily, i.e. we only pay the extra cost of opening a reader and
checking all soft deletes if soft deletes are used and present in the flushed
segment.

This has the side effect that flushed segments that are 100% hard deleted are also
cleaned up right after they are flushed; previously these segments were sticking
around for a while until they got picked for a merge or received another delete.

This also closes LUCENE-8256
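
For illustration, a small sketch (class and method names assumed, modeled on the
TestIndexWriter change in this patch) of overriding the reworked hook, which now
receives the reader through an IOSupplier so a reader is only opened when the
decision actually needs one:

    import java.io.IOException;

    import org.apache.lucene.index.CodecReader;
    import org.apache.lucene.index.FilterMergePolicy;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.index.NoMergePolicy;
    import org.apache.lucene.util.IOSupplier;

    public class KeepFullyDeletedSegmentsSketch {
      static IndexWriterConfig newConfig() {
        IndexWriterConfig iwc = new IndexWriterConfig();
        iwc.setMergePolicy(new FilterMergePolicy(NoMergePolicy.INSTANCE) {
          @Override
          public boolean keepFullyDeletedSegment(IOSupplier<CodecReader> readerIOSupplier) throws IOException {
            // The CodecReader is now supplied lazily; only call readerIOSupplier.get()
            // if the retention decision needs to inspect the segment's contents.
            return true;
          }
        });
        return iwc;
      }
    }

Keeping every fully deleted segment like this is only sensible in tests;
SoftDeletesRetentionMergePolicy (see its diff below) calls get() on the supplier and
runs its retention query against the returned reader before deciding.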


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/d9041124
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/d9041124
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/d9041124

Branch: refs/heads/jira/solr-11833
Commit: d904112428184ce9c1726313add5d184f4014a72
Parents: 09db13f
Author: Simon Willnauer <si...@apache.org>
Authored: Tue Apr 17 10:16:58 2018 +0200
Committer: Simon Willnauer <si...@apache.org>
Committed: Tue Apr 17 16:26:52 2018 +0200

----------------------------------------------------------------------
 .../lucene/index/BufferedUpdatesStream.java     |   2 +-
 .../lucene/index/DocumentsWriterFlushQueue.java |   5 +-
 .../apache/lucene/index/FilterMergePolicy.java  |   4 +-
 .../lucene/index/FrozenBufferedUpdates.java     |   1 -
 .../org/apache/lucene/index/IndexWriter.java    |  59 +++++++----
 .../org/apache/lucene/index/MergePolicy.java    |   2 +-
 .../org/apache/lucene/index/NoMergePolicy.java  |   4 +-
 .../org/apache/lucene/index/PendingDeletes.java |   2 +-
 .../apache/lucene/index/PendingSoftDeletes.java |  12 ++-
 .../apache/lucene/index/ReadersAndUpdates.java  |  37 +++----
 .../index/SoftDeletesRetentionMergePolicy.java  |   5 +-
 .../lucene/index/StandardDirectoryReader.java   |   2 +-
 .../apache/lucene/index/TestIndexWriter.java    |   3 +-
 .../lucene/index/TestIndexWriterOnDiskFull.java |   3 +-
 .../apache/lucene/index/TestMultiFields.java    |   3 +-
 .../apache/lucene/index/TestPendingDeletes.java |   4 +-
 .../TestSoftDeletesDirectoryReaderWrapper.java  |  32 ------
 .../TestSoftDeletesRetentionMergePolicy.java    | 101 ++++++++++++++++---
 .../admin/SegmentsInfoRequestHandlerTest.java   |   2 -
 19 files changed, 183 insertions(+), 100 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/d9041124/lucene/core/src/java/org/apache/lucene/index/BufferedUpdatesStream.java
----------------------------------------------------------------------
diff --git a/lucene/core/src/java/org/apache/lucene/index/BufferedUpdatesStream.java b/lucene/core/src/java/org/apache/lucene/index/BufferedUpdatesStream.java
index 78fe950..32ee256 100644
--- a/lucene/core/src/java/org/apache/lucene/index/BufferedUpdatesStream.java
+++ b/lucene/core/src/java/org/apache/lucene/index/BufferedUpdatesStream.java
@@ -324,7 +324,7 @@ class BufferedUpdatesStream implements Accountable {
         totDelCount += segState.rld.getPendingDeleteCount() - segState.startDelCount;
         int fullDelCount = segState.rld.info.getDelCount() + segState.rld.getPendingDeleteCount();
         assert fullDelCount <= segState.rld.info.info.maxDoc() : fullDelCount + " > " + segState.rld.info.info.maxDoc();
-        if (segState.rld.isFullyDeleted() && writer.getConfig().mergePolicy.keepFullyDeletedSegment(segState.reader) == false) {
+        if (segState.rld.isFullyDeleted() && writer.getConfig().mergePolicy.keepFullyDeletedSegment(() -> segState.reader) == false) {
           if (allDeleted == null) {
             allDeleted = new ArrayList<>();
           }

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/d9041124/lucene/core/src/java/org/apache/lucene/index/DocumentsWriterFlushQueue.java
----------------------------------------------------------------------
diff --git a/lucene/core/src/java/org/apache/lucene/index/DocumentsWriterFlushQueue.java b/lucene/core/src/java/org/apache/lucene/index/DocumentsWriterFlushQueue.java
index b051545..fde7587 100644
--- a/lucene/core/src/java/org/apache/lucene/index/DocumentsWriterFlushQueue.java
+++ b/lucene/core/src/java/org/apache/lucene/index/DocumentsWriterFlushQueue.java
@@ -188,7 +188,8 @@ class DocumentsWriterFlushQueue {
     protected final void publishFlushedSegment(IndexWriter indexWriter, FlushedSegment newSegment, FrozenBufferedUpdates globalPacket)
         throws IOException {
       assert newSegment != null;
-      assert newSegment.segmentInfo != null;
+      SegmentCommitInfo segmentInfo = newSegment.segmentInfo;
+      assert segmentInfo != null;
       final FrozenBufferedUpdates segmentUpdates = newSegment.segmentUpdates;
       if (indexWriter.infoStream.isEnabled("DW")) {
         indexWriter.infoStream.message("DW", "publishFlushedSegment seg-private updates=" + segmentUpdates);  
@@ -198,7 +199,7 @@ class DocumentsWriterFlushQueue {
         indexWriter.infoStream.message("DW", "flush: push buffered seg private updates: " + segmentUpdates);
       }
       // now publish!
-      indexWriter.publishFlushedSegment(newSegment.segmentInfo, segmentUpdates, globalPacket, newSegment.sortMap);
+      indexWriter.publishFlushedSegment(segmentInfo, newSegment.fieldInfos, segmentUpdates, globalPacket, newSegment.sortMap);
     }
     
     protected final void finishFlush(IndexWriter indexWriter, FlushedSegment newSegment, FrozenBufferedUpdates bufferedUpdates)

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/d9041124/lucene/core/src/java/org/apache/lucene/index/FilterMergePolicy.java
----------------------------------------------------------------------
diff --git a/lucene/core/src/java/org/apache/lucene/index/FilterMergePolicy.java b/lucene/core/src/java/org/apache/lucene/index/FilterMergePolicy.java
index d073b84..afe232a 100644
--- a/lucene/core/src/java/org/apache/lucene/index/FilterMergePolicy.java
+++ b/lucene/core/src/java/org/apache/lucene/index/FilterMergePolicy.java
@@ -94,8 +94,8 @@ public class FilterMergePolicy extends MergePolicy {
   }
 
   @Override
-  public boolean keepFullyDeletedSegment(CodecReader reader) throws IOException {
-    return in.keepFullyDeletedSegment(reader);
+  public boolean keepFullyDeletedSegment(IOSupplier<CodecReader> readerIOSupplier) throws IOException {
+    return in.keepFullyDeletedSegment(readerIOSupplier);
   }
 
   @Override

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/d9041124/lucene/core/src/java/org/apache/lucene/index/FrozenBufferedUpdates.java
----------------------------------------------------------------------
diff --git a/lucene/core/src/java/org/apache/lucene/index/FrozenBufferedUpdates.java b/lucene/core/src/java/org/apache/lucene/index/FrozenBufferedUpdates.java
index a017db9..fc268df 100644
--- a/lucene/core/src/java/org/apache/lucene/index/FrozenBufferedUpdates.java
+++ b/lucene/core/src/java/org/apache/lucene/index/FrozenBufferedUpdates.java
@@ -340,7 +340,6 @@ class FrozenBufferedUpdates {
                                                messagePrefix + "done inner apply del packet (%s) to %d segments; %d new deletes/updates; took %.3f sec",
                                                this, segStates.length, delCount, (System.nanoTime() - iterStartNS) / 1000000000.));
       }
-      
       if (privateSegment != null) {
         // No need to retry for a segment-private packet: the merge that folds in our private segment already waits for all deletes to
         // be applied before it kicks off, so this private segment must already not be in the set of merging segments

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/d9041124/lucene/core/src/java/org/apache/lucene/index/IndexWriter.java
----------------------------------------------------------------------
diff --git a/lucene/core/src/java/org/apache/lucene/index/IndexWriter.java b/lucene/core/src/java/org/apache/lucene/index/IndexWriter.java
index e8f3e13..d6237e1 100644
--- a/lucene/core/src/java/org/apache/lucene/index/IndexWriter.java
+++ b/lucene/core/src/java/org/apache/lucene/index/IndexWriter.java
@@ -2767,7 +2767,7 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
    * segments SegmentInfo to the index writer.
    */
   synchronized void publishFlushedSegment(SegmentCommitInfo newSegment,
-                                          FrozenBufferedUpdates packet, FrozenBufferedUpdates globalPacket,
+                                          FieldInfos fieldInfos, FrozenBufferedUpdates packet, FrozenBufferedUpdates globalPacket,
                                           Sorter.DocMap sortMap) throws IOException {
     boolean published = false;
     try {
@@ -2792,7 +2792,7 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
 
         // Do this as an event so it applies higher in the stack when we are not holding DocumentsWriterFlushQueue.purgeLock:
         docWriter.putEvent(new DocumentsWriter.ResolveUpdatesEvent(packet));
-          
+
       } else {
         // Since we don't have a delete packet to apply we can get a new
         // generation right away
@@ -2807,14 +2807,37 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
       segmentInfos.add(newSegment);
       published = true;
       checkpoint();
-
       if (packet != null && packet.any() && sortMap != null) {
         // TODO: not great we do this heavyish op while holding IW's monitor lock,
         // but it only applies if you are using sorted indices and updating doc values:
         ReadersAndUpdates rld = readerPool.get(newSegment, true);
         rld.sortMap = sortMap;
+        // DON't release this ReadersAndUpdates we need to stick with that sortMap
+      }
+      FieldInfo fieldInfo = fieldInfos.fieldInfo(config.softDeletesField); // will return null if no soft deletes are present
+      // this is a corner case where documents delete them-self with soft deletes. This is used to
+      // build delete tombstones etc. in this case we haven't seen any updates to the DV in this fresh flushed segment.
+      // if we have seen updates the update code checks if the segment is fully deleted.
+      boolean hasInitialSoftDeleted = (fieldInfo != null
+          && fieldInfo.getDocValuesGen() == -1
+          && fieldInfo.getDocValuesType() != DocValuesType.NONE);
+      final boolean isFullyHardDeleted = newSegment.getDelCount() == newSegment.info.maxDoc();
+      // we either have a fully hard-deleted segment or one or more docs are soft-deleted. In both cases we need
+      // to go and check if they are fully deleted. This has the nice side-effect that we now have accurate numbers
+      // for the soft delete right after we flushed to disk.
+      if (hasInitialSoftDeleted || isFullyHardDeleted){
+        // this operation is only really executed if needed an if soft-deletes are not configured it only be executed
+        // if we deleted all docs in this newly flushed segment.
+        ReadersAndUpdates rld = readerPool.get(newSegment, true);
+        try {
+          if (isFullyDeleted(rld)) {
+            dropDeletedSegment(newSegment);
+          }
+        } finally {
+          readerPool.release(rld);
+        }
       }
-      
+
     } finally {
       if (published == false) {
         adjustPendingNumDocs(-newSegment.info.maxDoc());
@@ -2822,6 +2845,7 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
       flushCount.incrementAndGet();
       doAfterFlush();
     }
+
   }
 
   private synchronized void resetMergeExceptions() {
@@ -3355,7 +3379,6 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
             flushSuccess = true;
 
             applyAllDeletesAndUpdates();
-
             synchronized(this) {
 
               readerPool.commit(segmentInfos);
@@ -5211,12 +5234,8 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
 
   final boolean isFullyDeleted(ReadersAndUpdates readersAndUpdates) throws IOException {
     if (readersAndUpdates.isFullyDeleted()) {
-      SegmentReader reader = readersAndUpdates.getReader(IOContext.READ);
-      try {
-        return config.mergePolicy.keepFullyDeletedSegment(reader) == false;
-      } finally {
-        readersAndUpdates.release(reader);
-      }
+      assert Thread.holdsLock(this);
+      return readersAndUpdates.keepFullyDeletedSegment(config.getMergePolicy()) == false;
     }
     return false;
   }
@@ -5230,15 +5249,17 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
    */
   public final int numDeletesToMerge(SegmentCommitInfo info) throws IOException {
     MergePolicy mergePolicy = config.getMergePolicy();
-    final ReadersAndUpdates rld = readerPool.get(info, true);
-    try {
-      int numDeletesToMerge = rld.numDeletesToMerge(mergePolicy);
-      assert numDeletesToMerge <= info.info.maxDoc() :
-          "numDeletesToMerge: " + numDeletesToMerge + " > maxDoc: " + info.info.maxDoc();
-      return numDeletesToMerge;
-    } finally {
-      readerPool.release(rld);
+    final ReadersAndUpdates rld = readerPool.get(info, false);
+    int numDeletesToMerge;
+    if (rld != null) {
+      numDeletesToMerge = rld.numDeletesToMerge(mergePolicy);
+    } else {
+      // if we don't have a  pooled instance lets just return the hard deletes, this is safe!
+      numDeletesToMerge = info.getDelCount();
     }
+    assert numDeletesToMerge <= info.info.maxDoc() :
+        "numDeletesToMerge: " + numDeletesToMerge + " > maxDoc: " + info.info.maxDoc();
+    return numDeletesToMerge;
 
   }
 }

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/d9041124/lucene/core/src/java/org/apache/lucene/index/MergePolicy.java
----------------------------------------------------------------------
diff --git a/lucene/core/src/java/org/apache/lucene/index/MergePolicy.java b/lucene/core/src/java/org/apache/lucene/index/MergePolicy.java
index 093fe5a..029cca9 100644
--- a/lucene/core/src/java/org/apache/lucene/index/MergePolicy.java
+++ b/lucene/core/src/java/org/apache/lucene/index/MergePolicy.java
@@ -611,7 +611,7 @@ public abstract class MergePolicy {
    * Returns true if the segment represented by the given CodecReader should be keep even if it's fully deleted.
    * This is useful for testing of for instance if the merge policy implements retention policies for soft deletes.
    */
-  public boolean keepFullyDeletedSegment(CodecReader reader) throws IOException {
+  public boolean keepFullyDeletedSegment(IOSupplier<CodecReader> readerIOSupplier) throws IOException {
     return false;
   }
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/d9041124/lucene/core/src/java/org/apache/lucene/index/NoMergePolicy.java
----------------------------------------------------------------------
diff --git a/lucene/core/src/java/org/apache/lucene/index/NoMergePolicy.java b/lucene/core/src/java/org/apache/lucene/index/NoMergePolicy.java
index e1f1a54..86a173c 100644
--- a/lucene/core/src/java/org/apache/lucene/index/NoMergePolicy.java
+++ b/lucene/core/src/java/org/apache/lucene/index/NoMergePolicy.java
@@ -76,8 +76,8 @@ public final class NoMergePolicy extends MergePolicy {
   }
 
   @Override
-  public boolean keepFullyDeletedSegment(CodecReader reader) throws IOException {
-    return super.keepFullyDeletedSegment(reader);
+  public boolean keepFullyDeletedSegment(IOSupplier<CodecReader> readerIOSupplier) throws IOException {
+    return super.keepFullyDeletedSegment(readerIOSupplier);
   }
 
   @Override

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/d9041124/lucene/core/src/java/org/apache/lucene/index/PendingDeletes.java
----------------------------------------------------------------------
diff --git a/lucene/core/src/java/org/apache/lucene/index/PendingDeletes.java b/lucene/core/src/java/org/apache/lucene/index/PendingDeletes.java
index c0aed38..ae91be8 100644
--- a/lucene/core/src/java/org/apache/lucene/index/PendingDeletes.java
+++ b/lucene/core/src/java/org/apache/lucene/index/PendingDeletes.java
@@ -230,7 +230,7 @@ class PendingDeletes {
   /**
    * Returns <code>true</code> iff the segment represented by this {@link PendingDeletes} is fully deleted
    */
-  boolean isFullyDeleted() {
+  boolean isFullyDeleted(IOSupplier<CodecReader> readerIOSupplier) throws IOException {
     return info.getDelCount() + numPendingDeletes() == info.info.maxDoc();
   }
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/d9041124/lucene/core/src/java/org/apache/lucene/index/PendingSoftDeletes.java
----------------------------------------------------------------------
diff --git a/lucene/core/src/java/org/apache/lucene/index/PendingSoftDeletes.java b/lucene/core/src/java/org/apache/lucene/index/PendingSoftDeletes.java
index 3dca782..68a2eac 100644
--- a/lucene/core/src/java/org/apache/lucene/index/PendingSoftDeletes.java
+++ b/lucene/core/src/java/org/apache/lucene/index/PendingSoftDeletes.java
@@ -168,6 +168,11 @@ final class PendingSoftDeletes extends PendingDeletes {
 
   @Override
   int numDeletesToMerge(MergePolicy policy, IOSupplier<CodecReader> readerIOSupplier) throws IOException {
+    ensureInitialized(readerIOSupplier); // initialize to ensure we have accurate counts
+    return super.numDeletesToMerge(policy, readerIOSupplier);
+  }
+
+  private void ensureInitialized(IOSupplier<CodecReader> readerIOSupplier) throws IOException {
     if (dvGeneration == -2) {
       FieldInfos fieldInfos = readFieldInfos();
       FieldInfo fieldInfo = fieldInfos.fieldInfo(field);
@@ -183,7 +188,12 @@ final class PendingSoftDeletes extends PendingDeletes {
         dvGeneration = fieldInfo == null ? -1 : fieldInfo.getDocValuesGen();
       }
     }
-    return super.numDeletesToMerge(policy, readerIOSupplier);
+  }
+
+  @Override
+  boolean isFullyDeleted(IOSupplier<CodecReader> readerIOSupplier) throws IOException {
+    ensureInitialized(readerIOSupplier); // initialize to ensure we have accurate counts - only needed in the soft-delete case
+    return super.isFullyDeleted(readerIOSupplier);
   }
 
   private FieldInfos readFieldInfos() throws IOException {

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/d9041124/lucene/core/src/java/org/apache/lucene/index/ReadersAndUpdates.java
----------------------------------------------------------------------
diff --git a/lucene/core/src/java/org/apache/lucene/index/ReadersAndUpdates.java b/lucene/core/src/java/org/apache/lucene/index/ReadersAndUpdates.java
index 2543721..76a28e2 100644
--- a/lucene/core/src/java/org/apache/lucene/index/ReadersAndUpdates.java
+++ b/lucene/core/src/java/org/apache/lucene/index/ReadersAndUpdates.java
@@ -41,7 +41,6 @@ import org.apache.lucene.store.IOContext;
 import org.apache.lucene.store.TrackingDirectoryWrapper;
 import org.apache.lucene.util.Bits;
 import org.apache.lucene.util.BytesRef;
-import org.apache.lucene.util.IOSupplier;
 import org.apache.lucene.util.IOUtils;
 import org.apache.lucene.util.InfoStream;
 
@@ -260,19 +259,20 @@ final class ReadersAndUpdates {
   }
 
   synchronized int numDeletesToMerge(MergePolicy policy) throws IOException {
-    IOSupplier<CodecReader> readerSupplier = () -> {
-      if (this.reader == null) {
-        // get a reader and dec the ref right away we just make sure we have a reader
-        getReader(IOContext.READ).decRef();
-      }
-      if (reader.getLiveDocs() != pendingDeletes.getLiveDocs()
-          || reader.numDeletedDocs() != info.getDelCount() - pendingDeletes.numPendingDeletes()) {
-        // we have a reader but its live-docs are out of sync. let's create a temporary one that we never share
-        swapNewReaderWithLatestLiveDocs();
-      }
-      return reader;
-    };
-    return pendingDeletes.numDeletesToMerge(policy, readerSupplier);
+    return pendingDeletes.numDeletesToMerge(policy, this::getLatestReader);
+  }
+
+  private CodecReader getLatestReader() throws IOException {
+    if (this.reader == null) {
+      // get a reader and dec the ref right away we just make sure we have a reader
+      getReader(IOContext.READ).decRef();
+    }
+    if (reader.getLiveDocs() != pendingDeletes.getLiveDocs()
+        || reader.numDeletedDocs() != info.getDelCount() - pendingDeletes.numPendingDeletes()) {
+      // we have a reader but its live-docs are out of sync. let's create a temporary one that we never share
+      swapNewReaderWithLatestLiveDocs();
+    }
+    return reader;
   }
 
   public synchronized Bits getLiveDocs() {
@@ -813,8 +813,8 @@ final class ReadersAndUpdates {
     return sb.toString();
   }
 
-  public synchronized boolean isFullyDeleted() {
-    return pendingDeletes.isFullyDeleted();
+  public synchronized boolean isFullyDeleted() throws IOException {
+    return pendingDeletes.isFullyDeleted(this::getLatestReader);
   }
 
   private final void markAsShared() {
@@ -822,5 +822,8 @@ final class ReadersAndUpdates {
     liveDocsSharedPending = false;
     pendingDeletes.liveDocsShared(); // this is not costly we can just call it even if it's already marked as shared
   }
-  
+
+  boolean keepFullyDeletedSegment(MergePolicy mergePolicy) throws IOException {
+    return mergePolicy.keepFullyDeletedSegment(this::getLatestReader);
+  }
 }

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/d9041124/lucene/core/src/java/org/apache/lucene/index/SoftDeletesRetentionMergePolicy.java
----------------------------------------------------------------------
diff --git a/lucene/core/src/java/org/apache/lucene/index/SoftDeletesRetentionMergePolicy.java b/lucene/core/src/java/org/apache/lucene/index/SoftDeletesRetentionMergePolicy.java
index 8538775..1447440 100644
--- a/lucene/core/src/java/org/apache/lucene/index/SoftDeletesRetentionMergePolicy.java
+++ b/lucene/core/src/java/org/apache/lucene/index/SoftDeletesRetentionMergePolicy.java
@@ -71,7 +71,8 @@ public final class SoftDeletesRetentionMergePolicy extends OneMergeWrappingMerge
   }
 
   @Override
-  public boolean keepFullyDeletedSegment(CodecReader reader) throws IOException {
+  public boolean keepFullyDeletedSegment(IOSupplier<CodecReader> readerIOSupplier) throws IOException {
+    CodecReader reader = readerIOSupplier.get();
     /* we only need a single hit to keep it no need for soft deletes to be checked*/
     Scorer scorer = getScorer(retentionQuerySupplier.get(), wrapLiveDocs(reader, null, reader.maxDoc()));
     if (scorer != null) {
@@ -79,7 +80,7 @@ public final class SoftDeletesRetentionMergePolicy extends OneMergeWrappingMerge
       boolean atLeastOneHit = iterator.nextDoc() != DocIdSetIterator.NO_MORE_DOCS;
       return atLeastOneHit;
     }
-    return super.keepFullyDeletedSegment(reader) ;
+    return super.keepFullyDeletedSegment(readerIOSupplier) ;
   }
 
   // pkg private for testing

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/d9041124/lucene/core/src/java/org/apache/lucene/index/StandardDirectoryReader.java
----------------------------------------------------------------------
diff --git a/lucene/core/src/java/org/apache/lucene/index/StandardDirectoryReader.java b/lucene/core/src/java/org/apache/lucene/index/StandardDirectoryReader.java
index 488ccaf..63c6d95 100644
--- a/lucene/core/src/java/org/apache/lucene/index/StandardDirectoryReader.java
+++ b/lucene/core/src/java/org/apache/lucene/index/StandardDirectoryReader.java
@@ -103,7 +103,7 @@ public final class StandardDirectoryReader extends DirectoryReader {
         final ReadersAndUpdates rld = writer.readerPool.get(info, true);
         try {
           final SegmentReader reader = rld.getReadOnlyClone(IOContext.READ);
-          if (reader.numDocs() > 0 || writer.getConfig().mergePolicy.keepFullyDeletedSegment(reader)) {
+          if (reader.numDocs() > 0 || writer.getConfig().mergePolicy.keepFullyDeletedSegment(() -> reader)) {
             // Steal the ref:
             readers.add(reader);
             infosUpto++;

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/d9041124/lucene/core/src/test/org/apache/lucene/index/TestIndexWriter.java
----------------------------------------------------------------------
diff --git a/lucene/core/src/test/org/apache/lucene/index/TestIndexWriter.java b/lucene/core/src/test/org/apache/lucene/index/TestIndexWriter.java
index 12151e7..80e108d 100644
--- a/lucene/core/src/test/org/apache/lucene/index/TestIndexWriter.java
+++ b/lucene/core/src/test/org/apache/lucene/index/TestIndexWriter.java
@@ -90,6 +90,7 @@ import org.apache.lucene.store.SimpleFSLockFactory;
 import org.apache.lucene.util.Bits;
 import org.apache.lucene.util.BytesRef;
 import org.apache.lucene.util.Constants;
+import org.apache.lucene.util.IOSupplier;
 import org.apache.lucene.util.IOUtils;
 import org.apache.lucene.util.InfoStream;
 import org.apache.lucene.util.LuceneTestCase;
@@ -2224,7 +2225,7 @@ public class TestIndexWriter extends LuceneTestCase {
     AtomicBoolean keepFullyDeletedSegments = new AtomicBoolean();
     iwc.setMergePolicy(new FilterMergePolicy(iwc.getMergePolicy()) {
       @Override
-      public boolean keepFullyDeletedSegment(CodecReader reader) throws IOException {
+      public boolean keepFullyDeletedSegment(IOSupplier<CodecReader> readerIOSupplier) throws IOException {
         return keepFullyDeletedSegments.get();
       }
     });

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/d9041124/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterOnDiskFull.java
----------------------------------------------------------------------
diff --git a/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterOnDiskFull.java b/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterOnDiskFull.java
index ce3c72c..d225f43 100644
--- a/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterOnDiskFull.java
+++ b/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterOnDiskFull.java
@@ -35,6 +35,7 @@ import org.apache.lucene.store.AlreadyClosedException;
 import org.apache.lucene.store.Directory;
 import org.apache.lucene.store.MockDirectoryWrapper;
 import org.apache.lucene.store.RAMDirectory;
+import org.apache.lucene.util.IOSupplier;
 import org.apache.lucene.util.LuceneTestCase;
 import org.apache.lucene.util.TestUtil;
 
@@ -503,7 +504,7 @@ public class TestIndexWriterOnDiskFull extends LuceneTestCase {
           .setReaderPooling(true)
           .setMergePolicy(new FilterMergePolicy(newLogMergePolicy(2)) {
             @Override
-            public boolean keepFullyDeletedSegment(CodecReader reader) throws IOException {
+            public boolean keepFullyDeletedSegment(IOSupplier<CodecReader> readerIOSupplier) throws IOException {
               // we can do this because we add/delete/add (and dont merge to "nothing")
               return true;
             }

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/d9041124/lucene/core/src/test/org/apache/lucene/index/TestMultiFields.java
----------------------------------------------------------------------
diff --git a/lucene/core/src/test/org/apache/lucene/index/TestMultiFields.java b/lucene/core/src/test/org/apache/lucene/index/TestMultiFields.java
index 3c09bbd..439ec51 100644
--- a/lucene/core/src/test/org/apache/lucene/index/TestMultiFields.java
+++ b/lucene/core/src/test/org/apache/lucene/index/TestMultiFields.java
@@ -32,6 +32,7 @@ import org.apache.lucene.search.DocIdSetIterator;
 import org.apache.lucene.store.Directory;
 import org.apache.lucene.util.Bits;
 import org.apache.lucene.util.BytesRef;
+import org.apache.lucene.util.IOSupplier;
 import org.apache.lucene.util.LuceneTestCase;
 import org.apache.lucene.util.TestUtil;
 import org.apache.lucene.util.UnicodeUtil;
@@ -51,7 +52,7 @@ public class TestMultiFields extends LuceneTestCase {
       IndexWriter w = new IndexWriter(dir, newIndexWriterConfig(new MockAnalyzer(random()))
                                              .setMergePolicy(new FilterMergePolicy(NoMergePolicy.INSTANCE) {
                                                @Override
-                                               public boolean keepFullyDeletedSegment(CodecReader reader) {
+                                               public boolean keepFullyDeletedSegment(IOSupplier<CodecReader> readerIOSupplier) {
                                                  // we can do this because we use NoMergePolicy (and dont merge to "nothing")
                                                  return true;
                                                }

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/d9041124/lucene/core/src/test/org/apache/lucene/index/TestPendingDeletes.java
----------------------------------------------------------------------
diff --git a/lucene/core/src/test/org/apache/lucene/index/TestPendingDeletes.java b/lucene/core/src/test/org/apache/lucene/index/TestPendingDeletes.java
index bbe309a..7c6891e 100644
--- a/lucene/core/src/test/org/apache/lucene/index/TestPendingDeletes.java
+++ b/lucene/core/src/test/org/apache/lucene/index/TestPendingDeletes.java
@@ -134,13 +134,15 @@ public class TestPendingDeletes extends LuceneTestCase {
     SegmentInfo si = new SegmentInfo(dir, Version.LATEST, Version.LATEST, "test", 3, false, Codec.getDefault(),
         Collections.emptyMap(), StringHelper.randomId(), new HashMap<>(), null);
     SegmentCommitInfo commitInfo = new SegmentCommitInfo(si, 0, -1, -1, -1);
+    FieldInfos fieldInfos = new FieldInfos(new FieldInfo[0]);
+    si.getCodec().fieldInfosFormat().write(dir, si, "", fieldInfos, IOContext.DEFAULT);
     PendingDeletes deletes = newPendingDeletes(commitInfo);
     for (int i = 0; i < 3; i++) {
       assertTrue(deletes.delete(i));
       if (random().nextBoolean()) {
         assertTrue(deletes.writeLiveDocs(dir));
       }
-      assertEquals(i == 2, deletes.isFullyDeleted());
+      assertEquals(i == 2, deletes.isFullyDeleted(() -> null));
     }
   }
 }

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/d9041124/lucene/core/src/test/org/apache/lucene/index/TestSoftDeletesDirectoryReaderWrapper.java
----------------------------------------------------------------------
diff --git a/lucene/core/src/test/org/apache/lucene/index/TestSoftDeletesDirectoryReaderWrapper.java b/lucene/core/src/test/org/apache/lucene/index/TestSoftDeletesDirectoryReaderWrapper.java
index 30a11b6..dea7bc9 100644
--- a/lucene/core/src/test/org/apache/lucene/index/TestSoftDeletesDirectoryReaderWrapper.java
+++ b/lucene/core/src/test/org/apache/lucene/index/TestSoftDeletesDirectoryReaderWrapper.java
@@ -27,7 +27,6 @@ import org.apache.lucene.document.Field;
 import org.apache.lucene.document.NumericDocValuesField;
 import org.apache.lucene.document.StringField;
 import org.apache.lucene.search.IndexSearcher;
-import org.apache.lucene.search.MatchNoDocsQuery;
 import org.apache.lucene.search.TermQuery;
 import org.apache.lucene.store.Directory;
 import org.apache.lucene.util.IOUtils;
@@ -197,35 +196,4 @@ public class TestSoftDeletesDirectoryReaderWrapper extends LuceneTestCase {
     assertEquals(1, leafCalled.get());
     IOUtils.close(reader, writer, dir);
   }
-
-  public void testForceMergeDeletes() throws Exception {
-    Directory dir = newDirectory();
-    IndexWriterConfig config = newIndexWriterConfig().setSoftDeletesField("soft_delete");
-    config.setMergePolicy(newMergePolicy(random(), false)); // no mock MP it might not select segments for force merge
-    if (random().nextBoolean()) {
-      config.setMergePolicy(new SoftDeletesRetentionMergePolicy("soft_delete",
-          () -> new MatchNoDocsQuery(), config.getMergePolicy()));
-    }
-    IndexWriter writer = new IndexWriter(dir, config);
-    // The first segment includes d1 and d2
-    for (int i = 0; i < 2; i++) {
-      Document d = new Document();
-      d.add(new StringField("id", Integer.toString(i), Field.Store.YES));
-      writer.addDocument(d);
-    }
-    writer.flush();
-    // The second segment includes only the tombstone
-    Document tombstone = new Document();
-    tombstone.add(new NumericDocValuesField("soft_delete", 1));
-    writer.softUpdateDocument(new Term("id", "1"), tombstone, new NumericDocValuesField("soft_delete", 1));
-    // Internally, forceMergeDeletes will call flush to flush pending updates
-    // Thus, we will have two segments - both having soft-deleted documents.
-    // We expect any MP to merge these segments into one segment
-    // when calling forceMergeDeletes.
-    writer.forceMergeDeletes(true);
-    assertEquals(1, writer.maxDoc());
-    assertEquals(1, writer.segmentInfos.asList().size());
-    writer.close();
-    dir.close();
-  }
 }

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/d9041124/lucene/core/src/test/org/apache/lucene/index/TestSoftDeletesRetentionMergePolicy.java
----------------------------------------------------------------------
diff --git a/lucene/core/src/test/org/apache/lucene/index/TestSoftDeletesRetentionMergePolicy.java b/lucene/core/src/test/org/apache/lucene/index/TestSoftDeletesRetentionMergePolicy.java
index 061d006..b868a2e 100644
--- a/lucene/core/src/test/org/apache/lucene/index/TestSoftDeletesRetentionMergePolicy.java
+++ b/lucene/core/src/test/org/apache/lucene/index/TestSoftDeletesRetentionMergePolicy.java
@@ -71,12 +71,12 @@ public class TestSoftDeletesRetentionMergePolicy extends LuceneTestCase {
     {
       assertEquals(2, reader.leaves().size());
       final SegmentReader segmentReader = (SegmentReader) reader.leaves().get(0).reader();
-      assertTrue(policy.keepFullyDeletedSegment(segmentReader));
+      assertTrue(policy.keepFullyDeletedSegment(() -> segmentReader));
       assertEquals(0, policy.numDeletesToMerge(segmentReader.getSegmentInfo(), 0, () -> segmentReader));
     }
     {
       SegmentReader segmentReader = (SegmentReader) reader.leaves().get(1).reader();
-      assertTrue(policy.keepFullyDeletedSegment(segmentReader));
+      assertTrue(policy.keepFullyDeletedSegment(() -> segmentReader));
       assertEquals(0, policy.numDeletesToMerge(segmentReader.getSegmentInfo(), 0, () -> segmentReader));
       writer.forceMerge(1);
       reader.close();
@@ -86,7 +86,7 @@ public class TestSoftDeletesRetentionMergePolicy extends LuceneTestCase {
       assertEquals(1, reader.leaves().size());
       SegmentReader segmentReader = (SegmentReader) reader.leaves().get(0).reader();
       assertEquals(2, reader.maxDoc());
-      assertTrue(policy.keepFullyDeletedSegment(segmentReader));
+      assertTrue(policy.keepFullyDeletedSegment(() -> segmentReader));
       assertEquals(0, policy.numDeletesToMerge(segmentReader.getSegmentInfo(), 0, () -> segmentReader));
     }
     writer.forceMerge(1); // make sure we don't merge this
@@ -114,10 +114,9 @@ public class TestSoftDeletesRetentionMergePolicy extends LuceneTestCase {
     writer.addDocument(doc);
     DirectoryReader reader = writer.getReader();
     assertEquals(1, reader.leaves().size());
-    SegmentReader segmentReader = (SegmentReader) reader.leaves().get(0).reader();
     MergePolicy policy = new SoftDeletesRetentionMergePolicy("soft_delete",
         () -> new DocValuesFieldExistsQuery("keep_around"), NoMergePolicy.INSTANCE);
-    assertFalse(policy.keepFullyDeletedSegment(segmentReader));
+    assertFalse(policy.keepFullyDeletedSegment(() -> (SegmentReader) reader.leaves().get(0).reader()));
     reader.close();
 
     doc = new Document();
@@ -126,15 +125,13 @@ public class TestSoftDeletesRetentionMergePolicy extends LuceneTestCase {
     doc.add(new NumericDocValuesField("soft_delete", 1));
     writer.addDocument(doc);
 
-    reader = writer.getReader();
-    assertEquals(2, reader.leaves().size());
-    segmentReader = (SegmentReader) reader.leaves().get(0).reader();
-    assertFalse(policy.keepFullyDeletedSegment(segmentReader));
+    DirectoryReader reader1 = writer.getReader();
+    assertEquals(2, reader1.leaves().size());
+    assertFalse(policy.keepFullyDeletedSegment(() -> (SegmentReader) reader1.leaves().get(0).reader()));
 
-    segmentReader = (SegmentReader) reader.leaves().get(1).reader();
-    assertTrue(policy.keepFullyDeletedSegment(segmentReader));
+    assertTrue(policy.keepFullyDeletedSegment(() -> (SegmentReader) reader1.leaves().get(1).reader()));
 
-    IOUtils.close(reader, writer, dir);
+    IOUtils.close(reader1, writer, dir);
   }
 
   public void testFieldBasedRetention() throws IOException {
@@ -365,4 +362,84 @@ public class TestSoftDeletesRetentionMergePolicy extends LuceneTestCase {
     IOUtils.close(reader, writer, dir);
   }
 
+  public void testForceMergeDeletes() throws Exception {
+    Directory dir = newDirectory();
+    IndexWriterConfig config = newIndexWriterConfig().setSoftDeletesField("soft_delete");
+    config.setMergePolicy(newMergePolicy(random(), false)); // no mock MP it might not select segments for force merge
+    if (random().nextBoolean()) {
+      config.setMergePolicy(new SoftDeletesRetentionMergePolicy("soft_delete",
+          () -> new MatchNoDocsQuery(), config.getMergePolicy()));
+    }
+    IndexWriter writer = new IndexWriter(dir, config);
+    // The first segment includes d1 and d2
+    for (int i = 0; i < 2; i++) {
+      Document d = new Document();
+      d.add(new StringField("id", Integer.toString(i), Field.Store.YES));
+      writer.addDocument(d);
+    }
+    writer.flush();
+    // The second segment includes only the tombstone
+    Document tombstone = new Document();
+    tombstone.add(new NumericDocValuesField("soft_delete", 1));
+    writer.softUpdateDocument(new Term("id", "1"), tombstone, new NumericDocValuesField("soft_delete", 1));
+    // Internally, forceMergeDeletes will call flush to flush pending updates
+    // Thus, we will have two segments - both having soft-deleted documents.
+    // We expect any MP to merge these segments into one segment
+    // when calling forceMergeDeletes.
+    writer.forceMergeDeletes(true);
+    assertEquals(1, writer.maxDoc());
+    assertEquals(1, writer.segmentInfos.asList().size());
+    writer.close();
+    dir.close();
+  }
+
+  public void testDropFullySoftDeletedSegment() throws Exception {
+    Directory dir = newDirectory();
+    String softDelete = random().nextBoolean() ? null : "soft_delete";
+    IndexWriterConfig config = newIndexWriterConfig().setSoftDeletesField(softDelete);
+    config.setMergePolicy(newMergePolicy(random(), true));
+    if (softDelete != null && random().nextBoolean()) {
+      config.setMergePolicy(new SoftDeletesRetentionMergePolicy(softDelete,
+          () -> new MatchNoDocsQuery(), config.getMergePolicy()));
+    }
+    IndexWriter writer = new IndexWriter(dir, config);
+    for (int i = 0; i < 2; i++) {
+      Document d = new Document();
+      d.add(new StringField("id", Integer.toString(i), Field.Store.YES));
+      writer.addDocument(d);
+    }
+    writer.flush();
+    assertEquals(1, writer.segmentInfos.asList().size());
+
+    if (softDelete != null) {
+      // the newly created segment should be dropped as it is fully deleted (i.e. only contains deleted docs).
+      if (random().nextBoolean()) {
+        Document tombstone = new Document();
+        tombstone.add(new NumericDocValuesField(softDelete, 1));
+        writer.softUpdateDocument(new Term("id", "1"), tombstone, new NumericDocValuesField(softDelete, 1));
+      } else {
+        Document doc = new Document();
+        doc.add(new StringField("id", Integer.toString(1), Field.Store.YES));
+        if (random().nextBoolean()) {
+          writer.softUpdateDocument(new Term("id", "1"), doc, new NumericDocValuesField(softDelete, 1));
+        } else {
+          writer.addDocument(doc);
+        }
+        writer.updateDocValues(new Term("id", "1"), new NumericDocValuesField(softDelete, 1));
+      }
+    } else {
+      Document d = new Document();
+      d.add(new StringField("id", "1", Field.Store.YES));
+      writer.addDocument(d);
+      writer.deleteDocuments(new Term("id", "1"));
+    }
+    writer.commit();
+    IndexReader reader = writer.getReader();
+    assertEquals(reader.numDocs(), 1);
+    reader.close();
+    assertEquals(1, writer.segmentInfos.asList().size());
+
+    writer.close();
+    dir.close();
+  }
 }

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/d9041124/solr/core/src/test/org/apache/solr/handler/admin/SegmentsInfoRequestHandlerTest.java
----------------------------------------------------------------------
diff --git a/solr/core/src/test/org/apache/solr/handler/admin/SegmentsInfoRequestHandlerTest.java b/solr/core/src/test/org/apache/solr/handler/admin/SegmentsInfoRequestHandlerTest.java
index c501f5f..3173c12 100644
--- a/solr/core/src/test/org/apache/solr/handler/admin/SegmentsInfoRequestHandlerTest.java
+++ b/solr/core/src/test/org/apache/solr/handler/admin/SegmentsInfoRequestHandlerTest.java
@@ -16,7 +16,6 @@
  */
 package org.apache.solr.handler.admin;
 
-import org.apache.lucene.util.LuceneTestCase;
 import org.apache.lucene.util.Version;
 import org.apache.solr.index.LogDocMergePolicyFactory;
 import org.apache.solr.SolrTestCaseJ4;
@@ -27,7 +26,6 @@ import org.junit.Test;
 /**
  * Tests for SegmentsInfoRequestHandler. Plugin entry, returning data of created segment.
  */
-@LuceneTestCase.AwaitsFix(bugUrl = "https://issues.apache.org/jira/browse/LUCENE-8253")
 public class SegmentsInfoRequestHandlerTest extends SolrTestCaseJ4 {
   private static final int DOC_COUNT = 5;
   


[22/40] lucene-solr:jira/solr-11833: SOLR-11200: A new CMS config option 'ioThrottle' to manually enable/disable ConcurrentMergeScheduler.doAutoIOThrottle. (Amrit Sarkar, Nawab Zada Asad iqbal)

Posted by ab...@apache.org.
SOLR-11200: A new CMS config option 'ioThrottle' to manually enable/disable ConcurrentMergeScheduler.doAutoIOThrottle. (Amrit Sarkar, Nawab Zada Asad iqbal)

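For orientation, here is a minimal Java sketch (not part of this commit) of what the new 'ioThrottle' option controls on the Lucene side, namely ConcurrentMergeScheduler's auto IO throttling. The scheduler class and its methods are taken from the diff below; the wrapper class, the 987/42 values (mirroring the test config), and the printed check are illustrative only.

[source,java]
----
import org.apache.lucene.index.ConcurrentMergeScheduler;

public class IoThrottleSketch {
  public static void main(String[] args) {
    // <bool name="ioThrottle">false</bool> in the mergeScheduler config maps to
    // disabling the scheduler's auto IO throttling (it is enabled by default).
    ConcurrentMergeScheduler cms = new ConcurrentMergeScheduler();
    cms.setMaxMergesAndThreads(987, 42);  // same values as the test config in this commit
    cms.disableAutoIOThrottle();
    System.out.println("autoIOThrottle enabled: " + cms.getAutoIOThrottle());
    // The configured scheduler would then be installed via IndexWriterConfig#setMergeScheduler.
  }
}
----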

Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/4eead83a
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/4eead83a
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/4eead83a

Branch: refs/heads/jira/solr-11833
Commit: 4eead83a83235b235145f07f0a625055b860ad65
Parents: cf05e17
Author: Dawid Weiss <dw...@apache.org>
Authored: Fri Apr 20 11:34:04 2018 +0200
Committer: Dawid Weiss <dw...@apache.org>
Committed: Fri Apr 20 11:34:04 2018 +0200

----------------------------------------------------------------------
 solr/CHANGES.txt                                |  3 ++
 .../org/apache/solr/update/SolrIndexConfig.java |  4 +++
 .../solrconfig-concurrentmergescheduler.xml     | 37 ++++++++++++++++++++
 .../apache/solr/update/SolrIndexConfigTest.java | 24 +++++++++++++
 4 files changed, 68 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/4eead83a/solr/CHANGES.txt
----------------------------------------------------------------------
diff --git a/solr/CHANGES.txt b/solr/CHANGES.txt
index be3f704..516a0d7 100644
--- a/solr/CHANGES.txt
+++ b/solr/CHANGES.txt
@@ -69,6 +69,9 @@ Upgrade Notes
 New Features
 ----------------------
 
+* SOLR-11200: A new CMS config option 'ioThrottle' to manually enable/disable 
+  ConcurrentMergeSchedule.doAutoIOThrottle. (Amrit Sarkar, Nawab Zada Asad iqbal via Dawid Weiss)
+
 * SOLR-11670: Implement a periodic house-keeping task. This uses a scheduled autoscaling trigger and
   currently performs cleanup of old inactive shards. (ab, shalin)
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/4eead83a/solr/core/src/java/org/apache/solr/update/SolrIndexConfig.java
----------------------------------------------------------------------
diff --git a/solr/core/src/java/org/apache/solr/update/SolrIndexConfig.java b/solr/core/src/java/org/apache/solr/update/SolrIndexConfig.java
index c663783..48b2417 100644
--- a/solr/core/src/java/org/apache/solr/update/SolrIndexConfig.java
+++ b/solr/core/src/java/org/apache/solr/update/SolrIndexConfig.java
@@ -299,6 +299,10 @@ public class SolrIndexConfig implements MapSerializable {
           maxThreadCount = ((ConcurrentMergeScheduler) scheduler).getMaxThreadCount();
         }
         ((ConcurrentMergeScheduler)scheduler).setMaxMergesAndThreads(maxMergeCount, maxThreadCount);
+        Boolean ioThrottle = (Boolean) args.remove("ioThrottle");
+        if (ioThrottle != null && !ioThrottle) { //by-default 'enabled'
+            ((ConcurrentMergeScheduler) scheduler).disableAutoIOThrottle();
+        }
         SolrPluginUtils.invokeSetters(scheduler, args);
       } else {
         SolrPluginUtils.invokeSetters(scheduler, mergeSchedulerInfo.initArgs);

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/4eead83a/solr/core/src/test-files/solr/collection1/conf/solrconfig-concurrentmergescheduler.xml
----------------------------------------------------------------------
diff --git a/solr/core/src/test-files/solr/collection1/conf/solrconfig-concurrentmergescheduler.xml b/solr/core/src/test-files/solr/collection1/conf/solrconfig-concurrentmergescheduler.xml
new file mode 100644
index 0000000..140c4cf
--- /dev/null
+++ b/solr/core/src/test-files/solr/collection1/conf/solrconfig-concurrentmergescheduler.xml
@@ -0,0 +1,37 @@
+<?xml version="1.0" ?>
+
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<config>
+  <luceneMatchVersion>${tests.luceneMatchVersion:LATEST}</luceneMatchVersion>
+  <directoryFactory name="DirectoryFactory" class="${solr.directoryFactory:solr.RAMDirectoryFactory}"/>
+  <schemaFactory class="ClassicIndexSchemaFactory"/>
+
+  <indexConfig>
+    <useCompoundFile>${useCompoundFile:false}</useCompoundFile>
+    <mergePolicyFactory class="org.apache.solr.index.TieredMergePolicyFactory" />
+    <mergeScheduler class="org.apache.lucene.index.ConcurrentMergeScheduler">
+      <int name="maxMergeCount">987</int>
+      <int name="maxThreadCount">42</int>
+      <bool name="ioThrottle">false</bool>
+    </mergeScheduler>
+  </indexConfig>
+
+  <requestHandler name="/select" class="solr.SearchHandler"></requestHandler>
+
+</config>

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/4eead83a/solr/core/src/test/org/apache/solr/update/SolrIndexConfigTest.java
----------------------------------------------------------------------
diff --git a/solr/core/src/test/org/apache/solr/update/SolrIndexConfigTest.java b/solr/core/src/test/org/apache/solr/update/SolrIndexConfigTest.java
index ec5719c..d5ebf05 100644
--- a/solr/core/src/test/org/apache/solr/update/SolrIndexConfigTest.java
+++ b/solr/core/src/test/org/apache/solr/update/SolrIndexConfigTest.java
@@ -48,6 +48,7 @@ public class SolrIndexConfigTest extends SolrTestCaseJ4 {
   private static final String solrConfigFileName = "solrconfig.xml";
   private static final String solrConfigFileNameWarmerRandomMergePolicyFactory = "solrconfig-warmer-randommergepolicyfactory.xml";
   private static final String solrConfigFileNameTieredMergePolicyFactory = "solrconfig-tieredmergepolicyfactory.xml";
+  private static final String solrConfigFileNameConnMSPolicyFactory = "solrconfig-concurrentmergescheduler.xml";
   private static final String solrConfigFileNameSortingMergePolicyFactory = "solrconfig-sortingmergepolicyfactory.xml";
   private static final String schemaFileName = "schema.xml";
 
@@ -93,6 +94,29 @@ public class SolrIndexConfigTest extends SolrTestCaseJ4 {
     ConcurrentMergeScheduler ms = (ConcurrentMergeScheduler)  iwc.getMergeScheduler();
     assertEquals("ms.maxMergeCount", 987, ms.getMaxMergeCount());
     assertEquals("ms.maxThreadCount", 42, ms.getMaxThreadCount());
+    assertEquals("ms.isAutoIOThrottle", true, ms.getAutoIOThrottle());
+
+  }
+
+  @Test
+  public void testConcurrentMergeSchedularSolrIndexConfigCreation() throws Exception {
+    String solrConfigFileName = solrConfigFileNameConnMSPolicyFactory;
+    SolrConfig solrConfig = new SolrConfig(instanceDir, solrConfigFileName, null);
+    SolrIndexConfig solrIndexConfig = new SolrIndexConfig(solrConfig, null, null);
+    IndexSchema indexSchema = IndexSchemaFactory.buildIndexSchema(schemaFileName, solrConfig);
+
+    h.getCore().setLatestSchema(indexSchema);
+    IndexWriterConfig iwc = solrIndexConfig.toIndexWriterConfig(h.getCore());
+
+    assertNotNull("null mp", iwc.getMergePolicy());
+    assertTrue("mp is not TieredMergePolicy", iwc.getMergePolicy() instanceof TieredMergePolicy);
+
+    assertNotNull("null ms", iwc.getMergeScheduler());
+    assertTrue("ms is not CMS", iwc.getMergeScheduler() instanceof ConcurrentMergeScheduler);
+    ConcurrentMergeScheduler ms = (ConcurrentMergeScheduler)  iwc.getMergeScheduler();
+    assertEquals("ms.maxMergeCount", 987, ms.getMaxMergeCount());
+    assertEquals("ms.maxThreadCount", 42, ms.getMaxThreadCount());
+    assertEquals("ms.isAutoIOThrottle", false, ms.getAutoIOThrottle());
 
   }
 


[26/40] lucene-solr:jira/solr-11833: SOLR-11646: Add v2 APIs for Config API; change "ConfigSet" to "configset" in docs & specs to match community spelling

Posted by ab...@apache.org.
SOLR-11646: Add v2 APIs for Config API; change "ConfigSet" to "configset" in docs & specs to match community spelling

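As a rough companion to the curl examples in the diff below, here is a hypothetical SolrJ sketch (not part of this commit) that issues the same GET against the new v2 Config API endpoint. The 'techproducts' collection and localhost URL are placeholders taken from the ref guide, and driving the call through V2Request is an assumption rather than something this commit adds.

[source,java]
----
import org.apache.solr.client.solrj.SolrRequest;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.V2Request;
import org.apache.solr.client.solrj.response.V2Response;

public class V2ConfigSketch {
  public static void main(String[] args) throws Exception {
    // Roughly equivalent to: curl http://localhost:8983/api/collections/techproducts/config
    try (HttpSolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      V2Request req = new V2Request.Builder("/collections/techproducts/config")  // assumed v2 resource path
          .withMethod(SolrRequest.METHOD.GET)
          .build();
      V2Response rsp = req.process(client);
      System.out.println(rsp.getResponse());
    }
  }
}
----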

Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/d08e62d5
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/d08e62d5
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/d08e62d5

Branch: refs/heads/jira/solr-11833
Commit: d08e62d59878147b8447698e87374dfbfeb597c1
Parents: f0d1e11
Author: Cassandra Targett <ct...@apache.org>
Authored: Thu Apr 19 09:57:50 2018 -0500
Committer: Cassandra Targett <ct...@apache.org>
Committed: Fri Apr 20 14:28:31 2018 -0500

----------------------------------------------------------------------
 solr/solr-ref-guide/src/config-api.adoc         | 720 ++++++++++++++-----
 .../apispec/cluster.configs.Commands.json       |  12 +-
 .../apispec/cluster.configs.delete.json         |   2 +-
 .../src/resources/apispec/cluster.configs.json  |   2 +-
 4 files changed, 543 insertions(+), 193 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/d08e62d5/solr/solr-ref-guide/src/config-api.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/config-api.adoc b/solr/solr-ref-guide/src/config-api.adoc
index 5220f49..48106c7 100644
--- a/solr/solr-ref-guide/src/config-api.adoc
+++ b/solr/solr-ref-guide/src/config-api.adoc
@@ -22,53 +22,140 @@ This feature is enabled by default and works similarly in both SolrCloud and sta
 
 When using this API, `solrconfig.xml` is not changed. Instead, all edited configuration is stored in a file called `configoverlay.json`. The values in `configoverlay.json` override the values in `solrconfig.xml`.
 
-== Config API Entry Points
+== Config API Endpoints
 
-* `/config`: retrieve or modify the config. GET to retrieve and POST for executing commands
-* `/config/overlay`: retrieve the details in the `configoverlay.json` alone
-* `/config/params`: allows creating parameter sets that can override or take the place of parameters defined in `solrconfig.xml`. See the <<request-parameters-api.adoc#request-parameters-api,Request Parameters API>> section for more details.
+All Config API endpoints are collection-specific, meaning this API can inspect or modify the configuration for a single collection at a time.
+
+* `_collection_/config`: retrieve the full effective config, or modify the config. Use GET to retrieve and POST for executing commands.
+* `_collection_/config/overlay`: retrieve the details in the `configoverlay.json` only, removing any options defined in `solrconfig.xml` directly or implicitly through defaults.
+* `_collection_/config/params`: create parameter sets that can override or take the place of parameters defined in `solrconfig.xml`. See <<request-parameters-api.adoc#request-parameters-api,Request Parameters API>> for more information about this endpoint.
 
 == Retrieving the Config
 
-All configuration items, can be retrieved by sending a GET request to the `/config` endpoint - the results will be the effective configuration resulting from merging settings in `configoverlay.json` with those in `solrconfig.xml`:
+All configuration items can be retrieved by sending a GET request to the `/config` endpoint:
+
+[.dynamic-tabs]
+--
+[example.tab-pane#v1getconfig]
+====
+[.tab-label]*V1 API*
+
+[source,bash]
+----
+http://localhost:8983/solr/techproducts/config
+----
+====
+
+[example.tab-pane#v2getconfig]
+====
+[.tab-label]*V2 API*
+
+[source,bash]
+----
+http://localhost:8983/api/collections/techproducts/config
+----
+====
+--
+
+The response will be the total Solr configuration resulting from merging settings in `configoverlay.json` with those in `solrconfig.xml` and those configured implicitly (by default) by Solr out of the box.
+
+It's possible to restrict the returned config to a top-level section, such as `query`, `requestHandler`, or `updateHandler`. To do this, append the name of the section to the `config` endpoint. For example, to retrieve configuration for all request handlers:
+
+[.dynamic-tabs]
+--
+[example.tab-pane#v1gethandler]
+====
+[.tab-label]*V1 API*
+
+[source,bash]
+----
+http://localhost:8983/solr/techproducts/config/requestHandler
+
+----
+====
+
+[example.tab-pane#v2gethandler]
+====
+[.tab-label]*V2 API*
 
 [source,bash]
 ----
-curl http://localhost:8983/solr/techproducts/config
+http://localhost:8983/api/collections/techproducts/config/requestHandler
 ----
+====
+--
 
-To restrict the returned results to a top level section, e.g., `query`, `requestHandler` or `updateHandler`, append the name of the section to the `/config` endpoint following a slash. For example, to retrieve configuration for all request handlers:
+The output will be details of each request handler defined in `solrconfig.xml`, all <<implicit-requesthandlers.adoc#implicit-requesthandlers,defined implicitly>> by Solr, and all defined with this Config API and stored in `configoverlay.json`.
+
+The available top-level sections that can be added as path parameters are: `query`, `requestHandler`, `searchComponent`, `updateHandler`, `queryResponseWriter`, `initParams`, `znodeVersion`, `listener`, `directoryFactory`, `indexConfig`, and `codecFactory`.
+
+To further restrict the request to a single component within a top-level section, use the `componentName` request parameter.
+
+For example, to return configuration for the `/select` request handler:
+
+[.dynamic-tabs]
+--
+[example.tab-pane#v1getcomponent]
+====
+[.tab-label]*V1 API*
 
 [source,bash]
 ----
-curl http://localhost:8983/solr/techproducts/config/requestHandler
+http://localhost:8983/solr/techproducts/config/requestHandler?componentName=/select
 ----
+====
 
-To further restrict returned results to a single component within a top level section, use the `componentName` request param, e.g., to return configuration for the `/select` request handler:
+[example.tab-pane#v2getcomponent]
+====
+[.tab-label]*V2 API*
 
 [source,bash]
 ----
-curl http://localhost:8983/solr/techproducts/config/requestHandler?componentName=/select
+http://localhost:8983/api/collections/techproducts/config/requestHandler?componentName=/select
+----
+====
+--
+
+The output of this command will look similar to:
+
+[source,json]
+----
+{
+  "config":{"requestHandler":{"/select":{
+        "name": "/select",
+        "class": "solr.SearchHandler",
+        "defaults":{
+          "echoParams": "explicit",
+          "rows":10,
+          "preferLocalShards":false
+        }}}}
+}
 ----
 
+The ability to restrict to objects within a top-level section is limited to request handlers (`requestHandler`), search components (`searchComponent`), and response writers (`queryResponseWriter`).
+
 == Commands to Modify the Config
 
-This API uses specific commands to tell Solr what property or type of property to add to `configoverlay.json`. The commands are passed as part of the data sent with the request.
+This API uses specific commands with POST requests to tell Solr what property or type of property to add to or modify in `configoverlay.json`. The commands are passed with the data to add or modify the property or component.
 
-The config commands are categorized into 3 different sections which manipulate various data structures in `solrconfig.xml`. Each of these is described below.
+The Config API commands for modifications are categorized into 3 types, each of which manipulates specific data structures in `solrconfig.xml`. These types are:
 
-* <<Commands for Common Properties,Common Properties>>
-* <<Commands for Custom Handlers and Local Components,Components>>
-* <<Commands for User-Defined Properties,User-defined properties>>
+* `set-property` and `unset-property` for <<Commands for Common Properties,Common Properties>>
+* Component-specific `add-`, `update-`, and `delete-` commands for <<Commands for Handlers and Components,Custom Handlers and Local Components>>
+* `set-user-property` and `unset-user-property` for <<Commands for User-Defined Properties,User-defined properties>>
 
 === Commands for Common Properties
 
-The common properties are those that are frequently need to be customized in a Solr instance. They are manipulated with two commands:
+The common properties are those that are frequently customized in a Solr instance. They are manipulated with two commands:
 
 * `set-property`: Set a well known property. The names of the properties are predefined and fixed. If the property has already been set, this command will overwrite the previous setting.
 * `unset-property`: Remove a property set using the `set-property` command.
 
-The properties that are configured with these commands are predefined and listed below. The names of these properties are derived from their XML paths as found in `solrconfig.xml`.
+The properties that can be configured with `set-property` and `unset-property` are predefined and listed below. The names of these properties are derived from their XML paths as found in `solrconfig.xml`.
+
+*Update Handler Settings*
+
+See <<updatehandlers-in-solrconfig.adoc#updatehandlers-in-solrconfig,UpdateHandlers in SolrConfig>> for defaults and acceptable values for these settings.
 
 * `updateHandler.autoCommit.maxDocs`
 * `updateHandler.autoCommit.maxTime`
@@ -77,56 +164,170 @@ The properties that are configured with these commands are predefined and listed
 * `updateHandler.autoSoftCommit.maxTime`
 * `updateHandler.commitWithin.softCommit`
 * `updateHandler.indexWriter.closeWaitsForMerges`
+
+*Query Settings*
+
+See <<query-settings-in-solrconfig.adoc#query-settings-in-solrconfig,Query Settings in SolrConfig>> for defaults and acceptable values for these settings.
+
+_Caches and Cache Sizes_
+
 * `query.filterCache.class`
 * `query.filterCache.size`
 * `query.filterCache.initialSize`
 * `query.filterCache.autowarmCount`
+* `query.filterCache.maxRamMB`
 * `query.filterCache.regenerator`
 * `query.queryResultCache.class`
 * `query.queryResultCache.size`
 * `query.queryResultCache.initialSize`
 * `query.queryResultCache.autowarmCount`
+* `query.queryResultCache.maxRamMB`
 * `query.queryResultCache.regenerator`
 * `query.documentCache.class`
 * `query.documentCache.size`
 * `query.documentCache.initialSize`
 * `query.documentCache.autowarmCount`
-
 * `query.documentCache.regenerator`
 * `query.fieldValueCache.class`
 * `query.fieldValueCache.size`
 * `query.fieldValueCache.initialSize`
 * `query.fieldValueCache.autowarmCount`
 * `query.fieldValueCache.regenerator`
+
+_Query Sizing and Warming_
+
+* `query.maxBooleanClauses`
+* `query.enableLazyFieldLoading`
 * `query.useFilterForSortedQuery`
 * `query.queryResultWindowSize`
 * `query.queryResultMaxDocCached`
-* `query.enableLazyFieldLoading`
-* `query.boolToFilterOptimizer`
-* `query.maxBooleanClauses`
-* `jmx.agentId`
-* `jmx.serviceUrl`
-* `jmx.rootName`
+
+*RequestDispatcher Settings*
+
+See <<requestdispatcher-in-solrconfig.adoc#requestdispatcher-in-solrconfig,RequestDispatcher in SolrConfig>> for defaults and acceptable values for these settings.
+
 * `requestDispatcher.handleSelect`
-* `requestDispatcher.requestParsers.multipartUploadLimitInKB`
-* `requestDispatcher.requestParsers.formdataUploadLimitInKB`
 * `requestDispatcher.requestParsers.enableRemoteStreaming`
 * `requestDispatcher.requestParsers.enableStreamBody`
+* `requestDispatcher.requestParsers.multipartUploadLimitInKB`
+* `requestDispatcher.requestParsers.formdataUploadLimitInKB`
 * `requestDispatcher.requestParsers.addHttpRequestToContext`
 
-=== Commands for Custom Handlers and Local Components
+==== Examples of Common Properties
+
+Constructing a command to modify or add one of these properties follows this pattern:
+
+[source,json,subs="quotes"]
+----
+{"set-property":{"<_property_>": "<_value_>"}}
+----
+
+A request to increase the `updateHandler.autoCommit.maxTime` would look like:
+
+[.dynamic-tabs]
+--
+[example.tab-pane#v1-setprop]
+====
+[.tab-label]*V1 API*
+
+[source,bash]
+----
+curl -X POST -H 'Content-type: application/json' -d '{"set-property":{"updateHandler.autoCommit.maxTime":15000}}' http://localhost:8983/solr/techproducts/config
+----
+====
+
+[example.tab-pane#v2-setprop]
+====
+[.tab-label]*V2 API*
+
+[source,bash]
+----
+curl -X POST -H 'Content-type: application/json' -d '{"set-property":{"updateHandler.autoCommit.maxTime":15000}}' http://localhost:8983/api/collections/techproducts/config
+----
+====
+--
+
+You can use the `config/overlay` endpoint to verify the property has been added to `configoverlay.json`:
+
+[.dynamic-tabs]
+--
+[example.tab-pane#v1overlay]
+====
+[.tab-label]*V1 API*
+
+[source,bash]
+----
+curl http://localhost:8983/solr/techproducts/config/overlay?omitHeader=true
+----
+====
+
+[example.tab-pane#v2overlay]
+====
+[.tab-label]*V2 API*
+
+[source,bash]
+----
+curl http://localhost:8983/api/collections/techproducts/config/overlay?omitHeader=true
+----
+====
+--
+
+Output:
 
-Custom request handlers, search components, and other types of localized Solr components (such as custom query parsers, update processors, etc.) can be added, updated and deleted with specific commands for the component being modified.
+[source,json]
+----
+{
+  "overlay": {
+    "znodeVersion": 1,
+    "props": {
+      "updateHandler": {
+        "autoCommit": {"maxTime": 15000}
+      }
+}}}
+----
+
+To unset the property:
+
+[.dynamic-tabs]
+--
+[example.tab-pane#v1unsetprop]
+====
+[.tab-label]*V1 API*
+
+[source,bash]
+----
+curl -X POST -H 'Content-type: application/json' -d '{"unset-property": "updateHandler.autoCommit.maxTime"}' http://localhost:8983/solr/techproducts/config
+----
+====
 
-The syntax is similar in each case: `add-<component-name>`, `update-<component-name>`, and `delete-<component-name>`. The command name is not case sensitive, so `Add-RequestHandler`, `ADD-REQUESTHANDLER` and `add-requesthandler` are all equivalent.
+[example.tab-pane#v2unsetprop]
+====
+[.tab-label]*V2 API*
 
-In each case, `add-` commands add the new configuration to `configoverlay.json`, which will override any other settings for the component in `solrconfig.xml`; `update-` commands overwrite an existing setting in `configoverlay.json`; and `delete-` commands remove the setting from `configoverlay.json`.
+[source,bash]
+----
+curl -X POST -H 'Content-type: application/json' -d '{"unset-property": "updateHandler.autoCommit.maxTime"}' http://localhost:8983/api/collections/techproducts/config
+----
+====
+--
+
+=== Commands for Handlers and Components
+
+Request handlers, search components, and other types of localized Solr components (such as query parsers, update processors, etc.) can be added, updated and deleted with specific commands for the type of component being modified.
 
-Settings removed from `configoverlay.json` are not removed from `solrconfig.xml`.
+The syntax is similar in each case: `add-<component-name>`, `update-_<component-name>_`, and `delete-<component-name>`. The command name is not case sensitive, so `Add-RequestHandler`, `ADD-REQUESTHANDLER` and `add-requesthandler` are equivalent.
+
+In each case, `add-` commands add a new configuration to `configoverlay.json`, which will override any other settings for the component in `solrconfig.xml`.
+
+`update-` commands overwrite an existing setting in `configoverlay.json`.
+
+`delete-` commands remove the setting from `configoverlay.json`.
+
+Settings removed from `configoverlay.json` are not removed from `solrconfig.xml` if they happen to be duplicated there.
 
 The full list of available commands follows below:
 
-==== General Purpose Commands
+==== Basic Commands for Components
 
 These commands are the most commonly used:
 
@@ -143,7 +344,7 @@ These commands are the most commonly used:
 * `update-queryresponsewriter`
 * `delete-queryresponsewriter`
 
-==== Advanced Commands
+==== Advanced Commands for Components
 
 These commands allow registering more advanced customizations to Solr:
 
@@ -159,7 +360,6 @@ These commands allow registering more advanced customizations to Solr:
 * `add-updateprocessor`
 * `update-updateprocessor`
 * `delete-updateprocessor`
-
 * `add-queryconverter`
 * `update-queryconverter`
 * `delete-queryconverter`
@@ -170,23 +370,159 @@ These commands allow registering more advanced customizations to Solr:
 * `update-runtimelib`
 * `delete-runtimelib`
 
-See the section <<Creating and Updating Request Handlers>> below for examples of using these commands.
+==== Examples of Handler and Component Commands
 
-==== What about updateRequestProcessorChain?
+To create a request handler, we can use the `add-requesthandler` command:
+
+[source,bash]
+----
+curl -X POST -H 'Content-type:application/json'  -d '{
+  "add-requesthandler": {
+    "name": "/mypath",
+    "class": "solr.DumpRequestHandler",
+    "defaults":{ "x": "y" ,"a": "b", "rows":10 },
+    "useParams": "x"
+  }
+}' http://localhost:8983/solr/techproducts/config
+----
 
-The Config API does not let you create or edit `updateRequestProcessorChain` elements. However, it is possible to create `updateProcessor` entries and can use them by name to create a chain.
+[.dynamic-tabs]
+--
+[example.tab-pane#v1addhandler]
+====
+[.tab-label]*V1 API*
+
+[source,bash]
+----
+curl -X POST -H 'Content-type:application/json' -d '{
+  "add-requesthandler": {
+    "name": "/mypath",
+    "class": "solr.DumpRequestHandler",
+    "defaults": { "x": "y" ,"a": "b", "rows":10 },
+    "useParams": "x"
+  }
+}' http://localhost:8983/solr/techproducts/config
+----
+====
 
-example:
+[example.tab-pane#v2addhandler]
+====
+[.tab-label]*V2 API*
 
 [source,bash]
 ----
-curl http://localhost:8983/solr/techproducts/config -H 'Content-type:application/json' -d '{
-"add-updateprocessor" : { "name" : "firstFld",
-                          "class": "solr.FirstFieldValueUpdateProcessorFactory",
-                          "fieldName":"test_s"}}'
+curl -X POST -H 'Content-type:application/json' -d '{
+  "add-requesthandler": {
+    "name": "/mypath",
+    "class": "solr.DumpRequestHandler",
+    "defaults": { "x": "y" ,"a": "b", "rows":10 },
+    "useParams": "x"
+  }
+}' http://localhost:8983/api/collections/techproducts/config
 ----
+====
+--
 
-You can use this directly in your request by adding a parameter in the `updateRequestProcessorChain` for the specific update processor called `processor=firstFld`.
+Make a call to the new request handler to check if it is registered:
+
+[source,bash]
+----
+curl http://localhost:8983/solr/techproducts/mypath?omitHeader=true
+----
+
+And you should see the following as output:
+
+[source,json]
+----
+{
+  "params":{
+    "indent": "true",
+    "a": "b",
+    "x": "y",
+    "rows": "10"},
+  "context":{
+    "webapp": "/solr",
+    "path": "/mypath",
+    "httpMethod": "GET"}}
+----
+
+To update a request handler, you should use the `update-requesthandler` command:
+
+[.dynamic-tabs]
+--
+[example.tab-pane#v1updatehandler]
+====
+[.tab-label]*V1 API*
+
+[source,bash]
+----
+curl -X POST -H 'Content-type:application/json' -d '{
+  "update-requesthandler": {
+    "name": "/mypath",
+    "class": "solr.DumpRequestHandler",
+    "defaults": {"x": "new value for X", "rows": "20"},
+    "useParams": "x"
+  }
+}' http://localhost:8983/solr/techproducts/config
+----
+====
+
+[example.tab-pane#v2updatehandler]
+====
+[.tab-label]*V2 API*
+
+[source,bash]
+----
+curl -X POST -H 'Content-type:application/json' -d '{
+  "update-requesthandler": {
+    "name": "/mypath",
+    "class": "solr.DumpRequestHandler",
+    "defaults": {"x": "new value for X", "rows": "20"},
+    "useParams": "x"
+  }
+}' http://localhost:8983/api/collections/techproducts/config
+----
+====
+--
+
+As a second example, we'll create another request handler, this time adding the 'terms' component as part of the definition:
+
+[.dynamic-tabs]
+--
+[example.tab-pane#v1add-handler]
+====
+[.tab-label]*V1 API*
+
+[source,bash]
+----
+curl -X POST -H 'Content-type:application/json' -d '{
+  "add-requesthandler": {
+    "name": "/myterms",
+    "class": "solr.SearchHandler",
+    "defaults": {"terms": true, "distrib":false},
+    "components": ["terms"]
+  }
+}' http://localhost:8983/solr/techproducts/config
+----
+====
+
+[example.tab-pane#v2add-handler]
+====
+[.tab-label]*V2 API*
+
+[source,bash]
+----
+curl -X POST -H 'Content-type:application/json' -d '{
+  "add-requesthandler": {
+    "name": "/myterms",
+    "class": "solr.SearchHandler",
+    "defaults": {"terms": true, "distrib":false},
+    "components": ["terms"]
+  }
+}' http://localhost:8983/api/collections/techproducts/config
+----
+====
+--
 
 === Commands for User-Defined Properties
 
@@ -195,12 +531,139 @@ Solr lets users templatize the `solrconfig.xml` using the place holder format `$
 * `set-user-property`: Set a user-defined property. If the property has already been set, this command will overwrite the previous setting.
 * `unset-user-property`: Remove a user-defined property.
 
-The structure of the request is similar to the structure of requests using other commands, in the format of `"command":{"variable_name":"property_value"}`. You can add more than one variable at a time if necessary.
+The structure of the request is similar to the structure of requests using other commands, in the format of `"command":{"variable_name": "property_value"}`. You can add more than one variable at a time if necessary.
 
 For more information about user-defined properties, see the section <<configuring-solrconfig-xml.adoc#user-defined-properties-in-core-properties,User defined properties in core.properties>>.
 
 See also the section <<Creating and Updating User-Defined Properties>> below for examples of how to use this type of command.
 
+==== Creating and Updating User-Defined Properties
+
+This command sets a user property.
+
+[.dynamic-tabs]
+--
+[example.tab-pane#v1userprop]
+====
+[.tab-label]*V1 API*
+
+[source,bash]
+----
+curl -X POST -H 'Content-type:application/json' -d '{"set-user-property": {"variable_name": "some_value"}}' http://localhost:8983/solr/techproducts/config
+----
+====
+
+[example.tab-pane#v2userprop]
+====
+[.tab-label]*V2 API*
+
+[source,bash]
+----
+curl -X POST -H 'Content-type:application/json' -d '{"set-user-property": {"variable_name": "some_value"}}' http://localhost:8983/api/collections/techproducts/config
+----
+====
+--
+
+Again, we can use the `/config/overlay` endpoint to verify the changes have been made:
+
+[.dynamic-tabs]
+--
+[example.tab-pane#v1useroverlay]
+====
+[.tab-label]*V1 API*
+
+[source,bash]
+----
+curl http://localhost:8983/solr/techproducts/config/overlay?omitHeader=true
+----
+====
+
+[example.tab-pane#v2useroverlay]
+====
+[.tab-label]*V2 API*
+
+[source,bash]
+----
+curl http://localhost:8983/api/collections/techproducts/config/overlay?omitHeader=true
+----
+====
+--
+
+And we would expect to see output like this:
+
+[source,json]
+----
+{"overlay":{
+   "znodeVersion":5,
+   "userProps":{
+     "variable_name": "some_value"}}
+}
+----
+
+To unset the variable, issue a command like this:
+
+[.dynamic-tabs]
+--
+[example.tab-pane#v1unsetuser]
+====
+[.tab-label]*V1 API*
+
+[source,bash]
+----
+curl -X POST -H 'Content-type:application/json' -d '{"unset-user-property": "variable_name"}' http://localhost:8983/solr/techproducts/config
+----
+====
+
+[example.tab-pane#v2unsetuser]
+====
+[.tab-label]*V2 API*
+
+[source,bash]
+----
+curl -X POST -H 'Content-type:application/json' -d '{"unset-user-property": "variable_name"}' http://localhost:8983/api/collections/techproducts/config
+----
+====
+--
+
+=== What about updateRequestProcessorChain?
+
+The Config API does not let you create or edit `updateRequestProcessorChain` elements. However, it is possible to create `updateProcessor` entries and use them by name to create a chain.
+
+For example:
+
+[.dynamic-tabs]
+--
+[example.tab-pane#v1addupdateproc]
+====
+[.tab-label]*V1 API*
+
+[source,bash]
+----
+curl -X POST -H 'Content-type:application/json' -d '{"add-updateprocessor":
+  {"name": "firstFld",
+  "class": "solr.FirstFieldValueUpdateProcessorFactory",
+  "fieldName": "test_s"}
+}' http://localhost:8983/solr/techproducts/config
+----
+====
+
+[example.tab-pane#v2addupdateproc]
+====
+[.tab-label]*V2 API*
+
+[source,bash]
+----
+curl -X POST -H 'Content-type:application/json' -d '{"add-updateprocessor":
+  {"name": "firstFld",
+  "class": "solr.FirstFieldValueUpdateProcessorFactory",
+  "fieldName": "test_s"}
+}' http://localhost:8983/api/collections/techproducts/config
+----
+====
+--
+
+You can use this directly in your request by adding a parameter in the `updateRequestProcessorChain` for the specific update processor called `processor=firstFld`.
+
 == How to Map solrconfig.xml Properties to JSON
 
 By using this API, you will be generating JSON representations of properties defined in `solrconfig.xml`. To understand how properties should be represented with the API, let's take a look at a few examples.
@@ -223,10 +686,10 @@ The same request handler defined with the Config API would look like this:
 ----
 {
   "add-requesthandler":{
-    "name":"/query",
-    "class":"solr.SearchHandler",
+    "name": "/query",
+    "class": "solr.SearchHandler",
     "defaults":{
-      "echoParams":"explicit",
+      "echoParams": "explicit",
       "rows": 10
     }
   }
@@ -249,10 +712,10 @@ And the same searchComponent with the Config API:
 ----
 {
   "add-searchcomponent":{
-    "name":"elevator",
-    "class":"solr.QueryElevationComponent",
-    "queryFieldType":"string",
-    "config-file":"elevate.xml"
+    "name": "elevator",
+    "class": "solr.QueryElevationComponent",
+    "queryFieldType": "string",
+    "config-file": "elevate.xml"
   }
 }
 ----
@@ -262,7 +725,7 @@ Removing the searchComponent with the Config API:
 [source,json]
 ----
 {
-  "delete-searchcomponent":"elevator"
+  "delete-searchcomponent": "elevator"
 }
 ----
 
@@ -354,154 +817,41 @@ Define the same properties with the Config API:
 
 The Config API always allows changing the configuration of any component by name. However, some configurations such as `listener` or `initParams` do not require a name in `solrconfig.xml`. In order to be able to `update` and `delete` of the same item in `configoverlay.json`, the name attribute becomes mandatory.
 
-== Config API Examples
-
-=== Creating and Updating Common Properties
-
-This change sets the `query.filterCache.autowarmCount` to 1000 items and unsets the `query.filterCache.size`.
-
-[source,bash]
-----
-curl http://localhost:8983/solr/techproducts/config -H 'Content-type:application/json' -d'{
-    "set-property" : {"query.filterCache.autowarmCount":1000},
-    "unset-property" :"query.filterCache.size"}'
-----
-
-Using the `/config/overlay` endpoint, you can verify the changes with a request like this:
-
-[source,bash]
-----
-curl http://localhost:8983/solr/gettingstarted/config/overlay?omitHeader=true
-----
-
-And you should get a response like this:
-
-[source,json]
-----
-{
-  "overlay":{
-    "znodeVersion":1,
-    "props":{"query":{"filterCache":{
-          "autowarmCount":1000,
-          "size":25}}}}}
-----
-
-=== Creating and Updating Request Handlers
-
-To create a request handler, we can use the `add-requesthandler` command:
-
-[source,bash]
-----
-curl http://localhost:8983/solr/techproducts/config -H 'Content-type:application/json'  -d '{
-  "add-requesthandler" : {
-    "name": "/mypath",
-    "class":"solr.DumpRequestHandler",
-    "defaults":{ "x":"y" ,"a":"b", "rows":10 },
-    "useParams":"x"
-  }
-}'
-----
-
-Make a call to the new request handler to check if it is registered:
-
-[source,bash]
-----
-curl http://localhost:8983/solr/techproducts/mypath?omitHeader=true
-----
-
-And you should see the following as output:
-
-[source,json]
-----
-{
-  "params":{
-    "indent":"true",
-    "a":"b",
-    "x":"y",
-    "rows":"10"},
-  "context":{
-    "webapp":"/solr",
-    "path":"/mypath",
-    "httpMethod":"GET"}}
-----
-
-To update a request handler, you should use the `update-requesthandler` command:
-
-[source,bash]
-----
-curl http://localhost:8983/solr/techproducts/config -H 'Content-type:application/json'  -d '{
-  "update-requesthandler": {
-    "name": "/mypath",
-    "class":"solr.DumpRequestHandler",
-    "defaults": {"x":"new value for X", "rows":"20"},
-    "useParams":"x"
-  }
-}'
-----
-
-As another example, we'll create another request handler, this time adding the 'terms' component as part of the definition:
-
-[source,bash]
-----
-curl http://localhost:8983/solr/techproducts/config -H 'Content-type:application/json' -d '{
-  "add-requesthandler": {
-    "name": "/myterms",
-    "class":"solr.SearchHandler",
-    "defaults": {"terms":true, "distrib":false},
-    "components": [ "terms" ]
-  }
-}'
-----
-
-=== Creating and Updating User-Defined Properties
 
-This command sets a user property.
+== How the Config API Works
 
-[source,bash]
-----
-curl http://localhost:8983/solr/techproducts/config -H'Content-type:application/json' -d '{
-    "set-user-property" : {"variable_name":"some_value"}}'
-----
+Every core watches the ZooKeeper directory for the configset being used with that core. In standalone mode, however, there is no watch (because ZooKeeper is not running). If there are multiple cores in the same node using the same configset, only one ZooKeeper watch is used.
 
-Again, we can use the `/config/overlay` endpoint to verify the changes have been made:
+For instance, if the configset 'myconf' is used by a core, the node would watch `/configs/myconf`. Every write operation performed through the API would 'touch' the directory and all watchers are notified. Every core would check if the schema file, `solrconfig.xml`, or `configoverlay.json` has been modified by comparing the `znode` versions. If any have been modified, the core is reloaded.
 
-[source,bash]
-----
-curl http://localhost:8983/solr/techproducts/config/overlay?omitHeader=true
-----
+If `params.json` is modified, the params object is just updated without a core reload (see <<request-parameters-api.adoc#request-parameters-api,Request Parameters API>> for more information about `params.json`).
 
-And we would expect to see output like this:
+=== Empty Command
 
-[source,json]
-----
-{"overlay":{
-   "znodeVersion":5,
-   "userProps":{
-     "variable_name":"some_value"}}
-}
-----
+If an empty command is sent to the `/config` endpoint, the watch is triggered on all cores using this configset. For example:
 
-To unset the variable, issue a command like this:
+[.dynamic-tabs]
+--
+[example.tab-pane#v1empty]
+====
+[.tab-label]*V1 API*
 
 [source,bash]
 ----
-curl http://localhost:8983/solr/techproducts/config -H'Content-type:application/json' -d '{"unset-user-property" : "variable_name"}'
+curl -X POST -H 'Content-type:application/json' -d '{}' http://localhost:8983/solr/techproducts/config
 ----
+====
 
-== How the Config API Works
-
-Every core watches the ZooKeeper directory for the configset being used with that core. In standalone mode, however, there is no watch (because ZooKeeper is not running). If there are multiple cores in the same node using the same configset, only one ZooKeeper watch is used. For instance, if the configset 'myconf' is used by a core, the node would watch `/configs/myconf`. Every write operation performed through the API would 'touch' the directory (sets an empty byte[] to trigger watches) and all watchers are notified. Every core would check if the Schema file, `solrconfig.xml` or `configoverlay.json` is modified by comparing the `znode` versions and if modified, the core is reloaded.
-
-If `params.json` is modified, the params object is just updated without a core reload (see the section <<request-parameters-api.adoc#request-parameters-api,Request Parameters API>> for more information about `params.json`).
-
-=== Empty Command
-
-If an empty command is sent to the `/config` endpoint, the watch is triggered on all cores using this configset. For example:
+[example.tab-pane#v2empty]
+====
+[.tab-label]*V2 API*
 
 [source,bash]
 ----
-curl http://localhost:8983/solr/techproducts/config -H'Content-type:application/json' -d '{}'
+curl -X POST -H 'Content-type:application/json' -d '{}' http://localhost:8983/api/collections/techproducts/config
 ----
+====
+--
 
 Directly editing any files without 'touching' the directory *will not* make it visible to all nodes.
 
@@ -513,4 +863,4 @@ Any component can register a listener using:
 
 `SolrCore#addConfListener(Runnable listener)`
 
-to get notified for config changes. This is not very useful if the files modified result in core reloads (i.e., `configoverlay.xml` or Schema). Components can use this to reload the files they are interested in.
+to get notified for config changes. This is not very useful if the files modified result in core reloads (i.e., `configoverlay.json` or the schema). Components can use this to reload the files they are interested in.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/d08e62d5/solr/solrj/src/resources/apispec/cluster.configs.Commands.json
----------------------------------------------------------------------
diff --git a/solr/solrj/src/resources/apispec/cluster.configs.Commands.json b/solr/solrj/src/resources/apispec/cluster.configs.Commands.json
index 065f175..3792686 100644
--- a/solr/solrj/src/resources/apispec/cluster.configs.Commands.json
+++ b/solr/solrj/src/resources/apispec/cluster.configs.Commands.json
@@ -1,6 +1,6 @@
 {
-  "documentation": "https://lucene.apache.org/solr/guide/configsets-api.html",
-  "description": "Create ConfigSets.",
+  "documentation": "https://lucene.apache.org/solr/guide/configsets-api.html#configsets-create",
+  "description": "Create configsets.",
   "methods": [
     "POST"
   ],
@@ -11,20 +11,20 @@
   "commands": {
     "create": {
       "type" :"object",
-      "description": "Create a ConfigSet, based on another ConfigSet already in ZooKeeper.",
+      "description": "Create a configset, based on another configset already in ZooKeeper.",
       "documentation": "https://lucene.apache.org/solr/guide/configsets-api.html#configsets-create",
       "properties": {
         "name" :{
           "type" :"string",
-          "description" : "The name of the ConfigSet to be created."
+          "description" : "The name of the configset to be created."
         },
         "baseConfigSet":{
           "type" : "string",
-          "description" :"The existing ConfigSet to copy as the basis for the new one."
+          "description" :"The existing configset to copy as the basis for the new one."
         },
         "properties" : {
           "type":"object",
-          "description": "Additional key-value pairs, in the form of 'ConfigSetProp.<key>=<value>', as needed. These properties will override the same properties in the base ConfigSet.",
+          "description": "Additional key-value pairs, in the form of 'ConfigSetProp.<key>=<value>', as needed. These properties will override the same properties in the base configset.",
           "additionalProperties" : true
         }
       },

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/d08e62d5/solr/solrj/src/resources/apispec/cluster.configs.delete.json
----------------------------------------------------------------------
diff --git a/solr/solrj/src/resources/apispec/cluster.configs.delete.json b/solr/solrj/src/resources/apispec/cluster.configs.delete.json
index a03ba4b..20985b8 100644
--- a/solr/solrj/src/resources/apispec/cluster.configs.delete.json
+++ b/solr/solrj/src/resources/apispec/cluster.configs.delete.json
@@ -1,6 +1,6 @@
 {
   "documentation": "https://lucene.apache.org/solr/guide/configsets-api.html#configsets-delete",
-  "description": "Delete ConfigSets. The name of the ConfigSet to delete must be provided as a path parameter.",
+  "description": "Delete configsets. The name of the configset to delete must be provided as a path parameter.",
   "methods": [
     "DELETE"
   ],

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/d08e62d5/solr/solrj/src/resources/apispec/cluster.configs.json
----------------------------------------------------------------------
diff --git a/solr/solrj/src/resources/apispec/cluster.configs.json b/solr/solrj/src/resources/apispec/cluster.configs.json
index 55fc8b6..45d91d9 100644
--- a/solr/solrj/src/resources/apispec/cluster.configs.json
+++ b/solr/solrj/src/resources/apispec/cluster.configs.json
@@ -1,6 +1,6 @@
 {
   "documentation": "https://lucene.apache.org/solr/guide/configsets-api.html#configsets-list",
-  "description": "List all ConfigSets in the cluster.",
+  "description": "List all configsets in the cluster.",
   "methods": [
     "GET"
   ],


[07/40] lucene-solr:jira/solr-11833: SOLR-12187: fix precommit

Posted by ab...@apache.org.
SOLR-12187: fix precommit


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/507c4395
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/507c4395
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/507c4395

Branch: refs/heads/jira/solr-11833
Commit: 507c439558d3824a9072ff35ea6eaffae086a89e
Parents: 1d24414
Author: Mikhail Khludnev <mk...@apache.org>
Authored: Wed Apr 18 12:43:25 2018 +0300
Committer: Mikhail Khludnev <mk...@apache.org>
Committed: Wed Apr 18 12:43:25 2018 +0300

----------------------------------------------------------------------
 .../core/src/test/org/apache/solr/cloud/DeleteReplicaTest.java | 1 -
 solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java  | 6 ------
 solr/core/src/test/org/apache/solr/cloud/MoveReplicaTest.java  | 3 ---
 .../solr/common/cloud/TestCloudCollectionsListeners.java       | 4 ----
 4 files changed, 14 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/507c4395/solr/core/src/test/org/apache/solr/cloud/DeleteReplicaTest.java
----------------------------------------------------------------------
diff --git a/solr/core/src/test/org/apache/solr/cloud/DeleteReplicaTest.java b/solr/core/src/test/org/apache/solr/cloud/DeleteReplicaTest.java
index 08e9a37..c727fb2 100644
--- a/solr/core/src/test/org/apache/solr/cloud/DeleteReplicaTest.java
+++ b/solr/core/src/test/org/apache/solr/cloud/DeleteReplicaTest.java
@@ -28,7 +28,6 @@ import java.util.concurrent.TimeUnit;
 import java.util.concurrent.TimeoutException;
 import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.atomic.AtomicInteger;
-import java.util.function.Supplier;
 
 import org.apache.solr.client.solrj.embedded.JettySolrRunner;
 import org.apache.solr.client.solrj.request.CollectionAdminRequest;

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/507c4395/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
----------------------------------------------------------------------
diff --git a/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java b/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
index 013434c..b710c8a 100644
--- a/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
+++ b/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
@@ -22,23 +22,17 @@ import java.util.Arrays;
 import java.util.Collections;
 import java.util.List;
 import java.util.Properties;
-import java.util.concurrent.TimeUnit;
-import java.util.stream.Collectors;
 
 import org.apache.solr.client.solrj.SolrClient;
 import org.apache.solr.client.solrj.SolrServerException;
 import org.apache.solr.client.solrj.embedded.JettySolrRunner;
 import org.apache.solr.client.solrj.request.CollectionAdminRequest;
-import org.apache.solr.cloud.overseer.OverseerAction;
 import org.apache.solr.common.SolrException;
 import org.apache.solr.common.SolrInputDocument;
 import org.apache.solr.common.cloud.ClusterState;
 import org.apache.solr.common.cloud.Replica;
 import org.apache.solr.common.cloud.Replica.State;
-import org.apache.solr.common.cloud.ZkNodeProps;
-import org.apache.solr.common.cloud.ZkStateReader;
 import org.apache.solr.common.params.ModifiableSolrParams;
-import org.apache.solr.common.util.Utils;
 import org.apache.zookeeper.KeeperException;
 import org.apache.zookeeper.KeeperException.NoNodeException;
 import org.junit.Ignore;

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/507c4395/solr/core/src/test/org/apache/solr/cloud/MoveReplicaTest.java
----------------------------------------------------------------------
diff --git a/solr/core/src/test/org/apache/solr/cloud/MoveReplicaTest.java b/solr/core/src/test/org/apache/solr/cloud/MoveReplicaTest.java
index 652a2e2..8d21bee 100644
--- a/solr/core/src/test/org/apache/solr/cloud/MoveReplicaTest.java
+++ b/solr/core/src/test/org/apache/solr/cloud/MoveReplicaTest.java
@@ -22,7 +22,6 @@ import java.lang.invoke.MethodHandles;
 import java.util.ArrayList;
 import java.util.Collection;
 import java.util.Collections;
-import java.util.HashSet;
 import java.util.Iterator;
 import java.util.List;
 import java.util.Map;
@@ -39,11 +38,9 @@ import org.apache.solr.client.solrj.request.CoreAdminRequest;
 import org.apache.solr.client.solrj.response.CoreAdminResponse;
 import org.apache.solr.client.solrj.response.RequestStatusState;
 import org.apache.solr.common.SolrInputDocument;
-import org.apache.solr.common.cloud.CollectionStateWatcher;
 import org.apache.solr.common.cloud.DocCollection;
 import org.apache.solr.common.cloud.Replica;
 import org.apache.solr.common.cloud.Slice;
-import org.apache.solr.common.cloud.ZkStateReaderAccessor;
 import org.apache.solr.common.params.CollectionParams;
 import org.apache.solr.common.params.ModifiableSolrParams;
 import org.apache.solr.common.params.SolrParams;

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/507c4395/solr/solrj/src/test/org/apache/solr/common/cloud/TestCloudCollectionsListeners.java
----------------------------------------------------------------------
diff --git a/solr/solrj/src/test/org/apache/solr/common/cloud/TestCloudCollectionsListeners.java b/solr/solrj/src/test/org/apache/solr/common/cloud/TestCloudCollectionsListeners.java
index 6d08180..60cce59 100644
--- a/solr/solrj/src/test/org/apache/solr/common/cloud/TestCloudCollectionsListeners.java
+++ b/solr/solrj/src/test/org/apache/solr/common/cloud/TestCloudCollectionsListeners.java
@@ -21,12 +21,8 @@ import java.lang.invoke.MethodHandles;
 import java.util.HashMap;
 import java.util.Map;
 import java.util.Set;
-import java.util.concurrent.Callable;
-import java.util.concurrent.ExecutionException;
 import java.util.concurrent.ExecutorService;
-import java.util.concurrent.Future;
 import java.util.concurrent.TimeUnit;
-import java.util.concurrent.TimeoutException;
 
 import org.apache.solr.client.solrj.impl.CloudSolrClient;
 import org.apache.solr.client.solrj.request.CollectionAdminRequest;


[05/40] lucene-solr:jira/solr-11833: SOLR-11924: Updates solr/CHANGES.txt for v7.4

Posted by ab...@apache.org.
SOLR-11924: Updates solr/CHANGES.txt for v7.4
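
The CHANGES entry in the diff below describes the new CloudCollectionsListener. A rough sketch of registering one, assuming the listener shape and the `ZkStateReader.registerCloudCollectionsListener` method implied by that entry (signatures are assumptions, not verified against the patch):

[source,java]
----
import java.util.Set;
import org.apache.solr.common.cloud.ZkStateReader;

// Sketch: react whenever the set of active collections in the cluster changes.
public class CollectionsWatchExample {
  public static void watch(ZkStateReader zkStateReader) {
    // Assumed listener shape: old and new sets of active collection names.
    zkStateReader.registerCloudCollectionsListener((Set<String> oldCollections, Set<String> newCollections) ->
        System.out.println("collections changed: " + oldCollections + " -> " + newCollections));
  }
}
----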


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/8c60be44
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/8c60be44
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/8c60be44

Branch: refs/heads/jira/solr-11833
Commit: 8c60be448921f3bb59a1d6de1b3655a1dc1d75f0
Parents: ae0190b
Author: Dennis Gove <dp...@gmail.com>
Authored: Tue Apr 17 18:58:42 2018 -0400
Committer: Dennis Gove <dp...@gmail.com>
Committed: Tue Apr 17 18:58:42 2018 -0400

----------------------------------------------------------------------
 solr/CHANGES.txt | 3 +++
 1 file changed, 3 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8c60be44/solr/CHANGES.txt
----------------------------------------------------------------------
diff --git a/solr/CHANGES.txt b/solr/CHANGES.txt
index 1107c56..c1efc85 100644
--- a/solr/CHANGES.txt
+++ b/solr/CHANGES.txt
@@ -99,6 +99,9 @@ New Features
 * SOLR-11913: SolrJ SolrParams now implements Iterable<Map.Entry<String, String[]>> and also has a stream() method
   using it for convenience. (David Smiley, Tapan Vaishnav)
 
+* SOLR-11924: Added the ability to listen to changes in the set of active collections in a cloud
+  in the ZkStateReader, through the CloudCollectionsListener. (Houston Putman, Dennis Gove)
+
 Bug Fixes
 ----------------------
 


[30/40] lucene-solr:jira/solr-11833: SOLR-12256: AliasesManager.update() should call ZooKeeper.sync() * SetAliasPropCmd now calls AliasesManager.update() first. * SetAliasPropCmd now more efficiently updates multiple values. * Tests: Commented out BadApple annotations on alias related stuff.

Posted by ab...@apache.org.
SOLR-12256: AliasesManager.update() should call ZooKeeper.sync()
* SetAliasPropCmd now calls AliasesManager.update() first.
* SetAliasPropCmd now more efficiently updates multiple values.
* Tests: Commented out BadApple annotations on alias related stuff.
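
A minimal sketch of the sync-before-read pattern this fix applies, shown here with the plain ZooKeeper client rather than Solr's aliases code (class and method names are illustrative):

[source,java]
----
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

class SyncThenRead {

  // sync() asks this server to catch up with the ZooKeeper leader; because requests on a
  // single session are processed in order, the getData() issued afterwards observes at
  // least that state instead of a possibly stale local snapshot.
  static byte[] readLatest(ZooKeeper zk, String path) throws KeeperException, InterruptedException {
    zk.sync(path, null, null); // async; no callback is needed before a same-session read
    Stat stat = new Stat();
    return zk.getData(path, false, stat);
  }
}
----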


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/8f296d0c
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/8f296d0c
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/8f296d0c

Branch: refs/heads/jira/solr-11833
Commit: 8f296d0ccf82174f9c612920ce25b928196a1fa8
Parents: 22c4b9c
Author: David Smiley <ds...@apache.org>
Authored: Fri Apr 20 16:22:16 2018 -0400
Committer: David Smiley <ds...@apache.org>
Committed: Fri Apr 20 16:22:16 2018 -0400

----------------------------------------------------------------------
 solr/CHANGES.txt                                |  2 +
 .../cloud/api/collections/SetAliasPropCmd.java  | 45 +++++++++++---------
 .../apache/solr/cloud/AliasIntegrationTest.java | 33 +++++++-------
 .../solr/cloud/CreateRoutedAliasTest.java       | 10 +++--
 .../TimeRoutedAliasUpdateProcessorTest.java     | 31 ++++++++------
 .../apache/solr/common/cloud/ZkStateReader.java | 22 ++++++----
 6 files changed, 78 insertions(+), 65 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8f296d0c/solr/CHANGES.txt
----------------------------------------------------------------------
diff --git a/solr/CHANGES.txt b/solr/CHANGES.txt
index ed36d79..efa6000 100644
--- a/solr/CHANGES.txt
+++ b/solr/CHANGES.txt
@@ -259,6 +259,8 @@ Bug Fixes
 
 * SOLR-12204: Upgrade commons-fileupload dependency to 1.3.3 to address CVE-2016-1000031.  (Steve Rowe)
 
+* SOLR-12256: Fixed some eventual-consistency issues with collection aliases by using ZooKeeper.sync(). (David Smiley)
+
 ==================  7.3.0 ==================
 
 Consult the LUCENE_CHANGES.txt file for additional, low level, changes in this release.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8f296d0c/solr/core/src/java/org/apache/solr/cloud/api/collections/SetAliasPropCmd.java
----------------------------------------------------------------------
diff --git a/solr/core/src/java/org/apache/solr/cloud/api/collections/SetAliasPropCmd.java b/solr/core/src/java/org/apache/solr/cloud/api/collections/SetAliasPropCmd.java
index 618b72d..fdee1d1 100644
--- a/solr/core/src/java/org/apache/solr/cloud/api/collections/SetAliasPropCmd.java
+++ b/solr/core/src/java/org/apache/solr/cloud/api/collections/SetAliasPropCmd.java
@@ -18,6 +18,7 @@
 package org.apache.solr.cloud.api.collections;
 
 import java.lang.invoke.MethodHandles;
+import java.util.LinkedHashMap;
 import java.util.Locale;
 import java.util.Map;
 
@@ -29,7 +30,7 @@ import org.apache.solr.common.util.NamedList;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import static org.apache.solr.cloud.api.collections.OverseerCollectionMessageHandler.*;
+import static org.apache.solr.cloud.api.collections.OverseerCollectionMessageHandler.Cmd;
 import static org.apache.solr.common.SolrException.ErrorCode.BAD_REQUEST;
 import static org.apache.solr.common.params.CommonParams.NAME;
 
@@ -49,33 +50,35 @@ public class SetAliasPropCmd implements Cmd {
   public void call(ClusterState state, ZkNodeProps message, NamedList results) throws Exception {
     String aliasName = message.getStr(NAME);
 
+    final ZkStateReader.AliasesManager aliasesManager = messageHandler.zkStateReader.aliasesManager;
 
-    ZkStateReader zkStateReader = messageHandler.zkStateReader;
-    if (zkStateReader.getAliases().getCollectionAliasMap().get(aliasName) == null) {
+    // Ensure we see the alias.  This may be redundant but SetAliasPropCmd isn't expected to be called very frequently
+    aliasesManager.update();
+
+    if (aliasesManager.getAliases().getCollectionAliasMap().get(aliasName) == null) {
       // nicer than letting aliases object throw later on...
       throw new SolrException(BAD_REQUEST,
           String.format(Locale.ROOT,  "Can't modify non-existent alias %s", aliasName));
     }
 
     @SuppressWarnings("unchecked")
-    Map<String, String> properties = (Map<String, String>) message.get(PROPERTIES);
-
-    zkStateReader.aliasesManager.applyModificationAndExportToZk(aliases1 -> {
-      for (Map.Entry<String, String> entry : properties.entrySet()) {
-        String key = entry.getKey();
-        if ("".equals(key.trim())) {
-          throw new SolrException(BAD_REQUEST, "property keys must not be pure whitespace");
-        }
-        if (!key.equals(key.trim())) {
-          throw new SolrException(BAD_REQUEST, "property keys should not begin or end with whitespace");
-        }
-        String value = entry.getValue();
-        if ("".equals(value)) {
-          value = null;
-        }
-        aliases1 = aliases1.cloneWithCollectionAliasProperties(aliasName, key, value);
+    Map<String, String> properties = new LinkedHashMap<>((Map<String, String>) message.get(PROPERTIES));
+
+    // check & cleanup properties.  It's a mutable copy.
+    for (Map.Entry<String, String> entry : properties.entrySet()) {
+      String key = entry.getKey();
+      if ("".equals(key.trim())) {
+        throw new SolrException(BAD_REQUEST, "property keys must not be pure whitespace");
+      }
+      if (!key.equals(key.trim())) {
+        throw new SolrException(BAD_REQUEST, "property keys should not begin or end with whitespace");
       }
-      return aliases1;
-    });
+      String value = entry.getValue();
+      if ("".equals(value)) {
+        entry.setValue(null);
+      }
+    }
+
+    aliasesManager.applyModificationAndExportToZk(aliases1 -> aliases1.cloneWithCollectionAliasProperties(aliasName, properties));
   }
 }

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8f296d0c/solr/core/src/test/org/apache/solr/cloud/AliasIntegrationTest.java
----------------------------------------------------------------------
diff --git a/solr/core/src/test/org/apache/solr/cloud/AliasIntegrationTest.java b/solr/core/src/test/org/apache/solr/cloud/AliasIntegrationTest.java
index 9858ea7..2a82894 100644
--- a/solr/core/src/test/org/apache/solr/cloud/AliasIntegrationTest.java
+++ b/solr/core/src/test/org/apache/solr/cloud/AliasIntegrationTest.java
@@ -93,7 +93,7 @@ public class AliasIntegrationTest extends SolrCloudTestCase {
   }
 
   @Test
-  @BadApple(bugUrl="https://issues.apache.org/jira/browse/SOLR-12028")
+  //@BadApple(bugUrl="https://issues.apache.org/jira/browse/SOLR-12028")
   public void testProperties() throws Exception {
     CollectionAdminRequest.createCollection("collection1meta", "conf", 2, 1).process(cluster.getSolrClient());
     CollectionAdminRequest.createCollection("collection2meta", "conf", 1, 1).process(cluster.getSolrClient());
@@ -118,16 +118,16 @@ public class AliasIntegrationTest extends SolrCloudTestCase {
     assertTrue(((Map<String,Map<String,?>>)Utils.fromJSON(rawBytes)).get("collection").get("meta1") instanceof String);
 
     // set properties
-    UnaryOperator<Aliases> op5 = a -> a.cloneWithCollectionAliasProperties("meta1", "foo", "bar");
-    aliasesManager.applyModificationAndExportToZk(op5);
+    aliasesManager.applyModificationAndExportToZk(a1 ->
+        a1.cloneWithCollectionAliasProperties("meta1", "foo", "bar"));
     Map<String, String> meta = zkStateReader.getAliases().getCollectionAliasProperties("meta1");
     assertNotNull(meta);
     assertTrue(meta.containsKey("foo"));
     assertEquals("bar", meta.get("foo"));
 
     // set more properties
-    UnaryOperator<Aliases> op4 = a -> a.cloneWithCollectionAliasProperties("meta1", "foobar", "bazbam");
-    aliasesManager.applyModificationAndExportToZk(op4);
+    aliasesManager.applyModificationAndExportToZk( a1 ->
+        a1.cloneWithCollectionAliasProperties("meta1", "foobar", "bazbam"));
     meta = zkStateReader.getAliases().getCollectionAliasProperties("meta1");
     assertNotNull(meta);
 
@@ -140,8 +140,8 @@ public class AliasIntegrationTest extends SolrCloudTestCase {
     assertEquals("bazbam", meta.get("foobar"));
 
     // remove properties
-    UnaryOperator<Aliases> op3 = a -> a.cloneWithCollectionAliasProperties("meta1", "foo", null);
-    aliasesManager.applyModificationAndExportToZk(op3);
+    aliasesManager.applyModificationAndExportToZk(a1 ->
+        a1.cloneWithCollectionAliasProperties("meta1", "foo", null));
     meta = zkStateReader.getAliases().getCollectionAliasProperties("meta1");
     assertNotNull(meta);
 
@@ -153,18 +153,17 @@ public class AliasIntegrationTest extends SolrCloudTestCase {
     assertEquals("bazbam", meta.get("foobar"));
 
     // removal of non existent key should succeed.
-    UnaryOperator<Aliases> op2 = a -> a.cloneWithCollectionAliasProperties("meta1", "foo", null);
-    aliasesManager.applyModificationAndExportToZk(op2);
+    aliasesManager.applyModificationAndExportToZk(a2 ->
+        a2.cloneWithCollectionAliasProperties("meta1", "foo", null));
 
     // chained invocations
-    UnaryOperator<Aliases> op1 = a ->
-        a.cloneWithCollectionAliasProperties("meta1", "foo2", "bazbam")
-        .cloneWithCollectionAliasProperties("meta1", "foo3", "bazbam2");
-    aliasesManager.applyModificationAndExportToZk(op1);
+    aliasesManager.applyModificationAndExportToZk(a1 ->
+        a1.cloneWithCollectionAliasProperties("meta1", "foo2", "bazbam")
+        .cloneWithCollectionAliasProperties("meta1", "foo3", "bazbam2"));
 
     // some other independent update (not overwritten)
-    UnaryOperator<Aliases> op = a -> a.cloneWithCollectionAlias("meta3", "collection1meta,collection2meta");
-    aliasesManager.applyModificationAndExportToZk(op);
+    aliasesManager.applyModificationAndExportToZk(a1 ->
+        a1.cloneWithCollectionAlias("meta3", "collection1meta,collection2meta"));
 
     // competing went through
     assertEquals("collection1meta,collection2meta", zkStateReader.getAliases().getCollectionAliasMap().get("meta3"));
@@ -240,7 +239,7 @@ public class AliasIntegrationTest extends SolrCloudTestCase {
   }
 
   @Test
-  @BadApple(bugUrl="https://issues.apache.org/jira/browse/SOLR-12028") // 09-Apr-2018
+  //@BadApple(bugUrl="https://issues.apache.org/jira/browse/SOLR-12028") // 09-Apr-2018
   public void testModifyPropertiesV1() throws Exception {
     // note we don't use TZ in this test, thus it's UTC
     final String aliasName = getTestName();
@@ -256,7 +255,7 @@ public class AliasIntegrationTest extends SolrCloudTestCase {
   }
 
   @Test
-  @BadApple(bugUrl="https://issues.apache.org/jira/browse/SOLR-12028")
+  //@BadApple(bugUrl="https://issues.apache.org/jira/browse/SOLR-12028")
   public void testModifyPropertiesCAR() throws Exception {
     // note we don't use TZ in this test, thus it's UTC
     final String aliasName = getTestName();

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8f296d0c/solr/core/src/test/org/apache/solr/cloud/CreateRoutedAliasTest.java
----------------------------------------------------------------------
diff --git a/solr/core/src/test/org/apache/solr/cloud/CreateRoutedAliasTest.java b/solr/core/src/test/org/apache/solr/cloud/CreateRoutedAliasTest.java
index 92135d6..4b81445 100644
--- a/solr/core/src/test/org/apache/solr/cloud/CreateRoutedAliasTest.java
+++ b/solr/core/src/test/org/apache/solr/cloud/CreateRoutedAliasTest.java
@@ -99,7 +99,7 @@ public class CreateRoutedAliasTest extends SolrCloudTestCase {
   // This is a fairly complete test where we set many options and see that it both affected the created
   //  collection and that the alias metadata was saved accordingly
   @Test
-  @BadApple(bugUrl="https://issues.apache.org/jira/browse/SOLR-12028") // 09-Apr-2018
+  //@BadApple(bugUrl="https://issues.apache.org/jira/browse/SOLR-12028") // 09-Apr-2018
   public void testV2() throws Exception {
     // note we don't use TZ in this test, thus it's UTC
     final String aliasName = getTestName();
@@ -181,7 +181,7 @@ public class CreateRoutedAliasTest extends SolrCloudTestCase {
   }
 
   @Test
-  @BadApple(bugUrl="https://issues.apache.org/jira/browse/SOLR-12028") // 09-Apr-2018
+  //@BadApple(bugUrl="https://issues.apache.org/jira/browse/SOLR-12028") // 09-Apr-2018
   public void testV1() throws Exception {
     final String aliasName = getTestName();
     final String baseUrl = cluster.getRandomJetty(random()).getBaseUrl().toString();
@@ -225,7 +225,7 @@ public class CreateRoutedAliasTest extends SolrCloudTestCase {
 
   // TZ should not affect the first collection name if absolute date given for start
   @Test
-  @BadApple(bugUrl="https://issues.apache.org/jira/browse/SOLR-12028") // 09-Apr-2018
+  //@BadApple(bugUrl="https://issues.apache.org/jira/browse/SOLR-12028") // 09-Apr-2018
   public void testTimezoneAbsoluteDate() throws Exception {
     final String aliasName = getTestName();
     try (SolrClient client = getCloudSolrClient(cluster)) {
@@ -244,7 +244,7 @@ public class CreateRoutedAliasTest extends SolrCloudTestCase {
   }
 
   @Test
-  @BadApple(bugUrl="https://issues.apache.org/jira/browse/SOLR-12028") // 09-Apr-2018
+  //@BadApple(bugUrl="https://issues.apache.org/jira/browse/SOLR-12028") // 09-Apr-2018
   public void testCollectionNamesMustBeAbsent() throws Exception {
     CollectionAdminRequest.createCollection("collection1meta", "_default", 2, 1).process(cluster.getSolrClient());
     CollectionAdminRequest.createCollection("collection2meta", "_default", 1, 1).process(cluster.getSolrClient());
@@ -330,6 +330,7 @@ public class CreateRoutedAliasTest extends SolrCloudTestCase {
         "&create-collection.numShards=1");
     assertFailure(get, "Unit not recognized");
   }
+
   @Test
   public void testNegativeFutureFails() throws Exception {
     final String aliasName = getTestName();
@@ -346,6 +347,7 @@ public class CreateRoutedAliasTest extends SolrCloudTestCase {
         "&create-collection.numShards=1");
     assertFailure(get, "must be >= 0");
   }
+
   @Test
   public void testUnParseableFutureFails() throws Exception {
     final String aliasName = "testAlias";

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8f296d0c/solr/core/src/test/org/apache/solr/update/processor/TimeRoutedAliasUpdateProcessorTest.java
----------------------------------------------------------------------
diff --git a/solr/core/src/test/org/apache/solr/update/processor/TimeRoutedAliasUpdateProcessorTest.java b/solr/core/src/test/org/apache/solr/update/processor/TimeRoutedAliasUpdateProcessorTest.java
index ef8e1a5..cc7b7ce 100644
--- a/solr/core/src/test/org/apache/solr/update/processor/TimeRoutedAliasUpdateProcessorTest.java
+++ b/solr/core/src/test/org/apache/solr/update/processor/TimeRoutedAliasUpdateProcessorTest.java
@@ -37,7 +37,6 @@ import org.apache.solr.client.solrj.impl.CloudSolrClient;
 import org.apache.solr.client.solrj.request.CollectionAdminRequest;
 import org.apache.solr.client.solrj.request.ConfigSetAdminRequest;
 import org.apache.solr.client.solrj.request.V2Request;
-import org.apache.solr.client.solrj.response.ConfigSetAdminResponse;
 import org.apache.solr.client.solrj.response.FieldStatsInfo;
 import org.apache.solr.client.solrj.response.QueryResponse;
 import org.apache.solr.client.solrj.response.UpdateResponse;
@@ -71,6 +70,11 @@ public class TimeRoutedAliasUpdateProcessorTest extends SolrCloudTestCase {
   public static void setupCluster() throws Exception {
     configureCluster(2).configure();
     solrClient = getCloudSolrClient(cluster);
+    //log this to help debug potential causes of problems
+    System.out.println("SolrClient: " + solrClient);
+    if (solrClient instanceof CloudSolrClient) {
+      System.out.println(((CloudSolrClient)solrClient).getClusterStateProvider());
+    }
   }
 
   @AfterClass
@@ -85,16 +89,17 @@ public class TimeRoutedAliasUpdateProcessorTest extends SolrCloudTestCase {
     // Then we create a collection with the name of the eventual config.
     // We configure it, and ultimately delete the collection, leaving a modified config-set behind.
     // Then when we create the "real" collections referencing this modified config-set.
-    final ConfigSetAdminRequest.Create adminRequest = new ConfigSetAdminRequest.Create();
-        adminRequest.setConfigSetName(configName);
-        adminRequest.setBaseConfigSetName("_default");
-        ConfigSetAdminResponse adminResponse = adminRequest.process(solrClient);
-        assertEquals(adminResponse.getStatus(), 0);
+    assertEquals(0, new ConfigSetAdminRequest.Create()
+        .setConfigSetName(configName)
+        .setBaseConfigSetName("_default")
+        .process(solrClient).getStatus());
 
-    CollectionAdminRequest.createCollection(configName, configName,1, 1).process(solrClient);
-    // manipulate the config...
+    CollectionAdminRequest.createCollection(configName, configName, 1, 1).process(solrClient);
 
-        String conf = "{" +
+    // manipulate the config...
+    checkNoError(solrClient.request(new V2Request.Builder("/collections/" + configName + "/config")
+        .withMethod(SolrRequest.METHOD.POST)
+        .withPayload("{" +
             "  'set-user-property' : {'update.autoCreateFields':false}," + // no data driven
             "  'add-updateprocessor' : {" +
             "    'name':'tolerant', 'class':'solr.TolerantUpdateProcessorFactory'" +
@@ -103,10 +108,8 @@ public class TimeRoutedAliasUpdateProcessorTest extends SolrCloudTestCase {
             "    'name':'inc', 'class':'" + IncrementURPFactory.class.getName() + "'," +
             "    'fieldName':'" + intField + "'" +
             "  }," +
-            "}";
-    checkNoError(solrClient.request(new V2Request.Builder("/collections/" + configName + "/config")
-        .withMethod(SolrRequest.METHOD.POST)
-        .withPayload(conf).build()));    // only sometimes test with "tolerant" URP
+            "}").build()));
+    // only sometimes test with "tolerant" URP:
     final String urpNames = "inc" + (random().nextBoolean() ? ",tolerant" : "");
     checkNoError(solrClient.request(new V2Request.Builder("/collections/" + configName + "/config/params")
         .withMethod(SolrRequest.METHOD.POST)
@@ -115,8 +118,8 @@ public class TimeRoutedAliasUpdateProcessorTest extends SolrCloudTestCase {
             "    '_UPDATE' : {'processor':'" + urpNames + "'}" +
             "  }" +
             "}").build()));
-    CollectionAdminRequest.deleteCollection(configName).process(solrClient);
 
+    CollectionAdminRequest.deleteCollection(configName).process(solrClient);
     assertTrue(
         new ConfigSetAdminRequest.List().process(solrClient).getConfigSets()
             .contains(configName)

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8f296d0c/solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java
----------------------------------------------------------------------
diff --git a/solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java b/solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java
index cfae849..9f1ddc6 100644
--- a/solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java
+++ b/solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java
@@ -16,14 +16,6 @@
  */
 package org.apache.solr.common.cloud;
 
-import static java.util.Arrays.asList;
-import static java.util.Collections.EMPTY_MAP;
-import static java.util.Collections.emptyMap;
-import static java.util.Collections.emptySet;
-import static java.util.Collections.emptySortedSet;
-import static java.util.Collections.unmodifiableSet;
-import static org.apache.solr.common.util.Utils.fromJSON;
-
 import java.io.Closeable;
 import java.lang.invoke.MethodHandles;
 import java.util.ArrayList;
@@ -51,6 +43,7 @@ import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.atomic.AtomicReference;
 import java.util.function.UnaryOperator;
 import java.util.stream.Collectors;
+
 import org.apache.solr.client.solrj.cloud.autoscaling.AutoScalingConfig;
 import org.apache.solr.common.Callable;
 import org.apache.solr.common.SolrException;
@@ -69,6 +62,14 @@ import org.apache.zookeeper.data.Stat;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import static java.util.Arrays.asList;
+import static java.util.Collections.EMPTY_MAP;
+import static java.util.Collections.emptyMap;
+import static java.util.Collections.emptySet;
+import static java.util.Collections.emptySortedSet;
+import static java.util.Collections.unmodifiableSet;
+import static org.apache.solr.common.util.Utils.fromJSON;
+
 public class ZkStateReader implements Closeable {
   public static final int STATE_UPDATE_DELAY = Integer.getInteger("solr.OverseerStateUpdateDelay", 2000);  // delay between cloud state updates
   private static final Logger LOG = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
@@ -1705,7 +1706,7 @@ public class ZkStateReader implements Closeable {
             LOG.debug(e.toString(), e);
             LOG.warn("Couldn't save aliases due to race with another modification; will update and retry until timeout");
             // considered a backoff here, but we really do want to compete strongly since the normal case is
-            // that we will do one update and succeed. This is left as a hot loop for 5 tries intentionally.
+            // that we will do one update and succeed. This is left as a hot loop for limited tries intentionally.
             // More failures than that here probably indicate a bug or a very strange high write frequency usage for
             // aliases.json, timeouts mean zk is being very slow to respond, or this node is being crushed
             // by other processing and just can't find any cpu cycles at all.
@@ -1733,6 +1734,9 @@ public class ZkStateReader implements Closeable {
      * @return true if an update was performed
      */
     public boolean update() throws KeeperException, InterruptedException {
+      LOG.debug("Checking ZK for most up to date Aliases " + ALIASES);
+      // Call sync() first to ensure the subsequent read (getData) is up to date.
+      zkClient.getSolrZooKeeper().sync(ALIASES, null, null);
       Stat stat = new Stat();
       final byte[] data = zkClient.getData(ALIASES, null, stat, true);
       return setIfNewer(Aliases.fromJSON(data, stat.getVersion()));


[35/40] lucene-solr:jira/solr-11833: LUCENE-8260: Extract ReaderPool from IndexWriter

Posted by ab...@apache.org.
LUCENE-8260: Extract ReaderPool from IndexWriter

ReaderPool plays a central role in how IndexWriter pools NRT readers
and makes sure buffered deletes and updates are written to disk. This class
used to be a non-static inner class accessing many aspects of the
IndexWriter itself, including its locks. This change moves the class outside of IW and
defines its responsibility in a clear way with respect to locks etc. Now
IndexWriter doesn't need to share ReaderPool anymore and reacts to writes done
inside the pool by checkpointing internally. This also removes acquiring the IW
lock inside the reader pool, which made reasoning about concurrency difficult.

This change also adds javadocs and dedicated tests for the ReaderPool class.
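
As a rough illustration of the pooling being refactored here: reader pooling is switched on the first time a near-real-time reader is opened from the writer. A minimal sketch (paths and field names are illustrative):

[source,java]
----
import java.nio.file.Paths;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class NrtReaderExample {
  public static void main(String[] args) throws Exception {
    try (Directory dir = FSDirectory.open(Paths.get("/tmp/nrt-demo"));
         IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
      Document doc = new Document();
      doc.add(new StringField("id", "1", Field.Store.YES));
      writer.addDocument(doc);
      // Opening an NRT reader enables reader pooling: the pooled SegmentReaders are then
      // reused for applying deletes/DV updates, for merges, and for subsequent NRT readers.
      try (DirectoryReader reader = DirectoryReader.open(writer)) {
        System.out.println("numDocs=" + reader.numDocs());
      }
    }
  }
}
----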


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/89756929
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/89756929
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/89756929

Branch: refs/heads/jira/solr-11833
Commit: 8975692953713923bd1cc67766cf92565183c2b8
Parents: 4136fe0
Author: Simon Willnauer <si...@apache.org>
Authored: Mon Apr 23 10:29:10 2018 +0200
Committer: GitHub <no...@github.com>
Committed: Mon Apr 23 10:29:10 2018 +0200

----------------------------------------------------------------------
 .../lucene/index/BufferedUpdatesStream.java     |  45 +-
 .../apache/lucene/index/DocumentsWriter.java    |  43 +-
 .../lucene/index/FrozenBufferedUpdates.java     |   8 +-
 .../org/apache/lucene/index/IndexWriter.java    | 489 ++++---------------
 .../org/apache/lucene/index/ReaderPool.java     | 390 +++++++++++++++
 .../apache/lucene/index/ReadersAndUpdates.java  |   6 +-
 .../lucene/index/StandardDirectoryReader.java   |   4 +-
 .../lucene/index/TestIndexWriterDelete.java     |  24 +
 .../org/apache/lucene/index/TestReaderPool.java | 223 +++++++++
 9 files changed, 783 insertions(+), 449 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/89756929/lucene/core/src/java/org/apache/lucene/index/BufferedUpdatesStream.java
----------------------------------------------------------------------
diff --git a/lucene/core/src/java/org/apache/lucene/index/BufferedUpdatesStream.java b/lucene/core/src/java/org/apache/lucene/index/BufferedUpdatesStream.java
index 32ee256..7a93cfd 100644
--- a/lucene/core/src/java/org/apache/lucene/index/BufferedUpdatesStream.java
+++ b/lucene/core/src/java/org/apache/lucene/index/BufferedUpdatesStream.java
@@ -258,7 +258,7 @@ class BufferedUpdatesStream implements Accountable {
   }
 
   /** Holds all per-segment internal state used while resolving deletions. */
-  public static final class SegmentState {
+  static final class SegmentState {
     final long delGen;
     final ReadersAndUpdates rld;
     final SegmentReader reader;
@@ -268,21 +268,13 @@ class BufferedUpdatesStream implements Accountable {
     PostingsEnum postingsEnum;
     BytesRef term;
 
-    public SegmentState(IndexWriter.ReaderPool pool, SegmentCommitInfo info) throws IOException {
-      rld = pool.get(info, true);
+    SegmentState(ReadersAndUpdates rld, SegmentCommitInfo info) throws IOException {
+      this.rld = rld;
       startDelCount = rld.getPendingDeleteCount();
       reader = rld.getReader(IOContext.READ);
       delGen = info.getBufferedDeletesGen();
     }
 
-    public void finish(IndexWriter.ReaderPool pool) throws IOException {
-      try {
-        rld.release(reader);
-      } finally {
-        pool.release(rld);
-      }
-    }
-
     @Override
     public String toString() {
       return "SegmentState(" + rld.info + ")";
@@ -290,23 +282,21 @@ class BufferedUpdatesStream implements Accountable {
   }
 
   /** Opens SegmentReader and inits SegmentState for each segment. */
-  public SegmentState[] openSegmentStates(IndexWriter.ReaderPool pool, List<SegmentCommitInfo> infos,
+  public SegmentState[] openSegmentStates(List<SegmentCommitInfo> infos,
                                           Set<SegmentCommitInfo> alreadySeenSegments, long delGen) throws IOException {
     List<SegmentState> segStates = new ArrayList<>();
     try {
       for (SegmentCommitInfo info : infos) {
         if (info.getBufferedDeletesGen() <= delGen && alreadySeenSegments.contains(info) == false) {
-          segStates.add(new SegmentState(pool, info));
+          segStates.add(new SegmentState(writer.getPooledInstance(info, true), info));
           alreadySeenSegments.add(info);
         }
       }
     } catch (Throwable t) {
-      for(SegmentState segState : segStates) {
-        try {
-          segState.finish(pool);
-        } catch (Throwable th) {
-          t.addSuppressed(th);
-        }
+      try {
+        finishSegmentStates(segStates);
+      } catch (Throwable t1) {
+        t.addSuppressed(t1);
       }
       throw t;
     }
@@ -314,8 +304,19 @@ class BufferedUpdatesStream implements Accountable {
     return segStates.toArray(new SegmentState[0]);
   }
 
+  private void finishSegmentStates(List<SegmentState> segStates) throws IOException {
+    IOUtils.applyToAll(segStates, s -> {
+      ReadersAndUpdates rld = s.rld;
+      try {
+        rld.release(s.reader);
+      } finally {
+        writer.release(s.rld);
+      }
+    });
+  }
+
   /** Close segment states previously opened with openSegmentStates. */
-  public ApplyDeletesResult closeSegmentStates(IndexWriter.ReaderPool pool, SegmentState[] segStates, boolean success) throws IOException {
+  public ApplyDeletesResult closeSegmentStates(SegmentState[] segStates, boolean success) throws IOException {
     List<SegmentCommitInfo> allDeleted = null;
     long totDelCount = 0;
     final List<SegmentState> segmentStates = Arrays.asList(segStates);
@@ -332,9 +333,9 @@ class BufferedUpdatesStream implements Accountable {
         }
       }
     }
-    IOUtils.applyToAll(segmentStates, s -> s.finish(pool));
+    finishSegmentStates(segmentStates);
     if (infoStream.isEnabled("BD")) {
-      infoStream.message("BD", "closeSegmentStates: " + totDelCount + " new deleted documents; pool " + updates.size() + " packets; bytesUsed=" + pool.ramBytesUsed());
+      infoStream.message("BD", "closeSegmentStates: " + totDelCount + " new deleted documents; pool " + updates.size() + " packets; bytesUsed=" + writer.getReaderPoolRamBytesUsed());
     }
 
     return new ApplyDeletesResult(totDelCount > 0, allDeleted);      

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/89756929/lucene/core/src/java/org/apache/lucene/index/DocumentsWriter.java
----------------------------------------------------------------------
diff --git a/lucene/core/src/java/org/apache/lucene/index/DocumentsWriter.java b/lucene/core/src/java/org/apache/lucene/index/DocumentsWriter.java
index f848b2a..0042dab 100644
--- a/lucene/core/src/java/org/apache/lucene/index/DocumentsWriter.java
+++ b/lucene/core/src/java/org/apache/lucene/index/DocumentsWriter.java
@@ -27,6 +27,7 @@ import java.util.Queue;
 import java.util.concurrent.ConcurrentLinkedQueue;
 import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.atomic.AtomicInteger;
+import java.util.function.ToLongFunction;
 
 import org.apache.lucene.analysis.Analyzer;
 import org.apache.lucene.index.DocumentsWriterFlushQueue.SegmentFlushTicket;
@@ -140,40 +141,26 @@ final class DocumentsWriter implements Closeable, Accountable {
     flushControl = new DocumentsWriterFlushControl(this, config, writer.bufferedUpdatesStream);
   }
   
-  synchronized long deleteQueries(final Query... queries) throws IOException {
-    // TODO why is this synchronized?
-    final DocumentsWriterDeleteQueue deleteQueue = this.deleteQueue;
-    long seqNo = deleteQueue.addDelete(queries);
-    flushControl.doOnDelete();
-    lastSeqNo = Math.max(lastSeqNo, seqNo);
-    if (applyAllDeletes(deleteQueue)) {
-      seqNo = -seqNo;
-    }
-    return seqNo;
+  long deleteQueries(final Query... queries) throws IOException {
+    return applyDeleteOrUpdate(q -> q.addDelete(queries));
   }
 
-  synchronized void setLastSeqNo(long seqNo) {
+  void setLastSeqNo(long seqNo) {
     lastSeqNo = seqNo;
   }
 
-  // TODO: we could check w/ FreqProxTermsWriter: if the
-  // term doesn't exist, don't bother buffering into the
-  // per-DWPT map (but still must go into the global map)
-  synchronized long deleteTerms(final Term... terms) throws IOException {
-    // TODO why is this synchronized?
-    final DocumentsWriterDeleteQueue deleteQueue = this.deleteQueue;
-    long seqNo = deleteQueue.addDelete(terms);
-    flushControl.doOnDelete();
-    lastSeqNo = Math.max(lastSeqNo, seqNo);
-    if (applyAllDeletes(deleteQueue)) {
-      seqNo = -seqNo;
-    }
-    return seqNo;
+  long deleteTerms(final Term... terms) throws IOException {
+    return applyDeleteOrUpdate(q -> q.addDelete(terms));
   }
 
-  synchronized long updateDocValues(DocValuesUpdate... updates) throws IOException {
+  long updateDocValues(DocValuesUpdate... updates) throws IOException {
+    return applyDeleteOrUpdate(q -> q.addDocValuesUpdates(updates));
+  }
+
+  private synchronized long applyDeleteOrUpdate(ToLongFunction<DocumentsWriterDeleteQueue> function) throws IOException {
+    // TODO why is this synchronized?
     final DocumentsWriterDeleteQueue deleteQueue = this.deleteQueue;
-    long seqNo = deleteQueue.addDocValuesUpdates(updates);
+    long seqNo = function.applyAsLong(deleteQueue);
     flushControl.doOnDelete();
     lastSeqNo = Math.max(lastSeqNo, seqNo);
     if (applyAllDeletes(deleteQueue)) {
@@ -182,10 +169,6 @@ final class DocumentsWriter implements Closeable, Accountable {
     return seqNo;
   }
   
-  DocumentsWriterDeleteQueue currentDeleteSession() {
-    return deleteQueue;
-  }
-
   /** If buffered deletes are using too much heap, resolve them and write disk and return true. */
   private boolean applyAllDeletes(DocumentsWriterDeleteQueue deleteQueue) throws IOException {
     if (flushControl.getAndResetApplyAllDeletes()) {

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/89756929/lucene/core/src/java/org/apache/lucene/index/FrozenBufferedUpdates.java
----------------------------------------------------------------------
diff --git a/lucene/core/src/java/org/apache/lucene/index/FrozenBufferedUpdates.java b/lucene/core/src/java/org/apache/lucene/index/FrozenBufferedUpdates.java
index fc268df..586afa7 100644
--- a/lucene/core/src/java/org/apache/lucene/index/FrozenBufferedUpdates.java
+++ b/lucene/core/src/java/org/apache/lucene/index/FrozenBufferedUpdates.java
@@ -297,7 +297,7 @@ class FrozenBufferedUpdates {
 
         // Must open while holding IW lock so that e.g. segments are not merged
         // away, dropped from 100% deletions, etc., before we can open the readers
-        segStates = writer.bufferedUpdatesStream.openSegmentStates(writer.readerPool, infos, seenSegments, delGen());
+        segStates = writer.bufferedUpdatesStream.openSegmentStates(infos, seenSegments, delGen());
 
         if (segStates.length == 0) {
 
@@ -328,8 +328,8 @@ class FrozenBufferedUpdates {
         success.set(true);
       }
 
-      // Since we jus resolved some more deletes/updates, now is a good time to write them:
-      writer.readerPool.writeSomeDocValuesUpdates();
+      // Since we just resolved some more deletes/updates, now is a good time to write them:
+      writer.writeSomeDocValuesUpdates();
 
       // It's OK to add this here, even if the while loop retries, because delCount only includes newly
       // deleted documents, on the segments we didn't already do in previous iterations:
@@ -399,7 +399,7 @@ class FrozenBufferedUpdates {
 
       BufferedUpdatesStream.ApplyDeletesResult result;
       try {
-        result = writer.bufferedUpdatesStream.closeSegmentStates(writer.readerPool, segStates, success);
+        result = writer.bufferedUpdatesStream.closeSegmentStates(segStates, success);
       } finally {
         // Matches the incRef we did above, but we must do the decRef after closing segment states else
         // IFD can't delete still-open files

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/89756929/lucene/core/src/java/org/apache/lucene/index/IndexWriter.java
----------------------------------------------------------------------
diff --git a/lucene/core/src/java/org/apache/lucene/index/IndexWriter.java b/lucene/core/src/java/org/apache/lucene/index/IndexWriter.java
index d6237e1..974f6c5 100644
--- a/lucene/core/src/java/org/apache/lucene/index/IndexWriter.java
+++ b/lucene/core/src/java/org/apache/lucene/index/IndexWriter.java
@@ -25,7 +25,6 @@ import java.util.Collections;
 import java.util.Date;
 import java.util.HashMap;
 import java.util.HashSet;
-import java.util.Iterator;
 import java.util.LinkedList;
 import java.util.List;
 import java.util.Locale;
@@ -324,24 +323,13 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
 
   final AtomicInteger flushDeletesCount = new AtomicInteger();
 
-  final ReaderPool readerPool = new ReaderPool();
+  private final ReaderPool readerPool;
   final BufferedUpdatesStream bufferedUpdatesStream;
 
   /** Counts how many merges have completed; this is used by {@link FrozenBufferedUpdates#apply}
    *  to handle concurrently apply deletes/updates with merges completing. */
   final AtomicLong mergeFinishedGen = new AtomicLong();
 
-  // This is a "write once" variable (like the organic dye
-  // on a DVD-R that may or may not be heated by a laser and
-  // then cooled to permanently record the event): it's
-  // false, until getReader() is called for the first time,
-  // at which point it's switched to true and never changes
-  // back to false.  Once this is true, we hold open and
-  // reuse SegmentReader instances internally for applying
-  // deletes, doing merges, and reopening near real-time
-  // readers.
-  private volatile boolean poolReaders;
-
   // The instance that was passed to the constructor. It is saved only in order
   // to allow users to query an IndexWriter settings.
   private final LiveIndexWriterConfig config;
@@ -434,7 +422,7 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
     // Do this up front before flushing so that the readers
     // obtained during this flush are pooled, the first time
     // this method is called:
-    poolReaders = true;
+    readerPool.enableReaderPooling();
     DirectoryReader r = null;
     doBeforeFlush();
     boolean anyChanges = false;
@@ -477,11 +465,15 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
 
             // TODO: we could instead just clone SIS and pull/incref readers in sync'd block, and then do this w/o IW's lock?
             // Must do this sync'd on IW to prevent a merge from completing at the last second and failing to write its DV updates:
-            readerPool.writeAllDocValuesUpdates();
+            if (readerPool.writeAllDocValuesUpdates()) {
+              checkpoint();
+            }
 
             if (writeAllDeletes) {
               // Must move the deletes to disk:
-              readerPool.commit(segmentInfos);
+              if (readerPool.commit(segmentInfos)) {
+                checkpointNoSIS();
+              }
             }
 
             // Prevent segmentInfos from changing while opening the
@@ -536,339 +528,62 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
     return docWriter.ramBytesUsed();
   }
 
-  /** Holds shared SegmentReader instances. IndexWriter uses
-   *  SegmentReaders for 1) applying deletes/DV updates, 2) doing
-   *  merges, 3) handing out a real-time reader.  This pool
-   *  reuses instances of the SegmentReaders in all these
-   *  places if it is in "near real-time mode" (getReader()
-   *  has been called on this instance). */
-
-  class ReaderPool implements Closeable {
-    
-    private final Map<SegmentCommitInfo,ReadersAndUpdates> readerMap = new HashMap<>();
-
-    /** Asserts this info still exists in IW's segment infos */
-    public synchronized boolean assertInfoIsLive(SegmentCommitInfo info) {
-      int idx = segmentInfos.indexOf(info);
-      assert idx != -1: "info=" + info + " isn't live";
-      assert segmentInfos.info(idx) == info: "info=" + info + " doesn't match live info in segmentInfos";
-      return true;
-    }
-
-    public synchronized boolean drop(SegmentCommitInfo info) throws IOException {
-      final ReadersAndUpdates rld = readerMap.get(info);
-      if (rld != null) {
-        assert info == rld.info;
-        readerMap.remove(info);
-        rld.dropReaders();
-        return true;
-      }
-      return false;
-    }
-
-    public synchronized long ramBytesUsed() {
-      long bytes = 0;
-      for (ReadersAndUpdates rld : readerMap.values()) {
-        bytes += rld.ramBytesUsed.get();
-      }
-      return bytes;
-    }
-
-    public synchronized boolean anyPendingDeletes() {
-      for(ReadersAndUpdates rld : readerMap.values()) {
-        if (rld.getPendingDeleteCount() != 0) {
-          return true;
-        }
-      }
-
-      return false;
-    }
-
-    public synchronized void release(ReadersAndUpdates rld) throws IOException {
-      release(rld, true);
-    }
-
-    public synchronized void release(ReadersAndUpdates rld, boolean assertInfoLive) throws IOException {
-
-      // Matches incRef in get:
-      rld.decRef();
-
-      if (rld.refCount() == 0) {
-        // This happens if the segment was just merged away, while a buffered deletes packet was still applying deletes/updates to it.
-        assert readerMap.containsKey(rld.info) == false: "seg=" + rld.info + " has refCount 0 but still unexpectedly exists in the reader pool";
-      } else {
-
-        // Pool still holds a ref:
-        assert rld.refCount() > 0: "refCount=" + rld.refCount() + " reader=" + rld.info;
-
-        if (!poolReaders && rld.refCount() == 1 && readerMap.containsKey(rld.info)) {
-          // This is the last ref to this RLD, and we're not
-          // pooling, so remove it:
-          if (rld.writeLiveDocs(directory)) {
-            // Make sure we only write del docs for a live segment:
-            assert assertInfoLive == false || assertInfoIsLive(rld.info);
-            // Must checkpoint because we just
-            // created new _X_N.del and field updates files;
-            // don't call IW.checkpoint because that also
-            // increments SIS.version, which we do not want to
-            // do here: it was done previously (after we
-            // invoked BDS.applyDeletes), whereas here all we
-            // did was move the state to disk:
-            checkpointNoSIS();
-          }
-          if (rld.writeFieldUpdates(directory, globalFieldNumberMap, bufferedUpdatesStream.getCompletedDelGen(), infoStream)) {
-            checkpointNoSIS();
-          }
-          if (rld.getNumDVUpdates() == 0) {
-            rld.dropReaders();
-            readerMap.remove(rld.info);
-          } else {
-            // We are forced to pool this segment until its deletes fully apply (no delGen gaps)
-          }
-        }
-      }
-    }
-    
-    @Override
-    public void close() throws IOException {
-      dropAll(false);
-    }
-
-    void writeAllDocValuesUpdates() throws IOException {
-      assert Thread.holdsLock(IndexWriter.this);
-      Collection<ReadersAndUpdates> copy;
-      synchronized (this) {
-        // this needs to be protected by the reader pool lock otherwise we hit ConcurrentModificationException
-        copy = new HashSet<>(readerMap.values());
-      }
-      boolean any = false;
-      for (ReadersAndUpdates rld : copy) {
-        any |= rld.writeFieldUpdates(directory, globalFieldNumberMap, bufferedUpdatesStream.getCompletedDelGen(), infoStream);
-      }
-      if (any) {
-        checkpoint();
-      }
-    }
-
-    void writeDocValuesUpdatesForMerge(List<SegmentCommitInfo> infos) throws IOException {
-      assert Thread.holdsLock(IndexWriter.this);
-      boolean any = false;
-      for (SegmentCommitInfo info : infos) {
-        ReadersAndUpdates rld = get(info, false);
-        if (rld != null) {
-          any |= rld.writeFieldUpdates(directory, globalFieldNumberMap, bufferedUpdatesStream.getCompletedDelGen(), infoStream);
-          rld.setIsMerging();
-        }
-      }
-      if (any) {
-        checkpoint();
-      }
-    }
-
-    private final AtomicBoolean writeDocValuesLock = new AtomicBoolean();
-
-    void writeSomeDocValuesUpdates() throws IOException {
+  final long getReaderPoolRamBytesUsed() {
+    return readerPool.ramBytesUsed();
+  }
 
-      assert Thread.holdsLock(IndexWriter.this) == false;
+  private final AtomicBoolean writeDocValuesLock = new AtomicBoolean();
 
-      if (writeDocValuesLock.compareAndSet(false, true)) {
-        try {
+  void writeSomeDocValuesUpdates() throws IOException {
+    if (writeDocValuesLock.compareAndSet(false, true)) {
+      try {
+        final double ramBufferSizeMB = config.getRAMBufferSizeMB();
+        // If the reader pool is > 50% of our IW buffer, then write the updates:
+        if (ramBufferSizeMB != IndexWriterConfig.DISABLE_AUTO_FLUSH) {
+          long startNS = System.nanoTime();
+
+          long ramBytesUsed = getReaderPoolRamBytesUsed();
+          if (ramBytesUsed > 0.5 * ramBufferSizeMB * 1024 * 1024) {
+            if (infoStream.isEnabled("BD")) {
+              infoStream.message("BD", String.format(Locale.ROOT, "now write some pending DV updates: %.2f MB used vs IWC Buffer %.2f MB",
+                  ramBytesUsed/1024./1024., ramBufferSizeMB));
+            }
 
-          LiveIndexWriterConfig config = getConfig();
-          double mb = config.getRAMBufferSizeMB();
-          // If the reader pool is > 50% of our IW buffer, then write the updates:
-          if (mb != IndexWriterConfig.DISABLE_AUTO_FLUSH) {
-            long startNS = System.nanoTime();
-            
-            long ramBytesUsed = ramBytesUsed();
-            if (ramBytesUsed > 0.5 * mb * 1024 * 1024) {
-              if (infoStream.isEnabled("BD")) {
-                infoStream.message("BD", String.format(Locale.ROOT, "now write some pending DV updates: %.2f MB used vs IWC Buffer %.2f MB",
-                                                       ramBytesUsed/1024./1024., mb));
-              }
-          
-              // Sort by largest ramBytesUsed:
-              PriorityQueue<ReadersAndUpdates> queue = new PriorityQueue<>(readerMap.size(), (a, b) -> Long.compare(b.ramBytesUsed.get(), a.ramBytesUsed.get()));
-              synchronized (this) {
-                for (ReadersAndUpdates rld : readerMap.values()) {
-                  queue.add(rld);
-                }
+            // Sort by largest ramBytesUsed:
+            PriorityQueue<ReadersAndUpdates> queue = readerPool.getReadersByRam();
+            int count = 0;
+            while (ramBytesUsed > 0.5 * ramBufferSizeMB * 1024 * 1024) {
+              ReadersAndUpdates rld = queue.poll();
+              if (rld == null) {
+                break;
               }
 
-              int count = 0;
-              while (ramBytesUsed > 0.5 * mb * 1024 * 1024) {
-                ReadersAndUpdates rld = queue.poll();
-                if (rld == null) {
-                  break;
-                }
+              // We need to do before/after because not all RAM in this RAU is used by DV updates, and
+              // not all of those bytes can be written here:
+              long bytesUsedBefore = rld.ramBytesUsed.get();
 
-                // We need to do before/after because not all RAM in this RAU is used by DV updates, and
-                // not all of those bytes can be written here:
-                long bytesUsedBefore = rld.ramBytesUsed.get();
-
-                // Only acquire IW lock on each write, since this is a time consuming operation.  This way
-                // other threads get a chance to run in between our writes.
-                synchronized (IndexWriter.this) {
-                  if (rld.writeFieldUpdates(directory, globalFieldNumberMap, bufferedUpdatesStream.getCompletedDelGen(), infoStream)) {
-                    checkpointNoSIS();
-                  }
+              // Only acquire IW lock on each write, since this is a time consuming operation.  This way
+              // other threads get a chance to run in between our writes.
+              synchronized (this) {
+                if (rld.writeFieldUpdates(directory, globalFieldNumberMap, bufferedUpdatesStream.getCompletedDelGen(), infoStream)) {
+                  checkpointNoSIS();
                 }
-                long bytesUsedAfter = rld.ramBytesUsed.get();
-                ramBytesUsed -= bytesUsedBefore - bytesUsedAfter;
-                count++;
-              }
-
-              if (infoStream.isEnabled("BD")) {
-                infoStream.message("BD", String.format(Locale.ROOT, "done write some DV updates for %d segments: now %.2f MB used vs IWC Buffer %.2f MB; took %.2f sec",
-                                                       count, ramBytesUsed()/1024./1024., mb, ((System.nanoTime() - startNS)/1000000000.)));
               }
+              long bytesUsedAfter = rld.ramBytesUsed.get();
+              ramBytesUsed -= bytesUsedBefore - bytesUsedAfter;
+              count++;
             }
-          }
-        } finally {
-          writeDocValuesLock.set(false);
-        }
-      }
-    }
 
-    /** Remove all our references to readers, and commits
-     *  any pending changes. */
-    synchronized void dropAll(boolean doSave) throws IOException {
-      Throwable priorE = null;
-      final Iterator<Map.Entry<SegmentCommitInfo,ReadersAndUpdates>> it = readerMap.entrySet().iterator();
-      while(it.hasNext()) {
-        final ReadersAndUpdates rld = it.next().getValue();
-        try {
-          if (doSave && rld.writeLiveDocs(directory)) {
-            // Make sure we only write del docs and field updates for a live segment:
-            assert assertInfoIsLive(rld.info);
-            // Must checkpoint because we just
-            // created new _X_N.del and field updates files;
-            // don't call IW.checkpoint because that also
-            // increments SIS.version, which we do not want to
-            // do here: it was done previously (after we
-            // invoked BDS.applyDeletes), whereas here all we
-            // did was move the state to disk:
-            checkpointNoSIS();
-          }
-        } catch (Throwable t) {
-          priorE = IOUtils.useOrSuppress(priorE, t);
-          if (doSave) {
-            throw t;
-          }
-        }
-
-        // Important to remove as-we-go, not with .clear()
-        // in the end, in case we hit an exception;
-        // otherwise we could over-decref if close() is
-        // called again:
-        it.remove();
-
-        // NOTE: it is allowed that these decRefs do not
-        // actually close the SRs; this happens when a
-        // near real-time reader is kept open after the
-        // IndexWriter instance is closed:
-        try {
-          rld.dropReaders();
-        } catch (Throwable t) {
-          priorE = IOUtils.useOrSuppress(priorE, t);
-          if (doSave) {
-            throw t;
-          }
-        }
-      }
-      assert readerMap.size() == 0;
-      if (priorE != null) {
-        throw IOUtils.rethrowAlways(priorE);
-      }
-    }
-
-    /**
-     * Commit live docs changes for the segment readers for
-     * the provided infos.
-     *
-     * @throws IOException If there is a low-level I/O error
-     */
-    public synchronized void commit(SegmentInfos infos) throws IOException {
-      for (SegmentCommitInfo info : infos) {
-        final ReadersAndUpdates rld = readerMap.get(info);
-        if (rld != null) {
-          assert rld.info == info;
-          boolean changed = rld.writeLiveDocs(directory);
-          changed |= rld.writeFieldUpdates(directory, globalFieldNumberMap, bufferedUpdatesStream.getCompletedDelGen(), infoStream);
-
-          if (changed) {
-            // Make sure we only write del docs for a live segment:
-            assert assertInfoIsLive(info);
-
-            // Must checkpoint because we just
-            // created new _X_N.del and field updates files;
-            // don't call IW.checkpoint because that also
-            // increments SIS.version, which we do not want to
-            // do here: it was done previously (after we
-            // invoked BDS.applyDeletes), whereas here all we
-            // did was move the state to disk:
-            checkpointNoSIS();
+            if (infoStream.isEnabled("BD")) {
+              infoStream.message("BD", String.format(Locale.ROOT, "done write some DV updates for %d segments: now %.2f MB used vs IWC Buffer %.2f MB; took %.2f sec",
+                  count, getReaderPoolRamBytesUsed()/1024./1024., ramBufferSizeMB, ((System.nanoTime() - startNS)/1000000000.)));
+            }
           }
-
         }
+      } finally {
+        writeDocValuesLock.set(false);
       }
     }
-
-    public synchronized boolean anyChanges() {
-      for (ReadersAndUpdates rld : readerMap.values()) {
-        // NOTE: we don't check for pending deletes because deletes carry over in RAM to NRT readers
-        if (rld.getNumDVUpdates() != 0) {
-          return true;
-        }
-      }
-
-      return false;
-    }
-
-    /**
-     * Obtain a ReadersAndLiveDocs instance from the
-     * readerPool.  If create is true, you must later call
-     * {@link #release(ReadersAndUpdates)}.
-     */
-    public synchronized ReadersAndUpdates get(SegmentCommitInfo info, boolean create) {
-
-      // Make sure no new readers can be opened if another thread just closed us:
-      ensureOpen(false);
-
-      assert info.info.dir == directoryOrig: "info.dir=" + info.info.dir + " vs " + directoryOrig;
-
-      ReadersAndUpdates rld = readerMap.get(info);
-      if (rld == null) {
-        if (create == false) {
-          return null;
-        }
-        rld = new ReadersAndUpdates(segmentInfos.getIndexCreatedVersionMajor(), info, newPendingDeletes(info));
-        // Steal initial reference:
-        readerMap.put(info, rld);
-      } else {
-        assert rld.info == info: "rld.info=" + rld.info + " info=" + info + " isLive?=" + assertInfoIsLive(rld.info) + " vs " + assertInfoIsLive(info);
-      }
-
-      if (create) {
-        // Return ref to caller:
-        rld.incRef();
-      }
-
-      assert noDups();
-
-      return rld;
-    }
-
-    // Make sure that every segment appears only once in the
-    // pool:
-    private boolean noDups() {
-      Set<String> seen = new HashSet<>();
-      for(SegmentCommitInfo info : readerMap.keySet()) {
-        assert !seen.contains(info.info.name);
-        seen.add(info.info.name);
-      }
-      return true;
-    }
   }
 
   /**
@@ -880,7 +595,7 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
     ensureOpen(false);
     int delCount = info.getDelCount();
 
-    final ReadersAndUpdates rld = readerPool.get(info, false);
+    final ReadersAndUpdates rld = getPooledInstance(info, false);
     if (rld != null) {
       delCount += rld.getPendingDeleteCount();
     }
@@ -965,7 +680,6 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
       codec = config.getCodec();
 
       bufferedUpdatesStream = new BufferedUpdatesStream(this);
-      poolReaders = config.getReaderPooling();
 
       OpenMode mode = config.getOpenMode();
       boolean create;
@@ -1021,7 +735,6 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
         }
         
         segmentInfos = sis;
-
         rollbackSegments = segmentInfos.createBackupSegmentInfos();
 
         // Record that we have a change (zero out all
@@ -1066,11 +779,6 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
         }
 
         rollbackSegments = lastCommit.createBackupSegmentInfos();
-
-        if (infoStream.isEnabled("IW")) {
-          infoStream.message("IW", "init from reader " + reader);
-          messageState();
-        }
       } else {
         // Init from either the latest commit point, or an explicit prior commit point:
 
@@ -1118,7 +826,11 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
       config.getFlushPolicy().init(config);
       docWriter = new DocumentsWriter(this, config, directoryOrig, directory);
       eventQueue = docWriter.eventQueue();
-
+      readerPool = new ReaderPool(directory, directoryOrig, segmentInfos, globalFieldNumberMap,
+          bufferedUpdatesStream::getCompletedDelGen, infoStream, conf.getSoftDeletesField(), reader);
+      if (config.getReaderPooling()) {
+        readerPool.enableReaderPooling();
+      }
       // Default deleter (for backwards compatibility) is
       // KeepOnlyLastCommitDeleter:
 
@@ -1142,26 +854,13 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
       }
 
       if (reader != null) {
-        // Pre-enroll all segment readers into the reader pool; this is necessary so
-        // any in-memory NRT live docs are correctly carried over, and so NRT readers
-        // pulled from this IW share the same segment reader:
-        List<LeafReaderContext> leaves = reader.leaves();
-        assert segmentInfos.size() == leaves.size();
-
-        for (int i=0;i<leaves.size();i++) {
-          LeafReaderContext leaf = leaves.get(i);
-          SegmentReader segReader = (SegmentReader) leaf.reader();
-          SegmentReader newReader = new SegmentReader(segmentInfos.info(i), segReader, segReader.getLiveDocs(), segReader.numDocs());
-          readerPool.readerMap.put(newReader.getSegmentInfo(), new ReadersAndUpdates(segmentInfos.getIndexCreatedVersionMajor(), newReader, newPendingDeletes(newReader, newReader.getSegmentInfo())));
-        }
-
         // We always assume we are carrying over incoming changes when opening from reader:
         segmentInfos.changed();
         changed();
       }
 
       if (infoStream.isEnabled("IW")) {
-        infoStream.message("IW", "init: create=" + create);
+        infoStream.message("IW", "init: create=" + create + " reader=" + reader);
         messageState();
       }
 
@@ -1638,7 +1337,7 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
     // cost:
 
     if (segmentInfos.indexOf(info) != -1) {
-      ReadersAndUpdates rld = readerPool.get(info, false);
+      ReadersAndUpdates rld = getPooledInstance(info, false);
       if (rld != null) {
         synchronized(bufferedUpdatesStream) {
           if (rld.delete(docID)) {
@@ -2478,8 +2177,6 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
             notifyAll();
           }
         }
-        // Don't bother saving any changes in our segmentInfos
-        readerPool.dropAll(false);
         final int totalMaxDoc = segmentInfos.totalMaxDoc();
         // Keep the same segmentInfos instance but replace all
         // of its SegmentInfo instances so IFD below will remove
@@ -2505,7 +2202,8 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
         }
 
         lastCommitChangeCount = changeCount.get();
-
+        // Don't bother saving any changes in our segmentInfos
+        readerPool.close();
         // Must set closed while inside same sync block where we call deleter.refresh, else concurrent threads may try to sneak a flush in,
         // after we leave this sync block and before we enter the sync block in the finally clause below that sets closed:
         closed = true;
@@ -2630,7 +2328,7 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
                * We will remove the files incrementally as we go...
                */
               // Don't bother saving any changes in our segmentInfos
-              readerPool.dropAll(false);
+              readerPool.dropAll();
               // Mark that the index has changed
               changeCount.incrementAndGet();
               segmentInfos.changed();
@@ -2810,7 +2508,7 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
       if (packet != null && packet.any() && sortMap != null) {
         // TODO: not great we do this heavyish op while holding IW's monitor lock,
         // but it only applies if you are using sorted indices and updating doc values:
-        ReadersAndUpdates rld = readerPool.get(newSegment, true);
+        ReadersAndUpdates rld = getPooledInstance(newSegment, true);
         rld.sortMap = sortMap;
         // DON't release this ReadersAndUpdates we need to stick with that sortMap
       }
@@ -2828,13 +2526,13 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
       if (hasInitialSoftDeleted || isFullyHardDeleted){
        // this operation is only really executed if needed; if soft-deletes are not configured it will only be executed
         // if we deleted all docs in this newly flushed segment.
-        ReadersAndUpdates rld = readerPool.get(newSegment, true);
+        ReadersAndUpdates rld = getPooledInstance(newSegment, true);
         try {
           if (isFullyDeleted(rld)) {
             dropDeletedSegment(newSegment);
           }
         } finally {
-          readerPool.release(rld);
+          release(rld);
         }
       }
 
@@ -3381,7 +3079,9 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
             applyAllDeletesAndUpdates();
             synchronized(this) {
 
-              readerPool.commit(segmentInfos);
+              if (readerPool.commit(segmentInfos)) {
+                checkpointNoSIS();
+              }
 
               if (changeCount.get() != lastCommitChangeCount) {
                 // There are changes to commit, so we will write a new segments_N in startCommit.
@@ -3831,7 +3531,7 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
     long minGen = Long.MAX_VALUE;
 
     // Lazy init (only when we find a delete or update to carry over):
-    final ReadersAndUpdates mergedDeletesAndUpdates = readerPool.get(merge.info, true);
+    final ReadersAndUpdates mergedDeletesAndUpdates = getPooledInstance(merge.info, true);
     
     // field -> delGen -> dv field updates
     Map<String,Map<Long,DocValuesFieldUpdates>> mappedDVUpdates = new HashMap<>();
@@ -3844,7 +3544,7 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
       minGen = Math.min(info.getBufferedDeletesGen(), minGen);
       final int maxDoc = info.info.maxDoc();
       final Bits prevLiveDocs = merge.readers.get(i).getLiveDocs();
-      final ReadersAndUpdates rld = readerPool.get(info, false);
+      final ReadersAndUpdates rld = getPooledInstance(info, false);
       // We hold a ref, from when we opened the readers during mergeInit, so it better still be in the pool:
       assert rld != null: "seg=" + info.info.name;
       final Bits currentLiveDocs = rld.getLiveDocs();
@@ -4055,7 +3755,7 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
         // Pass false for assertInfoLive because the merged
         // segment is not yet live (only below do we commit it
         // to the segmentInfos):
-        readerPool.release(mergedUpdates, false);
+        release(mergedUpdates, false);
         success = true;
       } finally {
         if (!success) {
@@ -4350,7 +4050,9 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
     // Must move the pending doc values updates to disk now, else the newly merged segment will not see them:
     // TODO: we could fix merging to pull the merged DV iterator so we don't have to move these updates to disk first, i.e. just carry them
     // in memory:
-    readerPool.writeDocValuesUpdatesForMerge(merge.segments);
+    if (readerPool.writeDocValuesUpdatesForMerge(merge.segments)) {
+      checkpoint();
+    }
     
     // Bind a new segment name here so even with
     // ConcurrentMergePolicy we keep deterministic segment
@@ -4419,7 +4121,7 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
     final boolean drop = suppressExceptions == false;
     try (Closeable finalizer = merge::mergeFinished) {
       IOUtils.applyToAll(merge.readers, sr -> {
-        final ReadersAndUpdates rld = readerPool.get(sr.getSegmentInfo(), false);
+        final ReadersAndUpdates rld = getPooledInstance(sr.getSegmentInfo(), false);
         // We still hold a ref so it should not have been removed:
         assert rld != null;
         if (drop) {
@@ -4428,7 +4130,7 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
           rld.dropMergingUpdates();
         }
         rld.release(sr);
-        readerPool.release(rld);
+        release(rld);
         if (drop) {
           readerPool.drop(rld.info);
         }
@@ -4468,7 +4170,7 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
 
         // Hold onto the "live" reader; we will use this to
         // commit merged deletes
-        final ReadersAndUpdates rld = readerPool.get(info, true);
+        final ReadersAndUpdates rld = getPooledInstance(info, true);
         rld.setIsMerging();
 
         SegmentReader reader = rld.getReaderForMerge(context);
@@ -4644,15 +4346,15 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
       }
 
       final IndexReaderWarmer mergedSegmentWarmer = config.getMergedSegmentWarmer();
-      if (poolReaders && mergedSegmentWarmer != null) {
-        final ReadersAndUpdates rld = readerPool.get(merge.info, true);
+      if (readerPool.isReaderPoolingEnabled() && mergedSegmentWarmer != null) {
+        final ReadersAndUpdates rld = getPooledInstance(merge.info, true);
         final SegmentReader sr = rld.getReader(IOContext.READ);
         try {
           mergedSegmentWarmer.warm(sr);
         } finally {
           synchronized(this) {
             rld.release(sr);
-            readerPool.release(rld);
+            release(rld);
           }
         }
       }
@@ -4998,7 +4700,7 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
     boolean isCurrent = infos.getVersion() == segmentInfos.getVersion()
       && docWriter.anyChanges() == false
       && bufferedUpdatesStream.any() == false
-      && readerPool.anyChanges() == false;
+      && readerPool.anyDocValuesChanges() == false;
     if (infoStream.isEnabled("IW")) {
       if (isCurrent == false) {
         infoStream.message("IW", "nrtIsCurrent: infoVersion matches: " + (infos.getVersion() == segmentInfos.getVersion()) + "; DW changes: " + docWriter.anyChanges() + "; BD changes: "+ bufferedUpdatesStream.any());
@@ -5222,16 +4924,6 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
     return count;
   }
 
-  private PendingDeletes newPendingDeletes(SegmentCommitInfo info) {
-    String softDeletesField = config.getSoftDeletesField();
-    return softDeletesField == null ? new PendingDeletes(info) : new PendingSoftDeletes(softDeletesField, info);
-  }
-
-  private PendingDeletes newPendingDeletes(SegmentReader reader, SegmentCommitInfo info) {
-    String softDeletesField = config.getSoftDeletesField();
-    return softDeletesField == null ? new PendingDeletes(reader, info) : new PendingSoftDeletes(softDeletesField, reader, info);
-  }
-
   final boolean isFullyDeleted(ReadersAndUpdates readersAndUpdates) throws IOException {
     if (readersAndUpdates.isFullyDeleted()) {
       assert Thread.holdsLock(this);
@@ -5240,7 +4932,6 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
     return false;
   }
 
-
   /**
    * Returns the number of deletes a merge would claim back if the given segment is merged.
    * @see MergePolicy#numDeletesToMerge(SegmentCommitInfo, int, org.apache.lucene.util.IOSupplier)
@@ -5248,8 +4939,9 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
    * @lucene.experimental
    */
   public final int numDeletesToMerge(SegmentCommitInfo info) throws IOException {
+    ensureOpen(false);
     MergePolicy mergePolicy = config.getMergePolicy();
-    final ReadersAndUpdates rld = readerPool.get(info, false);
+    final ReadersAndUpdates rld = getPooledInstance(info, false);
     int numDeletesToMerge;
     if (rld != null) {
       numDeletesToMerge = rld.numDeletesToMerge(mergePolicy);
@@ -5260,6 +4952,23 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
     assert numDeletesToMerge <= info.info.maxDoc() :
         "numDeletesToMerge: " + numDeletesToMerge + " > maxDoc: " + info.info.maxDoc();
     return numDeletesToMerge;
+  }
 
+  void release(ReadersAndUpdates readersAndUpdates) throws IOException {
+    release(readersAndUpdates, true);
+  }
+
+  private void release(ReadersAndUpdates readersAndUpdates, boolean assertLiveInfo) throws IOException {
+    assert Thread.holdsLock(this);
+    if (readerPool.release(readersAndUpdates, assertLiveInfo)) {
+      // if we write anything here we have to hold the lock otherwise IDF will delete files underneath us
+      assert Thread.holdsLock(this);
+      checkpointNoSIS();
+    }
+  }
+
+  ReadersAndUpdates getPooledInstance(SegmentCommitInfo info, boolean create) {
+    ensureOpen(false);
+    return readerPool.get(info, create);
   }
 }
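
The writeSomeDocValuesUpdates() hunk above keeps one invariant: poll the ReadersAndUpdates instances that hold the most RAM and write their doc values updates until the pool falls back under half of the IndexWriter RAM buffer. Below is a small standalone sketch of that strategy, written against no Lucene API at all; the Entry class, its write() method and the byte counts are made up for illustration.

import java.util.List;
import java.util.PriorityQueue;

// Standalone sketch of the "write the biggest RAM consumers first" strategy used by
// writeSomeDocValuesUpdates(): poll entries in descending RAM order and write them
// until total usage drops under half of the budget. Entry and its sizes are illustrative.
class RamBudgetSketch {

  static final class Entry {
    final String name;
    long ramBytes; // RAM currently held by this entry's buffered updates

    Entry(String name, long ramBytes) {
      this.name = name;
      this.ramBytes = ramBytes;
    }

    void write() { // pretend to move the buffered updates to disk, freeing the RAM
      ramBytes = 0;
    }
  }

  static int writeSome(List<Entry> entries, long budgetBytes) {
    long used = entries.stream().mapToLong(e -> e.ramBytes).sum();
    // Largest ramBytes first, like ReaderPool.getReadersByRam():
    PriorityQueue<Entry> queue =
        new PriorityQueue<>((a, b) -> Long.compare(b.ramBytes, a.ramBytes));
    queue.addAll(entries);
    int count = 0;
    while (used > budgetBytes / 2) {
      Entry e = queue.poll();
      if (e == null) {
        break;
      }
      // Measure before/after, because a writer may not be able to free everything:
      long before = e.ramBytes;
      e.write();
      used -= before - e.ramBytes;
      count++;
    }
    return count;
  }

  public static void main(String[] args) {
    List<Entry> entries = List.of(new Entry("_0", 40L << 20), new Entry("_1", 10L << 20));
    // 64 MB budget: only "_0" needs to be written to get back under the 32 MB threshold.
    System.out.println("wrote " + writeSome(entries, 64L << 20) + " entries");
  }
}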

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/89756929/lucene/core/src/java/org/apache/lucene/index/ReaderPool.java
----------------------------------------------------------------------
diff --git a/lucene/core/src/java/org/apache/lucene/index/ReaderPool.java b/lucene/core/src/java/org/apache/lucene/index/ReaderPool.java
new file mode 100644
index 0000000..cecc310
--- /dev/null
+++ b/lucene/core/src/java/org/apache/lucene/index/ReaderPool.java
@@ -0,0 +1,390 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.lucene.index;
+
+import java.io.Closeable;
+import java.io.IOException;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.PriorityQueue;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.function.LongSupplier;
+
+import org.apache.lucene.store.AlreadyClosedException;
+import org.apache.lucene.store.Directory;
+import org.apache.lucene.util.IOUtils;
+import org.apache.lucene.util.InfoStream;
+
+/** Holds shared SegmentReader instances. IndexWriter uses
+ *  SegmentReaders for 1) applying deletes/DV updates, 2) doing
+ *  merges, 3) handing out a real-time reader.  This pool
+ *  reuses instances of the SegmentReaders in all these
+ *  places if it is in "near real-time mode" (getReader()
+ *  has been called on this instance). */
+final class ReaderPool implements Closeable {
+
+  private final Map<SegmentCommitInfo,ReadersAndUpdates> readerMap = new HashMap<>();
+  private final Directory directory;
+  private final Directory originalDirectory;
+  private final FieldInfos.FieldNumbers fieldNumbers;
+  private final LongSupplier completedDelGenSupplier;
+  private final InfoStream infoStream;
+  private final SegmentInfos segmentInfos;
+  private final String softDeletesField;
+  // This is a "write once" variable (like the organic dye
+  // on a DVD-R that may or may not be heated by a laser and
+  // then cooled to permanently record the event): it's
+  // false, by default until {@link #enableReaderPooling()}
+  // is called for the first time,
+  // at which point it's switched to true and never changes
+  // back to false.  Once this is true, we hold open and
+  // reuse SegmentReader instances internally for applying
+  // deletes, doing merges, and reopening near real-time
+  // readers.
+  // in practice this should be called once the readers are likely
+  // to be needed and reused ie if IndexWriter#getReader is called.
+  private volatile boolean poolReaders;
+  private final AtomicBoolean closed = new AtomicBoolean(false);
+
+  ReaderPool(Directory directory, Directory originalDirectory, SegmentInfos segmentInfos,
+             FieldInfos.FieldNumbers fieldNumbers, LongSupplier completedDelGenSupplier, InfoStream infoStream,
+             String softDeletesField, StandardDirectoryReader reader) throws IOException {
+    this.directory = directory;
+    this.originalDirectory = originalDirectory;
+    this.segmentInfos = segmentInfos;
+    this.fieldNumbers = fieldNumbers;
+    this.completedDelGenSupplier = completedDelGenSupplier;
+    this.infoStream = infoStream;
+    this.softDeletesField = softDeletesField;
+    if (reader != null) {
+      // Pre-enroll all segment readers into the reader pool; this is necessary so
+      // any in-memory NRT live docs are correctly carried over, and so NRT readers
+      // pulled from this IW share the same segment reader:
+      List<LeafReaderContext> leaves = reader.leaves();
+      assert segmentInfos.size() == leaves.size();
+      for (int i=0;i<leaves.size();i++) {
+        LeafReaderContext leaf = leaves.get(i);
+        SegmentReader segReader = (SegmentReader) leaf.reader();
+        SegmentReader newReader = new SegmentReader(segmentInfos.info(i), segReader, segReader.getLiveDocs(),
+            segReader.numDocs());
+        readerMap.put(newReader.getSegmentInfo(), new ReadersAndUpdates(segmentInfos.getIndexCreatedVersionMajor(),
+            newReader, newPendingDeletes(newReader, newReader.getSegmentInfo())));
+      }
+    }
+  }
+
+  /** Asserts this info still exists in IW's segment infos */
+  synchronized boolean assertInfoIsLive(SegmentCommitInfo info) {
+    int idx = segmentInfos.indexOf(info);
+    assert idx != -1: "info=" + info + " isn't live";
+    assert segmentInfos.info(idx) == info: "info=" + info + " doesn't match live info in segmentInfos";
+    return true;
+  }
+
+  /**
+   * Drops reader for the given {@link SegmentCommitInfo} if it's pooled
+   * @return <code>true</code> if a reader is pooled
+   */
+  synchronized boolean drop(SegmentCommitInfo info) throws IOException {
+    final ReadersAndUpdates rld = readerMap.get(info);
+    if (rld != null) {
+      assert info == rld.info;
+      readerMap.remove(info);
+      rld.dropReaders();
+      return true;
+    }
+    return false;
+  }
+
+  /**
+   * Returns the sum of the ram used by all the buffered readers and updates in MB
+   */
+  synchronized long ramBytesUsed() {
+    long bytes = 0;
+    for (ReadersAndUpdates rld : readerMap.values()) {
+      bytes += rld.ramBytesUsed.get();
+    }
+    return bytes;
+  }
+
+  /**
+   * Returns <code>true</code> iff any of the buffered readers and updates has at least one pending delete
+   */
+  synchronized boolean anyPendingDeletes() {
+    for(ReadersAndUpdates rld : readerMap.values()) {
+      if (rld.getPendingDeleteCount() != 0) {
+        return true;
+      }
+    }
+    return false;
+  }
+
+  /**
+   * Enables reader pooling for this pool. This should be called once the readers in this pool are shared with an
+   * outside resource like an NRT reader. Once reader pooling is enabled a {@link ReadersAndUpdates} will be kept around
+   * in the reader pool on calling {@link #release(ReadersAndUpdates, boolean)} until the segment gets dropped via calls
+   * to {@link #drop(SegmentCommitInfo)} or {@link #dropAll()} or {@link #close()}.
+   * Reader pooling is disabled upon construction but can't be disabled again once it's enabled.
+   */
+  void enableReaderPooling() {
+    poolReaders = true;
+  }
+
+  boolean isReaderPoolingEnabled() {
+    return poolReaders;
+  }
+
+  /**
+   * Releases the {@link ReadersAndUpdates}. This should only be called if {@link #get(SegmentCommitInfo, boolean)}
+   * was called with the create parameter set to true.
+   * @return <code>true</code> if any files were written by this release call.
+   */
+  synchronized boolean release(ReadersAndUpdates rld, boolean assertInfoLive) throws IOException {
+    boolean changed = false;
+    // Matches incRef in get:
+    rld.decRef();
+
+    if (rld.refCount() == 0) {
+      // This happens if the segment was just merged away,
+      // while a buffered deletes packet was still applying deletes/updates to it.
+      assert readerMap.containsKey(rld.info) == false: "seg=" + rld.info
+          + " has refCount 0 but still unexpectedly exists in the reader pool";
+    } else {
+
+      // Pool still holds a ref:
+      assert rld.refCount() > 0: "refCount=" + rld.refCount() + " reader=" + rld.info;
+
+      if (poolReaders == false && rld.refCount() == 1 && readerMap.containsKey(rld.info)) {
+        // This is the last ref to this RLD, and we're not
+        // pooling, so remove it:
+        if (rld.writeLiveDocs(directory)) {
+          // Make sure we only write del docs for a live segment:
+          assert assertInfoLive == false || assertInfoIsLive(rld.info);
+          // Must checkpoint because we just
+          // created new _X_N.del and field updates files;
+          // don't call IW.checkpoint because that also
+          // increments SIS.version, which we do not want to
+          // do here: it was done previously (after we
+          // invoked BDS.applyDeletes), whereas here all we
+          // did was move the state to disk:
+          changed = true;
+        }
+        if (rld.writeFieldUpdates(directory, fieldNumbers, completedDelGenSupplier.getAsLong(), infoStream)) {
+          changed = true;
+        }
+        if (rld.getNumDVUpdates() == 0) {
+          rld.dropReaders();
+          readerMap.remove(rld.info);
+        } else {
+          // We are forced to pool this segment until its deletes fully apply (no delGen gaps)
+        }
+      }
+    }
+    return changed;
+  }
+
+  @Override
+  public synchronized void close() throws IOException {
+    if (closed.compareAndSet(false, true)) {
+      dropAll();
+    }
+  }
+
+  /**
+   * Writes all doc values updates to disk if there are any.
+   * @return <code>true</code> iff any files were written
+   */
+  boolean writeAllDocValuesUpdates() throws IOException {
+    Collection<ReadersAndUpdates> copy;
+    synchronized (this) {
+      // this needs to be protected by the reader pool lock otherwise we hit ConcurrentModificationException
+      copy = new HashSet<>(readerMap.values());
+    }
+    boolean any = false;
+    for (ReadersAndUpdates rld : copy) {
+      any |= rld.writeFieldUpdates(directory, fieldNumbers, completedDelGenSupplier.getAsLong(), infoStream);
+    }
+    return any;
+  }
+
+  /**
+   * Writes doc values updates of the given segments to disk if there are any, and marks them as merging.
+   * @return <code>true</code> iff any files were written
+   */
+  boolean writeDocValuesUpdatesForMerge(List<SegmentCommitInfo> infos) throws IOException {
+    boolean any = false;
+    for (SegmentCommitInfo info : infos) {
+      ReadersAndUpdates rld = get(info, false);
+      if (rld != null) {
+        any |= rld.writeFieldUpdates(directory, fieldNumbers, completedDelGenSupplier.getAsLong(), infoStream);
+        rld.setIsMerging();
+      }
+    }
+    return any;
+  }
+
+  PriorityQueue<ReadersAndUpdates> getReadersByRam() {
+    // Sort by largest ramBytesUsed:
+    PriorityQueue<ReadersAndUpdates> queue = new PriorityQueue<>(readerMap.size(),
+        (a, b) -> Long.compare(b.ramBytesUsed.get(), a.ramBytesUsed.get()));
+    synchronized (this) {
+      for (ReadersAndUpdates rld : readerMap.values()) {
+        queue.add(rld);
+      }
+    }
+    return queue;
+  }
+
+
+  /** Remove all our references to readers, and commits
+   *  any pending changes. */
+  synchronized void dropAll() throws IOException {
+    Throwable priorE = null;
+    final Iterator<Map.Entry<SegmentCommitInfo,ReadersAndUpdates>> it = readerMap.entrySet().iterator();
+    while(it.hasNext()) {
+      final ReadersAndUpdates rld = it.next().getValue();
+
+      // Important to remove as-we-go, not with .clear()
+      // in the end, in case we hit an exception;
+      // otherwise we could over-decref if close() is
+      // called again:
+      it.remove();
+
+      // NOTE: it is allowed that these decRefs do not
+      // actually close the SRs; this happens when a
+      // near real-time reader is kept open after the
+      // IndexWriter instance is closed:
+      try {
+        rld.dropReaders();
+      } catch (Throwable t) {
+        priorE = IOUtils.useOrSuppress(priorE, t);
+      }
+    }
+    assert readerMap.size() == 0;
+    if (priorE != null) {
+      throw IOUtils.rethrowAlways(priorE);
+    }
+  }
+
+  /**
+   * Commit live docs changes for the segment readers for
+   * the provided infos.
+   *
+   * @throws IOException If there is a low-level I/O error
+   */
+  synchronized boolean commit(SegmentInfos infos) throws IOException {
+    boolean atLeastOneChange = false;
+    for (SegmentCommitInfo info : infos) {
+      final ReadersAndUpdates rld = readerMap.get(info);
+      if (rld != null) {
+        assert rld.info == info;
+        boolean changed = rld.writeLiveDocs(directory);
+        changed |= rld.writeFieldUpdates(directory, fieldNumbers, completedDelGenSupplier.getAsLong(), infoStream);
+
+        if (changed) {
+          // Make sure we only write del docs for a live segment:
+          assert assertInfoIsLive(info);
+
+          // Must checkpoint because we just
+          // created new _X_N.del and field updates files;
+          // don't call IW.checkpoint because that also
+          // increments SIS.version, which we do not want to
+          // do here: it was done previously (after we
+          // invoked BDS.applyDeletes), whereas here all we
+          // did was move the state to disk:
+          atLeastOneChange = true;
+        }
+      }
+    }
+    return atLeastOneChange;
+  }
+
+  /**
+   * Returns <code>true</code> iff there are any buffered doc values updates. Otherwise <code>false</code>.
+   * @see #anyPendingDeletes()
+   */
+  synchronized boolean anyDocValuesChanges() {
+    for (ReadersAndUpdates rld : readerMap.values()) {
+      // NOTE: we don't check for pending deletes because deletes carry over in RAM to NRT readers
+      if (rld.getNumDVUpdates() != 0) {
+        return true;
+      }
+    }
+    return false;
+  }
+
+  /**
+   * Obtain a ReadersAndUpdates instance from the
+   * readerPool.  If create is true, you must later call
+   * {@link #release(ReadersAndUpdates, boolean)}.
+   */
+  synchronized ReadersAndUpdates get(SegmentCommitInfo info, boolean create) {
+    assert info.info.dir ==  originalDirectory: "info.dir=" + info.info.dir + " vs " + originalDirectory;
+    if (closed.get()) {
+      assert readerMap.isEmpty() : "Reader map is not empty: " + readerMap;
+      throw new AlreadyClosedException("ReaderPool is already closed");
+    }
+
+    ReadersAndUpdates rld = readerMap.get(info);
+    if (rld == null) {
+      if (create == false) {
+        return null;
+      }
+      rld = new ReadersAndUpdates(segmentInfos.getIndexCreatedVersionMajor(), info, newPendingDeletes(info));
+      // Steal initial reference:
+      readerMap.put(info, rld);
+    } else {
+      assert rld.info == info: "rld.info=" + rld.info + " info=" + info + " isLive?=" + assertInfoIsLive(rld.info)
+          + " vs " + assertInfoIsLive(info);
+    }
+
+    if (create) {
+      // Return ref to caller:
+      rld.incRef();
+    }
+
+    assert noDups();
+
+    return rld;
+  }
+
+  private PendingDeletes newPendingDeletes(SegmentCommitInfo info) {
+    return softDeletesField == null ? new PendingDeletes(info) : new PendingSoftDeletes(softDeletesField, info);
+  }
+
+  private PendingDeletes newPendingDeletes(SegmentReader reader, SegmentCommitInfo info) {
+    return softDeletesField == null ? new PendingDeletes(reader, info) :
+        new PendingSoftDeletes(softDeletesField, reader, info);
+  }
+
+  // Make sure that every segment appears only once in the
+  // pool:
+  private boolean noDups() {
+    Set<String> seen = new HashSet<>();
+    for(SegmentCommitInfo info : readerMap.keySet()) {
+      assert !seen.contains(info.info.name);
+      seen.add(info.info.name);
+    }
+    return true;
+  }
+}
\ No newline at end of file
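
For reference, a minimal sketch of the get/release contract spelled out in the javadoc above: every get(info, true) takes a reference that must be given back through release(ReadersAndUpdates, boolean), and the boolean returned by release() tells the caller whether deletes or doc values updates were written (IndexWriter reacts to that by calling checkpointNoSIS()). The sketch assumes an already constructed ReaderPool and a live SegmentCommitInfo -- see TestReaderPool further down for the full setup -- and the class name ReaderPoolUsageSketch is made up for illustration; since ReaderPool is package-private, this only compiles inside org.apache.lucene.index.

package org.apache.lucene.index;

import java.io.IOException;

import org.apache.lucene.store.IOContext;

// Sketch only: assumes a constructed ReaderPool and a live SegmentCommitInfo
// (see TestReaderPool below for how those are obtained). Not a Lucene API.
final class ReaderPoolUsageSketch {

  static void readOneSegment(ReaderPool pool, SegmentCommitInfo info) throws IOException {
    // get(..., true) increments the ref count and must be paired with release():
    ReadersAndUpdates rld = pool.get(info, true);
    try {
      SegmentReader reader = rld.getReadOnlyClone(IOContext.READ);
      try {
        System.out.println(info.info.name + " has " + reader.numDocs() + " live docs");
      } finally {
        rld.release(reader);
      }
    } finally {
      // release() returns true if it wrote deletes or doc values updates to disk;
      // IndexWriter responds to that by calling checkpointNoSIS().
      boolean wroteFiles = pool.release(rld, true); // second argument: assert the info is still live
      if (wroteFiles) {
        System.out.println("release wrote files for " + info.info.name);
      }
    }
  }
}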

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/89756929/lucene/core/src/java/org/apache/lucene/index/ReadersAndUpdates.java
----------------------------------------------------------------------
diff --git a/lucene/core/src/java/org/apache/lucene/index/ReadersAndUpdates.java b/lucene/core/src/java/org/apache/lucene/index/ReadersAndUpdates.java
index 76a28e2..dd20910 100644
--- a/lucene/core/src/java/org/apache/lucene/index/ReadersAndUpdates.java
+++ b/lucene/core/src/java/org/apache/lucene/index/ReadersAndUpdates.java
@@ -750,7 +750,7 @@ final class ReadersAndUpdates {
     liveDocsSharedPending = true;
   }
 
-  synchronized public void setIsMerging() {
+  synchronized void setIsMerging() {
     // This ensures any newly resolved doc value updates while we are merging are
     // saved for re-applying after this segment is done merging:
     if (isMerging == false) {
@@ -759,6 +759,10 @@ final class ReadersAndUpdates {
     }
   }
 
+  synchronized boolean isMerging() {
+    return isMerging;
+  }
+
   /** Returns a reader for merge, with the latest doc values updates and deletions. */
   synchronized SegmentReader getReaderForMerge(IOContext context) throws IOException {
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/89756929/lucene/core/src/java/org/apache/lucene/index/StandardDirectoryReader.java
----------------------------------------------------------------------
diff --git a/lucene/core/src/java/org/apache/lucene/index/StandardDirectoryReader.java b/lucene/core/src/java/org/apache/lucene/index/StandardDirectoryReader.java
index 63c6d95..3b1b72f 100644
--- a/lucene/core/src/java/org/apache/lucene/index/StandardDirectoryReader.java
+++ b/lucene/core/src/java/org/apache/lucene/index/StandardDirectoryReader.java
@@ -100,7 +100,7 @@ public final class StandardDirectoryReader extends DirectoryReader {
         // IndexWriter's segmentInfos:
         final SegmentCommitInfo info = infos.info(i);
         assert info.info.dir == dir;
-        final ReadersAndUpdates rld = writer.readerPool.get(info, true);
+        final ReadersAndUpdates rld = writer.getPooledInstance(info, true);
         try {
           final SegmentReader reader = rld.getReadOnlyClone(IOContext.READ);
           if (reader.numDocs() > 0 || writer.getConfig().mergePolicy.keepFullyDeletedSegment(() -> reader)) {
@@ -112,7 +112,7 @@ public final class StandardDirectoryReader extends DirectoryReader {
             segmentInfos.remove(infosUpto);
           }
         } finally {
-          writer.readerPool.release(rld);
+          writer.release(rld);
         }
       }
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/89756929/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterDelete.java
----------------------------------------------------------------------
diff --git a/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterDelete.java b/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterDelete.java
index 8bc3f42..fe951f3 100644
--- a/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterDelete.java
+++ b/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterDelete.java
@@ -1207,7 +1207,9 @@ public class TestIndexWriterDelete extends LuceneTestCase {
     w = new IndexWriter(d, iwc);
     IndexReader r = DirectoryReader.open(w, false, false);
     assertTrue(w.tryDeleteDocument(r, 1) != -1);
+    assertFalse(((StandardDirectoryReader)r).isCurrent());
     assertTrue(w.tryDeleteDocument(r.leaves().get(0).reader(), 0) != -1);
+    assertFalse(((StandardDirectoryReader)r).isCurrent());
     r.close();
     w.close();
 
@@ -1218,6 +1220,28 @@ public class TestIndexWriterDelete extends LuceneTestCase {
     d.close();
   }
 
+  public void testNRTIsCurrentAfterDelete() throws Exception {
+    Directory d = newDirectory();
+    IndexWriterConfig iwc = new IndexWriterConfig(new MockAnalyzer(random()));
+    IndexWriter w = new IndexWriter(d, iwc);
+    Document doc = new Document();
+    w.addDocument(doc);
+    w.addDocument(doc);
+    w.addDocument(doc);
+    doc.add(new StringField("id", "1", Field.Store.YES));
+    w.addDocument(doc);
+    w.close();
+    iwc = new IndexWriterConfig(new MockAnalyzer(random()));
+    iwc.setOpenMode(IndexWriterConfig.OpenMode.APPEND);
+    w = new IndexWriter(d, iwc);
+    IndexReader r = DirectoryReader.open(w, false, false);
+    w.deleteDocuments(new Term("id", "1"));
+    IndexReader r2 = DirectoryReader.open(w, true, true);
+    assertFalse(((StandardDirectoryReader)r).isCurrent());
+    assertTrue(((StandardDirectoryReader)r2).isCurrent());
+    IOUtils.close(r, r2, w, d);
+  }
+
   public void testOnlyDeletesTriggersMergeOnClose() throws Exception {
     Directory dir = newDirectory();
     IndexWriterConfig iwc = new IndexWriterConfig(new MockAnalyzer(random()));

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/89756929/lucene/core/src/test/org/apache/lucene/index/TestReaderPool.java
----------------------------------------------------------------------
diff --git a/lucene/core/src/test/org/apache/lucene/index/TestReaderPool.java b/lucene/core/src/test/org/apache/lucene/index/TestReaderPool.java
new file mode 100644
index 0000000..29c5dd3
--- /dev/null
+++ b/lucene/core/src/test/org/apache/lucene/index/TestReaderPool.java
@@ -0,0 +1,223 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.lucene.index;
+
+import java.io.IOException;
+import java.util.Collections;
+
+import com.carrotsearch.randomizedtesting.generators.RandomPicks;
+import org.apache.lucene.document.Document;
+import org.apache.lucene.document.Field;
+import org.apache.lucene.document.NumericDocValuesField;
+import org.apache.lucene.document.StringField;
+import org.apache.lucene.search.DocIdSetIterator;
+import org.apache.lucene.store.Directory;
+import org.apache.lucene.store.IOContext;
+import org.apache.lucene.util.IOUtils;
+import org.apache.lucene.util.LuceneTestCase;
+import org.apache.lucene.util.NullInfoStream;
+
+public class TestReaderPool extends LuceneTestCase {
+
+  public void testDrop() throws IOException {
+    Directory directory = newDirectory();
+    FieldInfos.FieldNumbers fieldNumbers = buildIndex(directory);
+    StandardDirectoryReader reader = (StandardDirectoryReader) DirectoryReader.open(directory);
+    SegmentInfos segmentInfos = reader.segmentInfos.clone();
+
+    ReaderPool pool = new ReaderPool(directory, directory, segmentInfos, fieldNumbers, () -> 0l, null, null, null);
+    SegmentCommitInfo commitInfo = RandomPicks.randomFrom(random(), segmentInfos.asList());
+    ReadersAndUpdates readersAndUpdates = pool.get(commitInfo, true);
+    assertSame(readersAndUpdates, pool.get(commitInfo, false));
+    assertTrue(pool.drop(commitInfo));
+    if (random().nextBoolean()) {
+      assertFalse(pool.drop(commitInfo));
+    }
+    assertNull(pool.get(commitInfo, false));
+    pool.release(readersAndUpdates, random().nextBoolean());
+    IOUtils.close(pool, reader, directory);
+  }
+
+  public void testPoolReaders() throws IOException {
+    Directory directory = newDirectory();
+    FieldInfos.FieldNumbers fieldNumbers = buildIndex(directory);
+    StandardDirectoryReader reader = (StandardDirectoryReader) DirectoryReader.open(directory);
+    SegmentInfos segmentInfos = reader.segmentInfos.clone();
+
+    ReaderPool pool = new ReaderPool(directory, directory, segmentInfos, fieldNumbers, () -> 0l, null, null, null);
+    SegmentCommitInfo commitInfo = RandomPicks.randomFrom(random(), segmentInfos.asList());
+    assertFalse(pool.isReaderPoolingEnabled());
+    pool.release(pool.get(commitInfo, true), random().nextBoolean());
+    assertNull(pool.get(commitInfo, false));
+    // now start pooling
+    pool.enableReaderPooling();
+    assertTrue(pool.isReaderPoolingEnabled());
+    pool.release(pool.get(commitInfo, true), random().nextBoolean());
+    assertNotNull(pool.get(commitInfo, false));
+    assertSame(pool.get(commitInfo, false), pool.get(commitInfo, false));
+    pool.drop(commitInfo);
+    long ramBytesUsed = 0;
+    assertEquals(0, pool.ramBytesUsed());
+    for (SegmentCommitInfo info : segmentInfos) {
+      pool.release(pool.get(info, true), random().nextBoolean());
+      assertEquals(" used: " + ramBytesUsed + " actual: " + pool.ramBytesUsed(), 0, pool.ramBytesUsed());
+      ramBytesUsed = pool.ramBytesUsed();
+      assertSame(pool.get(info, false), pool.get(info, false));
+    }
+    assertNotSame(0, pool.ramBytesUsed());
+    pool.dropAll();
+    for (SegmentCommitInfo info : segmentInfos) {
+      assertNull(pool.get(info, false));
+    }
+    assertEquals(0, pool.ramBytesUsed());
+    IOUtils.close(pool, reader, directory);
+  }
+
+
+  public void testUpdate() throws IOException {
+    Directory directory = newDirectory();
+    FieldInfos.FieldNumbers fieldNumbers = buildIndex(directory);
+    StandardDirectoryReader reader = (StandardDirectoryReader) DirectoryReader.open(directory);
+    SegmentInfos segmentInfos = reader.segmentInfos.clone();
+    ReaderPool pool = new ReaderPool(directory, directory, segmentInfos, fieldNumbers, () -> 0l,
+        new NullInfoStream(), null, null);
+    int id = random().nextInt(10);
+    if (random().nextBoolean()) {
+      pool.enableReaderPooling();
+    }
+    for (SegmentCommitInfo commitInfo : segmentInfos) {
+      ReadersAndUpdates readersAndUpdates = pool.get(commitInfo, true);
+      SegmentReader readOnlyClone = readersAndUpdates.getReadOnlyClone(IOContext.READ);
+      PostingsEnum postings = readOnlyClone.postings(new Term("id", "" + id));
+      boolean expectUpdate = false;
+      int doc = -1;
+      if (postings != null && postings.nextDoc() != DocIdSetIterator.NO_MORE_DOCS) {
+        NumericDocValuesFieldUpdates number = new NumericDocValuesFieldUpdates(0, "number", commitInfo.info.maxDoc());
+        number.add(doc = postings.docID(), 1000l);
+        number.finish();
+        readersAndUpdates.addDVUpdate(number);
+        expectUpdate = true;
+        assertEquals(DocIdSetIterator.NO_MORE_DOCS, postings.nextDoc());
+        assertTrue(pool.anyDocValuesChanges());
+      } else {
+        assertFalse(pool.anyDocValuesChanges());
+      }
+      readOnlyClone.close();
+      boolean writtenToDisk;
+      if (pool.isReaderPoolingEnabled()) {
+        if (random().nextBoolean()) {
+          writtenToDisk = pool.writeAllDocValuesUpdates();
+          assertFalse(readersAndUpdates.isMerging());
+        } else if (random().nextBoolean()) {
+          writtenToDisk = pool.commit(segmentInfos);
+          assertFalse(readersAndUpdates.isMerging());
+        } else {
+          writtenToDisk = pool.writeDocValuesUpdatesForMerge(Collections.singletonList(commitInfo));
+          assertTrue(readersAndUpdates.isMerging());
+        }
+        assertFalse(pool.release(readersAndUpdates, random().nextBoolean()));
+      } else {
+        if (random().nextBoolean()) {
+          writtenToDisk = pool.release(readersAndUpdates, random().nextBoolean());
+          assertFalse(readersAndUpdates.isMerging());
+        } else {
+          writtenToDisk = pool.writeDocValuesUpdatesForMerge(Collections.singletonList(commitInfo));
+          assertTrue(readersAndUpdates.isMerging());
+          assertFalse(pool.release(readersAndUpdates, random().nextBoolean()));
+        }
+      }
+      assertFalse(pool.anyDocValuesChanges());
+      assertEquals(expectUpdate, writtenToDisk);
+      if (expectUpdate) {
+        readersAndUpdates = pool.get(commitInfo, true);
+        SegmentReader updatedReader = readersAndUpdates.getReadOnlyClone(IOContext.READ);
+        assertNotSame(-1, doc);
+        NumericDocValues number = updatedReader.getNumericDocValues("number");
+        assertEquals(doc, number.advance(doc));
+        assertEquals(1000l, number.longValue());
+       readersAndUpdates.release(updatedReader);
+       assertFalse(pool.release(readersAndUpdates, random().nextBoolean()));
+      }
+    }
+    IOUtils.close(pool, reader, directory);
+  }
+
+  public void testDeletes() throws IOException {
+    Directory directory = newDirectory();
+    FieldInfos.FieldNumbers fieldNumbers = buildIndex(directory);
+    StandardDirectoryReader reader = (StandardDirectoryReader) DirectoryReader.open(directory);
+    SegmentInfos segmentInfos = reader.segmentInfos.clone();
+    ReaderPool pool = new ReaderPool(directory, directory, segmentInfos, fieldNumbers, () -> 0l,
+        new NullInfoStream(), null, null);
+    int id = random().nextInt(10);
+    if (random().nextBoolean()) {
+      pool.enableReaderPooling();
+    }
+    for (SegmentCommitInfo commitInfo : segmentInfos) {
+      ReadersAndUpdates readersAndUpdates = pool.get(commitInfo, true);
+      SegmentReader readOnlyClone = readersAndUpdates.getReadOnlyClone(IOContext.READ);
+      PostingsEnum postings = readOnlyClone.postings(new Term("id", "" + id));
+      boolean expectUpdate = false;
+      int doc = -1;
+      if (postings != null && postings.nextDoc() != DocIdSetIterator.NO_MORE_DOCS) {
+        readersAndUpdates.delete(doc = postings.docID());
+        expectUpdate = true;
+        assertEquals(DocIdSetIterator.NO_MORE_DOCS, postings.nextDoc());
+        assertTrue(pool.anyPendingDeletes());
+      } else {
+        assertFalse(pool.anyPendingDeletes());
+      }
+      assertFalse(pool.anyDocValuesChanges()); // deletes are not accounted here
+      readOnlyClone.close();
+      boolean writtenToDisk;
+      if (pool.isReaderPoolingEnabled()) {
+        writtenToDisk = pool.commit(segmentInfos);
+        assertFalse(pool.release(readersAndUpdates, random().nextBoolean()));
+      } else {
+        writtenToDisk = pool.release(readersAndUpdates, random().nextBoolean());
+      }
+      assertFalse(pool.anyDocValuesChanges());
+      assertEquals(expectUpdate, writtenToDisk);
+      if (expectUpdate) {
+        readersAndUpdates = pool.get(commitInfo, true);
+        SegmentReader updatedReader = readersAndUpdates.getReadOnlyClone(IOContext.READ);
+        assertNotSame(-1, doc);
+        assertFalse(updatedReader.getLiveDocs().get(doc));
+        readersAndUpdates.release(updatedReader);
+        assertFalse(pool.release(readersAndUpdates, random().nextBoolean()));
+      }
+    }
+    IOUtils.close(pool, reader, directory);
+  }
+
+  private FieldInfos.FieldNumbers buildIndex(Directory directory) throws IOException {
+    IndexWriter writer = new IndexWriter(directory, newIndexWriterConfig());
+    for (int i = 0; i < 10; i++) {
+      Document document = new Document();
+      document.add(new StringField("id", "" + i, Field.Store.YES));
+      document.add(new NumericDocValuesField("number", i));
+      writer.addDocument(document);
+      if (random().nextBoolean()) {
+        writer.flush();
+      }
+    }
+    writer.commit();
+    writer.close();
+    return writer.globalFieldNumberMap;
+  }
+}


[15/40] lucene-solr:jira/solr-11833: SOLR-11646: change tab-pane padding to align better under tabs

Posted by ab...@apache.org.
SOLR-11646: change tab-pane padding to align better under tabs


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/aab2c770
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/aab2c770
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/aab2c770

Branch: refs/heads/jira/solr-11833
Commit: aab2c770c6f934745b23f14649ce476d582f7afb
Parents: a033759
Author: Cassandra Targett <ct...@apache.org>
Authored: Thu Apr 12 12:17:04 2018 -0500
Committer: Cassandra Targett <ct...@apache.org>
Committed: Thu Apr 19 08:53:24 2018 -0500

----------------------------------------------------------------------
 solr/solr-ref-guide/src/css/customstyles.css | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/aab2c770/solr/solr-ref-guide/src/css/customstyles.css
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/css/customstyles.css b/solr/solr-ref-guide/src/css/customstyles.css
index 9a166c1..016b09f 100755
--- a/solr/solr-ref-guide/src/css/customstyles.css
+++ b/solr/solr-ref-guide/src/css/customstyles.css
@@ -453,7 +453,7 @@ div#toc ul li ul li {
 }
 
 .tab-content {
-    padding: 15px;
+    padding: 0px;
 }
 
 span.tagTitle {font-weight: 500;}


[36/40] lucene-solr:jira/solr-11833: LUCENE-8266: Detect bogus tiles when creating a standard polygon and throw a TileException

Posted by ab...@apache.org.
LUCENE-8266: Detect bogus tiles when creating a standard polygon and throw a TileException


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/e8c36f48
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/e8c36f48
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/e8c36f48

Branch: refs/heads/jira/solr-11833
Commit: e8c36f489e25af7194c5b9b3ac8355db5a6132cc
Parents: 8975692
Author: Ignacio Vera <iv...@apache.org>
Authored: Mon Apr 23 11:52:01 2018 +0200
Committer: Ignacio Vera <iv...@apache.org>
Committed: Mon Apr 23 11:52:01 2018 +0200

----------------------------------------------------------------------
 lucene/CHANGES.txt                              |  3 +
 .../spatial3d/geom/GeoPolygonFactory.java       | 20 +++++--
 .../lucene/spatial3d/geom/GeoPolygonTest.java   | 61 +++++++++++++++++++-
 3 files changed, 79 insertions(+), 5 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/e8c36f48/lucene/CHANGES.txt
----------------------------------------------------------------------
diff --git a/lucene/CHANGES.txt b/lucene/CHANGES.txt
index 1b790e4..301b360 100644
--- a/lucene/CHANGES.txt
+++ b/lucene/CHANGES.txt
@@ -152,6 +152,9 @@ New Features
 
 Bug Fixes
 
+* LUCENE-8266: Detect bogus tiles when creating a standard polygon and
+  throw a TileException. (Ignacio Vera)
+
 * LUCENE-8234: Fixed bug in how spatial relationship is computed for
   GeoStandardCircle when it covers the whole world. (Ignacio Vera)
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/e8c36f48/lucene/spatial3d/src/java/org/apache/lucene/spatial3d/geom/GeoPolygonFactory.java
----------------------------------------------------------------------
diff --git a/lucene/spatial3d/src/java/org/apache/lucene/spatial3d/geom/GeoPolygonFactory.java b/lucene/spatial3d/src/java/org/apache/lucene/spatial3d/geom/GeoPolygonFactory.java
index 0bbae80..af5d8ef 100755
--- a/lucene/spatial3d/src/java/org/apache/lucene/spatial3d/geom/GeoPolygonFactory.java
+++ b/lucene/spatial3d/src/java/org/apache/lucene/spatial3d/geom/GeoPolygonFactory.java
@@ -1237,14 +1237,14 @@ public class GeoPolygonFactory {
         break;
       }
       final Edge newLastEdge = edgeBuffer.getNext(lastEdge);
+      if (Plane.arePointsCoplanar(lastEdge.startPoint, lastEdge.endPoint, newLastEdge.endPoint)) {
+        break;
+      }
       // Planes that are almost identical cannot be properly handled by the standard polygon logic.  Detect this case and, if found,
       // give up on the tiling -- we'll need to create a large poly instead.
       if (lastEdge.plane.isFunctionallyIdentical(newLastEdge.plane)) {
         throw new TileException("Two adjacent edge planes are effectively parallel despite filtering; give up on tiling");
       }
-      if (Plane.arePointsCoplanar(lastEdge.startPoint, lastEdge.endPoint, newLastEdge.endPoint)) {
-        break;
-      }
       if (isWithin(newLastEdge.endPoint, includedEdges)) {
         //System.out.println(" maybe can extend to next edge");
         // Found a candidate for extension.  But do some other checks first.  Basically, we need to know if we construct a polygon
@@ -1308,6 +1308,11 @@ public class GeoPolygonFactory {
       if (Plane.arePointsCoplanar(newFirstEdge.startPoint, newFirstEdge.endPoint, firstEdge.endPoint)) {
         break;
       }
+      // Planes that are almost identical cannot be properly handled by the standard polygon logic.  Detect this case and, if found,
+      // give up on the tiling -- we'll need to create a large poly instead.
+      if (firstEdge.plane.isFunctionallyIdentical(newFirstEdge.plane)) {
+        throw new TileException("Two adjacent edge planes are effectively parallel despite filtering; give up on tiling");
+      }
       if (isWithin(newFirstEdge.startPoint, includedEdges)) {
         //System.out.println(" maybe can extend to previous edge");
         // Found a candidate for extension.  But do some other checks first.  Basically, we need to know if we construct a polygon
@@ -1387,6 +1392,10 @@ public class GeoPolygonFactory {
         // has no contents, so we generate no polygon.
         return false;
       }
+
+      if (firstEdge.plane.isFunctionallyIdentical(lastEdge.plane)) {
+        throw new TileException("Two adjacent edge planes are effectively parallel despite filtering; give up on tiling");
+      }
       
       // Now look for completely planar points.  This too is a degeneracy condition that we should
       // return "false" for.
@@ -1407,7 +1416,10 @@ public class GeoPolygonFactory {
       // Build the return edge (internal, of course)
       final SidedPlane returnSidedPlane = new SidedPlane(firstEdge.endPoint, false, firstEdge.startPoint, lastEdge.endPoint);
       final Edge returnEdge = new Edge(firstEdge.startPoint, lastEdge.endPoint, returnSidedPlane, true);
-
+      if (returnEdge.plane.isFunctionallyIdentical(lastEdge.plane) ||
+          returnEdge.plane.isFunctionallyIdentical(firstEdge.plane)) {
+        throw new TileException("Two adjacent edge planes are effectively parallel despite filtering; give up on tiling");
+      }
       // Build point list and edge list
       final List<Edge> edges = new ArrayList<Edge>(includedEdges.size());
       returnIsInternal = true;

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/e8c36f48/lucene/spatial3d/src/test/org/apache/lucene/spatial3d/geom/GeoPolygonTest.java
----------------------------------------------------------------------
diff --git a/lucene/spatial3d/src/test/org/apache/lucene/spatial3d/geom/GeoPolygonTest.java b/lucene/spatial3d/src/test/org/apache/lucene/spatial3d/geom/GeoPolygonTest.java
index 09ae776..3eafb5a 100755
--- a/lucene/spatial3d/src/test/org/apache/lucene/spatial3d/geom/GeoPolygonTest.java
+++ b/lucene/spatial3d/src/test/org/apache/lucene/spatial3d/geom/GeoPolygonTest.java
@@ -1626,5 +1626,64 @@ shape:
     final GeoPoint point = new GeoPoint(PlanetModel.SPHERE, Geo3DUtil.fromDegrees(2.104316138623836E-4), Geo3DUtil.fromDegrees(1.413E-321));
     assertTrue(polygon.isWithin(point) == largePolygon.isWithin(point));
   }
-  
+
+  @Test
+  public void testLUCENE8266_case1() {
+    //POLYGON((-6.35093158794635E-11 -4.965517818537545E-11,0.0 3.113E-321,-60.23538585411111 18.46706692248612, 162.37100340450482 -25.988383239097754,-6.35093158794635E-11 -4.965517818537545E-11))
+    final List<GeoPoint> points = new ArrayList<>();
+    points.add(new GeoPoint(PlanetModel.SPHERE, Geo3DUtil.fromDegrees(-4.965517818537545E-11), Geo3DUtil.fromDegrees(-6.35093158794635E-11)));
+    points.add(new GeoPoint(PlanetModel.SPHERE, Geo3DUtil.fromDegrees(3.113E-321), Geo3DUtil.fromDegrees(0.0)));
+    points.add(new GeoPoint(PlanetModel.SPHERE, Geo3DUtil.fromDegrees(18.46706692248612), Geo3DUtil.fromDegrees(-60.23538585411111)));
+    points.add(new GeoPoint(PlanetModel.SPHERE, Geo3DUtil.fromDegrees(-25.988383239097754), Geo3DUtil.fromDegrees(162.37100340450482)));
+    final GeoPolygonFactory.PolygonDescription description = new GeoPolygonFactory.PolygonDescription(points);
+    final GeoPolygon polygon = GeoPolygonFactory.makeGeoPolygon(PlanetModel.SPHERE, description);
+    final GeoPolygon largePolygon = GeoPolygonFactory.makeLargeGeoPolygon(PlanetModel.SPHERE, Collections.singletonList(description));
+
+    //POINT(-179.99999999999974 2.4432260684194717E-11)
+    final GeoPoint point = new GeoPoint(PlanetModel.SPHERE, Geo3DUtil.fromDegrees(2.4432260684194717E-11), Geo3DUtil.fromDegrees(-179.99999999999974));
+    assertFalse(polygon.isWithin(point));
+    assertFalse(largePolygon.isWithin(point));
+  }
+
+  @Test
+  public void testLUCENE8266_case2() {
+    //POLYGON((7.885596306952593 -42.25131029665893,1.5412637897085604 -6.829581354691802,34.03338913004999 27.583811665797796,0.0 5.7E-322,-8.854664233194431E-12 7.132883127401669E-11,-40.20723013296905 15.679563923063258,7.885596306952593 -42.25131029665893))
+    final List<GeoPoint> points = new ArrayList<>();
+    points.add(new GeoPoint(PlanetModel.WGS84, Geo3DUtil.fromDegrees(-42.25131029665893), Geo3DUtil.fromDegrees(7.885596306952593)));
+    points.add(new GeoPoint(PlanetModel.WGS84, Geo3DUtil.fromDegrees(-6.829581354691802), Geo3DUtil.fromDegrees(1.5412637897085604)));
+    points.add(new GeoPoint(PlanetModel.WGS84, Geo3DUtil.fromDegrees(27.583811665797796), Geo3DUtil.fromDegrees(34.03338913004999)));
+    points.add(new GeoPoint(PlanetModel.WGS84, Geo3DUtil.fromDegrees(5.7E-322), Geo3DUtil.fromDegrees(0.0)));
+    points.add(new GeoPoint(PlanetModel.WGS84, Geo3DUtil.fromDegrees(7.132883127401669E-11), Geo3DUtil.fromDegrees( -8.854664233194431E-12)));
+    points.add(new GeoPoint(PlanetModel.WGS84, Geo3DUtil.fromDegrees(15.679563923063258), Geo3DUtil.fromDegrees(-40.20723013296905)));
+    final GeoPolygonFactory.PolygonDescription description = new GeoPolygonFactory.PolygonDescription(points);
+    final GeoPolygon polygon = GeoPolygonFactory.makeGeoPolygon(PlanetModel.WGS84, description);
+    final GeoPolygon largePolygon = GeoPolygonFactory.makeLargeGeoPolygon(PlanetModel.WGS84, Collections.singletonList(description));
+
+    //POINT(-179.99999999999983 -8.474427850967216E-12)
+    final GeoPoint point = new GeoPoint(PlanetModel.WGS84, Geo3DUtil.fromDegrees(-8.474427850967216E-12), Geo3DUtil.fromDegrees(-179.99999999999983));
+    assertFalse(polygon.isWithin(point));
+    assertFalse(largePolygon.isWithin(point));
+  }
+
+  @Test
+  public void testLUCENE8266_case3() {
+    //POLYGON((-98.38897266664411 7.286530349760722,-169.07259176302364 -7.410435277740526,8E-123,-179.9999999999438 -1.298973436027626E-10,66.2759716901292 -52.84327866278771,-98.38897266664411 7.286530349760722))
+    final List<GeoPoint> points = new ArrayList<>();
+    points.add(new GeoPoint(PlanetModel.WGS84, Geo3DUtil.fromDegrees(7.286530349760722), Geo3DUtil.fromDegrees(-98.38897266664411)));
+    points.add(new GeoPoint(PlanetModel.WGS84, Geo3DUtil.fromDegrees(-7.410435277740526), Geo3DUtil.fromDegrees(-169.07259176302364)));
+    points.add(new GeoPoint(PlanetModel.WGS84, Geo3DUtil.fromDegrees(-8.136646215781618E-123), Geo3DUtil.fromDegrees(-180.0)));
+    points.add(new GeoPoint(PlanetModel.WGS84, Geo3DUtil.fromDegrees(-1.298973436027626E-10), Geo3DUtil.fromDegrees(-179.9999999999438)));
+    points.add(new GeoPoint(PlanetModel.WGS84, Geo3DUtil.fromDegrees(-52.84327866278771), Geo3DUtil.fromDegrees(66.2759716901292)));
+    final GeoPolygonFactory.PolygonDescription description = new GeoPolygonFactory.PolygonDescription(points);
+    final GeoPolygon polygon = GeoPolygonFactory.makeGeoPolygon(PlanetModel.WGS84, description);
+    final GeoPolygon largePolygon = GeoPolygonFactory.makeLargeGeoPolygon(PlanetModel.WGS84, Collections.singletonList(description));
+
+    //POINT(3.4279315107728157E-122 2.694960611439045E-11)
+    final GeoPoint point = new GeoPoint(PlanetModel.WGS84, Geo3DUtil.fromDegrees(2.694960611439045E-11), Geo3DUtil.fromDegrees(3.4279315107728157E-122));
+    assertFalse(polygon.isWithin(point));
+    assertFalse(largePolygon.isWithin(point));
+  }
+
+
+
 }


[40/40] lucene-solr:jira/solr-11833: Merge branch 'master' into jira/solr-11833

Posted by ab...@apache.org.
Merge branch 'master' into jira/solr-11833


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/880ce3f9
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/880ce3f9
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/880ce3f9

Branch: refs/heads/jira/solr-11833
Commit: 880ce3f903e3d94ca0e078a99c6c0659cb067a8b
Parents: 14824ca 1409ab8
Author: Andrzej Bialecki <ab...@apache.org>
Authored: Mon Apr 23 19:34:54 2018 +0200
Committer: Andrzej Bialecki <ab...@apache.org>
Committed: Mon Apr 23 19:34:54 2018 +0200

----------------------------------------------------------------------
 lucene/CHANGES.txt                              |   9 +-
 .../lucene/index/BufferedUpdatesStream.java     | 134 +---
 .../apache/lucene/index/DocumentsWriter.java    | 224 +++---
 .../index/DocumentsWriterFlushControl.java      |   4 +-
 .../lucene/index/DocumentsWriterFlushQueue.java |   5 +-
 .../lucene/index/DocumentsWriterPerThread.java  |  32 +-
 .../apache/lucene/index/FilterMergePolicy.java  |   4 +-
 .../lucene/index/FrozenBufferedUpdates.java     |  77 +-
 .../org/apache/lucene/index/IndexWriter.java    | 699 +++++++-----------
 .../org/apache/lucene/index/MergePolicy.java    |   2 +-
 .../org/apache/lucene/index/NoMergePolicy.java  |   4 +-
 .../org/apache/lucene/index/PendingDeletes.java |   2 +-
 .../apache/lucene/index/PendingSoftDeletes.java |  12 +-
 .../org/apache/lucene/index/ReaderPool.java     | 390 ++++++++++
 .../apache/lucene/index/ReadersAndUpdates.java  |  43 +-
 .../index/SoftDeletesRetentionMergePolicy.java  |   5 +-
 .../lucene/index/StandardDirectoryReader.java   |   6 +-
 .../search/DisjunctionMatchesIterator.java      |  10 +-
 .../apache/lucene/search/MatchesIterator.java   |   8 -
 .../lucene/search/TermMatchesIterator.java      |   9 +-
 .../org/apache/lucene/search/TermQuery.java     |   2 +-
 .../apache/lucene/index/TestIndexWriter.java    |   3 +-
 .../lucene/index/TestIndexWriterDelete.java     |  24 +
 .../lucene/index/TestIndexWriterExceptions.java |   9 +-
 .../lucene/index/TestIndexWriterOnDiskFull.java |   3 +-
 .../org/apache/lucene/index/TestInfoStream.java |   8 +-
 .../apache/lucene/index/TestMultiFields.java    |   3 +-
 .../apache/lucene/index/TestPendingDeletes.java |   4 +-
 .../lucene/index/TestPendingSoftDeletes.java    |   5 +-
 .../org/apache/lucene/index/TestReaderPool.java | 223 ++++++
 .../TestSoftDeletesDirectoryReaderWrapper.java  |  32 -
 .../TestSoftDeletesRetentionMergePolicy.java    | 101 ++-
 .../lucene/search/TestMatchesIterator.java      |  73 +-
 lucene/ivy-versions.properties                  |   2 +-
 .../search/TestInetAddressRangeQueries.java     |   2 +
 .../spatial3d/geom/GeoComplexPolygon.java       | 347 +++++----
 .../spatial3d/geom/GeoPolygonFactory.java       |  20 +-
 .../lucene/spatial3d/geom/GeoPolygonTest.java   |  81 ++-
 .../apache/lucene/index/RandomIndexWriter.java  |   8 +-
 .../lucene/search/AssertingMatchesIterator.java |   7 -
 solr/CHANGES.txt                                |  51 +-
 solr/bin/solr                                   |   4 +
 solr/bin/solr.cmd                               |   3 +
 solr/bin/solr.in.cmd                            |  12 +-
 solr/bin/solr.in.sh                             |  16 +-
 ...ractNamedEntitiesUpdateProcessorFactory.java |   6 +
 .../carrot2/CarrotClusteringEngine.java         |   2 +-
 ...anguageIdentifierUpdateProcessorFactory.java |   8 +-
 ...OpenNLPLangDetectUpdateProcessorFactory.java |   8 +-
 ...anguageIdentifierUpdateProcessorFactory.java |   8 +-
 .../solrj/embedded/EmbeddedSolrServer.java      |  24 +-
 .../org/apache/solr/cloud/ZkController.java     | 136 +++-
 .../cloud/api/collections/SetAliasPropCmd.java  |  45 +-
 .../apache/solr/core/HdfsDirectoryFactory.java  |  13 +-
 .../apache/solr/core/MMapDirectoryFactory.java  |   2 +-
 .../solr/core/NRTCachingDirectoryFactory.java   |   2 +-
 .../src/java/org/apache/solr/core/SolrCore.java |  24 +-
 .../java/org/apache/solr/core/ZkContainer.java  |  16 -
 .../apache/solr/handler/CdcrRequestHandler.java |   8 +-
 .../apache/solr/handler/RequestHandlerBase.java |   4 +-
 .../solr/handler/UpdateRequestHandler.java      |   2 +-
 .../solr/handler/admin/CollectionsHandler.java  | 183 +++--
 .../solr/handler/admin/ConfigSetsHandler.java   |  18 +-
 .../handler/admin/MetricsCollectorHandler.java  |   2 +-
 .../component/QueryElevationComponent.java      |   2 +-
 .../solr/highlight/HighlightingPluginBase.java  |   2 +-
 .../solr/request/LocalSolrQueryRequest.java     |  10 +-
 .../solr/response/XSLTResponseWriter.java       |   2 +-
 .../org/apache/solr/schema/IndexSchema.java     |   2 +-
 .../solr/schema/ManagedIndexSchemaFactory.java  |   2 +-
 .../solr/spelling/DirectSolrSpellChecker.java   |   2 +-
 .../org/apache/solr/update/SolrIndexConfig.java |   4 +
 .../org/apache/solr/update/TransactionLog.java  |   3 +-
 .../ClassificationUpdateProcessorFactory.java   |   2 +-
 ...oreCommitOptimizeUpdateProcessorFactory.java |   2 +-
 .../processor/LogUpdateProcessorFactory.java    |   2 +-
 .../processor/RegexpBoostProcessorFactory.java  |   2 +-
 .../SignatureUpdateProcessorFactory.java        |   2 +-
 .../processor/URLClassifyProcessorFactory.java  |   2 +-
 .../solrconfig-concurrentmergescheduler.xml     |  37 +
 .../org/apache/solr/BasicFunctionalityTest.java |   2 +-
 .../apache/solr/cloud/AliasIntegrationTest.java |  33 +-
 .../solr/cloud/CreateRoutedAliasTest.java       |  10 +-
 .../apache/solr/cloud/DeleteReplicaTest.java    |  83 ++-
 .../org/apache/solr/cloud/ForceLeaderTest.java  |  81 ---
 .../org/apache/solr/cloud/MoveReplicaTest.java  |  20 -
 .../solr/cloud/TestMiniSolrCloudClusterSSL.java |  59 ++
 .../cloud/autoscaling/NodeLostTriggerTest.java  |   1 +
 .../autoscaling/sim/TestTriggerIntegration.java |   1 +
 .../solr/handler/TestReplicationHandler.java    |  36 +-
 .../solr/handler/admin/TestCollectionAPIs.java  |  23 +
 .../request/TestUnInvertedFieldException.java   |   8 +-
 .../apache/solr/update/SolrIndexConfigTest.java |  24 +
 .../apache/solr/update/TransactionLogTest.java  |  47 ++
 .../TimeRoutedAliasUpdateProcessorTest.java     |  31 +-
 solr/licenses/commons-fileupload-1.3.2.jar.sha1 |   1 -
 solr/licenses/commons-fileupload-1.3.3.jar.sha1 |   1 +
 solr/solr-ref-guide/src/about-this-guide.adoc   |   2 +
 solr/solr-ref-guide/src/blob-store-api.adoc     |  96 ++-
 solr/solr-ref-guide/src/config-api.adoc         | 720 ++++++++++++++-----
 solr/solr-ref-guide/src/config-sets.adoc        |  36 +-
 solr/solr-ref-guide/src/configsets-api.adoc     | 244 ++++---
 .../src/configuring-solrconfig-xml.adoc         |  42 +-
 solr/solr-ref-guide/src/css/customstyles.css    |   2 +-
 solr/solr-ref-guide/src/enabling-ssl.adoc       |  21 +-
 .../src/implicit-requesthandlers.adoc           | 374 ++++++++--
 solr/solr-ref-guide/src/learning-to-rank.adoc   |   2 +
 .../src/requestdispatcher-in-solrconfig.adoc    |   2 +-
 solr/solr-ref-guide/src/schema-api.adoc         |   2 +-
 ...tting-up-an-external-zookeeper-ensemble.adoc | 403 ++++++++---
 .../src/update-request-processors.adoc          |   2 +-
 .../client/solrj/cloud/autoscaling/Clause.java  |  21 +-
 .../autoscaling/DelegatingCloudManager.java     |   2 +-
 .../client/solrj/cloud/autoscaling/Operand.java |   2 +-
 .../client/solrj/cloud/autoscaling/Policy.java  |  38 +-
 .../solrj/cloud/autoscaling/ReplicaCount.java   |   6 +
 .../solrj/cloud/autoscaling/Suggestion.java     |   4 +-
 .../solrj/cloud/autoscaling/Violation.java      |   2 +-
 .../solr/client/solrj/impl/HttpClientUtil.java  |  59 +-
 .../solrj/impl/SolrClientNodeStateProvider.java |   4 +-
 .../org/apache/solr/client/solrj/io/Lang.java   |   1 +
 .../client/solrj/io/eval/MemsetEvaluator.java   | 167 +++++
 .../solrj/io/graph/GatherNodesStream.java       |  20 +-
 .../client/solrj/io/stream/FacetStream.java     |   6 +-
 .../solr/client/solrj/io/stream/LetStream.java  |  11 +-
 .../solr/client/solrj/io/stream/SqlStream.java  |   3 +-
 .../solrj/io/stream/TimeSeriesStream.java       |   4 +-
 .../request/JavaBinUpdateRequestCodec.java      |   6 +-
 .../java/org/apache/solr/common/MapWriter.java  |   9 +-
 .../common/cloud/CloudCollectionsListener.java  |  40 ++
 .../apache/solr/common/cloud/ZkStateReader.java |  82 ++-
 .../apache/solr/common/params/SolrParams.java   |  22 +-
 .../org/apache/solr/common/util/NamedList.java  |  30 +
 .../apispec/cluster.configs.Commands.json       |  12 +-
 .../apispec/cluster.configs.delete.json         |   2 +-
 .../src/resources/apispec/cluster.configs.json  |   2 +-
 .../solrj/cloud/autoscaling/TestPolicy.java     |  22 +-
 .../client/solrj/impl/HttpClientUtilTest.java   | 108 +++
 .../apache/solr/client/solrj/io/TestLang.java   |   2 +-
 .../solrj/io/stream/MathExpressionTest.java     | 106 +++
 .../cloud/TestCloudCollectionsListeners.java    | 307 ++++++++
 .../solr/common/params/SolrParamTest.java       |  38 +-
 .../org/apache/solr/util/SSLTestConfig.java     |  89 ++-
 ...estConfig.hostname-and-ip-missmatch.keystore | Bin 0 -> 2246 bytes
 .../resources/SSLTestConfig.testing.keystore    | Bin 2208 -> 2207 bytes
 .../src/resources/create-keystores.sh           |  37 +
 solr/webapp/web/css/angular/collections.css     |   4 -
 solr/webapp/web/css/angular/cores.css           |   8 -
 solr/webapp/web/partials/cores.html             |   2 -
 149 files changed, 4890 insertions(+), 2108 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/880ce3f9/solr/core/src/test/org/apache/solr/cloud/autoscaling/sim/TestTriggerIntegration.java
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/880ce3f9/solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/Policy.java
----------------------------------------------------------------------
diff --cc solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/Policy.java
index 048050a,cbdb2a7..05c9c20
--- a/solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/Policy.java
+++ b/solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/Policy.java
@@@ -465,13 -467,11 +467,13 @@@ public class Policy implements MapWrite
    private static final Map<CollectionAction, Supplier<Suggester>> ops = new HashMap<>();
  
    static {
-     ops.put(CollectionAction.ADDREPLICA, () -> new AddReplicaSuggester());
-     ops.put(CollectionAction.DELETEREPLICA, () -> new DeleteReplicaSuggester());
-     ops.put(CollectionAction.DELETENODE, () -> new DeleteNodeSuggester());
-     ops.put(CollectionAction.MOVEREPLICA, () -> new MoveReplicaSuggester());
-     ops.put(CollectionAction.SPLITSHARD, () -> new SplitShardSuggester());
+     ops.put(CollectionAction.ADDREPLICA, AddReplicaSuggester::new);
 -    ops.put(CollectionAction.DELETEREPLICA, () -> new UnsupportedSuggester(CollectionAction.DELETEREPLICA));
++    ops.put(CollectionAction.DELETEREPLICA, DeleteReplicaSuggester::new);
++    ops.put(CollectionAction.DELETENODE, DeleteNodeSuggester::new);
+     ops.put(CollectionAction.MOVEREPLICA, MoveReplicaSuggester::new);
+     ops.put(CollectionAction.SPLITSHARD, SplitShardSuggester::new);
      ops.put(CollectionAction.MERGESHARDS, () -> new UnsupportedSuggester(CollectionAction.MERGESHARDS));
 +    ops.put(CollectionAction.NONE, () -> new UnsupportedSuggester(CollectionAction.NONE));
    }
  
    public Map<String, List<Clause>> getPolicies() {


[37/40] lucene-solr:jira/solr-11833: LUCENE-8269: Detach downstream classes from IndexWriter

Posted by ab...@apache.org.
LUCENE-8269: Detach downstream classes from IndexWriter

IndexWriter today is shared with many classes such as BufferedUpdatesStream,
DocumentsWriter and DocumentsWriterPerThread. Some of them even acquire locks
on the writer instance or assert that the current thread doesn't hold a lock.
This makes it very difficult to maintain a manageable threading model.

This change separates the IndexWriter from those classes and makes them all
independent of IW. IW now implements a new interface through which DocumentsWriter
reports failed or successful flushes and tragic events. This allows IW to make its
critical methods private and execute all lock-critical actions on its private queue,
ensuring that the IW lock is not held. Follow-up changes will try to detach more
code, such as publishing flushed segments, to ensure we never call back into IW in
an uncontrolled way.
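The decoupling described here follows a common callback-interface pattern. The toy sketch below (hypothetical names and simplified behavior, not the actual Lucene classes or the FlushNotifications interface added by this commit) illustrates the idea: the flushing component reports outcomes through a narrow listener instead of holding a reference to the writer, and the writer turns each callback into a task on its own work queue so that none of its locks are needed while the flusher is running.

    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;

    // Narrow callback interface handed to the flusher; hypothetical, for illustration only.
    interface FlushListener {
      void onFlushSuccess(String segmentName);
      void onFlushFailure(String segmentName, Throwable cause);
    }

    final class Flusher {
      private final FlushListener listener;

      Flusher(FlushListener listener) {
        this.listener = listener;            // no back-reference to the writer
      }

      void flush(String segmentName) {
        try {
          // ... write the segment to disk ...
          listener.onFlushSuccess(segmentName);
        } catch (Throwable t) {
          listener.onFlushFailure(segmentName, t);
        }
      }
    }

    final class Writer implements FlushListener {
      private final Queue<Runnable> pendingWork = new ConcurrentLinkedQueue<>();
      private final Flusher flusher = new Flusher(this);

      void flushSegment(String segmentName) {
        flusher.flush(segmentName);          // callbacks never require the writer lock
      }

      @Override
      public void onFlushSuccess(String segmentName) {
        // Defer lock-critical work; it runs later, outside the flush path.
        pendingWork.add(() -> publish(segmentName));
      }

      @Override
      public void onFlushFailure(String segmentName, Throwable cause) {
        pendingWork.add(() -> dropSegment(segmentName, cause));
      }

      void processPendingWork() {            // called while the writer lock is NOT held
        Runnable work;
        while ((work = pendingWork.poll()) != null) {
          work.run();
        }
      }

      private synchronized void publish(String segmentName) { /* lock-critical work */ }
      private synchronized void dropSegment(String segmentName, Throwable cause) { /* lock-critical work */ }
    }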

Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/6f0a8845
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/6f0a8845
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/6f0a8845

Branch: refs/heads/jira/solr-11833
Commit: 6f0a884582a3d58342f98dc1df2c06418defb317
Parents: e8c36f4
Author: Simon Willnauer <si...@apache.org>
Authored: Mon Apr 23 17:17:40 2018 +0200
Committer: GitHub <no...@github.com>
Committed: Mon Apr 23 17:17:40 2018 +0200

----------------------------------------------------------------------
 .../lucene/index/BufferedUpdatesStream.java     | 133 ++++----------
 .../apache/lucene/index/DocumentsWriter.java    | 181 ++++++++-----------
 .../index/DocumentsWriterFlushControl.java      |   4 +-
 .../lucene/index/DocumentsWriterPerThread.java  |  32 ++--
 .../lucene/index/FrozenBufferedUpdates.java     |  72 ++++++--
 .../org/apache/lucene/index/IndexWriter.java    | 157 +++++++++++-----
 .../lucene/index/TestIndexWriterExceptions.java |   9 +-
 .../org/apache/lucene/index/TestInfoStream.java |   8 +-
 .../apache/lucene/index/RandomIndexWriter.java  |   8 +-
 9 files changed, 315 insertions(+), 289 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/6f0a8845/lucene/core/src/java/org/apache/lucene/index/BufferedUpdatesStream.java
----------------------------------------------------------------------
diff --git a/lucene/core/src/java/org/apache/lucene/index/BufferedUpdatesStream.java b/lucene/core/src/java/org/apache/lucene/index/BufferedUpdatesStream.java
index 7a93cfd..c93e4b6 100644
--- a/lucene/core/src/java/org/apache/lucene/index/BufferedUpdatesStream.java
+++ b/lucene/core/src/java/org/apache/lucene/index/BufferedUpdatesStream.java
@@ -17,9 +17,8 @@
 
 package org.apache.lucene.index;
 
+import java.io.Closeable;
 import java.io.IOException;
-import java.util.ArrayList;
-import java.util.Arrays;
 import java.util.HashSet;
 import java.util.List;
 import java.util.Locale;
@@ -48,7 +47,7 @@ import org.apache.lucene.util.InfoStream;
  * track which BufferedDeletes packets to apply to any given
  * segment. */
 
-class BufferedUpdatesStream implements Accountable {
+final class BufferedUpdatesStream implements Accountable {
 
   private final Set<FrozenBufferedUpdates> updates = new HashSet<>();
 
@@ -56,22 +55,19 @@ class BufferedUpdatesStream implements Accountable {
   // deletes applied (whose bufferedDelGen defaults to 0)
   // will be correct:
   private long nextGen = 1;
-
   private final FinishedSegments finishedSegments;
   private final InfoStream infoStream;
   private final AtomicLong bytesUsed = new AtomicLong();
   private final AtomicInteger numTerms = new AtomicInteger();
-  private final IndexWriter writer;
 
-  public BufferedUpdatesStream(IndexWriter writer) {
-    this.writer = writer;
-    this.infoStream = writer.infoStream;
+  BufferedUpdatesStream(InfoStream infoStream) {
+    this.infoStream = infoStream;
     this.finishedSegments = new FinishedSegments(infoStream);
   }
 
   // Appends a new packet of buffered deletes to the stream,
   // setting its generation:
-  public synchronized long push(FrozenBufferedUpdates packet) {
+  synchronized long push(FrozenBufferedUpdates packet) {
     /*
      * The insert operation must be atomic. If we let threads increment the gen
      * and push the packet afterwards we risk that packets are out of order.
@@ -94,12 +90,12 @@ class BufferedUpdatesStream implements Accountable {
     return packet.delGen();
   }
 
-  public synchronized int getPendingUpdatesCount() {
+  synchronized int getPendingUpdatesCount() {
     return updates.size();
   }
 
   /** Only used by IW.rollback */
-  public synchronized void clear() {
+  synchronized void clear() {
     updates.clear();
     nextGen = 1;
     finishedSegments.clear();
@@ -107,11 +103,11 @@ class BufferedUpdatesStream implements Accountable {
     bytesUsed.set(0);
   }
 
-  public boolean any() {
+  boolean any() {
     return bytesUsed.get() != 0;
   }
 
-  public int numTerms() {
+  int numTerms() {
     return numTerms.get();
   }
 
@@ -120,13 +116,13 @@ class BufferedUpdatesStream implements Accountable {
     return bytesUsed.get();
   }
 
-  public static class ApplyDeletesResult {
+  static class ApplyDeletesResult {
     
     // True if any actual deletes took place:
-    public final boolean anyDeletes;
+    final boolean anyDeletes;
 
     // If non-null, contains segments that are 100% deleted
-    public final List<SegmentCommitInfo> allDeleted;
+    final List<SegmentCommitInfo> allDeleted;
 
     ApplyDeletesResult(boolean anyDeletes, List<SegmentCommitInfo> allDeleted) {
       this.anyDeletes = anyDeletes;
@@ -137,26 +133,22 @@ class BufferedUpdatesStream implements Accountable {
   /** Waits for all in-flight packets, which are already being resolved concurrently
    *  by indexing threads, to finish.  Returns true if there were any 
    *  new deletes or updates.  This is called for refresh, commit. */
-  public void waitApplyAll() throws IOException {
-
+  void waitApplyAll(IndexWriter writer) throws IOException {
     assert Thread.holdsLock(writer) == false;
-    
-    final long t0 = System.nanoTime();
-
     Set<FrozenBufferedUpdates> waitFor;
     synchronized (this) {
       waitFor = new HashSet<>(updates);
     }
 
-    waitApply(waitFor);
+    waitApply(waitFor, writer);
   }
 
   /** Returns true if this delGen is still running. */
-  public boolean stillRunning(long delGen) {
+  boolean stillRunning(long delGen) {
     return finishedSegments.stillRunning(delGen);
   }
 
-  public void finishedSegment(long delGen) {
+  void finishedSegment(long delGen) {
     finishedSegments.finishedSegment(delGen);
   }
   
@@ -164,7 +156,7 @@ class BufferedUpdatesStream implements Accountable {
    *  delGen.  We track the completed delGens and record the maximum delGen for which all prior
    *  delGens, inclusive, are completed, so that it's safe for doc values updates to apply and write. */
 
-  public synchronized void finished(FrozenBufferedUpdates packet) {
+  synchronized void finished(FrozenBufferedUpdates packet) {
     // TODO: would be a bit more memory efficient to track this per-segment, so when each segment writes it writes all packets finished for
     // it, rather than only recording here, across all segments.  But, more complex code, and more CPU, and maybe not so much impact in
     // practice?
@@ -182,18 +174,14 @@ class BufferedUpdatesStream implements Accountable {
   }
 
   /** All frozen packets up to and including this del gen are guaranteed to be finished. */
-  public long getCompletedDelGen() {
+  long getCompletedDelGen() {
     return finishedSegments.getCompletedDelGen();
   }   
 
   /** Waits only for those in-flight packets that apply to these merge segments.  This is
    *  called when a merge needs to finish and must ensure all deletes to the merging
    *  segments are resolved. */
-  public void waitApplyForMerge(List<SegmentCommitInfo> mergeInfos) throws IOException {
-    assert Thread.holdsLock(writer) == false;
-
-    final long t0 = System.nanoTime();
-
+  void waitApplyForMerge(List<SegmentCommitInfo> mergeInfos, IndexWriter writer) throws IOException {
     long maxDelGen = Long.MIN_VALUE;
     for (SegmentCommitInfo info : mergeInfos) {
       maxDelGen = Math.max(maxDelGen, info.getBufferedDeletesGen());
@@ -214,10 +202,10 @@ class BufferedUpdatesStream implements Accountable {
       infoStream.message("BD", "waitApplyForMerge: " + waitFor.size() + " packets, " + mergeInfos.size() + " merging segments");
     }
     
-    waitApply(waitFor);
+    waitApply(waitFor, writer);
   }
 
-  private void waitApply(Set<FrozenBufferedUpdates> waitFor) throws IOException {
+  private void waitApply(Set<FrozenBufferedUpdates> waitFor, IndexWriter writer) throws IOException {
 
     long startNS = System.nanoTime();
 
@@ -258,87 +246,34 @@ class BufferedUpdatesStream implements Accountable {
   }
 
   /** Holds all per-segment internal state used while resolving deletions. */
-  static final class SegmentState {
+  static final class SegmentState implements Closeable {
     final long delGen;
     final ReadersAndUpdates rld;
     final SegmentReader reader;
     final int startDelCount;
+    private final IOUtils.IOConsumer<ReadersAndUpdates> onClose;
 
     TermsEnum termsEnum;
     PostingsEnum postingsEnum;
     BytesRef term;
 
-    SegmentState(ReadersAndUpdates rld, SegmentCommitInfo info) throws IOException {
+    SegmentState(ReadersAndUpdates rld, IOUtils.IOConsumer<ReadersAndUpdates> onClose, SegmentCommitInfo info) throws IOException {
       this.rld = rld;
       startDelCount = rld.getPendingDeleteCount();
-      reader = rld.getReader(IOContext.READ);
       delGen = info.getBufferedDeletesGen();
+      this.onClose = onClose;
+      reader = rld.getReader(IOContext.READ);
     }
 
     @Override
     public String toString() {
       return "SegmentState(" + rld.info + ")";
     }
-  }
-
-  /** Opens SegmentReader and inits SegmentState for each segment. */
-  public SegmentState[] openSegmentStates(List<SegmentCommitInfo> infos,
-                                          Set<SegmentCommitInfo> alreadySeenSegments, long delGen) throws IOException {
-    List<SegmentState> segStates = new ArrayList<>();
-    try {
-      for (SegmentCommitInfo info : infos) {
-        if (info.getBufferedDeletesGen() <= delGen && alreadySeenSegments.contains(info) == false) {
-          segStates.add(new SegmentState(writer.getPooledInstance(info, true), info));
-          alreadySeenSegments.add(info);
-        }
-      }
-    } catch (Throwable t) {
-      try {
-        finishSegmentStates(segStates);
-      } catch (Throwable t1) {
-        t.addSuppressed(t1);
-      }
-      throw t;
-    }
-    
-    return segStates.toArray(new SegmentState[0]);
-  }
-
-  private void finishSegmentStates(List<SegmentState> segStates) throws IOException {
-    IOUtils.applyToAll(segStates, s -> {
-      ReadersAndUpdates rld = s.rld;
-      try {
-        rld.release(s.reader);
-      } finally {
-        writer.release(s.rld);
-      }
-    });
-  }
 
-  /** Close segment states previously opened with openSegmentStates. */
-  public ApplyDeletesResult closeSegmentStates(SegmentState[] segStates, boolean success) throws IOException {
-    List<SegmentCommitInfo> allDeleted = null;
-    long totDelCount = 0;
-    final List<SegmentState> segmentStates = Arrays.asList(segStates);
-    for (SegmentState segState : segmentStates) {
-      if (success) {
-        totDelCount += segState.rld.getPendingDeleteCount() - segState.startDelCount;
-        int fullDelCount = segState.rld.info.getDelCount() + segState.rld.getPendingDeleteCount();
-        assert fullDelCount <= segState.rld.info.info.maxDoc() : fullDelCount + " > " + segState.rld.info.info.maxDoc();
-        if (segState.rld.isFullyDeleted() && writer.getConfig().mergePolicy.keepFullyDeletedSegment(() -> segState.reader) == false) {
-          if (allDeleted == null) {
-            allDeleted = new ArrayList<>();
-          }
-          allDeleted.add(segState.reader.getSegmentInfo());
-        }
-      }
-    }
-    finishSegmentStates(segmentStates);
-    if (infoStream.isEnabled("BD")) {
-      infoStream.message("BD", "closeSegmentStates: " + totDelCount + " new deleted documents; pool " + updates.size() + " packets; bytesUsed=" + writer.getReaderPoolRamBytesUsed());
+    @Override
+    public void close() throws IOException {
+      IOUtils.close(() -> rld.release(reader), () -> onClose.accept(rld));
     }
-
-    return new ApplyDeletesResult(totDelCount > 0, allDeleted);      
   }
 
   // only for assert
@@ -368,24 +303,24 @@ class BufferedUpdatesStream implements Accountable {
 
     private final InfoStream infoStream;
 
-    public FinishedSegments(InfoStream infoStream) {
+    FinishedSegments(InfoStream infoStream) {
       this.infoStream = infoStream;
     }
 
-    public synchronized void clear() {
+    synchronized void clear() {
       finishedDelGens.clear();
       completedDelGen = 0;
     }
 
-    public synchronized boolean stillRunning(long delGen) {
+    synchronized boolean stillRunning(long delGen) {
       return delGen > completedDelGen && finishedDelGens.contains(delGen) == false;
     }
 
-    public synchronized long getCompletedDelGen() {
+    synchronized long getCompletedDelGen() {
       return completedDelGen;
     }
 
-    public synchronized void finishedSegment(long delGen) {
+    synchronized void finishedSegment(long delGen) {
       finishedDelGens.add(delGen);
       while (true) {
         if (finishedDelGens.contains(completedDelGen + 1)) {

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/6f0a8845/lucene/core/src/java/org/apache/lucene/index/DocumentsWriter.java
----------------------------------------------------------------------
diff --git a/lucene/core/src/java/org/apache/lucene/index/DocumentsWriter.java b/lucene/core/src/java/org/apache/lucene/index/DocumentsWriter.java
index 0042dab..5e7cdce 100644
--- a/lucene/core/src/java/org/apache/lucene/index/DocumentsWriter.java
+++ b/lucene/core/src/java/org/apache/lucene/index/DocumentsWriter.java
@@ -23,17 +23,17 @@ import java.util.ArrayList;
 import java.util.Collection;
 import java.util.List;
 import java.util.Locale;
-import java.util.Queue;
-import java.util.concurrent.ConcurrentLinkedQueue;
+import java.util.Set;
 import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.atomic.AtomicInteger;
+import java.util.concurrent.atomic.AtomicLong;
+import java.util.function.Supplier;
 import java.util.function.ToLongFunction;
 
 import org.apache.lucene.analysis.Analyzer;
 import org.apache.lucene.index.DocumentsWriterFlushQueue.SegmentFlushTicket;
 import org.apache.lucene.index.DocumentsWriterPerThread.FlushedSegment;
 import org.apache.lucene.index.DocumentsWriterPerThreadPool.ThreadState;
-import org.apache.lucene.index.IndexWriter.Event;
 import org.apache.lucene.search.Query;
 import org.apache.lucene.store.AlreadyClosedException;
 import org.apache.lucene.store.Directory;
@@ -101,6 +101,12 @@ import org.apache.lucene.util.InfoStream;
 final class DocumentsWriter implements Closeable, Accountable {
   private final Directory directoryOrig; // no wrapping, for infos
   private final Directory directory;
+  private final FieldInfos.FieldNumbers globalFieldNumberMap;
+  private final int indexCreatedVersionMajor;
+  private final AtomicLong pendingNumDocs;
+  private final boolean enableTestPoints;
+  private final Supplier<String> segmentNameSupplier;
+  private final FlushNotifications flushNotifications;
 
   private volatile boolean closed;
 
@@ -124,11 +130,12 @@ final class DocumentsWriter implements Closeable, Accountable {
   final DocumentsWriterPerThreadPool perThreadPool;
   final FlushPolicy flushPolicy;
   final DocumentsWriterFlushControl flushControl;
-  private final IndexWriter writer;
-  private final Queue<Event> events;
   private long lastSeqNo;
   
-  DocumentsWriter(IndexWriter writer, LiveIndexWriterConfig config, Directory directoryOrig, Directory directory) {
+  DocumentsWriter(FlushNotifications flushNotifications, int indexCreatedVersionMajor, AtomicLong pendingNumDocs, boolean enableTestPoints,
+                  Supplier<String> segmentNameSupplier, LiveIndexWriterConfig config, Directory directoryOrig, Directory directory,
+                  FieldInfos.FieldNumbers globalFieldNumberMap) {
+    this.indexCreatedVersionMajor = indexCreatedVersionMajor;
     this.directoryOrig = directoryOrig;
     this.directory = directory;
     this.config = config;
@@ -136,9 +143,12 @@ final class DocumentsWriter implements Closeable, Accountable {
     this.deleteQueue = new DocumentsWriterDeleteQueue(infoStream);
     this.perThreadPool = config.getIndexerThreadPool();
     flushPolicy = config.getFlushPolicy();
-    this.writer = writer;
-    this.events = new ConcurrentLinkedQueue<>();
-    flushControl = new DocumentsWriterFlushControl(this, config, writer.bufferedUpdatesStream);
+    this.globalFieldNumberMap = globalFieldNumberMap;
+    this.pendingNumDocs = pendingNumDocs;
+    flushControl = new DocumentsWriterFlushControl(this, config);
+    this.segmentNameSupplier = segmentNameSupplier;
+    this.enableTestPoints = enableTestPoints;
+    this.flushNotifications = flushNotifications;
   }
   
   long deleteQueries(final Query... queries) throws IOException {
@@ -175,7 +185,7 @@ final class DocumentsWriter implements Closeable, Accountable {
       if (deleteQueue != null) {
         ticketQueue.addDeletes(deleteQueue);
       }
-      putEvent(ApplyDeletesEvent.INSTANCE); // apply deletes event forces a purge
+      flushNotifications.onDeletesApplied(); // apply deletes event forces a purge
       return true;
     }
     return false;
@@ -409,10 +419,10 @@ final class DocumentsWriter implements Closeable, Accountable {
   
   private void ensureInitialized(ThreadState state) throws IOException {
     if (state.dwpt == null) {
-      final FieldInfos.Builder infos = new FieldInfos.Builder(writer.globalFieldNumberMap);
-      state.dwpt = new DocumentsWriterPerThread(writer, writer.newSegmentName(), directoryOrig,
+      final FieldInfos.Builder infos = new FieldInfos.Builder(globalFieldNumberMap);
+      state.dwpt = new DocumentsWriterPerThread(indexCreatedVersionMajor, segmentNameSupplier.get(), directoryOrig,
                                                 directory, config, infoStream, deleteQueue, infos,
-                                                writer.pendingNumDocs, writer.enableTestPoints);
+                                                pendingNumDocs, enableTestPoints);
     }
   }
 
@@ -433,7 +443,7 @@ final class DocumentsWriter implements Closeable, Accountable {
       final DocumentsWriterPerThread dwpt = perThread.dwpt;
       final int dwptNumDocs = dwpt.getNumDocsInRAM();
       try {
-        seqNo = dwpt.updateDocuments(docs, analyzer, delNode);
+        seqNo = dwpt.updateDocuments(docs, analyzer, delNode, flushNotifications);
       } finally {
         if (dwpt.isAborted()) {
           flushControl.doOnAbort(perThread);
@@ -460,7 +470,7 @@ final class DocumentsWriter implements Closeable, Accountable {
   }
 
   long updateDocument(final Iterable<? extends IndexableField> doc, final Analyzer analyzer,
-      final DocumentsWriterDeleteQueue.Node<?> delNode) throws IOException {
+                      final DocumentsWriterDeleteQueue.Node<?> delNode) throws IOException {
 
     boolean hasEvents = preUpdate();
 
@@ -477,7 +487,7 @@ final class DocumentsWriter implements Closeable, Accountable {
       final DocumentsWriterPerThread dwpt = perThread.dwpt;
       final int dwptNumDocs = dwpt.getNumDocsInRAM();
       try {
-        seqNo = dwpt.updateDocument(doc, analyzer, delNode);
+        seqNo = dwpt.updateDocument(doc, analyzer, delNode, flushNotifications);
       } finally {
         if (dwpt.isAborted()) {
           flushControl.doOnAbort(perThread);
@@ -536,17 +546,18 @@ final class DocumentsWriter implements Closeable, Accountable {
           boolean dwptSuccess = false;
           try {
             // flush concurrently without locking
-            final FlushedSegment newSegment = flushingDWPT.flush();
+            final FlushedSegment newSegment = flushingDWPT.flush(flushNotifications);
             ticketQueue.addSegment(ticket, newSegment);
             dwptSuccess = true;
           } finally {
             subtractFlushedNumDocs(flushingDocsInRam);
             if (flushingDWPT.pendingFilesToDelete().isEmpty() == false) {
-              putEvent(new DeleteNewFilesEvent(flushingDWPT.pendingFilesToDelete()));
+              Set<String> files = flushingDWPT.pendingFilesToDelete();
+              flushNotifications.deleteUnusedFiles(files);
               hasEvents = true;
             }
             if (dwptSuccess == false) {
-              putEvent(new FlushFailedEvent(flushingDWPT.getSegmentInfo()));
+              flushNotifications.flushFailed(flushingDWPT.getSegmentInfo());
               hasEvents = true;
             }
           }
@@ -569,7 +580,7 @@ final class DocumentsWriter implements Closeable, Accountable {
           // thread in innerPurge can't keep up with all
           // other threads flushing segments.  In this case
           // we forcefully stall the producers.
-          putEvent(ForcedPurgeEvent.INSTANCE);
+          flushNotifications.onTicketBacklog();
           break;
         }
       } finally {
@@ -580,7 +591,7 @@ final class DocumentsWriter implements Closeable, Accountable {
     }
 
     if (hasEvents) {
-      writer.doAfterSegmentFlushed(false, false);
+      flushNotifications.afterSegmentsFlushed();
     }
 
     // If deletes alone are consuming > 1/2 our RAM
@@ -597,12 +608,52 @@ final class DocumentsWriter implements Closeable, Accountable {
                                                  flushControl.getDeleteBytesUsed()/(1024.*1024.),
                                                  ramBufferSizeMB));
         }
-        putEvent(ApplyDeletesEvent.INSTANCE);
+        flushNotifications.onDeletesApplied();
       }
     }
 
     return hasEvents;
   }
+
+  interface FlushNotifications { // TODO: maybe we can find a better name for this?
+
+    /**
+     * Called when files were written to disk that are not used anymore. It is the implementation's
+     * responsibility to clean these files up.
+     */
+    void deleteUnusedFiles(Collection<String> files);
+
+    /**
+     * Called when a segment failed to flush.
+     */
+    void flushFailed(SegmentInfo info);
+
+    /**
+     * Called after one or more segments were flushed to disk.
+     */
+    void afterSegmentsFlushed() throws IOException;
+
+    /**
+     * Should be called if a flush or an indexing operation caused a tragic / unrecoverable event.
+     */
+    void onTragicEvent(Throwable event, String message);
+
+    /**
+     * Called once deletes have been applied, either after a flush or on an explicit deletes call.
+     */
+    void onDeletesApplied();
+
+    /**
+     * Called once the DocumentsWriter ticket queue has a backlog. This means there is an inner thread
+     * that tries to publish flushed segments but can't keep up with the other threads flushing new segments.
+     * This likely requires another thread to forcefully purge the buffer to help with publishing. This
+     * can't be done in place since we might hold index writer locks when this is called. The caller must ensure
+     * that the purge happens without holding the index writer lock.
+     *
+     * @see DocumentsWriter#purgeBuffer(IndexWriter, boolean)
+     */
+    void onTicketBacklog();
+  }
   
   void subtractFlushedNumDocs(int numFlushed) {
     int oldValue = numDocsInRAM.get();
@@ -626,7 +677,7 @@ final class DocumentsWriter implements Closeable, Accountable {
    * two stage operation; the caller must ensure (in try/finally) that finishFlush
    * is called after this method, to release the flush lock in DWFlushControl
    */
-  long flushAllThreads()
+  long flushAllThreads(IndexWriter writer)
     throws IOException {
     final DocumentsWriterDeleteQueue flushingDeleteQueue;
     if (infoStream.isEnabled("DW")) {
@@ -695,92 +746,8 @@ final class DocumentsWriter implements Closeable, Accountable {
     }
   }
 
-  void putEvent(Event event) {
-    events.add(event);
-  }
-
   @Override
   public long ramBytesUsed() {
     return flushControl.ramBytesUsed();
   }
-
-  static final class ResolveUpdatesEvent implements Event {
-
-    private final FrozenBufferedUpdates packet;
-    
-    ResolveUpdatesEvent(FrozenBufferedUpdates packet) {
-      this.packet = packet;
-    }
-
-    @Override
-    public void process(IndexWriter writer) throws IOException {
-      try {
-        packet.apply(writer);
-      } catch (Throwable t) {
-        try {
-          writer.onTragicEvent(t, "applyUpdatesPacket");
-        } catch (Throwable t1) {
-          t.addSuppressed(t1);
-        }
-        throw t;
-      }
-      writer.flushDeletesCount.incrementAndGet();
-    }
-  }
-
-  static final class ApplyDeletesEvent implements Event {
-    static final Event INSTANCE = new ApplyDeletesEvent();
-
-    private ApplyDeletesEvent() {
-      // only one instance
-    }
-    
-    @Override
-    public void process(IndexWriter writer) throws IOException {
-      writer.applyDeletesAndPurge(true); // we always purge!
-    }
-  }
-
-  static final class ForcedPurgeEvent implements Event {
-    static final Event INSTANCE = new ForcedPurgeEvent();
-
-    private ForcedPurgeEvent() {
-      // only one instance
-    }
-    
-    @Override
-    public void process(IndexWriter writer) throws IOException {
-      writer.purge(true);
-    }
-  }
-  
-  static class FlushFailedEvent implements Event {
-    private final SegmentInfo info;
-    
-    public FlushFailedEvent(SegmentInfo info) {
-      this.info = info;
-    }
-    
-    @Override
-    public void process(IndexWriter writer) throws IOException {
-      writer.flushFailed(info);
-    }
-  }
-  
-  static class DeleteNewFilesEvent implements Event {
-    private final Collection<String>  files;
-    
-    public DeleteNewFilesEvent(Collection<String>  files) {
-      this.files = files;
-    }
-    
-    @Override
-    public void process(IndexWriter writer) throws IOException {
-      writer.deleteNewFiles(files);
-    }
-  }
-
-  public Queue<Event> eventQueue() {
-    return events;
-  }
 }

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/6f0a8845/lucene/core/src/java/org/apache/lucene/index/DocumentsWriterFlushControl.java
----------------------------------------------------------------------
diff --git a/lucene/core/src/java/org/apache/lucene/index/DocumentsWriterFlushControl.java b/lucene/core/src/java/org/apache/lucene/index/DocumentsWriterFlushControl.java
index 8aea232..ad5b7e4 100644
--- a/lucene/core/src/java/org/apache/lucene/index/DocumentsWriterFlushControl.java
+++ b/lucene/core/src/java/org/apache/lucene/index/DocumentsWriterFlushControl.java
@@ -70,10 +70,9 @@ final class DocumentsWriterFlushControl implements Accountable {
   private boolean closed = false;
   private final DocumentsWriter documentsWriter;
   private final LiveIndexWriterConfig config;
-  private final BufferedUpdatesStream bufferedUpdatesStream;
   private final InfoStream infoStream;
 
-  DocumentsWriterFlushControl(DocumentsWriter documentsWriter, LiveIndexWriterConfig config, BufferedUpdatesStream bufferedUpdatesStream) {
+  DocumentsWriterFlushControl(DocumentsWriter documentsWriter, LiveIndexWriterConfig config) {
     this.infoStream = config.getInfoStream();
     this.stallControl = new DocumentsWriterStallControl();
     this.perThreadPool = documentsWriter.perThreadPool;
@@ -81,7 +80,6 @@ final class DocumentsWriterFlushControl implements Accountable {
     this.config = config;
     this.hardMaxBytesPerDWPT = config.getRAMPerThreadHardLimitMB() * 1024 * 1024;
     this.documentsWriter = documentsWriter;
-    this.bufferedUpdatesStream = bufferedUpdatesStream;
   }
 
   public synchronized long activeBytes() {

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/6f0a8845/lucene/core/src/java/org/apache/lucene/index/DocumentsWriterPerThread.java
----------------------------------------------------------------------
diff --git a/lucene/core/src/java/org/apache/lucene/index/DocumentsWriterPerThread.java b/lucene/core/src/java/org/apache/lucene/index/DocumentsWriterPerThread.java
index 32a783a..04ab493 100644
--- a/lucene/core/src/java/org/apache/lucene/index/DocumentsWriterPerThread.java
+++ b/lucene/core/src/java/org/apache/lucene/index/DocumentsWriterPerThread.java
@@ -169,11 +169,10 @@ class DocumentsWriterPerThread {
   private final AtomicLong pendingNumDocs;
   private final LiveIndexWriterConfig indexWriterConfig;
   private final boolean enableTestPoints;
-  private final IndexWriter indexWriter;
-  
-  public DocumentsWriterPerThread(IndexWriter writer, String segmentName, Directory directoryOrig, Directory directory, LiveIndexWriterConfig indexWriterConfig, InfoStream infoStream, DocumentsWriterDeleteQueue deleteQueue,
+  private final int indexVersionCreated;
+
+  public DocumentsWriterPerThread(int indexVersionCreated, String segmentName, Directory directoryOrig, Directory directory, LiveIndexWriterConfig indexWriterConfig, InfoStream infoStream, DocumentsWriterDeleteQueue deleteQueue,
                                   FieldInfos.Builder fieldInfos, AtomicLong pendingNumDocs, boolean enableTestPoints) throws IOException {
-    this.indexWriter = writer;
     this.directoryOrig = directoryOrig;
     this.directory = new TrackingDirectoryWrapper(directory);
     this.fieldInfos = fieldInfos;
@@ -200,6 +199,7 @@ class DocumentsWriterPerThread {
     // it really sucks that we need to pull this within the ctor and pass this ref to the chain!
     consumer = indexWriterConfig.getIndexingChain().getChain(this);
     this.enableTestPoints = enableTestPoints;
+    this.indexVersionCreated = indexVersionCreated;
   }
   
   public FieldInfos.Builder getFieldInfosBuilder() {
@@ -207,7 +207,7 @@ class DocumentsWriterPerThread {
   }
 
   public int getIndexCreatedVersionMajor() {
-    return indexWriter.segmentInfos.getIndexCreatedVersionMajor();
+    return indexVersionCreated;
   }
 
   final void testPoint(String message) {
@@ -227,7 +227,7 @@ class DocumentsWriterPerThread {
     }
   }
 
-  public long updateDocument(Iterable<? extends IndexableField> doc, Analyzer analyzer, DocumentsWriterDeleteQueue.Node<?> deleteNode) throws IOException {
+  public long updateDocument(Iterable<? extends IndexableField> doc, Analyzer analyzer, DocumentsWriterDeleteQueue.Node<?> deleteNode, DocumentsWriter.FlushNotifications flushNotifications) throws IOException {
     try {
       assert hasHitAbortingException() == false: "DWPT has hit aborting exception but is still indexing";
       testPoint("DocumentsWriterPerThread addDocument start");
@@ -263,11 +263,11 @@ class DocumentsWriterPerThread {
 
       return finishDocument(deleteNode);
     } finally {
-      maybeAbort("updateDocument");
+      maybeAbort("updateDocument", flushNotifications);
     }
   }
 
-  public long updateDocuments(Iterable<? extends Iterable<? extends IndexableField>> docs, Analyzer analyzer, DocumentsWriterDeleteQueue.Node<?> deleteNode) throws IOException {
+  public long updateDocuments(Iterable<? extends Iterable<? extends IndexableField>> docs, Analyzer analyzer, DocumentsWriterDeleteQueue.Node<?> deleteNode, DocumentsWriter.FlushNotifications flushNotifications) throws IOException {
     try {
       testPoint("DocumentsWriterPerThread addDocuments start");
       assert hasHitAbortingException() == false: "DWPT has hit aborting exception but is still indexing";
@@ -343,7 +343,7 @@ class DocumentsWriterPerThread {
         docState.clear();
       }
     } finally {
-      maybeAbort("updateDocuments");
+      maybeAbort("updateDocuments", flushNotifications);
     }
   }
   
@@ -425,7 +425,7 @@ class DocumentsWriterPerThread {
   }
 
   /** Flush all pending docs to a new segment */
-  FlushedSegment flush() throws IOException {
+  FlushedSegment flush(DocumentsWriter.FlushNotifications flushNotifications) throws IOException {
     assert numDocsInRAM > 0;
     assert deleteSlice.isEmpty() : "all deletes must be applied in prepareFlush";
     segmentInfo.setMaxDoc(numDocsInRAM);
@@ -499,7 +499,7 @@ class DocumentsWriterPerThread {
       FlushedSegment fs = new FlushedSegment(infoStream, segmentInfoPerCommit, flushState.fieldInfos,
           segmentDeletes, flushState.liveDocs, flushState.delCountOnFlush,
           sortMap);
-      sealFlushedSegment(fs, sortMap);
+      sealFlushedSegment(fs, sortMap, flushNotifications);
       if (infoStream.isEnabled("DWPT")) {
         infoStream.message("DWPT", "flush time " + ((System.nanoTime() - t0) / 1000000.0) + " msec");
       }
@@ -508,18 +508,18 @@ class DocumentsWriterPerThread {
       onAbortingException(t);
       throw t;
     } finally {
-      maybeAbort("flush");
+      maybeAbort("flush", flushNotifications);
     }
   }
 
-  private void maybeAbort(String location) throws IOException {
+  private void maybeAbort(String location, DocumentsWriter.FlushNotifications flushNotifications) throws IOException {
     if (hasHitAbortingException() && aborted == false) {
       // if we are already aborted don't do anything here
       try {
         abort();
       } finally {
         // whatever we do here we have to fire this tragic event up.
-        indexWriter.onTragicEvent(abortingException, location);
+        flushNotifications.onTragicEvent(abortingException, location);
       }
     }
   }
@@ -545,7 +545,7 @@ class DocumentsWriterPerThread {
    * Seals the {@link SegmentInfo} for the new flushed segment and persists
    * the deleted documents {@link MutableBits}.
    */
-  void sealFlushedSegment(FlushedSegment flushedSegment, Sorter.DocMap sortMap) throws IOException {
+  void sealFlushedSegment(FlushedSegment flushedSegment, Sorter.DocMap sortMap, DocumentsWriter.FlushNotifications flushNotifications) throws IOException {
     assert flushedSegment != null;
     SegmentCommitInfo newSegment = flushedSegment.segmentInfo;
 
@@ -559,7 +559,7 @@ class DocumentsWriterPerThread {
       if (indexWriterConfig.getUseCompoundFile()) {
         Set<String> originalFiles = newSegment.info.files();
         // TODO: like addIndexes, we are relying on createCompoundFile to successfully cleanup...
-        indexWriter.createCompoundFile(infoStream, new TrackingDirectoryWrapper(directory), newSegment.info, context);
+        IndexWriter.createCompoundFile(infoStream, new TrackingDirectoryWrapper(directory), newSegment.info, context, flushNotifications::deleteUnusedFiles);
         filesToDelete.addAll(originalFiles);
         newSegment.info.setUseCompoundFile(true);
       }
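
For readers skimming the hunks above: the change threads a DocumentsWriter.FlushNotifications callback through the flush path so that DocumentsWriterPerThread no longer calls back into IndexWriter directly. The stand-alone sketch below illustrates only that decoupling pattern; the FlushCallbacks/FlushingComponent names and the writeSegment() helper are hypothetical and are not Lucene APIs.

    import java.io.IOException;

    /** Hypothetical callback interface; in the commit above this role is played by DocumentsWriter.FlushNotifications. */
    interface FlushCallbacks {
      void onTragicEvent(Throwable cause, String location);
    }

    /** Hypothetical flushing component that holds no back-reference to its owning writer. */
    final class FlushingComponent {
      private Throwable abortingException;

      void flush(FlushCallbacks callbacks) throws IOException {
        try {
          writeSegment();
        } catch (Throwable t) {
          abortingException = t; // remember the failure so maybeAbort can report it
          throw t;
        } finally {
          maybeAbort("flush", callbacks);
        }
      }

      private void writeSegment() throws IOException {
        // ... flush buffered documents to a new segment ...
      }

      private void maybeAbort(String location, FlushCallbacks callbacks) {
        if (abortingException != null) {
          // the owner decides how to react; this component only reports through the callback
          callbacks.onTragicEvent(abortingException, location);
        }
      }
    }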

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/6f0a8845/lucene/core/src/java/org/apache/lucene/index/FrozenBufferedUpdates.java
----------------------------------------------------------------------
diff --git a/lucene/core/src/java/org/apache/lucene/index/FrozenBufferedUpdates.java b/lucene/core/src/java/org/apache/lucene/index/FrozenBufferedUpdates.java
index 586afa7..bebc059 100644
--- a/lucene/core/src/java/org/apache/lucene/index/FrozenBufferedUpdates.java
+++ b/lucene/core/src/java/org/apache/lucene/index/FrozenBufferedUpdates.java
@@ -18,6 +18,8 @@ package org.apache.lucene.index;
 
 import java.io.Closeable;
 import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
 import java.util.Collections;
 import java.util.HashMap;
 import java.util.HashSet;
@@ -42,6 +44,7 @@ import org.apache.lucene.store.RAMOutputStream;
 import org.apache.lucene.util.ArrayUtil;
 import org.apache.lucene.util.Bits;
 import org.apache.lucene.util.BytesRef;
+import org.apache.lucene.util.IOUtils;
 import org.apache.lucene.util.InfoStream;
 import org.apache.lucene.util.RamUsageEstimator;
 
@@ -51,7 +54,7 @@ import org.apache.lucene.util.RamUsageEstimator;
  * structure to hold them.  We don't hold docIDs because these are applied on
  * flush.
  */
-class FrozenBufferedUpdates {
+final class FrozenBufferedUpdates {
 
   /* NOTE: we now apply this frozen packet immediately on creation, yet this process is heavy, and runs
    * in multiple threads, and this compression is sizable (~8.3% of the original size), so it's important
@@ -297,7 +300,7 @@ class FrozenBufferedUpdates {
 
         // Must open while holding IW lock so that e.g. segments are not merged
         // away, dropped from 100% deletions, etc., before we can open the readers
-        segStates = writer.bufferedUpdatesStream.openSegmentStates(infos, seenSegments, delGen());
+        segStates = openSegmentStates(writer, infos, seenSegments, delGen());
 
         if (segStates.length == 0) {
 
@@ -357,7 +360,7 @@ class FrozenBufferedUpdates {
           // Must do this while still holding IW lock else a merge could finish and skip carrying over our updates:
           
           // Record that this packet is finished:
-          writer.bufferedUpdatesStream.finished(this);
+          writer.finished(this);
 
           finished = true;
 
@@ -378,7 +381,7 @@ class FrozenBufferedUpdates {
 
     if (finished == false) {
       // Record that this packet is finished:
-      writer.bufferedUpdatesStream.finished(this);
+      writer.finished(this);
     }
         
     if (infoStream.isEnabled("BD")) {
@@ -388,18 +391,67 @@ class FrozenBufferedUpdates {
       if (iter > 0) {
         message += "; " + (iter+1) + " iters due to concurrent merges";
       }
-      message += "; " + writer.bufferedUpdatesStream.getPendingUpdatesCount() + " packets remain";
+      message += "; " + writer.getPendingUpdatesCount() + " packets remain";
       infoStream.message("BD", message);
     }
   }
 
+  /** Opens SegmentReader and inits SegmentState for each segment. */
+  private static BufferedUpdatesStream.SegmentState[] openSegmentStates(IndexWriter writer, List<SegmentCommitInfo> infos,
+                                                                       Set<SegmentCommitInfo> alreadySeenSegments, long delGen) throws IOException {
+    List<BufferedUpdatesStream.SegmentState> segStates = new ArrayList<>();
+    try {
+      for (SegmentCommitInfo info : infos) {
+        if (info.getBufferedDeletesGen() <= delGen && alreadySeenSegments.contains(info) == false) {
+          segStates.add(new BufferedUpdatesStream.SegmentState(writer.getPooledInstance(info, true), writer::release, info));
+          alreadySeenSegments.add(info);
+        }
+      }
+    } catch (Throwable t) {
+      try {
+        IOUtils.close(segStates);
+      } catch (Throwable t1) {
+        t.addSuppressed(t1);
+      }
+      throw t;
+    }
+
+    return segStates.toArray(new BufferedUpdatesStream.SegmentState[0]);
+  }
+
+  /** Close segment states previously opened with openSegmentStates. */
+  public static BufferedUpdatesStream.ApplyDeletesResult closeSegmentStates(IndexWriter writer, BufferedUpdatesStream.SegmentState[] segStates, boolean success) throws IOException {
+    List<SegmentCommitInfo> allDeleted = null;
+    long totDelCount = 0;
+    final List<BufferedUpdatesStream.SegmentState> segmentStates = Arrays.asList(segStates);
+    for (BufferedUpdatesStream.SegmentState segState : segmentStates) {
+      if (success) {
+        totDelCount += segState.rld.getPendingDeleteCount() - segState.startDelCount;
+        int fullDelCount = segState.rld.info.getDelCount() + segState.rld.getPendingDeleteCount();
+        assert fullDelCount <= segState.rld.info.info.maxDoc() : fullDelCount + " > " + segState.rld.info.info.maxDoc();
+        if (segState.rld.isFullyDeleted() && writer.getConfig().getMergePolicy().keepFullyDeletedSegment(() -> segState.reader) == false) {
+          if (allDeleted == null) {
+            allDeleted = new ArrayList<>();
+          }
+          allDeleted.add(segState.reader.getSegmentInfo());
+        }
+      }
+    }
+    IOUtils.close(segmentStates);
+    if (writer.infoStream.isEnabled("BD")) {
+      writer.infoStream.message("BD", "closeSegmentStates: " + totDelCount + " new deleted documents; pool " + writer.getPendingUpdatesCount()+ " packets; bytesUsed=" + writer.getReaderPoolRamBytesUsed());
+    }
+
+    return new BufferedUpdatesStream.ApplyDeletesResult(totDelCount > 0, allDeleted);
+  }
+
   private void finishApply(IndexWriter writer, BufferedUpdatesStream.SegmentState[] segStates,
                            boolean success, Set<String> delFiles) throws IOException {
     synchronized (writer) {
 
       BufferedUpdatesStream.ApplyDeletesResult result;
       try {
-        result = writer.bufferedUpdatesStream.closeSegmentStates(segStates, success);
+        result = closeSegmentStates(writer, segStates, success);
       } finally {
         // Matches the incRef we did above, but we must do the decRef after closing segment states else
         // IFD can't delete still-open files
@@ -407,8 +459,8 @@ class FrozenBufferedUpdates {
       }
 
       if (result.anyDeletes) {
-        writer.maybeMerge.set(true);
-        writer.checkpoint();
+          writer.maybeMerge.set(true);
+          writer.checkpoint();
       }
 
       if (result.allDeleted != null) {
@@ -857,8 +909,4 @@ class FrozenBufferedUpdates {
   boolean any() {
     return deleteTerms.size() > 0 || deleteQueries.length > 0 || numericDVUpdates.length > 0 || binaryDVUpdates.length > 0;
   }
-
-  boolean anyDeleteTerms() {
-    return deleteTerms.size() > 0;
-  }
 }
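
The openSegmentStates() method added above follows an "open everything or close what was opened" idiom: on any failure it closes the partially opened segment states and attaches close failures as suppressed exceptions before rethrowing. A minimal generic sketch of that idiom, with hypothetical names (OpenAllOrNothing, openOne) rather than Lucene code:

    import java.io.Closeable;
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    final class OpenAllOrNothing {
      static List<Closeable> openAll(int n) throws IOException {
        List<Closeable> opened = new ArrayList<>();
        try {
          for (int i = 0; i < n; i++) {
            opened.add(openOne(i));
          }
        } catch (Throwable t) {
          // close the partially opened resources, preserving the original failure
          for (Closeable c : opened) {
            try {
              c.close();
            } catch (Throwable suppressed) {
              t.addSuppressed(suppressed);
            }
          }
          throw t;
        }
        return opened;
      }

      private static Closeable openOne(int i) throws IOException {
        return () -> { /* release resource i */ };
      }
    }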

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/6f0a8845/lucene/core/src/java/org/apache/lucene/index/IndexWriter.java
----------------------------------------------------------------------
diff --git a/lucene/core/src/java/org/apache/lucene/index/IndexWriter.java b/lucene/core/src/java/org/apache/lucene/index/IndexWriter.java
index 974f6c5..e8d0666 100644
--- a/lucene/core/src/java/org/apache/lucene/index/IndexWriter.java
+++ b/lucene/core/src/java/org/apache/lucene/index/IndexWriter.java
@@ -32,6 +32,7 @@ import java.util.Map;
 import java.util.PriorityQueue;
 import java.util.Queue;
 import java.util.Set;
+import java.util.concurrent.ConcurrentLinkedQueue;
 import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.atomic.AtomicInteger;
 import java.util.concurrent.atomic.AtomicLong;
@@ -236,7 +237,7 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
   }
   
   /** Used only for testing. */
-  boolean enableTestPoints = false;
+  private final boolean enableTestPoints;
 
   static final int UNBOUNDED_MAX_MERGE_SEGMENTS = -1;
   
@@ -291,7 +292,7 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
   final FieldNumbers globalFieldNumberMap;
 
   final DocumentsWriter docWriter;
-  private final Queue<Event> eventQueue;
+  private final Queue<Event> eventQueue = new ConcurrentLinkedQueue<>();
   final IndexFileDeleter deleter;
 
   // used by forceMerge to note those needing merging
@@ -345,6 +346,51 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
    *  card to make sure they can later charge you when you check out. */
   final AtomicLong pendingNumDocs = new AtomicLong();
 
+  private final DocumentsWriter.FlushNotifications flushNotifications = new DocumentsWriter.FlushNotifications() {
+    @Override
+    public void deleteUnusedFiles(Collection<String> files) {
+      eventQueue.add(w -> w.deleteNewFiles(files));
+    }
+
+    @Override
+    public void flushFailed(SegmentInfo info) {
+      eventQueue.add(w -> w.flushFailed(info));
+    }
+
+    @Override
+    public void afterSegmentsFlushed() throws IOException {
+      try {
+        purge(false);
+      } finally {
+        if (false) {
+          maybeMerge(config.getMergePolicy(), MergeTrigger.SEGMENT_FLUSH, UNBOUNDED_MAX_MERGE_SEGMENTS);
+        }
+      }
+    }
+
+    @Override
+    public void onTragicEvent(Throwable event, String message) {
+      IndexWriter.this.onTragicEvent(event, message);
+    }
+
+    @Override
+    public void onDeletesApplied() {
+      eventQueue.add(w -> {
+          try {
+            w.purge(true);
+          } finally {
+            flushCount.incrementAndGet();
+          }
+        }
+      );
+    }
+
+    @Override
+    public void onTicketBacklog() {
+      eventQueue.add(w -> w.purge(true));
+    }
+  };
+
   DirectoryReader getReader() throws IOException {
     return getReader(true, false);
   }
@@ -439,7 +485,7 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
       synchronized (fullFlushLock) {
         try {
           // TODO: should we somehow make this available in the returned NRT reader?
-          long seqNo = docWriter.flushAllThreads();
+          long seqNo = docWriter.flushAllThreads(this);
           if (seqNo < 0) {
             anyChanges = true;
             seqNo = -seqNo;
@@ -660,7 +706,7 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
     if (d instanceof FSDirectory && ((FSDirectory) d).checkPendingDeletions()) {
       throw new IllegalArgumentException("Directory " + d + " still has pending deleted files; cannot initialize IndexWriter");
     }
-
+    enableTestPoints = isEnableTestPoints();
     conf.setIndexWriter(this); // prevent reuse by other instances
     config = conf;
     infoStream = config.getInfoStream();
@@ -678,9 +724,6 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
       mergeScheduler = config.getMergeScheduler();
       mergeScheduler.setInfoStream(infoStream);
       codec = config.getCodec();
-
-      bufferedUpdatesStream = new BufferedUpdatesStream(this);
-
       OpenMode mode = config.getOpenMode();
       boolean create;
       if (mode == OpenMode.CREATE) {
@@ -824,8 +867,10 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
       validateIndexSort();
 
       config.getFlushPolicy().init(config);
-      docWriter = new DocumentsWriter(this, config, directoryOrig, directory);
-      eventQueue = docWriter.eventQueue();
+      bufferedUpdatesStream = new BufferedUpdatesStream(infoStream);
+      docWriter = new DocumentsWriter(flushNotifications, segmentInfos.getIndexCreatedVersionMajor(), pendingNumDocs,
+          enableTestPoints, this::newSegmentName,
+          config, directoryOrig, directory, globalFieldNumberMap);
       readerPool = new ReaderPool(directory, directoryOrig, segmentInfos, globalFieldNumberMap,
           bufferedUpdatesStream::getCompletedDelGen, infoStream, conf.getSoftDeletesField(), reader);
       if (config.getReaderPooling()) {
@@ -2457,7 +2502,7 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
   synchronized void publishFrozenUpdates(FrozenBufferedUpdates packet) throws IOException {
     assert packet != null && packet.any();
     bufferedUpdatesStream.push(packet);
-    docWriter.putEvent(new DocumentsWriter.ResolveUpdatesEvent(packet));
+    eventQueue.add(new ResolveUpdatesEvent(packet));
   }
 
   /**
@@ -2479,7 +2524,7 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
       if (globalPacket != null && globalPacket.any()) {
         // Do this as an event so it applies higher in the stack when we are not holding DocumentsWriterFlushQueue.purgeLock:
         bufferedUpdatesStream.push(globalPacket);
-        docWriter.putEvent(new DocumentsWriter.ResolveUpdatesEvent(globalPacket));
+        eventQueue.add(new ResolveUpdatesEvent(globalPacket));
       }
 
       // Publishing the segment must be sync'd on IW -> BDS to make the sure
@@ -2489,7 +2534,7 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
         nextGen = bufferedUpdatesStream.push(packet);
 
         // Do this as an event so it applies higher in the stack when we are not holding DocumentsWriterFlushQueue.purgeLock:
-        docWriter.putEvent(new DocumentsWriter.ResolveUpdatesEvent(packet));
+        eventQueue.add(new ResolveUpdatesEvent(packet));
 
       } else {
         // Since we don't have a delete packet to apply we can get a new
@@ -2877,7 +2922,7 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
         // TODO: unlike merge, on exception we arent sniping any trash cfs files here?
         // createCompoundFile tries to cleanup, but it might not always be able to...
         try {
-          createCompoundFile(infoStream, trackingCFSDir, info, context);
+          createCompoundFile(infoStream, trackingCFSDir, info, context, this::deleteNewFiles);
         } finally {
           // delete new non cfs files directly: they were never
           // registered with IFD
@@ -3060,7 +3105,7 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
           boolean flushSuccess = false;
           boolean success = false;
           try {
-            seqNo = docWriter.flushAllThreads();
+            seqNo = docWriter.flushAllThreads(this);
             if (seqNo < 0) {
               anyChanges = true;
               seqNo = -seqNo;
@@ -3421,7 +3466,7 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
       synchronized (fullFlushLock) {
         boolean flushSuccess = false;
         try {
-          long seqNo = docWriter.flushAllThreads();
+          long seqNo = docWriter.flushAllThreads(this);
           if (seqNo < 0) {
             seqNo = -seqNo;
             anyChanges = true;
@@ -3469,7 +3514,7 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
     if (infoStream.isEnabled("IW")) {
       infoStream.message("IW", "now apply all deletes for all segments buffered updates bytesUsed=" + bufferedUpdatesStream.ramBytesUsed() + " reader pool bytesUsed=" + readerPool.ramBytesUsed());
     }
-    bufferedUpdatesStream.waitApplyAll();
+    bufferedUpdatesStream.waitApplyAll(this);
   }
 
   // for testing only
@@ -3998,9 +4043,9 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
   /** Does initial setup for a merge, which is fast but holds
    *  the synchronized lock on IndexWriter instance.  */
   final void mergeInit(MergePolicy.OneMerge merge) throws IOException {
-
+    assert Thread.holdsLock(this) == false;
     // Make sure any deletes that must be resolved before we commit the merge are complete:
-    bufferedUpdatesStream.waitApplyForMerge(merge.segments);
+    bufferedUpdatesStream.waitApplyForMerge(merge.segments, this);
 
     boolean success = false;
     try {
@@ -4267,7 +4312,7 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
         Collection<String> filesToRemove = merge.info.files();
         TrackingDirectoryWrapper trackingCFSDir = new TrackingDirectoryWrapper(mergeDirectory);
         try {
-          createCompoundFile(infoStream, trackingCFSDir, merge.info.info, context);
+          createCompoundFile(infoStream, trackingCFSDir, merge.info.info, context, this::deleteNewFiles);
           success = true;
         } catch (Throwable t) {
           synchronized(this) {
@@ -4751,7 +4796,7 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
    * deletion files, this SegmentInfo must not reference such files when this
    * method is called, because they are not allowed within a compound file.
    */
-  final void createCompoundFile(InfoStream infoStream, TrackingDirectoryWrapper directory, final SegmentInfo info, IOContext context) throws IOException {
+  static final void createCompoundFile(InfoStream infoStream, TrackingDirectoryWrapper directory, final SegmentInfo info, IOContext context, IOUtils.IOConsumer<Collection<String>> deleteFiles) throws IOException {
 
     // maybe this check is not needed, but why take the risk?
     if (!directory.getCreatedFiles().isEmpty()) {
@@ -4769,7 +4814,7 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
     } finally {
       if (!success) {
         // Safe: these files must exist
-        deleteNewFiles(directory.getCreatedFiles());
+        deleteFiles.accept(directory.getCreatedFiles());
       }
     }
 
@@ -4783,14 +4828,13 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
    * @throws IOException if an {@link IOException} occurs
    * @see IndexFileDeleter#deleteNewFiles(Collection)
    */
-  synchronized final void deleteNewFiles(Collection<String> files) throws IOException {
+  private synchronized void deleteNewFiles(Collection<String> files) throws IOException {
     deleter.deleteNewFiles(files);
   }
-  
   /**
    * Cleans up residuals from a segment that could not be entirely flushed due to an error
    */
-  synchronized final void flushFailed(SegmentInfo info) throws IOException {
+  private synchronized final void flushFailed(SegmentInfo info) throws IOException {
     // TODO: this really should be a tragic
     Collection<String> files;
     try {
@@ -4803,29 +4847,11 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
       deleter.deleteNewFiles(files);
     }
   }
-  
-  final int purge(boolean forced) throws IOException {
+
+  private int purge(boolean forced) throws IOException {
     return docWriter.purgeBuffer(this, forced);
   }
 
-  final void applyDeletesAndPurge(boolean forcePurge) throws IOException {
-    try {
-      purge(forcePurge);
-    } finally {
-      flushCount.incrementAndGet();
-    }
-  }
-  
-  final void doAfterSegmentFlushed(boolean triggerMerge, boolean forcePurge) throws IOException {
-    try {
-      purge(forcePurge);
-    } finally {
-      if (triggerMerge) {
-        maybeMerge(config.getMergePolicy(), MergeTrigger.SEGMENT_FLUSH, UNBOUNDED_MAX_MERGE_SEGMENTS);
-      }
-    }
-  }
-  
   /** Record that the files referenced by this {@link SegmentInfos} are still in use.
    *
    * @lucene.internal */
@@ -4867,8 +4893,8 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
    * encoded inside the {@link #process(IndexWriter)} method.
    *
    */
-  interface Event {
-    
+  @FunctionalInterface
+  private interface Event {
     /**
      * Processes the event. This method is called by the {@link IndexWriter}
      * passed as the first argument.
@@ -4971,4 +4997,43 @@ public class IndexWriter implements Closeable, TwoPhaseCommit, Accountable {
     ensureOpen(false);
     return readerPool.get(info, create);
   }
+
+  private static final class ResolveUpdatesEvent implements Event {
+
+    private final FrozenBufferedUpdates packet;
+
+    ResolveUpdatesEvent(FrozenBufferedUpdates packet) {
+      this.packet = packet;
+    }
+
+    @Override
+    public void process(IndexWriter writer) throws IOException {
+      try {
+        packet.apply(writer);
+      } catch (Throwable t) {
+        try {
+          writer.onTragicEvent(t, "applyUpdatesPacket");
+        } catch (Throwable t1) {
+          t.addSuppressed(t1);
+        }
+        throw t;
+      }
+      writer.flushDeletesCount.incrementAndGet();
+    }
+  }
+
+  void finished(FrozenBufferedUpdates packet) {
+    bufferedUpdatesStream.finished(packet);
+  }
+
+  int getPendingUpdatesCount() {
+    return bufferedUpdatesStream.getPendingUpdatesCount();
+  }
+
+  /**
+   * Tests should override this to enable test points. Default is <code>false</code>.
+   */
+  protected boolean isEnableTestPoints() {
+    return false;
+  }
 }

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/6f0a8845/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterExceptions.java
----------------------------------------------------------------------
diff --git a/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterExceptions.java b/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterExceptions.java
index 61bf1fc..1d680ea 100644
--- a/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterExceptions.java
+++ b/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterExceptions.java
@@ -1836,9 +1836,14 @@ public class TestIndexWriterExceptions extends LuceneTestCase {
     Directory dir = newMockDirectory(); // we want to ensure we don't leak any locks or file handles
     IndexWriterConfig iwc = new IndexWriterConfig(null);
     iwc.setInfoStream(evilInfoStream);
-    IndexWriter iw = new IndexWriter(dir, iwc);
     // TODO: cutover to RandomIndexWriter.mockIndexWriter?
-    iw.enableTestPoints = true;
+    IndexWriter iw = new IndexWriter(dir, iwc) {
+      @Override
+      protected boolean isEnableTestPoints() {
+        return true;
+      }
+    };
+
     Document doc = new Document();
     for (int i = 0; i < 10; i++) {
       iw.addDocument(doc);

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/6f0a8845/lucene/core/src/test/org/apache/lucene/index/TestInfoStream.java
----------------------------------------------------------------------
diff --git a/lucene/core/src/test/org/apache/lucene/index/TestInfoStream.java b/lucene/core/src/test/org/apache/lucene/index/TestInfoStream.java
index 4ef2208..4c40948 100644
--- a/lucene/core/src/test/org/apache/lucene/index/TestInfoStream.java
+++ b/lucene/core/src/test/org/apache/lucene/index/TestInfoStream.java
@@ -74,8 +74,12 @@ public class TestInfoStream extends LuceneTestCase {
         return true;
       }
     });
-    IndexWriter iw = new IndexWriter(dir, iwc);
-    iw.enableTestPoints = true;
+    IndexWriter iw = new IndexWriter(dir, iwc) {
+      @Override
+      protected boolean isEnableTestPoints() {
+        return true;
+      }
+    };
     iw.addDocument(new Document());
     iw.close();
     dir.close();

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/6f0a8845/lucene/test-framework/src/java/org/apache/lucene/index/RandomIndexWriter.java
----------------------------------------------------------------------
diff --git a/lucene/test-framework/src/java/org/apache/lucene/index/RandomIndexWriter.java b/lucene/test-framework/src/java/org/apache/lucene/index/RandomIndexWriter.java
index 15ca469..e2db533 100644
--- a/lucene/test-framework/src/java/org/apache/lucene/index/RandomIndexWriter.java
+++ b/lucene/test-framework/src/java/org/apache/lucene/index/RandomIndexWriter.java
@@ -80,7 +80,12 @@ public class RandomIndexWriter implements Closeable {
     IndexWriter iw;
     boolean success = false;
     try {
-      iw = new IndexWriter(dir, conf);
+      iw = new IndexWriter(dir, conf) {
+        @Override
+        protected boolean isEnableTestPoints() {
+          return true;
+        }
+      };
       success = true;
     } finally {
       if (reader != null) {
@@ -91,7 +96,6 @@ public class RandomIndexWriter implements Closeable {
         }
       }
     }
-    iw.enableTestPoints = true;
     return iw;
   }
 


[38/40] lucene-solr:jira/solr-11833: LUCENE-8270: Remove MatchesIterator.term()

Posted by ab...@apache.org.
LUCENE-8270: Remove MatchesIterator.term()


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/e167e912
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/e167e912
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/e167e912

Branch: refs/heads/jira/solr-11833
Commit: e167e9124757b3f3597db8149c49b7f388c48627
Parents: 6f0a884
Author: Alan Woodward <ro...@apache.org>
Authored: Mon Apr 23 15:52:43 2018 +0100
Committer: Alan Woodward <ro...@apache.org>
Committed: Mon Apr 23 16:51:17 2018 +0100

----------------------------------------------------------------------
 lucene/CHANGES.txt                              |  6 +-
 .../search/DisjunctionMatchesIterator.java      | 10 +--
 .../apache/lucene/search/MatchesIterator.java   |  8 ---
 .../lucene/search/TermMatchesIterator.java      |  9 +--
 .../org/apache/lucene/search/TermQuery.java     |  2 +-
 .../lucene/search/TestMatchesIterator.java      | 73 ++++----------------
 .../lucene/search/AssertingMatchesIterator.java |  7 --
 7 files changed, 19 insertions(+), 96 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/e167e912/lucene/CHANGES.txt
----------------------------------------------------------------------
diff --git a/lucene/CHANGES.txt b/lucene/CHANGES.txt
index 301b360..a440f95 100644
--- a/lucene/CHANGES.txt
+++ b/lucene/CHANGES.txt
@@ -137,9 +137,9 @@ New Features
  soft deletes if the reader is opened from a directory. (Simon Willnauer,
   Mike McCandless, Uwe Schindler, Adrien Grand)
 
-* LUCENE-8229: Add a method Weight.matches(LeafReaderContext, doc) that returns
-  an iterator over matching positions for a given query and document.  This
-  allows exact hit extraction and will enable implementation of accurate 
+* LUCENE-8229, LUCENE-8270: Add a method Weight.matches(LeafReaderContext, doc) 
+  that returns an iterator over matching positions for a given query and document.
+  This allows exact hit extraction and will enable implementation of accurate 
   highlighters. (Alan Woodward, Adrien Grand, David Smiley)
 
 * LUCENE-8246: Allow to customize the number of deletes a merge claims. This

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/e167e912/lucene/core/src/java/org/apache/lucene/search/DisjunctionMatchesIterator.java
----------------------------------------------------------------------
diff --git a/lucene/core/src/java/org/apache/lucene/search/DisjunctionMatchesIterator.java b/lucene/core/src/java/org/apache/lucene/search/DisjunctionMatchesIterator.java
index a18b280..975199b 100644
--- a/lucene/core/src/java/org/apache/lucene/search/DisjunctionMatchesIterator.java
+++ b/lucene/core/src/java/org/apache/lucene/search/DisjunctionMatchesIterator.java
@@ -84,8 +84,7 @@ final class DisjunctionMatchesIterator implements MatchesIterator {
       if (te.seekExact(term)) {
         PostingsEnum pe = te.postings(reuse, PostingsEnum.OFFSETS);
         if (pe.advance(doc) == doc) {
-          // TODO do we want to use the copied term here, or instead create a label that associates all of the TMIs with a single term?
-          mis.add(new TermMatchesIterator(BytesRef.deepCopyOf(term), pe));
+          mis.add(new TermMatchesIterator(pe));
           reuse = null;
         }
         else {
@@ -114,7 +113,7 @@ final class DisjunctionMatchesIterator implements MatchesIterator {
       protected boolean lessThan(MatchesIterator a, MatchesIterator b) {
         return a.startPosition() < b.startPosition() ||
             (a.startPosition() == b.startPosition() && a.endPosition() < b.endPosition()) ||
-            (a.startPosition() == b.startPosition() && a.endPosition() == b.endPosition() && a.term().compareTo(b.term()) < 0);
+            (a.startPosition() == b.startPosition() && a.endPosition() == b.endPosition());
       }
     };
     for (MatchesIterator mi : matches) {
@@ -159,9 +158,4 @@ final class DisjunctionMatchesIterator implements MatchesIterator {
     return queue.top().endOffset();
   }
 
-  @Override
-  public BytesRef term() {
-    return queue.top().term();
-  }
-
 }

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/e167e912/lucene/core/src/java/org/apache/lucene/search/MatchesIterator.java
----------------------------------------------------------------------
diff --git a/lucene/core/src/java/org/apache/lucene/search/MatchesIterator.java b/lucene/core/src/java/org/apache/lucene/search/MatchesIterator.java
index d695ea5..450a352 100644
--- a/lucene/core/src/java/org/apache/lucene/search/MatchesIterator.java
+++ b/lucene/core/src/java/org/apache/lucene/search/MatchesIterator.java
@@ -20,7 +20,6 @@ package org.apache.lucene.search;
 import java.io.IOException;
 
 import org.apache.lucene.index.LeafReaderContext;
-import org.apache.lucene.util.BytesRef;
 
 /**
  * An iterator over match positions (and optionally offsets) for a single document and field
@@ -71,11 +70,4 @@ public interface MatchesIterator {
    */
   int endOffset() throws IOException;
 
-  /**
-   * The underlying term of the current match
-   *
-   * Should only be called after {@link #next()} has returned {@code true}
-   */
-  BytesRef term();
-
 }
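
For context, a hedged usage sketch of this API after the change (term() is no longer available): find the leaf containing a document, ask the Weight for its Matches, then iterate positions and offsets. Searcher construction is omitted and the helper class name is hypothetical; the calls themselves (createWeight, matches, getMatches, startPosition/endPosition, startOffset/endOffset) are the ones exercised in the test diff later in this commit.

    import java.io.IOException;
    import org.apache.lucene.index.LeafReaderContext;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.Matches;
    import org.apache.lucene.search.MatchesIterator;
    import org.apache.lucene.search.Query;
    import org.apache.lucene.search.ScoreMode;
    import org.apache.lucene.search.Weight;

    final class MatchesUsageSketch {
      static void printMatches(IndexSearcher searcher, Query query, String field, int docId) throws IOException {
        Weight w = searcher.createWeight(searcher.rewrite(query), ScoreMode.COMPLETE_NO_SCORES, 1);
        for (LeafReaderContext ctx : searcher.getIndexReader().leaves()) {
          int doc = docId - ctx.docBase;
          if (doc < 0 || doc >= ctx.reader().maxDoc()) {
            continue; // document lives in another leaf
          }
          Matches matches = w.matches(ctx, doc);
          if (matches == null) {
            return; // the query does not match this document
          }
          MatchesIterator it = matches.getMatches(field);
          if (it == null) {
            return; // no matches in this field
          }
          while (it.next()) {
            System.out.println("positions " + it.startPosition() + "-" + it.endPosition()
                + ", offsets " + it.startOffset() + "-" + it.endOffset());
          }
          return;
        }
      }
    }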

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/e167e912/lucene/core/src/java/org/apache/lucene/search/TermMatchesIterator.java
----------------------------------------------------------------------
diff --git a/lucene/core/src/java/org/apache/lucene/search/TermMatchesIterator.java b/lucene/core/src/java/org/apache/lucene/search/TermMatchesIterator.java
index 0516996..defc3af 100644
--- a/lucene/core/src/java/org/apache/lucene/search/TermMatchesIterator.java
+++ b/lucene/core/src/java/org/apache/lucene/search/TermMatchesIterator.java
@@ -20,7 +20,6 @@ package org.apache.lucene.search;
 import java.io.IOException;
 
 import org.apache.lucene.index.PostingsEnum;
-import org.apache.lucene.util.BytesRef;
 
 /**
  * A {@link MatchesIterator} over a single term's postings list
@@ -30,15 +29,13 @@ class TermMatchesIterator implements MatchesIterator {
   private int upto;
   private int pos;
   private final PostingsEnum pe;
-  private final BytesRef term;
 
   /**
    * Create a new {@link TermMatchesIterator} for the given term and postings list
    */
-  TermMatchesIterator(BytesRef term, PostingsEnum pe) throws IOException {
+  TermMatchesIterator(PostingsEnum pe) throws IOException {
     this.pe = pe;
     this.upto = pe.freq();
-    this.term = term;
   }
 
   @Override
@@ -70,8 +67,4 @@ class TermMatchesIterator implements MatchesIterator {
     return pe.endOffset();
   }
 
-  @Override
-  public BytesRef term() {
-    return term;
-  }
 }

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/e167e912/lucene/core/src/java/org/apache/lucene/search/TermQuery.java
----------------------------------------------------------------------
diff --git a/lucene/core/src/java/org/apache/lucene/search/TermQuery.java b/lucene/core/src/java/org/apache/lucene/search/TermQuery.java
index b86f340..27b42ad 100644
--- a/lucene/core/src/java/org/apache/lucene/search/TermQuery.java
+++ b/lucene/core/src/java/org/apache/lucene/search/TermQuery.java
@@ -95,7 +95,7 @@ public class TermQuery extends Query {
         if (pe.advance(doc) != doc) {
           return null;
         }
-        return new TermMatchesIterator(term.bytes(), pe);
+        return new TermMatchesIterator(pe);
       });
     }
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/e167e912/lucene/core/src/test/org/apache/lucene/search/TestMatchesIterator.java
----------------------------------------------------------------------
diff --git a/lucene/core/src/test/org/apache/lucene/search/TestMatchesIterator.java b/lucene/core/src/test/org/apache/lucene/search/TestMatchesIterator.java
index 3b3dd32..185aad9 100644
--- a/lucene/core/src/test/org/apache/lucene/search/TestMatchesIterator.java
+++ b/lucene/core/src/test/org/apache/lucene/search/TestMatchesIterator.java
@@ -147,30 +147,6 @@ public class TestMatchesIterator extends LuceneTestCase {
     }
   }
 
-  void checkTerms(Query q, String field, String[][] expected) throws IOException {
-    Weight w = searcher.createWeight(searcher.rewrite(q), ScoreMode.COMPLETE_NO_SCORES, 1);
-    for (int i = 0; i < expected.length; i++) {
-      LeafReaderContext ctx = searcher.leafContexts.get(ReaderUtil.subIndex(i, searcher.leafContexts));
-      int doc = i - ctx.docBase;
-      Matches matches = w.matches(ctx, doc);
-      if (matches == null) {
-        assertEquals(expected[i].length, 0);
-        continue;
-      }
-      MatchesIterator it = matches.getMatches(field);
-      if (it == null) {
-        assertEquals(expected[i].length, 0);
-        continue;
-      }
-      int pos = 0;
-      while (it.next()) {
-        assertEquals(expected[i][pos], it.term().utf8ToString());
-        pos += 1;
-      }
-      assertEquals(expected[i].length, pos);
-    }
-  }
-
   public void testTermQuery() throws IOException {
     Query q = new TermQuery(new Term(FIELD_WITH_OFFSETS, "w1"));
     checkMatches(q, FIELD_WITH_OFFSETS, new int[][]{
@@ -191,13 +167,6 @@ public class TestMatchesIterator extends LuceneTestCase {
         { 3, 0, 0, -1, -1, 2, 2, -1, -1 },
         { 4 }
     });
-    checkTerms(q, FIELD_NO_OFFSETS, new String[][]{
-        { "w1" },
-        { "w1" },
-        { "w1" },
-        { "w1", "w1" },
-        {}
-    });
   }
 
   public void testTermQueryNoPositions() throws IOException {
@@ -208,9 +177,11 @@ public class TestMatchesIterator extends LuceneTestCase {
   }
 
   public void testDisjunction() throws IOException {
+    Query w1 = new TermQuery(new Term(FIELD_WITH_OFFSETS, "w1"));
+    Query w3 = new TermQuery(new Term(FIELD_WITH_OFFSETS, "w3"));
     Query q = new BooleanQuery.Builder()
-        .add(new TermQuery(new Term(FIELD_WITH_OFFSETS, "w1")), BooleanClause.Occur.SHOULD)
-        .add(new TermQuery(new Term(FIELD_WITH_OFFSETS, "w3")), BooleanClause.Occur.SHOULD)
+        .add(w1, BooleanClause.Occur.SHOULD)
+        .add(w3, BooleanClause.Occur.SHOULD)
         .build();
     checkMatches(q, FIELD_WITH_OFFSETS, new int[][]{
         { 0, 0, 0, 0, 2, 2, 2, 6, 8 },
@@ -219,13 +190,6 @@ public class TestMatchesIterator extends LuceneTestCase {
         { 3, 0, 0, 0, 2, 2, 2, 6, 8, 5, 5, 15, 17 },
         { 4 }
     });
-    checkTerms(q, FIELD_WITH_OFFSETS, new String[][]{
-        { "w1", "w3" },
-        { "w1", "w3", "w3" },
-        { "w1" },
-        { "w1", "w1", "w3" },
-        {}
-    });
   }
 
   public void testDisjunctionNoPositions() throws IOException {
@@ -263,12 +227,16 @@ public class TestMatchesIterator extends LuceneTestCase {
   }
 
   public void testMinShouldMatch() throws IOException {
+    Query w1 = new TermQuery(new Term(FIELD_WITH_OFFSETS, "w1"));
+    Query w3 = new TermQuery(new Term(FIELD_WITH_OFFSETS, "w3"));
+    Query w4 = new TermQuery(new Term(FIELD_WITH_OFFSETS, "w4"));
+    Query xx = new TermQuery(new Term(FIELD_WITH_OFFSETS, "xx"));
     Query q = new BooleanQuery.Builder()
-        .add(new TermQuery(new Term(FIELD_WITH_OFFSETS, "w3")), BooleanClause.Occur.SHOULD)
+        .add(w3, BooleanClause.Occur.SHOULD)
         .add(new BooleanQuery.Builder()
-            .add(new TermQuery(new Term(FIELD_WITH_OFFSETS, "w1")), BooleanClause.Occur.SHOULD)
-            .add(new TermQuery(new Term(FIELD_WITH_OFFSETS, "w4")), BooleanClause.Occur.SHOULD)
-            .add(new TermQuery(new Term(FIELD_WITH_OFFSETS, "xx")), BooleanClause.Occur.SHOULD)
+            .add(w1, BooleanClause.Occur.SHOULD)
+            .add(w4, BooleanClause.Occur.SHOULD)
+            .add(xx, BooleanClause.Occur.SHOULD)
             .setMinimumNumberShouldMatch(2)
             .build(), BooleanClause.Occur.SHOULD)
         .build();
@@ -279,13 +247,6 @@ public class TestMatchesIterator extends LuceneTestCase {
         { 3, 0, 0, 0, 2, 2, 2, 6, 8, 3, 3, 9, 11, 5, 5, 15, 17 },
         { 4 }
     });
-    checkTerms(q, FIELD_WITH_OFFSETS, new String[][]{
-        { "w1", "w3", "w4" },
-        { "w3", "w3" },
-        { "w1", "xx", "w4" },
-        { "w1", "w1", "w4", "w3" },
-        {}
-    });
   }
 
   public void testMinShouldMatchNoPositions() throws IOException {
@@ -360,9 +321,6 @@ public class TestMatchesIterator extends LuceneTestCase {
         { 3 },
         { 4 }
     });
-    checkTerms(q, FIELD_WITH_OFFSETS, new String[][]{
-        {}, {}, { "xx" }, {}
-    });
 
     Query rq = new RegexpQuery(new Term(FIELD_WITH_OFFSETS, "w[1-2]"));
     checkMatches(rq, FIELD_WITH_OFFSETS, new int[][]{
@@ -430,11 +388,4 @@ public class TestMatchesIterator extends LuceneTestCase {
     assertTrue(fields.contains("id"));
   }
 
-  protected String[] doc1Fields = {
-      "w1 w2 w3 w4 w5",
-      "w1 w3 w2 w3 zz",
-      "w1 xx w2 yy w4",
-      "w1 w2 w1 w4 w2 w3"
-  };
-
 }

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/e167e912/lucene/test-framework/src/java/org/apache/lucene/search/AssertingMatchesIterator.java
----------------------------------------------------------------------
diff --git a/lucene/test-framework/src/java/org/apache/lucene/search/AssertingMatchesIterator.java b/lucene/test-framework/src/java/org/apache/lucene/search/AssertingMatchesIterator.java
index 52fb184..4f06512 100644
--- a/lucene/test-framework/src/java/org/apache/lucene/search/AssertingMatchesIterator.java
+++ b/lucene/test-framework/src/java/org/apache/lucene/search/AssertingMatchesIterator.java
@@ -19,8 +19,6 @@ package org.apache.lucene.search;
 
 import java.io.IOException;
 
-import org.apache.lucene.util.BytesRef;
-
 class AssertingMatchesIterator implements MatchesIterator {
 
   private final MatchesIterator in;
@@ -69,9 +67,4 @@ class AssertingMatchesIterator implements MatchesIterator {
     return in.endOffset();
   }
 
-  @Override
-  public BytesRef term() {
-    assert state == State.ITERATING : state;
-    return in.term();
-  }
 }


[17/40] lucene-solr:jira/solr-11833: SOLR-6286: TestReplicationHandler.doTestReplicateAfterCoreReload(): stop checking for identical commits before/after master core reload; and make non-nightly mode test 10 docs instead of 0.

Posted by ab...@apache.org.
SOLR-6286: TestReplicationHandler.doTestReplicateAfterCoreReload(): stop checking for identical commits before/after master core reload; and make non-nightly mode test 10 docs instead of 0.


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/46037dc6
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/46037dc6
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/46037dc6

Branch: refs/heads/jira/solr-11833
Commit: 46037dc67494a746857048399c02a6cf6f7a07c1
Parents: 42da6f7
Author: Steve Rowe <sa...@apache.org>
Authored: Thu Apr 19 14:49:10 2018 -0400
Committer: Steve Rowe <sa...@apache.org>
Committed: Thu Apr 19 14:49:10 2018 -0400

----------------------------------------------------------------------
 solr/CHANGES.txt                                |  4 +++
 .../solr/handler/TestReplicationHandler.java    | 36 +++++++++-----------
 2 files changed, 21 insertions(+), 19 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/46037dc6/solr/CHANGES.txt
----------------------------------------------------------------------
diff --git a/solr/CHANGES.txt b/solr/CHANGES.txt
index 298abad..be3f704 100644
--- a/solr/CHANGES.txt
+++ b/solr/CHANGES.txt
@@ -170,6 +170,10 @@ Bug Fixes
 
 * SOLR-12187: Replica should watch clusterstate and unload itself if its entry is removed (Cao Manh Dat)
 
+* SOLR-6286: TestReplicationHandler.doTestReplicateAfterCoreReload(): stop checking for identical
+  commits before/after master core reload; and make non-nightly mode test 10 docs instead of 0.
+  (shalin, hossman, Mark Miller, Steve Rowe)
+ 
 Optimizations
 ----------------------
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/46037dc6/solr/core/src/test/org/apache/solr/handler/TestReplicationHandler.java
----------------------------------------------------------------------
diff --git a/solr/core/src/test/org/apache/solr/handler/TestReplicationHandler.java b/solr/core/src/test/org/apache/solr/handler/TestReplicationHandler.java
index 8c24754..e22f4f7 100644
--- a/solr/core/src/test/org/apache/solr/handler/TestReplicationHandler.java
+++ b/solr/core/src/test/org/apache/solr/handler/TestReplicationHandler.java
@@ -231,22 +231,22 @@ public class TestReplicationHandler extends SolrTestCaseJ4 {
     return details;
   }
   
-  private NamedList<Object> getCommits(SolrClient s) throws Exception {
-    
-
-    ModifiableSolrParams params = new ModifiableSolrParams();
-    params.set("command","commits");
-    params.set("_trace","getCommits");
-    params.set("qt",ReplicationHandler.PATH);
-    QueryRequest req = new QueryRequest(params);
-
-    NamedList<Object> res = s.request(req);
-
-    assertNotNull("null response from server", res);
-
-
-    return res;
-  }
+//  private NamedList<Object> getCommits(SolrClient s) throws Exception {
+//    
+//
+//    ModifiableSolrParams params = new ModifiableSolrParams();
+//    params.set("command","commits");
+//    params.set("_trace","getCommits");
+//    params.set("qt",ReplicationHandler.PATH);
+//    QueryRequest req = new QueryRequest(params);
+//
+//    NamedList<Object> res = s.request(req);
+//
+//    assertNotNull("null response from server", res);
+//
+//
+//    return res;
+//  }
   
   private NamedList<Object> getIndexVersion(SolrClient s) throws Exception {
     
@@ -1239,7 +1239,7 @@ public class TestReplicationHandler extends SolrTestCaseJ4 {
 
   @Test
   public void doTestReplicateAfterCoreReload() throws Exception {
-    int docs = TEST_NIGHTLY ? 200000 : 0;
+    int docs = TEST_NIGHTLY ? 200000 : 10;
     
     //stop slave
     slaveJetty.stop();
@@ -1283,12 +1283,10 @@ public class TestReplicationHandler extends SolrTestCaseJ4 {
     assertEquals(null, cmp);
     
     Object version = getIndexVersion(masterClient).get("indexversion");
-    NamedList<Object> commits = getCommits(masterClient);
     
     reloadCore(masterClient, "collection1");
     
     assertEquals(version, getIndexVersion(masterClient).get("indexversion"));
-    assertEquals(commits.get("commits"), getCommits(masterClient).get("commits"));
     
     index(masterClient, "id", docs + 10, "name", "name = 1");
     index(masterClient, "id", docs + 20, "name", "name = 2");


[12/40] lucene-solr:jira/solr-11833: SOLR-12204: Upgrade commons-fileupload dependency to 1.3.3 to address CVE-2016-1000031

Posted by ab...@apache.org.
SOLR-12204: Upgrade commons-fileupload dependency to 1.3.3 to address CVE-2016-1000031


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/d09c7651
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/d09c7651
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/d09c7651

Branch: refs/heads/jira/solr-11833
Commit: d09c76518a1f72626a189957d8d4d8c6dab68d3c
Parents: 29cbd03
Author: Steve Rowe <sa...@apache.org>
Authored: Wed Apr 18 19:28:55 2018 -0400
Committer: Steve Rowe <sa...@apache.org>
Committed: Wed Apr 18 19:28:55 2018 -0400

----------------------------------------------------------------------
 lucene/ivy-versions.properties                  |  2 +-
 solr/CHANGES.txt                                | 18 ++++++++++++++++++
 solr/licenses/commons-fileupload-1.3.2.jar.sha1 |  1 -
 solr/licenses/commons-fileupload-1.3.3.jar.sha1 |  1 +
 4 files changed, 20 insertions(+), 2 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/d09c7651/lucene/ivy-versions.properties
----------------------------------------------------------------------
diff --git a/lucene/ivy-versions.properties b/lucene/ivy-versions.properties
index 14e7194..d930f25 100644
--- a/lucene/ivy-versions.properties
+++ b/lucene/ivy-versions.properties
@@ -53,7 +53,7 @@ com.sun.jersey.version = 1.9
 /commons-collections/commons-collections = 3.2.2
 /commons-configuration/commons-configuration = 1.6
 /commons-digester/commons-digester = 2.1
-/commons-fileupload/commons-fileupload = 1.3.2
+/commons-fileupload/commons-fileupload = 1.3.3
 /commons-io/commons-io = 2.5
 /commons-lang/commons-lang = 2.6
 /commons-logging/commons-logging = 1.1.3

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/d09c7651/solr/CHANGES.txt
----------------------------------------------------------------------
diff --git a/solr/CHANGES.txt b/solr/CHANGES.txt
index df7df15..e771990 100644
--- a/solr/CHANGES.txt
+++ b/solr/CHANGES.txt
@@ -230,6 +230,24 @@ Other Changes
 * SOLR-12134: ref-guide 'bare-bones html' validation is now part of 'ant documentation' and validates
   javadoc links locally. (hossman)
 
+==================  7.3.1 ==================
+
+Consult the LUCENE_CHANGES.txt file for additional, low level, changes in this release.
+
+Versions of Major Components
+---------------------
+Apache Tika 1.17
+Carrot2 3.15.0
+Velocity 1.7 and Velocity Tools 2.0
+Apache UIMA 2.3.1
+Apache ZooKeeper 3.4.11
+Jetty 9.4.8.v20171121
+
+Bug Fixes
+----------------------
+
+* SOLR-12204: Upgrade commons-fileupload dependency to 1.3.3 to address CVE-2016-1000031.  (Steve Rowe)
+
 ==================  7.3.0 ==================
 
 Consult the LUCENE_CHANGES.txt file for additional, low level, changes in this release.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/d09c7651/solr/licenses/commons-fileupload-1.3.2.jar.sha1
----------------------------------------------------------------------
diff --git a/solr/licenses/commons-fileupload-1.3.2.jar.sha1 b/solr/licenses/commons-fileupload-1.3.2.jar.sha1
deleted file mode 100644
index 80f80fb..0000000
--- a/solr/licenses/commons-fileupload-1.3.2.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-5d7491ed6ebd02b6a8d2305f8e6b7fe5dbd95f72

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/d09c7651/solr/licenses/commons-fileupload-1.3.3.jar.sha1
----------------------------------------------------------------------
diff --git a/solr/licenses/commons-fileupload-1.3.3.jar.sha1 b/solr/licenses/commons-fileupload-1.3.3.jar.sha1
new file mode 100644
index 0000000..d27deb4
--- /dev/null
+++ b/solr/licenses/commons-fileupload-1.3.3.jar.sha1
@@ -0,0 +1 @@
+04ff14d809195b711fd6bcc87e6777f886730ca1


[34/40] lucene-solr:jira/solr-11833: SOLR-12250: Add missing assumeWorkingMockito call

Posted by ab...@apache.org.
SOLR-12250: Add missing assumeWorkingMockito call


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/4136fe0e
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/4136fe0e
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/4136fe0e

Branch: refs/heads/jira/solr-11833
Commit: 4136fe0e65ac4394033d24840ac364943c7d89a2
Parents: 84583d2
Author: Simon Willnauer <si...@apache.org>
Authored: Mon Apr 23 10:13:53 2018 +0200
Committer: Simon Willnauer <si...@apache.org>
Committed: Mon Apr 23 10:13:53 2018 +0200

----------------------------------------------------------------------
 solr/core/src/test/org/apache/solr/update/TransactionLogTest.java | 2 ++
 1 file changed, 2 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/4136fe0e/solr/core/src/test/org/apache/solr/update/TransactionLogTest.java
----------------------------------------------------------------------
diff --git a/solr/core/src/test/org/apache/solr/update/TransactionLogTest.java b/solr/core/src/test/org/apache/solr/update/TransactionLogTest.java
index 944b2ee..27383e7 100644
--- a/solr/core/src/test/org/apache/solr/update/TransactionLogTest.java
+++ b/solr/core/src/test/org/apache/solr/update/TransactionLogTest.java
@@ -23,6 +23,7 @@ import java.util.ArrayList;
 import java.util.Locale;
 
 import org.apache.lucene.util.LuceneTestCase;
+import org.apache.solr.SolrTestCaseJ4;
 import org.apache.solr.common.SolrInputDocument;
 import org.junit.Test;
 
@@ -32,6 +33,7 @@ public class TransactionLogTest extends LuceneTestCase {
 
   @Test
   public void testBigLastAddSize() throws IOException {
+    SolrTestCaseJ4.assumeWorkingMockito();
     String tlogFileName = String.format(Locale.ROOT, UpdateLog.LOG_FILENAME_PATTERN, UpdateLog.TLOG_NAME, 0);
     try (TransactionLog transactionLog = new TransactionLog(Files.createTempFile(tlogFileName, "").toFile(), new ArrayList<>())) {
       transactionLog.lastAddSize = 2000000000;
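
The added assumeWorkingMockito() call is SolrTestCaseJ4's guard that skips, rather than fails, the test on JVMs where Mockito cannot generate mocks. As a rough illustration only (this is not the actual SolrTestCaseJ4 implementation), such a guard can be written with a JUnit assumption around a probe mock:

    import org.junit.Assume;
    import org.junit.Test;
    import org.mockito.Mockito;

    public class MockitoGuardSketch {

      @Test
      public void testSomethingThatNeedsMocks() {
        try {
          Mockito.mock(Runnable.class); // probe: does mock generation work on this JVM?
        } catch (Throwable t) {
          Assume.assumeNoException(t);  // skip, rather than fail, when it does not
        }
        // ... the real assertions using mocks would follow here ...
      }
    }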


[29/40] lucene-solr:jira/solr-11833: SOLR-4793: Document usage of ZooKeeper's jute.maxbuffer sysprop for increasing the file size limit above 1MB

Posted by ab...@apache.org.
SOLR-4793: Document usage of ZooKeeper's jute.maxbuffer sysprop for increasing the file size limit above 1MB


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/22c4b9c3
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/22c4b9c3
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/22c4b9c3

Branch: refs/heads/jira/solr-11833
Commit: 22c4b9c36f5dfdf0578bacea2e83740714512765
Parents: 76578cf
Author: Steve Rowe <sa...@apache.org>
Authored: Fri Apr 20 16:06:22 2018 -0400
Committer: Steve Rowe <sa...@apache.org>
Committed: Fri Apr 20 16:06:22 2018 -0400

----------------------------------------------------------------------
 ...ractNamedEntitiesUpdateProcessorFactory.java |  4 ++
 solr/solr-ref-guide/src/learning-to-rank.adoc   |  2 +
 ...tting-up-an-external-zookeeper-ensemble.adoc | 70 ++++++++++++++++++++
 .../src/update-request-processors.adoc          |  2 +-
 4 files changed, 77 insertions(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/22c4b9c3/solr/contrib/analysis-extras/src/java/org/apache/solr/update/processor/OpenNLPExtractNamedEntitiesUpdateProcessorFactory.java
----------------------------------------------------------------------
diff --git a/solr/contrib/analysis-extras/src/java/org/apache/solr/update/processor/OpenNLPExtractNamedEntitiesUpdateProcessorFactory.java b/solr/contrib/analysis-extras/src/java/org/apache/solr/update/processor/OpenNLPExtractNamedEntitiesUpdateProcessorFactory.java
index aa6a97b..2a7514d 100644
--- a/solr/contrib/analysis-extras/src/java/org/apache/solr/update/processor/OpenNLPExtractNamedEntitiesUpdateProcessorFactory.java
+++ b/solr/contrib/analysis-extras/src/java/org/apache/solr/update/processor/OpenNLPExtractNamedEntitiesUpdateProcessorFactory.java
@@ -77,6 +77,10 @@ import org.slf4j.LoggerFactory;
  * <p>See the <a href="http://opennlp.apache.org/models.html">OpenNLP website</a>
  * for information on downloading pre-trained models.</p>
  *
+ * Note that in order to use model files larger than 1MB on SolrCloud, 
+ * <a href="https://lucene.apache.org/solr/guide/setting-up-an-external-zookeeper-ensemble#increasing-zookeeper-s-1mb-file-size-limit"
+ * >ZooKeeper server and client configuration is required</a>.
+ * 
  * <p>
  * The <code>source</code> field(s) can be configured as either:
  * </p>

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/22c4b9c3/solr/solr-ref-guide/src/learning-to-rank.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/learning-to-rank.adoc b/solr/solr-ref-guide/src/learning-to-rank.adoc
index 4e79a7a..938fd44 100644
--- a/solr/solr-ref-guide/src/learning-to-rank.adoc
+++ b/solr/solr-ref-guide/src/learning-to-rank.adoc
@@ -560,6 +560,8 @@ NOTE: No `"features"` are configured in `myWrapperModel` because the features of
 
 CAUTION: `<lib dir="/path/to/models" regex=".*\.json" />` doesn't work as expected in this case, because `SolrResourceLoader` considers given resources as JAR if `<lib />` indicates files.
 
+As an alternative to the above-described `DefaultWrapperModel`, it is possible to <<setting-up-an-external-zookeeper-ensemble#increasing-zookeeper-s-1mb-file-size-limit,increase ZooKeeper's file size limit>>.
+
 === Applying Changes
 
 The feature store and the model store are both <<managed-resources.adoc#managed-resources,Managed Resources>>. Changes made to managed resources are not applied to the active Solr components until the Solr collection (or Solr core in single server mode) is reloaded.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/22c4b9c3/solr/solr-ref-guide/src/setting-up-an-external-zookeeper-ensemble.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/setting-up-an-external-zookeeper-ensemble.adoc b/solr/solr-ref-guide/src/setting-up-an-external-zookeeper-ensemble.adoc
index f6bc525..ed4cfef 100644
--- a/solr/solr-ref-guide/src/setting-up-an-external-zookeeper-ensemble.adoc
+++ b/solr/solr-ref-guide/src/setting-up-an-external-zookeeper-ensemble.adoc
@@ -349,6 +349,76 @@ set ZK_HOST=zk1:2181,zk2:2181,zk3:2181/solr
 
 Now you will not have to enter the connection string when starting Solr.
 
+== Increasing ZooKeeper's 1MB File Size Limit
+
+ZooKeeper is designed to hold small files, on the order of kilobytes.  By default, ZooKeeper's file size limit is 1MB.  Attempting to write or read files larger than this will cause errors. 
+
+Some Solr features, e.g. text analysis synonyms, LTR, and OpenNLP named entity recognition, require configuration resources that can be larger than the default limit.  ZooKeeper can be configured, via Java system property https://zookeeper.apache.org/doc/r{ivy-zookeeper-version}/zookeeperAdmin.html#Unsafe+Options[`jute.maxbuffer`], to increase this limit.  Note that this configuration, which is required both for ZooKeeper server(s) and for all clients that connect to the server(s), must be the same everywhere it is specified.
+
+=== Configuring jute.maxbuffer on ZooKeeper nodes
+
+`jute.maxbuffer` must be configured on each external ZooKeeper node.  This can be achieved in any of the following ways; note though that only the first option works on Windows:  
+
+. In `<ZOOKEEPER_HOME>/conf/zoo.cfg`, e.g. to increase the file size limit to one byte less than 10MB, add this line:
++
+[source,properties]
+jute.maxbuffer=0x9fffff
+. In `<ZOOKEEPER_HOME>/conf/zookeeper-env.sh`, e.g. to increase the file size limit to 50MiB, add this line:
++
+[source,properties]
+JVMFLAGS="$JVMFLAGS -Djute.maxbuffer=50000000"
+. In `<ZOOKEEPER_HOME>/bin/zkServer.sh`, add a `JVMFLAGS` environment variable assignment near the top of the script, e.g. to increase the file size limit to 5MiB:
++
+[source,properties]
+JVMFLAGS="$JVMFLAGS -Djute.maxbuffer=5000000"
+
+=== Configuring jute.maxbuffer for ZooKeeper clients
+
+The `bin/solr` script invokes Java programs that act as ZooKeeper clients.  (When you use Solr's bundled ZooKeeper server instead of setting up an external ZooKeeper ensemble, the configuration described below will also configure the ZooKeeper server.) 
+  
+Add the setting to the `SOLR_OPTS` environment variable in Solr's include file (`bin/solr.in.sh` or `solr.in.cmd`):
+
+[.dynamic-tabs]
+--
+[example.tab-pane#linux2]
+====
+[.tab-label]*Linux: solr.in.sh*
+
+The section to look for will start:
+
+[source,properties]
+----
+# Anything you add to the SOLR_OPTS variable will be included in the java
+# start command line as-is, in ADDITION to other options. If you specify the
+# -a option on start script, those options will be appended as well. Examples:
+----
+
+Add the following line to increase the file size limit to 2MB:
+
+[source,properties]
+SOLR_OPTS="$SOLR_OPTS -Djute.maxbuffer=0x200000"
+====
+
+[example.tab-pane#zkwindows2]
+====
+[.tab-label]*Windows: solr.in.cmd*
+
+The section to look for will start:
+
+[source,bat]
+----
+REM Anything you add to the SOLR_OPTS variable will be included in the java
+REM start command line as-is, in ADDITION to other options. If you specify the
+REM -a option on start script, those options will be appended as well. Examples:
+----
+
+Add the following line to increase the file size limit to 2MB:
+
+[source,bat]
+set SOLR_OPTS=%SOLR_OPTS% -Djute.maxbuffer=0x200000
+====
+--
+
 == Securing the ZooKeeper Connection
 
 You may also want to secure the communication between ZooKeeper and Solr.
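
A note on the new `jute.maxbuffer` section above: any standalone ZooKeeper client JVM (not just the ones launched by `bin/solr`) needs the same setting to exchange znodes larger than the default limit. The following is a minimal, illustrative sketch that is not part of this patch; it assumes a three-node ensemble with a `/solr` chroot, a hypothetical large resource file `en-ner-person.bin`, a hypothetical target znode path whose parents already exist, and Java 8+ for the watcher lambda. The `0x200000` (~2MB) value is only an example and must match the value the ZooKeeper servers were started with.

// Illustrative sketch only (not from this commit): a standalone ZooKeeper client
// that raises jute.maxbuffer before connecting. The servers must be configured
// with at least the same value for a write of this size to be accepted.
import java.nio.file.Files;
import java.nio.file.Paths;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class LargeZnodeUploader {
  public static void main(String[] args) throws Exception {
    // Equivalent to passing -Djute.maxbuffer=0x200000 on the client JVM's command line;
    // it must be set before the ZooKeeper client classes read the property.
    System.setProperty("jute.maxbuffer", "0x200000");

    ZooKeeper zk = new ZooKeeper("zk1:2181,zk2:2181,zk3:2181/solr", 15000, event -> {});
    try {
      // Hypothetical ~1.5MB OpenNLP model file; any resource over 1MB would hit the default limit.
      byte[] model = Files.readAllBytes(Paths.get("en-ner-person.bin"));
      // Hypothetical target path; assumes the parent znodes already exist.
      zk.create("/configs/mycollection/en-ner-person.bin", model,
          ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
    } finally {
      zk.close();
    }
  }
}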

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/22c4b9c3/solr/solr-ref-guide/src/update-request-processors.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/update-request-processors.adoc b/solr/solr-ref-guide/src/update-request-processors.adoc
index f394b08..6958628 100644
--- a/solr/solr-ref-guide/src/update-request-processors.adoc
+++ b/solr/solr-ref-guide/src/update-request-processors.adoc
@@ -355,7 +355,7 @@ The {solr-javadocs}/solr-uima/index.html[`uima`] contrib provides::
 
 The {solr-javadocs}/solr-analysis-extras/index.html[`analysis-extras`] contrib provides::
 
-{solr-javadocs}/solr-analysis-extras/org/apache/solr/update/processor/OpenNLPExtractNamedEntitiesUpdateProcessorFactory.html[OpenNLPExtractNamedEntitiesUpdateProcessorFactory]::: Update document(s) to be indexed with named entities extracted using an OpenNLP NER model.
+{solr-javadocs}/solr-analysis-extras/org/apache/solr/update/processor/OpenNLPExtractNamedEntitiesUpdateProcessorFactory.html[OpenNLPExtractNamedEntitiesUpdateProcessorFactory]::: Update document(s) to be indexed with named entities extracted using an OpenNLP NER model.  Note that in order to use model files larger than 1MB on SolrCloud, <<setting-up-an-external-zookeeper-ensemble#increasing-zookeeper-s-1mb-file-size-limit,ZooKeeper server and client configuration is required>>.  
 
 === Update Processor Factories You Should _Not_ Modify or Remove
 


[33/40] lucene-solr:jira/solr-11833: SOLR-12253: Remove optimize button from the core admin page

Posted by ab...@apache.org.
SOLR-12253: Remove optimize button from the core admin page


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/84583d25
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/84583d25
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/84583d25

Branch: refs/heads/jira/solr-11833
Commit: 84583d2563522ecd360851a7bfde4489772bebe1
Parents: f8c210f
Author: Erick Erickson <er...@apache.org>
Authored: Sun Apr 22 19:19:09 2018 -0700
Committer: Erick Erickson <er...@apache.org>
Committed: Sun Apr 22 19:19:09 2018 -0700

----------------------------------------------------------------------
 solr/CHANGES.txt                            | 3 +++
 solr/webapp/web/css/angular/collections.css | 4 ----
 solr/webapp/web/css/angular/cores.css       | 8 --------
 solr/webapp/web/partials/cores.html         | 2 --
 4 files changed, 3 insertions(+), 14 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/84583d25/solr/CHANGES.txt
----------------------------------------------------------------------
diff --git a/solr/CHANGES.txt b/solr/CHANGES.txt
index ff82838..ff0ea2c 100644
--- a/solr/CHANGES.txt
+++ b/solr/CHANGES.txt
@@ -182,6 +182,9 @@ Bug Fixes
 
 * SOLR-12250: NegativeArraySizeException on TransactionLog if previous document more than 1.9GB (Cao Manh Dat)
 
+
+* SOLR-12253: Remove optimize button from the core admin page (Erick Erickson)
+
 Optimizations
 ----------------------
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/84583d25/solr/webapp/web/css/angular/collections.css
----------------------------------------------------------------------
diff --git a/solr/webapp/web/css/angular/collections.css b/solr/webapp/web/css/angular/collections.css
index 43360c0..e8d1207 100644
--- a/solr/webapp/web/css/angular/collections.css
+++ b/solr/webapp/web/css/angular/collections.css
@@ -170,10 +170,6 @@ limitations under the License.
   background-image: url( ../../img/ico/cross-button.png );
 }
 
-#content #collections .actions #optimize span
-{
-  background-image: url( ../../img/ico/hammer-screwdriver.png );
-}
 
 #content #collections .actions div.action
 {

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/84583d25/solr/webapp/web/css/angular/cores.css
----------------------------------------------------------------------
diff --git a/solr/webapp/web/css/angular/cores.css b/solr/webapp/web/css/angular/cores.css
index 0d10d60..0428c66 100644
--- a/solr/webapp/web/css/angular/cores.css
+++ b/solr/webapp/web/css/angular/cores.css
@@ -164,14 +164,6 @@ limitations under the License.
   background-image: url( ../../img/ico/arrow-switch.png );
 }
 
-#content #cores .actions #optimize
-{
-}
-
-#content #cores .actions #optimize span
-{
-  background-image: url( ../../img/ico/hammer-screwdriver.png );
-}
 
 #content #cores .actions div.action
 {

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/84583d25/solr/webapp/web/partials/cores.html
----------------------------------------------------------------------
diff --git a/solr/webapp/web/partials/cores.html b/solr/webapp/web/partials/cores.html
index e426e4c..1615769 100644
--- a/solr/webapp/web/partials/cores.html
+++ b/solr/webapp/web/partials/cores.html
@@ -29,8 +29,6 @@ limitations under the License.
       <button id="swap" class="action requires-core" ng-click="showSwapCores()"><span>Swap</span></button>
       <button id="reload" class="requires-core" ng-click="reloadCore()"
          ng-class="{success: reloadSuccess, warn: reloadFailure}"><span>Reload</span></button>
-      <button id="optimize" class="requires-core" ng-click="optimizeCore()" ng-show="core.index.hasDeletions || optimizeSuccess"
-         ng-class="{success: optimizeSuccess, warn: optimizeFailure}"><span>Optimize</span></button>
       </span>
       <div class="action add" data-rel="add" ng-show="showAdd" style="display:block;left:0px;">
 


[32/40] lucene-solr:jira/solr-11833: SOLR-12250: NegativeArraySizeException on TransactionLog if previous document more than 1.9GB

Posted by ab...@apache.org.
SOLR-12250: NegativeArraySizeException on TransactionLog if previous document more than 1.9GB


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/f8c210f1
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/f8c210f1
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/f8c210f1

Branch: refs/heads/jira/solr-11833
Commit: f8c210f1484ed2749d4e14be1fa4905fb3d96e94
Parents: 4e0e8e9
Author: Cao Manh Dat <da...@apache.org>
Authored: Mon Apr 23 08:42:03 2018 +0700
Committer: Cao Manh Dat <da...@apache.org>
Committed: Mon Apr 23 08:42:03 2018 +0700

----------------------------------------------------------------------
 solr/CHANGES.txt                                |  2 +
 .../org/apache/solr/update/TransactionLog.java  |  3 +-
 .../apache/solr/update/TransactionLogTest.java  | 45 ++++++++++++++++++++
 3 files changed, 49 insertions(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/f8c210f1/solr/CHANGES.txt
----------------------------------------------------------------------
diff --git a/solr/CHANGES.txt b/solr/CHANGES.txt
index a9e63f3..ff82838 100644
--- a/solr/CHANGES.txt
+++ b/solr/CHANGES.txt
@@ -180,6 +180,8 @@ Bug Fixes
 * SOLR-9304: Fix Solr's HTTP handling to respect '-Dsolr.ssl.checkPeerName=false' aka SOLR_SSL_CHECK_PEER_NAME
   (Shawn Heisey, Carlton Findley, Robby Pond, hossman)
 
+* SOLR-12250: NegativeArraySizeException on TransactionLog if previous document more than 1.9GB (Cao Manh Dat)
+
 Optimizations
 ----------------------
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/f8c210f1/solr/core/src/java/org/apache/solr/update/TransactionLog.java
----------------------------------------------------------------------
diff --git a/solr/core/src/java/org/apache/solr/update/TransactionLog.java b/solr/core/src/java/org/apache/solr/update/TransactionLog.java
index 35d722b..be4dabc 100644
--- a/solr/core/src/java/org/apache/solr/update/TransactionLog.java
+++ b/solr/core/src/java/org/apache/solr/update/TransactionLog.java
@@ -379,7 +379,8 @@ public class TransactionLog implements Closeable {
 
       // adaptive buffer sizing
       int bufSize = lastAddSize;    // unsynchronized access of lastAddSize should be fine
-      bufSize = Math.min(1024*1024, bufSize+(bufSize>>3)+256);
+      // at least 256 bytes and at most 1 MB
+      bufSize = Math.min(1024*1024, Math.max(256, bufSize+(bufSize>>3)+256));
 
       MemOutputStream out = new MemOutputStream(new byte[bufSize]);
       codec.init(out);
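
As a rough illustration of the arithmetic behind this one-line fix (this demo is not code from the patch): with `lastAddSize` near 2GB, the original `bufSize+(bufSize>>3)+256` expression overflows `int` to a negative value, `Math.min` keeps the negative value, and allocating the buffer then throws `NegativeArraySizeException`. The added `Math.max(256, ...)` floor turns the overflowed value back into 256, keeping the buffer between 256 bytes and 1MB.

// Standalone demonstration (not part of the patch) of the overflow the fix guards against.
public class BufferSizeOverflowDemo {
  public static void main(String[] args) {
    int lastAddSize = 2_000_000_000;                 // previous document was ~1.9GB

    // Pre-fix sizing: 2_000_000_000 + 250_000_000 + 256 overflows int to -2_044_967_040,
    // and Math.min() simply returns that negative value.
    int oldBufSize = Math.min(1024 * 1024, lastAddSize + (lastAddSize >> 3) + 256);
    System.out.println(oldBufSize);                  // prints -2044967040
    // new byte[oldBufSize] would throw NegativeArraySizeException here.

    // Post-fix sizing: the Math.max(256, ...) floor clamps the overflowed value to 256.
    int newBufSize = Math.min(1024 * 1024, Math.max(256, lastAddSize + (lastAddSize >> 3) + 256));
    System.out.println(newBufSize);                  // prints 256
  }
}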

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/f8c210f1/solr/core/src/test/org/apache/solr/update/TransactionLogTest.java
----------------------------------------------------------------------
diff --git a/solr/core/src/test/org/apache/solr/update/TransactionLogTest.java b/solr/core/src/test/org/apache/solr/update/TransactionLogTest.java
new file mode 100644
index 0000000..944b2ee
--- /dev/null
+++ b/solr/core/src/test/org/apache/solr/update/TransactionLogTest.java
@@ -0,0 +1,45 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.update;
+
+import java.io.IOException;
+import java.nio.file.Files;
+import java.util.ArrayList;
+import java.util.Locale;
+
+import org.apache.lucene.util.LuceneTestCase;
+import org.apache.solr.common.SolrInputDocument;
+import org.junit.Test;
+
+import static org.mockito.Mockito.*;
+
+public class TransactionLogTest extends LuceneTestCase {
+
+  @Test
+  public void testBigLastAddSize() throws IOException {
+    String tlogFileName = String.format(Locale.ROOT, UpdateLog.LOG_FILENAME_PATTERN, UpdateLog.TLOG_NAME, 0);
+    try (TransactionLog transactionLog = new TransactionLog(Files.createTempFile(tlogFileName, "").toFile(), new ArrayList<>())) {
+      transactionLog.lastAddSize = 2000000000;
+      AddUpdateCommand updateCommand = mock(AddUpdateCommand.class);
+      when(updateCommand.isInPlaceUpdate()).thenReturn(false);
+      when(updateCommand.getSolrInputDocument()).thenReturn(new SolrInputDocument());
+      transactionLog.write(updateCommand, 0);
+    }
+  }
+
+}


[28/40] lucene-solr:jira/solr-11833: SOLR-12163: Minor cleanups

Posted by ab...@apache.org.
SOLR-12163: Minor cleanups


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/76578cf1
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/76578cf1
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/76578cf1

Branch: refs/heads/jira/solr-11833
Commit: 76578cf17b07c7d3d3440de171c031386a10aa28
Parents: b99e07c
Author: Steve Rowe <sa...@apache.org>
Authored: Fri Apr 20 15:59:06 2018 -0400
Committer: Steve Rowe <sa...@apache.org>
Committed: Fri Apr 20 15:59:06 2018 -0400

----------------------------------------------------------------------
 .../setting-up-an-external-zookeeper-ensemble.adoc    | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/76578cf1/solr/solr-ref-guide/src/setting-up-an-external-zookeeper-ensemble.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/setting-up-an-external-zookeeper-ensemble.adoc b/solr/solr-ref-guide/src/setting-up-an-external-zookeeper-ensemble.adoc
index d46b7f9..f6bc525 100644
--- a/solr/solr-ref-guide/src/setting-up-an-external-zookeeper-ensemble.adoc
+++ b/solr/solr-ref-guide/src/setting-up-an-external-zookeeper-ensemble.adoc
@@ -131,14 +131,14 @@ We've added these parameters to the three we had already:
 +
 Since we've assigned server IDs to specific hosts/ports, we must also define which server in the list this node is. We do this with a `myid` file stored in the data directory (defined by the `dataDir` parameter). The contents of the `myid` file is only the server ID.
 +
-In the case of the configuration example above, you would create the file `/var/lib/zookeeperdata/1/myid` with the content "1" (without quotes), as in this example:
+In the case of the configuration example above, you would create the file `/var/lib/zookeeper/1/myid` with the content "1" (without quotes), as in this example:
 +
 [source,bash]
 1
 
 `autopurge.snapRetainCount`:: The number of snapshots and corresponding transaction logs to retain when purging old snapshots and transaction logs.
 +
-ZooKeeper automatically keeps a transaction log and writes to it as changes are made. A snapshot of the current state is taken periodically, and this snapshot supersedes transaction logs older than the snapshot. However, ZooKeeper never cleans up neither the old snapshots nor the old transaction logs; over time they will silently fill available disk space on each server.
+ZooKeeper automatically keeps a transaction log and writes to it as changes are made. A snapshot of the current state is taken periodically, and this snapshot supersedes transaction logs older than the snapshot. However, ZooKeeper never cleans up either the old snapshots or the old transaction logs; over time they will silently fill available disk space on each server.
 +
 To avoid this, set the `autopurge.snapRetainCount` and `autopurge.purgeInterval` parameters to enable an automatic clean up (purge) to occur at regular intervals. The `autopurge.snapRetainCount` parameter will keep the set number of snapshots and transaction logs when a clean up occurs. This parameter can be configured higher than `3`, but cannot be set lower than 3.
 
@@ -199,7 +199,7 @@ Repeat this for servers 4 and 5 if you are creating a 5-node ensemble (a rare ca
 
 To ease troubleshooting in case of problems with the ensemble later, it's recommended to run ZooKeeper with logging enabled and with proper JVM garbage collection (GC) settings.
 
-. Create a file named `zookeeper-env.sh` and put it in the `ZOOKEEPER_HOME/conf` directory (the same place you put `zoo.cfg`). This file will need to exist on each server of the ensemble.
+. Create a file named `zookeeper-env.sh` and put it in the `<ZOOKEEPER_HOME>/conf` directory (the same place you put `zoo.cfg`). This file will need to exist on each server of the ensemble.
 
 . Add the following settings to the file:
 +
@@ -215,7 +215,7 @@ The property `ZOO_LOG_DIR` defines the location on the server where ZooKeeper wi
 +
 With `SERVER_JVMFLAGS`, we've defined several parameters for garbage collection and logging GC-related events. One of the system parameters is `-Xloggc:$ZOO_LOG_DIR/zookeeper_gc.log`, which will put the garbage collection logs in the same directory we've defined for ZooKeeper logs, in a file named `zookeeper_gc.log`.
 
-. Review the default settings in `ZOOKEEPER_HOME/conf/log4j.properties`, especially the `log4j.appender.ROLLINGFILE.MaxFileSize` parameter. This sets the size at which log files will be rolled over, and by default it is 10MB.
+. Review the default settings in `<ZOOKEEPER_HOME>/conf/log4j.properties`, especially the `log4j.appender.ROLLINGFILE.MaxFileSize` parameter. This sets the size at which log files will be rolled over, and by default it is 10MB.
 
 . Copy `zookeeper-env.sh` and any changes to `log4j.properties` to each server in the ensemble.
 
@@ -231,7 +231,7 @@ ZooKeeper provides a great deal of power through additional configurations, but
 
 === Start ZooKeeper
 
-To start the ensemble, use the `ZOOKEEPER_HOME/bin/zkServer.sh` or `zkServer.cmd` script, as with this command:
+To start the ensemble, use the `<ZOOKEEPER_HOME>/bin/zkServer.sh` or `zkServer.cmd` script, as with this command:
 
 .Linux OS
 [source,bash]
@@ -280,9 +280,9 @@ Once the znode is created, it behaves in a similar way to a directory on a files
 
 === Using the -z Parameter with bin/solr
 
-Pointing Solr at the ZooKeeper instance you've created is a simple matter of using the `-z` parameter when using the `bin/solr` script.
+Pointing Solr at the ZooKeeper ensemble you've created is a simple matter of using the `-z` parameter when using the `bin/solr` script.
 
-For example, to point the Solr instance to the ZooKeeper you've started on port 2181 on three servers, this is what you'd need to do:
+For example, to point the Solr instance to the ZooKeeper you've started on port 2181 on three servers with chroot `/solr` (see <<Using a chroot>> above), this is what you'd need to do:
 
 [source,bash]
 ----