Posted to commits@lucene.apache.org by ct...@apache.org on 2017/05/12 14:05:24 UTC

[16/37] lucene-solr:branch_6x: squash merge jira/solr-10290 into master

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/ccbc93b8/solr/solr-ref-guide/src/learning-to-rank.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/learning-to-rank.adoc b/solr/solr-ref-guide/src/learning-to-rank.adoc
new file mode 100644
index 0000000..e6dbe0a
--- /dev/null
+++ b/solr/solr-ref-guide/src/learning-to-rank.adoc
@@ -0,0 +1,744 @@
+= Learning To Rank
+:page-shortname: learning-to-rank
+:page-permalink: learning-to-rank.html
+
+With the *Learning To Rank* (or *LTR* for short) contrib module you can configure and run machine learned ranking models in Solr.
+
+The module also supports feature extraction inside Solr. The only thing you need to do outside Solr is train your own ranking model.
+
+[[LearningToRank-Concepts]]
+== Concepts
+
+[[LearningToRank-Re-Ranking]]
+=== Re-Ranking
+
+Re-Ranking allows you to run a simple query for matching documents and then re-rank the top N documents using the scores from a different, more complex query. This page describes the use of *LTR* complex queries; information on other rank queries included in the Solr distribution can be found on the <<query-re-ranking.adoc#query-re-ranking,Query Re-Ranking>> page.
+
+[[LearningToRank-LearningToRank]]
+=== Learning To Rank
+
+In information retrieval systems, https://en.wikipedia.org/wiki/Learning_to_rank[Learning to Rank] is used to re-rank the top N retrieved documents using trained machine learning models. The hope is that such sophisticated models can make more nuanced ranking decisions than standard ranking functions like https://en.wikipedia.org/wiki/Tf%E2%80%93idf[TF-IDF] or https://en.wikipedia.org/wiki/Okapi_BM25[BM25].
+
+[[LearningToRank-Model]]
+==== Model
+
+A ranking model computes the scores used to rerank documents. Irrespective of any particular algorithm or implementation, a ranking model's computation can use three types of inputs:
+
+* parameters that represent the scoring algorithm
+* features that represent the document being scored
+* features that represent the query for which the document is being scored
+
+[[LearningToRank-Feature]]
+==== Feature
+
+A feature is a numeric value that represents some quantity or quality of the document being scored or of the query for which documents are being scored. For example, documents often have a 'recency' quality, and 'number of past purchases' might be a quantity that is passed to Solr as part of the search query.
+
+[[LearningToRank-Normalizer]]
+==== Normalizer
+
+Some ranking models expect features on a particular scale. A normalizer can be used to translate arbitrary feature values into normalized values, e.g. on a 0..1 or 0..100 scale.
+
+[[LearningToRank-Training]]
+=== Training
+
+[[LearningToRank-Featureengineering]]
+==== Feature engineering
+
+The LTR contrib module includes several feature classes as well as support for custom features. Each feature class's javadocs contain an example to illustrate use of that class. The process of https://en.wikipedia.org/wiki/Feature_engineering[feature engineering] itself is then entirely up to your domain expertise and creativity.
+
+[cols=",,,",options="header",]
+|===
+|Feature |Class |Example parameters |<<LearningToRank-ExternalFeatureInformation,External Feature Information>>
+|field length |{solr-javadocs}/solr-ltr/org/apache/solr/ltr/feature/FieldLengthFeature.html[FieldLengthFeature] |`{"field":"title"}` |not (yet) supported
+|field value |{solr-javadocs}/solr-ltr/org/apache/solr/ltr/feature/FieldValueFeature.html[FieldValueFeature] |`{"field":"hits"}` |not (yet) supported
+|original score |{solr-javadocs}/solr-ltr/org/apache/solr/ltr/feature/OriginalScoreFeature.html[OriginalScoreFeature] |`{}` |not applicable
+|solr query |{solr-javadocs}/solr-ltr/org/apache/solr/ltr/feature/SolrFeature.html[SolrFeature] |`{"q":"{!func}` `recip(ms(NOW,last_modified)` `,3.16e-11,1,1)"}` |supported
+|solr filter query |{solr-javadocs}/solr-ltr/org/apache/solr/ltr/feature/SolrFeature.html[SolrFeature] |`{"fq":["{!terms f=category}book"]}` |supported
+|solr query + filter query |{solr-javadocs}/solr-ltr/org/apache/solr/ltr/feature/SolrFeature.html[SolrFeature] |`{"q":"{!func}` `recip(ms(NOW,last_modified),` `3.16e-11,1,1)",` `"fq":["{!terms f=category}book"]}` |supported
+|value |{solr-javadocs}/solr-ltr/org/apache/solr/ltr/feature/ValueFeature.html[ValueFeature] |`{"value":"${userFromMobile}","required":true}` |supported
+|(custom) |(custom class extending {solr-javadocs}/solr-ltr/org/apache/solr/ltr/feature/Feature.html[Feature]) | |
+|===
+
+[cols=",,",options="header",]
+|===
+|Normalizer |Class |Example parameters
+|Identity |{solr-javadocs}/solr-ltr/org/apache/solr/ltr/norm/IdentityNormalizer.html[IdentityNormalizer] |`{}`
+|MinMax |{solr-javadocs}/solr-ltr/org/apache/solr/ltr/norm/MinMaxNormalizer.html[MinMaxNormalizer] |`{"min":"0", "max":"50" }`
+|Standard |{solr-javadocs}/solr-ltr/org/apache/solr/ltr/norm/StandardNormalizer.html[StandardNormalizer] |`{"avg":"42","std":"6"}`
+|(custom) |(custom class extending {solr-javadocs}/solr-ltr/org/apache/solr/ltr/norm/Normalizer.html[Normalizer]) |
+|===
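As a sketch of the arithmetic involved (hypothetical helper names; not the actual Java implementations), the MinMax and Standard normalizers in the table above translate a raw feature value as follows:

```python
def min_max_normalize(value, min_val, max_val):
    # MinMaxNormalizer-style scaling: maps [min, max] onto the 0..1 range.
    return (value - min_val) / (max_val - min_val)

def standard_normalize(value, avg, std):
    # StandardNormalizer-style scaling: centers on the average and
    # divides by the standard deviation.
    return (value - avg) / std

# Using the example parameters from the table above:
halfway = min_max_normalize(25.0, 0.0, 50.0)   # halfway between min and max
one_sigma = standard_normalize(48.0, 42.0, 6.0)  # one std. deviation above avg
```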
+
+[[LearningToRank-Featureextraction]]
+==== Feature Extraction
+
+The ltr contrib module includes a <<transforming-result-documents.adoc#transforming-result-documents,[features] transformer>> to support the calculation and return of feature values for https://en.wikipedia.org/wiki/Feature_extraction[feature extraction] purposes, including and especially when you do not yet have an actual reranking model.
+
+[[LearningToRank-Featureselectionandmodeltraining]]
+==== Feature Selection and Model Training
+
+Feature selection and model training take place offline and outside Solr. The ltr contrib module supports two generalized forms of models as well as custom models. Each model class's javadocs contain an example to illustrate configuration of that class. Your trained model or models (e.g. different models for different customer geographies) can then be uploaded directly into Solr as JSON files using the provided REST APIs.
+
+[cols=",,",options="header",]
+|===
+|General form |Class |Specific examples
+|Linear |{solr-javadocs}/solr-ltr/org/apache/solr/ltr/model/LinearModel.html[LinearModel] |RankSVM, Pranking
+|Multiple Additive Trees |{solr-javadocs}/solr-ltr/org/apache/solr/ltr/model/MultipleAdditiveTreesModel.html[MultipleAdditiveTreesModel] |LambdaMART, Gradient Boosted Regression Trees (GBRT)
+|(custom) |(custom class extending {solr-javadocs}/solr-ltr/org/apache/solr/ltr/model/LTRScoringModel.html[LTRScoringModel]) |(not applicable)
+|===
+
+[[LearningToRank-QuickStartExample]]
+== Quick Start Example
+
+The `"techproducts"` example included with Solr is pre-configured with the plugins required for learning-to-rank, but they are disabled by default.
+
+To enable the plugins, please specify the `solr.ltr.enabled` JVM System Property when running the example:
+
+[source,bash]
+----
+bin/solr start -e techproducts -Dsolr.ltr.enabled=true
+----
+
+[[LearningToRank-Uploadingfeatures]]
+=== Uploading Features
+
+To upload features in a `/path/myFeatures.json` file, please run:
+
+[source,bash]
+----
+curl -XPUT 'http://localhost:8983/solr/techproducts/schema/feature-store' --data-binary "@/path/myFeatures.json" -H 'Content-type:application/json'
+----
+
+To view the features you just uploaded, please open the following URL in a browser:
+
+`\http://localhost:8983/solr/techproducts/schema/feature-store/\_DEFAULT_`
+
+.Example: /path/myFeatures.json
+[source,json]
+----
+[
+  {
+    "name" : "documentRecency",
+    "class" : "org.apache.solr.ltr.feature.SolrFeature",
+    "params" : {
+      "q" : "{!func}recip( ms(NOW,last_modified), 3.16e-11, 1, 1)"
+    }
+  },
+  {
+    "name" : "isBook",
+    "class" : "org.apache.solr.ltr.feature.SolrFeature",
+    "params" : {
+      "fq": ["{!terms f=cat}book"]
+    }
+  },
+  {
+    "name" : "originalScore",
+    "class" : "org.apache.solr.ltr.feature.OriginalScoreFeature",
+    "params" : {}
+  }
+]
+----
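As an aside on the `documentRecency` feature above: Solr's `recip(x,m,a,b)` function query computes `a / (m*x + b)`, so with `m ≈ 3.16e-11` (roughly one over the number of milliseconds in a year) the feature decays from 1.0 for a just-modified document towards 0 as `last_modified` recedes into the past. A sketch of the arithmetic:

```python
def recip(x, m, a, b):
    # Solr's recip(x,m,a,b) function query computes a / (m*x + b).
    return a / (m * x + b)

MS_PER_YEAR = 365.25 * 24 * 3600 * 1000  # ~3.156e10 ms, hence m ~ 3.16e-11

fresh = recip(0, 3.16e-11, 1, 1)            # just-modified document -> 1.0
year_old = recip(MS_PER_YEAR, 3.16e-11, 1, 1)  # roughly 0.5 after one year
```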
+
+[[LearningToRank-Extractingfeatures]]
+=== Extracting Features
+
+To extract features as part of a query, add `[features]` to the `fl` parameter, for example:
+
+`\http://localhost:8983/solr/techproducts/query?q=test&fl=id,score,%5Bfeatures%5D`
+
+The output will include feature values as a comma-separated list, resembling the output shown here:
+
+[source,json]
+----
+{
+  "responseHeader":{
+    "status":0,
+    "QTime":0,
+    "params":{
+      "q":"test",
+      "fl":"id,score,[features]"}},
+  "response":{"numFound":2,"start":0,"maxScore":1.959392,"docs":[
+      {
+        "id":"GB18030TEST",
+        "score":1.959392,
+        "[features]":"documentRecency=0.020893794,isBook=0.0,originalScore=1.959392"},
+      {
+        "id":"UTF8TEST",
+        "score":1.5513437,
+        "[features]":"documentRecency=0.020893794,isBook=0.0,originalScore=1.5513437"}]
+  }}
+----
+
+[[LearningToRank-Uploadingamodel]]
+=== Uploading a Model
+
+To upload the model in a `/path/myModel.json` file, please run:
+
+[source,bash]
+----
+curl -XPUT 'http://localhost:8983/solr/techproducts/schema/model-store' --data-binary "@/path/myModel.json" -H 'Content-type:application/json'
+----
+
+To view the model you just uploaded, please open the following URL in a browser:
+
+`\http://localhost:8983/solr/techproducts/schema/model-store`
+
+.Example: /path/myModel.json
+[source,json]
+----
+{
+  "class" : "org.apache.solr.ltr.model.LinearModel",
+  "name" : "myModel",
+  "features" : [
+    { "name" : "documentRecency" },
+    { "name" : "isBook" },
+    { "name" : "originalScore" }
+  ],
+  "params" : {
+    "weights" : {
+      "documentRecency" : 1.0,
+      "isBook" : 0.1,
+      "originalScore" : 0.5
+    }
+  }
+}
+----
+
+[[LearningToRank-Runningarerankquery]]
+=== Running a Rerank Query
+
+To rerank the results of a query, add the `rq` parameter to your search, for example:
+
+[source,text]
+http://localhost:8983/solr/techproducts/query?q=test&rq={!ltr model=myModel reRankDocs=100}&fl=id,score
+
+The addition of the `rq` parameter will not change the structure of the search output.
+
+To obtain the feature values computed during reranking, add `[features]` to the `fl` parameter, for example:
+
+[source,text]
+http://localhost:8983/solr/techproducts/query?q=test&rq={!ltr model=myModel reRankDocs=100}&fl=id,score,[features]
+
+The output will include feature values as a comma-separated list, resembling the output shown here:
+
+[source,json]
+----
+{
+  "responseHeader":{
+    "status":0,
+    "QTime":0,
+    "params":{
+      "q":"test",
+      "fl":"id,score,[features]",
+      "rq":"{!ltr model=myModel reRankDocs=100}"}},
+  "response":{"numFound":2,"start":0,"maxScore":1.0005897,"docs":[
+      {
+        "id":"GB18030TEST",
+        "score":1.0005897,
+        "[features]":"documentRecency=0.020893792,isBook=0.0,originalScore=1.959392"},
+      {
+        "id":"UTF8TEST",
+        "score":0.79656565,
+        "[features]":"documentRecency=0.020893792,isBook=0.0,originalScore=1.5513437"}]
+  }}
+----
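As a sanity check (a sketch of the arithmetic, not Solr's actual implementation), the reranked score of the first document can be recomputed by hand from `myModel`'s weights and the `[features]` values logged above — a linear model score is simply the weighted sum of its feature values:

```python
weights = {"documentRecency": 1.0, "isBook": 0.1, "originalScore": 0.5}
features = {"documentRecency": 0.020893792, "isBook": 0.0, "originalScore": 1.959392}

# Weighted sum of feature values, as in the LinearModel general form.
score = sum(weights[name] * value for name, value in features.items())
# score is approximately 1.0005897, the reranked score shown above
```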
+
+[[LearningToRank-ExternalFeatureInformation]]
+=== External Feature Information
+
+The {solr-javadocs}/solr-ltr/org/apache/solr/ltr/feature/ValueFeature.html[ValueFeature] and {solr-javadocs}/solr-ltr/org/apache/solr/ltr/feature/SolrFeature.html[SolrFeature] classes support the use of external feature information, `efi` for short.
+
+[[LearningToRank-Uploadingfeatures.1]]
+==== Uploading Features
+
+To upload features in a `/path/myEfiFeatures.json` file, please run:
+
+[source,bash]
+----
+curl -XPUT 'http://localhost:8983/solr/techproducts/schema/feature-store' --data-binary "@/path/myEfiFeatures.json" -H 'Content-type:application/json'
+----
+
+To view the features you just uploaded, please open the following URL in a browser:
+
+`\http://localhost:8983/solr/techproducts/schema/feature-store/myEfiFeatureStore`
+
+.Example: /path/myEfiFeatures.json
+[source,json]
+----
+[
+  {
+    "store" : "myEfiFeatureStore",
+    "name" : "isPreferredManufacturer",
+    "class" : "org.apache.solr.ltr.feature.SolrFeature",
+    "params" : { "fq" : [ "{!field f=manu}${preferredManufacturer}" ] }
+  },
+  {
+    "store" : "myEfiFeatureStore",
+    "name" : "userAnswerValue",
+    "class" : "org.apache.solr.ltr.feature.ValueFeature",
+    "params" : { "value" : "${answer:42}" }
+  },
+  {
+    "store" : "myEfiFeatureStore",
+    "name" : "userFromMobileValue",
+    "class" : "org.apache.solr.ltr.feature.ValueFeature",
+    "params" : { "value" : "${fromMobile}", "required" : true }
+  },
+  {
+    "store" : "myEfiFeatureStore",
+    "name" : "userTextCat",
+    "class" : "org.apache.solr.ltr.feature.SolrFeature",
+    "params" : { "q" : "{!field f=cat}${text}" }
+  }
+]
+----
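The `${...}` placeholders above are filled in from `efi.*` request parameters at query time, with `${name:default}` supplying a fallback (so `${answer:42}` uses 42 when `efi.answer` is absent). A sketch of that substitution (hypothetical helper; not Solr's implementation):

```python
import re

def substitute_efi(template, efi):
    # Replace ${name} or ${name:default} with the matching efi.* value;
    # fall back to the default when one is given, else fail loudly
    # (as a feature with required=true would).
    def repl(match):
        name, _, default = match.group(1).partition(":")
        if name in efi:
            return str(efi[name])
        if default:
            return default
        raise KeyError("missing efi parameter: " + name)
    return re.sub(r"\$\{([^}]*)\}", repl, template)

filled = substitute_efi("{!field f=manu}${preferredManufacturer}",
                        {"preferredManufacturer": "Apache"})
defaulted = substitute_efi("${answer:42}", {})
```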
+
+As an aside, you may have noticed that the `myEfiFeatures.json` example uses `"store":"myEfiFeatureStore"` attributes: read more about the feature `store` in the <<LearningToRank-Lifecycle,Lifecycle>> section of this page.
+
+[[LearningToRank-Extractingfeatures.1]]
+==== Extracting Features
+
+To extract `myEfiFeatureStore` features as part of a query, add `efi.*` parameters to the `[features]` part of the `fl` parameter, for example:
+
+[source,text]
+http://localhost:8983/solr/techproducts/query?q=test&fl=id,cat,manu,score,[features store=myEfiFeatureStore efi.text=test efi.preferredManufacturer=Apache efi.fromMobile=1]
+
+[source,text]
+http://localhost:8983/solr/techproducts/query?q=test&fl=id,cat,manu,score,[features store=myEfiFeatureStore efi.text=test efi.preferredManufacturer=Apache efi.fromMobile=0 efi.answer=13]
+
+[[LearningToRank-Uploadingamodel.1]]
+==== Uploading a Model
+
+To upload the model in a `/path/myEfiModel.json` file, please run:
+
+[source,bash]
+----
+curl -XPUT 'http://localhost:8983/solr/techproducts/schema/model-store' --data-binary "@/path/myEfiModel.json" -H 'Content-type:application/json'
+----
+
+To view the model you just uploaded, please open the following URL in a browser:
+
+`\http://localhost:8983/solr/techproducts/schema/model-store`
+
+.Example: /path/myEfiModel.json
+[source,json]
+----
+{
+  "store" : "myEfiFeatureStore",
+  "name" : "myEfiModel",
+  "class" : "org.apache.solr.ltr.model.LinearModel",
+  "features" : [
+    { "name" : "isPreferredManufacturer" },
+    { "name" : "userAnswerValue" },
+    { "name" : "userFromMobileValue" },
+    { "name" : "userTextCat" }
+  ],
+  "params" : {
+    "weights" : {
+      "isPreferredManufacturer" : 0.2,
+      "userAnswerValue" : 1.0,
+      "userFromMobileValue" : 1.0,
+      "userTextCat" : 0.1
+    }
+  }
+}
+----
+
+[[LearningToRank-Runningarerankquery.1]]
+==== Running a Rerank Query
+
+To obtain the feature values computed during reranking, add `[features]` to the `fl` parameter and `efi.*` parameters to the `rq` parameter, for example:
+
+[source,text]
+http://localhost:8983/solr/techproducts/query?q=test&rq={!ltr model=myEfiModel efi.text=test efi.preferredManufacturer=Apache efi.fromMobile=1}&fl=id,cat,manu,score,[features]
+
+[source,text]
+http://localhost:8983/solr/techproducts/query?q=test&rq={!ltr model=myEfiModel efi.text=test efi.preferredManufacturer=Apache efi.fromMobile=0 efi.answer=13}&fl=id,cat,manu,score,[features]
+
+Notice the absence of `efi.*` parameters in the `[features]` part of the `fl` parameter.
+
+[[LearningToRank-Extractingfeatureswhilstreranking]]
+==== Extracting Features While Reranking
+
+To extract features for `myEfiFeatureStore` features while still reranking with `myModel`:
+
+[source,text]
+http://localhost:8983/solr/techproducts/query?q=test&rq={!ltr model=myModel}&fl=id,cat,manu,score,[features store=myEfiFeatureStore efi.text=test efi.preferredManufacturer=Apache efi.fromMobile=1]
+
+Notice the absence of `efi.*` parameters in the `rq` parameter (because `myModel` does not use `efi` features) and the presence of `efi.*` parameters in the `[features]` part of the `fl` parameter (because `myEfiFeatureStore` contains `efi` features).
+
+Read more about model evolution in the <<LearningToRank-Lifecycle,Lifecycle>> section of this page.
+
+[[LearningToRank-Trainingexample]]
+=== Training Example
+
+Example training data and a demo 'train and upload model' script can be found in the `solr/contrib/ltr/example` folder in the https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git[Apache lucene-solr git repository] which is mirrored on https://github.com/apache/lucene-solr/tree/releases/lucene-solr/6.4.0/solr/contrib/ltr/example[github.com] (the `solr/contrib/ltr/example` folder is not shipped in the solr binary release).
+
+[[LearningToRank-Installation]]
+== Installation
+
+The ltr contrib module requires the `dist/solr-ltr-*.jar` JARs.
+
+[[LearningToRank-Configuration]]
+== Configuration
+
+Learning-To-Rank is a contrib module and therefore its plugins must be configured in `solrconfig.xml`.
+
+[[LearningToRank-Minimumrequirements]]
+=== Minimum requirements
+
+* Include the required contrib JARs. Note that by default paths are relative to the Solr core, so they may need to be adjusted for your configuration, or `$solr.install.dir` may need to be specified explicitly.
++
+[source,xml]
+----
+<lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-ltr-\d.*\.jar" />
+----
+
+* Declaration of the `ltr` query parser.
++
+[source,xml]
+----
+<queryParser name="ltr" class="org.apache.solr.ltr.search.LTRQParserPlugin"/>
+----
+
+* Configuration of the feature values cache.
++
+[source,xml]
+----
+<cache name="QUERY_DOC_FV"
+       class="solr.search.LRUCache"
+       size="4096"
+       initialSize="2048"
+       autowarmCount="4096"
+       regenerator="solr.search.NoOpRegenerator" />
+----
+
+* Declaration of the `[features]` transformer.
++
+[source,xml]
+----
+<transformer name="features" class="org.apache.solr.ltr.response.transform.LTRFeatureLoggerTransformerFactory">
+  <str name="fvCacheName">QUERY_DOC_FV</str>
+</transformer>
+----
+
+[[LearningToRank-Advancedoptions]]
+=== Advanced Options
+
+[[LearningToRank-LTRThreadModule]]
+==== LTRThreadModule
+
+A thread module can be configured for the query parser and/or the transformer to parallelize the creation of feature weights. For details, please refer to the {solr-javadocs}/solr-ltr/org/apache/solr/ltr/LTRThreadModule.html[LTRThreadModule] javadocs.
+
+[[LearningToRank-Featurevectorcustomization]]
+==== Feature Vector Customization
+
+The features transformer returns dense CSV values such as `featureA=0.1,featureB=0.2,featureC=0.3,featureD=0.0`.
+
+For sparse CSV output such as `featureA:0.1 featureB:0.2 featureC:0.3` you can customize the {solr-javadocs}/solr-ltr/org/apache/solr/ltr/response/transform/LTRFeatureLoggerTransformerFactory.html[feature logger transformer] declaration in `solrconfig.xml` as follows:
+
+[source,xml]
+----
+<transformer name="features" class="org.apache.solr.ltr.response.transform.LTRFeatureLoggerTransformerFactory">
+  <str name="fvCacheName">QUERY_DOC_FV</str>
+  <str name="defaultFormat">sparse</str>
+  <str name="csvKeyValueDelimiter">:</str>
+  <str name="csvFeatureSeparator"> </str>
+</transformer>
+----
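The difference between the two formats can be sketched as follows (hypothetical helper; the real formatting happens inside the feature logger transformer): dense output keeps every feature including zeros, while sparse output drops zero-valued features and uses the configured delimiters:

```python
def format_features(features, sparse=False, kv_delim="=", feature_sep=","):
    # Dense output keeps zero-valued features; sparse output drops them.
    items = [(k, v) for k, v in features.items() if not (sparse and v == 0.0)]
    return feature_sep.join(k + kv_delim + str(v) for k, v in items)

values = {"featureA": 0.1, "featureB": 0.2, "featureC": 0.3, "featureD": 0.0}
dense = format_features(values)
sparse = format_features(values, sparse=True, kv_delim=":", feature_sep=" ")
```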
+
+[[LearningToRank-Implementationandcontributions]]
+==== Implementation and Contributions
+
+.How does Solr Learning-To-Rank work under the hood?
+
+NOTE: Please refer to the `ltr` {solr-javadocs}/solr-ltr/org/apache/solr/ltr/package-summary.html[javadocs] for an implementation overview.
+
+.How could I write additional models and/or features?
+[NOTE]
+====
+Contributions for further models, features and normalizers are welcome. Related links:
+
+* {solr-javadocs}/solr-ltr/org/apache/solr/ltr/model/LTRScoringModel.html[LTRScoringModel javadocs]
+* {solr-javadocs}/solr-ltr/org/apache/solr/ltr/feature/Feature.html[Feature javadocs]
+* {solr-javadocs}/solr-ltr/org/apache/solr/ltr/norm/Normalizer.html[Normalizer javadocs]
+* http://wiki.apache.org/solr/HowToContribute
+* http://wiki.apache.org/lucene-java/HowToContribute
+====
+
+[[LearningToRank-Lifecycle]]
+== Lifecycle
+
+[[LearningToRank-Featurestores]]
+=== Feature Stores
+
+It is recommended that you organise all your features into stores, which are akin to namespaces:
+
+* Features within a store must be named uniquely.
+* Across stores identical or similar features can share the same name.
+* If no store name is specified then the default `\_DEFAULT_` feature store will be used.
+
+To discover the names of all your feature stores:
+
+`\http://localhost:8983/solr/techproducts/schema/feature-store`
+
+To inspect the content of the `commonFeatureStore` feature store:
+
+`\http://localhost:8983/solr/techproducts/schema/feature-store/commonFeatureStore`
+
+[[LearningToRank-Models]]
+=== Models
+
+* A model uses features from exactly one feature store.
+* If no store is specified then the default `\_DEFAULT_` feature store will be used.
+* A model need not use all the features defined in a feature store.
+* Multiple models can use the same feature store.
+
+To extract features for `currentFeatureStore`'s features:
+
+`\http://localhost:8983/solr/techproducts/query?q=test&fl=id,score,[features store=currentFeatureStore]`
+
+To extract features for `nextFeatureStore` features whilst reranking with `currentModel` based on `currentFeatureStore`:
+
+`\http://localhost:8983/solr/techproducts/query?q=test&rq={!ltr model=currentModel reRankDocs=100}&fl=id,score,[features store=nextFeatureStore]`
+
+To view all models:
+
+`\http://localhost:8983/solr/techproducts/schema/model-store`
+
+To delete the `currentModel` model:
+
+[source,bash]
+----
+curl -XDELETE 'http://localhost:8983/solr/techproducts/schema/model-store/currentModel'
+----
+
+IMPORTANT: A feature store must be deleted only when there are no models using it.
+
+To delete the `currentFeatureStore` feature store:
+
+[source,bash]
+----
+curl -XDELETE 'http://localhost:8983/solr/techproducts/schema/feature-store/currentFeatureStore'
+----
+
+[[LearningToRank-Applyingchanges]]
+=== Applying Changes
+
+The feature store and the model store are both <<managed-resources.adoc#managed-resources,Managed Resources>>. Changes made to managed resources are not applied to the active Solr components until the Solr collection (or Solr core in single server mode) is reloaded.
+
+[[LearningToRank-Examples]]
+=== Examples
+
+==== One Feature Store, Multiple Ranking Models
+
+* `leftModel` and `rightModel` both use features from `commonFeatureStore` and the only difference between the two models is the weights attached to each feature.
+* Conventions used:
+** `commonFeatureStore.json` file contains features for the `commonFeatureStore` feature store
+** `leftModel.json` file contains model named `leftModel`
+** `rightModel.json` file contains model named `rightModel`
+** The model's features and weights are sorted alphabetically by name; this makes it easy to see the commonalities and differences between the two models.
+** The store's features are sorted alphabetically by name; this makes it easy to look up the features used in the models.
+
+.Example: /path/commonFeatureStore.json
+[source,json]
+----
+[
+  {
+    "store" : "commonFeatureStore",
+    "name" : "documentRecency",
+    "class" : "org.apache.solr.ltr.feature.SolrFeature",
+    "params" : {
+      "q" : "{!func}recip( ms(NOW,last_modified), 3.16e-11, 1, 1)"
+    }
+  },
+  {
+    "store" : "commonFeatureStore",
+    "name" : "isBook",
+    "class" : "org.apache.solr.ltr.feature.SolrFeature",
+    "params" : {
+      "fq": [ "{!terms f=category}book" ]
+    }
+  },
+  {
+    "store" : "commonFeatureStore",
+    "name" : "originalScore",
+    "class" : "org.apache.solr.ltr.feature.OriginalScoreFeature",
+    "params" : {}
+  }
+]
+----
+
+.Example: /path/leftModel.json
+[source,json]
+----
+{
+  "store" : "commonFeatureStore",
+  "name" : "leftModel",
+  "class" : "org.apache.solr.ltr.model.LinearModel",
+  "features" : [
+    { "name" : "documentRecency" },
+    { "name" : "isBook" },
+    { "name" : "originalScore" }
+  ],
+  "params" : {
+    "weights" : {
+      "documentRecency" : 0.1,
+      "isBook" : 1.0,
+      "originalScore" : 0.5
+    }
+  }
+}
+----
+
+.Example: /path/rightModel.json
+[source,json]
+----
+{
+  "store" : "commonFeatureStore",
+  "name" : "rightModel",
+  "class" : "org.apache.solr.ltr.model.LinearModel",
+  "features" : [
+    { "name" : "documentRecency" },
+    { "name" : "isBook" },
+    { "name" : "originalScore" }
+  ],
+  "params" : {
+    "weights" : {
+      "documentRecency" : 1.0,
+      "isBook" : 0.1,
+      "originalScore" : 0.5
+    }
+  }
+}
+----
+
+[[LearningToRank-Modelevolution]]
+==== Model Evolution
+
+* `linearModel201701` uses features from `featureStore201701`
+* `treesModel201702` uses features from `featureStore201702`
+* `linearModel201701` and `treesModel201702` and their feature stores can co-exist whilst both are needed.
+* Once `linearModel201701` has been deleted, `featureStore201701` can also be deleted.
+* Conventions used:
+** `<store>.json` file contains features for the `<store>` feature store
+** `<model>.json` file contains model name `<model>`
+** a 'generation' id (e.g. `YYYYMM` year-month) is part of the feature store and model names
+** The model's features and weights are sorted alphabetically by name; this makes it easy to see the commonalities and differences between the two models.
+** The store's features are sorted alphabetically by name; this makes it easy to see the commonalities and differences between the two feature stores.
+
+.Example: /path/featureStore201701.json
+[source,json]
+----
+[
+  {
+    "store" : "featureStore201701",
+    "name" : "documentRecency",
+    "class" : "org.apache.solr.ltr.feature.SolrFeature",
+    "params" : {
+      "q" : "{!func}recip( ms(NOW,last_modified), 3.16e-11, 1, 1)"
+    }
+  },
+  {
+    "store" : "featureStore201701",
+    "name" : "isBook",
+    "class" : "org.apache.solr.ltr.feature.SolrFeature",
+    "params" : {
+      "fq": [ "{!terms f=category}book" ]
+    }
+  },
+  {
+    "store" : "featureStore201701",
+    "name" : "originalScore",
+    "class" : "org.apache.solr.ltr.feature.OriginalScoreFeature",
+    "params" : {}
+  }
+]
+----
+
+.Example: /path/linearModel201701.json
+[source,json]
+----
+{
+  "store" : "featureStore201701",
+  "name" : "linearModel201701",
+  "class" : "org.apache.solr.ltr.model.LinearModel",
+  "features" : [
+    { "name" : "documentRecency" },
+    { "name" : "isBook" },
+    { "name" : "originalScore" }
+  ],
+  "params" : {
+    "weights" : {
+      "documentRecency" : 0.1,
+      "isBook" : 1.0,
+      "originalScore" : 0.5
+    }
+  }
+}
+----
+
+.Example: /path/featureStore201702.json
+[source,json]
+----
+[
+  {
+    "store" : "featureStore201702",
+    "name" : "isBook",
+    "class" : "org.apache.solr.ltr.feature.SolrFeature",
+    "params" : {
+      "fq": [ "{!terms f=category}book" ]
+    }
+  },
+  {
+    "store" : "featureStore201702",
+    "name" : "originalScore",
+    "class" : "org.apache.solr.ltr.feature.OriginalScoreFeature",
+    "params" : {}
+  }
+]
+----
+
+.Example: /path/treesModel201702.json
+[source,json]
+----
+{
+  "store" : "featureStore201702",
+  "name" : "treesModel201702",
+  "class" : "org.apache.solr.ltr.model.MultipleAdditiveTreesModel",
+  "features" : [
+    { "name" : "isBook" },
+    { "name" : "originalScore" }
+  ],
+  "params" : {
+    "trees" : [
+      {
+        "weight" : "1",
+        "root" : {
+          "feature" : "isBook",
+          "threshold" : "0.5",
+          "left" : { "value" : "-100" },
+          "right" : {
+            "feature" : "originalScore",
+            "threshold" : "10.0",
+            "left" : { "value" : "50" },
+            "right" : { "value" : "75" }
+          }
+        }
+      },
+      {
+        "weight" : "2",
+        "root" : {
+          "value" : "-10"
+        }
+      }
+    ]
+  }
+}
+----
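To make the tree structure concrete, here is a sketch of how such a model is evaluated (assuming the common convention that a feature value at or below the threshold takes the left branch; an illustration, not Solr's implementation):

```python
def score_node(node, features):
    # Leaves carry a "value"; internal nodes branch on feature vs. threshold.
    if "value" in node:
        return float(node["value"])
    go_left = features[node["feature"]] <= float(node["threshold"])
    return score_node(node["left" if go_left else "right"], features)

def trees_model_score(trees, features):
    # The model score is the weighted sum of the individual tree scores.
    return sum(float(t["weight"]) * score_node(t["root"], features)
               for t in trees)

# The trees from the treesModel201702 example above:
trees = [
    {"weight": "1",
     "root": {"feature": "isBook", "threshold": "0.5",
              "left": {"value": "-100"},
              "right": {"feature": "originalScore", "threshold": "10.0",
                        "left": {"value": "50"},
                        "right": {"value": "75"}}}},
    {"weight": "2", "root": {"value": "-10"}},
]

# A book (isBook=1.0) with a high original score: 1*75 + 2*(-10) = 55
book_score = trees_model_score(trees, {"isBook": 1.0, "originalScore": 20.0})
```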
+
+[[LearningToRank-AdditionalResources]]
+== Additional Resources
+
+* "Learning to Rank in Solr" presentation at Lucene/Solr Revolution 2015 in Austin:
+** Slides: http://www.slideshare.net/lucidworks/learning-to-rank-in-solr-presented-by-michael-nilsson-diego-ceccarelli-bloomberg-lp
+** Video: https://www.youtube.com/watch?v=M7BKwJoh96s

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/ccbc93b8/solr/solr-ref-guide/src/legacy-scaling-and-distribution.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/legacy-scaling-and-distribution.adoc b/solr/solr-ref-guide/src/legacy-scaling-and-distribution.adoc
new file mode 100644
index 0000000..3499387
--- /dev/null
+++ b/solr/solr-ref-guide/src/legacy-scaling-and-distribution.adoc
@@ -0,0 +1,18 @@
+= Legacy Scaling and Distribution
+:page-shortname: legacy-scaling-and-distribution
+:page-permalink: legacy-scaling-and-distribution.html
+:page-children: introduction-to-scaling-and-distribution, distributed-search-with-index-sharding, index-replication, combining-distribution-and-replication, merging-indexes
+
+This section describes how to set up distribution and replication in Solr. It is considered "legacy" behavior: while it is still supported in Solr, current development is focused on the SolrCloud functionality described in the previous chapter. However, if you don't need everything SolrCloud delivers, search distribution and index replication may be sufficient.
+
+This section covers the following topics:
+
+<<introduction-to-scaling-and-distribution.adoc#introduction-to-scaling-and-distribution,Introduction to Scaling and Distribution>>: Conceptual information about distribution and replication in Solr.
+
+<<distributed-search-with-index-sharding.adoc#distributed-search-with-index-sharding,Distributed Search with Index Sharding>>: Detailed information about implementing distributed searching in Solr.
+
+<<index-replication.adoc#index-replication,Index Replication>>: Detailed information about replicating your Solr indexes.
+
+<<combining-distribution-and-replication.adoc#combining-distribution-and-replication,Combining Distribution and Replication>>: Detailed information about replicating shards in a distributed index.
+
+<<merging-indexes.adoc#merging-indexes,Merging Indexes>>: Information about combining separate indexes in Solr.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/ccbc93b8/solr/solr-ref-guide/src/lib-directives-in-solrconfig.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/lib-directives-in-solrconfig.adoc b/solr/solr-ref-guide/src/lib-directives-in-solrconfig.adoc
new file mode 100644
index 0000000..036e00a
--- /dev/null
+++ b/solr/solr-ref-guide/src/lib-directives-in-solrconfig.adoc
@@ -0,0 +1,24 @@
+= Lib Directives in SolrConfig
+:page-shortname: lib-directives-in-solrconfig
+:page-permalink: lib-directives-in-solrconfig.html
+
+Solr allows loading plugins by defining `<lib/>` directives in `solrconfig.xml`.
+
+The plugins are loaded in the order they appear in `solrconfig.xml`. If there are dependencies, list the lowest level dependency jar first.
+
+Regular expressions can be used to control the loading of JARs that have dependencies on other JARs in the same directory. All directories are resolved relative to the Solr `instanceDir`.
+
+[source,xml]
+----
+<lib dir="../../../contrib/extraction/lib" regex=".*\.jar" />
+<lib dir="../../../dist/" regex="solr-cell-\d.*\.jar" />
+
+<lib dir="../../../contrib/clustering/lib/" regex=".*\.jar" />
+<lib dir="../../../dist/" regex="solr-clustering-\d.*\.jar" />
+
+<lib dir="../../../contrib/langid/lib/" regex=".*\.jar" />
+<lib dir="../../../dist/" regex="solr-langid-\d.*\.jar" />
+
+<lib dir="../../../contrib/velocity/lib" regex=".*\.jar" />
+<lib dir="../../../dist/" regex="solr-velocity-\d.*\.jar" />
+----
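The `regex` values above are matched against file names in `dir`. As an offline sanity check (a Python illustration only, not part of Solr), you can verify which jar names a pattern such as `solr-cell-\d.*\.jar` would pick up:

```python
import re

# Regexes taken from the example above; Solr applies each one
# to the file names found in the corresponding `dir`.
patterns = {
    "extraction contrib": r".*\.jar",
    "solr-cell dist": r"solr-cell-\d.*\.jar",
}

def matches(pattern, filename):
    # Match against the whole file name, not a substring.
    return re.fullmatch(pattern, filename) is not None

print(matches(patterns["solr-cell dist"], "solr-cell-6.6.0.jar"))        # True
print(matches(patterns["solr-cell dist"], "solr-clustering-6.6.0.jar"))  # False
```

Checking patterns this way before editing `solrconfig.xml` helps avoid silently loading the wrong jars.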

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/ccbc93b8/solr/solr-ref-guide/src/licenses/LICENSE
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/licenses/LICENSE b/solr/solr-ref-guide/src/licenses/LICENSE
new file mode 100755
index 0000000..e04b3d0
--- /dev/null
+++ b/solr/solr-ref-guide/src/licenses/LICENSE
@@ -0,0 +1,21 @@
+The MIT License (MIT)
+
+Copyright (c) 2016 Tom Johnson
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/ccbc93b8/solr/solr-ref-guide/src/licenses/LICENSE-BSD-NAVGOCO.txt
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/licenses/LICENSE-BSD-NAVGOCO.txt b/solr/solr-ref-guide/src/licenses/LICENSE-BSD-NAVGOCO.txt
new file mode 100755
index 0000000..7fdefc3
--- /dev/null
+++ b/solr/solr-ref-guide/src/licenses/LICENSE-BSD-NAVGOCO.txt
@@ -0,0 +1,27 @@
+/* This license pertains to the Navgoco jQuery component used for the sidebar. */
+
+Copyright (c) 2013, Christodoulos Tsoulloftas, http://www.komposta.net
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without modification,
+are permitted provided that the following conditions are met:
+
+   * Redistributions of source code must retain the above copyright notice,
+      this list of conditions and the following disclaimer.
+   * Redistributions in binary form must reproduce the above copyright notice,
+      this list of conditions and the following disclaimer in the documentation
+      and/or other materials provided with the distribution.
+   * Neither the name of the <Christodoulos Tsoulloftas> nor the names of its
+      contributors may be used to endorse or promote products derived from this
+      software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
+ ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
+INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
+OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+OF THE POSSIBILITY OF SUCH DAMAGE.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/ccbc93b8/solr/solr-ref-guide/src/local-parameters-in-queries.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/local-parameters-in-queries.adoc b/solr/solr-ref-guide/src/local-parameters-in-queries.adoc
new file mode 100644
index 0000000..1d24b61
--- /dev/null
+++ b/solr/solr-ref-guide/src/local-parameters-in-queries.adoc
@@ -0,0 +1,70 @@
+= Local Parameters in Queries
+:page-shortname: local-parameters-in-queries
+:page-permalink: local-parameters-in-queries.html
+
+Local parameters are arguments in a Solr request that are specific to a query parameter.
+
+Local parameters provide a way to add meta-data to certain argument types such as query strings. (In Solr documentation, local parameters are sometimes referred to as LocalParams.)
+
+Local parameters are specified as prefixes to arguments. Take the following query argument, for example:
+
+`q=solr rocks`
+
+We can prefix this query string with local parameters to provide more information to the Standard Query Parser. For example, we can change the default operator type to "AND" and the default field to "title":
+
+`q={!q.op=AND df=title}solr rocks`
+
+These local parameters would change the query to require a match on both "solr" and "rocks" while searching the "title" field by default.
+
+[[LocalParametersinQueries-BasicSyntaxofLocalParameters]]
+== Basic Syntax of Local Parameters
+
+To specify a local parameter, insert the following before the argument to be modified:
+
+* Begin with `{!`
+
+* Insert any number of key=value pairs separated by white space
+
+* End with `}` and immediately follow with the query argument
+
+You may specify only one local parameters prefix per argument. Values in the key-value pairs may be quoted via single or double quotes, and backslash escaping works within quoted strings.
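The syntax rules above can be sketched with a toy parser. This Python snippet is purely illustrative: Solr's real parser is written in Java and also handles quoting and backslash escaping, which this sketch deliberately omits.

```python
def parse_local_params(arg):
    """Split '{!k1=v1 k2=v2}query' into ({'k1': 'v1', ...}, 'query').

    Toy illustration only: no quoting or escaping support, and it
    assumes at most one local-parameters prefix per argument.
    """
    if not arg.startswith("{!"):
        return {}, arg
    end = arg.index("}")
    params = {}
    for pair in arg[2:end].split():
        key, _, value = pair.partition("=")
        params[key] = value
    return params, arg[end + 1:]

print(parse_local_params("{!q.op=AND df=title}solr rocks"))
# → ({'q.op': 'AND', 'df': 'title'}, 'solr rocks')
```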
+
+[[LocalParametersinQueries-QueryTypeShortForm]]
+== Query Type Short Form
+
+If a local parameter value appears without a name, it is given the implicit name of "type". This allows short-form representation for the type of query parser to use when parsing a query string. Thus
+
+`q={!dismax qf=myfield}solr rocks`
+
+is equivalent to:
+
+`q={!type=dismax qf=myfield}solr rocks`
+
+If no "type" is specified (either explicitly or implicitly) then the <<the-standard-query-parser.adoc#the-standard-query-parser,lucene parser>> is used by default. Thus
+
+`fq={!df=summary}solr rocks`
+
+is equivalent to:
+
+`fq={!type=lucene df=summary}solr rocks`
+
+== Specifying the Parameter Value with the `v` Key
+
+A special key of `v` within local parameters is an alternate way to specify the value of that parameter.
+
+`q={!dismax qf=myfield}solr rocks`
+
+is equivalent to
+
+`q={!type=dismax qf=myfield v='solr rocks'}`
+
+[[LocalParametersinQueries-ParameterDereferencing]]
+== Parameter Dereferencing
+
+Parameter dereferencing, or indirection, lets you use the value of another argument rather than specifying it directly. This can be used to simplify queries, decouple user input from query parameters, or decouple front-end GUI parameters from defaults set in `solrconfig.xml`.
+
+`q={!dismax qf=myfield}solr rocks`
+
+is equivalent to:
+
+`q={!type=dismax qf=myfield v=$qq}&qq=solr rocks`
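When the dereferenced form is sent over HTTP, both the local-parameters prefix and the referenced parameter travel as ordinary request parameters. A small sketch (standard-library Python, for illustration) of assembling such a query string:

```python
from urllib.parse import urlencode

# The user input lives in `qq`; the `q` parameter only references it via $qq.
params = {
    "q": "{!type=dismax qf=myfield v=$qq}",
    "qq": "solr rocks",
}
print(urlencode(params))
# → q=%7B%21type%3Ddismax+qf%3Dmyfield+v%3D%24qq%7D&qq=solr+rocks
```

This is what makes dereferencing useful for decoupling: a front end can change `qq` freely while `q` stays a fixed template.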

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/ccbc93b8/solr/solr-ref-guide/src/logging.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/logging.adoc b/solr/solr-ref-guide/src/logging.adoc
new file mode 100644
index 0000000..a048b24
--- /dev/null
+++ b/solr/solr-ref-guide/src/logging.adoc
@@ -0,0 +1,22 @@
+= Logging
+:page-shortname: logging
+:page-permalink: logging.html
+
+The Logging page shows recent messages logged by this Solr node.
+
+When you click the link for "Logging", a page similar to the one below will be displayed:
+
+.The Main Logging Screen, including an example of an error due to a bad document sent by a client
+image::images/logging/logging.png[image,width=621,height=250]
+
+While this example shows logged messages for only one core, if you have multiple cores in a single instance, they will each be listed, with the level for each.
+
+[[Logging-SelectingaLoggingLevel]]
+== Selecting a Logging Level
+
+When you select the *Level* link on the left, you see the hierarchy of classpaths and classnames for your instance. A row highlighted in yellow indicates that the class has logging capabilities. Click on a highlighted row, and a menu will appear to allow you to change the log level for that class. Characters in boldface indicate that the class will not be affected by level changes to root.
+
+.Log level selection
+image::images/logging/level_menu.png[image,width=589,height=250]
+
+For an explanation of the various logging levels, see <<configuring-logging.adoc#configuring-logging,Configuring Logging>>.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/ccbc93b8/solr/solr-ref-guide/src/major-changes-from-solr-5-to-solr-6.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/major-changes-from-solr-5-to-solr-6.adoc b/solr/solr-ref-guide/src/major-changes-from-solr-5-to-solr-6.adoc
new file mode 100644
index 0000000..671e5b4
--- /dev/null
+++ b/solr/solr-ref-guide/src/major-changes-from-solr-5-to-solr-6.adoc
@@ -0,0 +1,73 @@
+= Major Changes from Solr 5 to Solr 6
+:page-shortname: major-changes-from-solr-5-to-solr-6
+:page-permalink: major-changes-from-solr-5-to-solr-6.html
+
+There are some major changes in Solr 6 to consider before starting to migrate your configurations and indexes. There are many hundreds of changes, so a thorough review of the <<upgrading-solr.adoc#upgrading-solr,Upgrading Solr>> section as well as the {solr-javadocs}/changes/Changes.html[CHANGES.txt] file in your Solr instance will help you plan your migration to Solr 6. This section attempts to highlight some of the major changes you should be aware of.
+
+== Highlights of New Features in Solr 6
+
+Some of the major improvements in Solr 6 include:
+
+[[major-5-6-streaming]]
+=== Streaming Expressions
+
+Introduced in Solr 5, <<streaming-expressions.adoc#streaming-expressions,Streaming Expressions>> allow querying Solr and getting results as a stream of data, sorted and aggregated as requested.
+
+Several new expression types have been added in Solr 6:
+
+* Parallel expressions using a MapReduce-like shuffling for faster throughput of high-cardinality fields.
+* Daemon expressions to support continuous push or pull streaming.
+* Advanced parallel relational algebra like distributed joins, intersections, unions and complements.
+* Publish/Subscribe messaging.
+* JDBC connections to pull data from other systems and join with documents in the Solr index.
+
+[[major-5-6-parallel-sql]]
+=== Parallel SQL Interface
+
+Built on streaming expressions, new in Solr 6 is a <<parallel-sql-interface.adoc#parallel-sql-interface,Parallel SQL interface>> to be able to send SQL queries to Solr. SQL statements are compiled to streaming expressions on the fly, providing the full range of aggregations available to streaming expression requests. A JDBC driver is included, which allows using SQL clients and database visualization tools to query your Solr index and import data to other systems.
+
+=== Cross Data Center Replication
+
+Replication across data centers is now possible with <<cross-data-center-replication-cdcr.adoc#cross-data-center-replication-cdcr,Cross Data Center Replication>>. Using an active-passive model, a SolrCloud cluster can be replicated to another data center, and monitored with a new API.
+
+=== Graph Query Parser
+
+A new <<other-parsers.adoc#OtherParsers-GraphQueryParser,`graph` query parser>> makes it possible to do graph traversal queries of Directed (Cyclic) Graphs modelled using Solr documents.
+
+[[major-5-6-docvalues]]
+=== DocValues
+
+Most non-text field types in the Solr sample configsets now default to using <<docvalues.adoc#docvalues,DocValues>>.
+
+== Java 8 Required
+
+The minimum supported version of Java for Solr 6 (and the <<using-solrj.adoc#using-solrj,SolrJ client libraries>>) is now Java 8.
+
+== Index Format Changes
+
+Solr 6 has no support for reading Lucene/Solr 4.x and earlier indexes. Be sure to run the Lucene `IndexUpgrader` included with Solr 5.5 if you might still have old 4x formatted segments in your index. Alternatively: fully optimize your index with Solr 5.5 to make sure it consists only of one up-to-date index segment.
+
+== Managed Schema is now the Default
+
+Solr's default behavior when a `solrconfig.xml` does not explicitly define a `<schemaFactory/>` is now dependent on the `luceneMatchVersion` specified in that `solrconfig.xml`. When `luceneMatchVersion < 6.0`, `ClassicIndexSchemaFactory` will continue to be used for back compatibility, otherwise an instance of <<schema-factory-definition-in-solrconfig.adoc#schema-factory-definition-in-solrconfig,`ManagedIndexSchemaFactory`>> will be used.
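The version-dependent default can be paraphrased as a simple rule (illustrative Python only; Solr implements this internally in Java):

```python
def default_schema_factory(lucene_match_version):
    """Paraphrase of the default <schemaFactory/> rule.

    `lucene_match_version` is given as a (major, minor) tuple,
    e.g. (5, 5) for luceneMatchVersion 5.5.
    """
    if lucene_match_version < (6, 0):
        return "ClassicIndexSchemaFactory"   # back-compatibility
    return "ManagedIndexSchemaFactory"

print(default_schema_factory((5, 5)))  # ClassicIndexSchemaFactory
print(default_schema_factory((6, 0)))  # ManagedIndexSchemaFactory
```

The same version-gated pattern applies to the default similarity, described below.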
+
+The most notable impacts of this change are:
+
+* Existing `solrconfig.xml` files that are modified to use `luceneMatchVersion >= 6.0`, but do _not_ have an explicitly configured `ClassicIndexSchemaFactory`, will have their `schema.xml` file automatically upgraded to a `managed-schema` file.
+* Schema modifications via the <<schema-api.adoc#schema-api,Schema API>> will now be enabled by default.
+
+Please review the <<schema-factory-definition-in-solrconfig.adoc#schema-factory-definition-in-solrconfig,Schema Factory Definition in SolrConfig>> section for more details.
+
+== Default Similarity Changes
+
+Solr's default behavior when a Schema does not explicitly define a global <<other-schema-elements.adoc#other-schema-elements,`<similarity/>`>> is now dependent on the `luceneMatchVersion` specified in the `solrconfig.xml`. When `luceneMatchVersion < 6.0`, an instance of `ClassicSimilarityFactory` will be used, otherwise an instance of `SchemaSimilarityFactory` will be used. Most notably, this change means that users can take advantage of per-Field-Type similarity declarations, without needing to also explicitly declare a global usage of `SchemaSimilarityFactory`.
+
+Regardless of whether it is explicitly declared or used as an implicit global default, `SchemaSimilarityFactory`'s implicit behavior when a Field Type does not declare an explicit `<similarity />` has also been changed to depend on the `luceneMatchVersion`. When `luceneMatchVersion < 6.0`, an instance of `ClassicSimilarity` will be used, otherwise an instance of `BM25Similarity` will be used. A `defaultSimFromFieldType` init option may be specified on the `SchemaSimilarityFactory` declaration to change this behavior. Please review the `SchemaSimilarityFactory` javadocs for more details.
+
+== Replica & Shard Delete Command Changes
+
+DELETESHARD and DELETEREPLICA now default to deleting the instance directory, data directory, and index directory for any replica they delete. Please review the <<collections-api.adoc#collections-api,Collection API>> documentation for details on new request parameters to prevent this behavior if you wish to keep all data on disk when using these commands.
+
+== `facet.date.*` Parameters Removed
+
+The `facet.date` parameter (and associated `facet.date.*` parameters) that were deprecated in Solr 3.x have been removed completely. If you have not yet switched to using the equivalent <<faceting.adoc#faceting,`facet.range`>> functionality you must do so now before upgrading.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/ccbc93b8/solr/solr-ref-guide/src/making-and-restoring-backups.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/making-and-restoring-backups.adoc b/solr/solr-ref-guide/src/making-and-restoring-backups.adoc
new file mode 100644
index 0000000..a1fdaba
--- /dev/null
+++ b/solr/solr-ref-guide/src/making-and-restoring-backups.adoc
@@ -0,0 +1,221 @@
+= Making and Restoring Backups
+:page-shortname: making-and-restoring-backups
+:page-permalink: making-and-restoring-backups.html
+
+If you are worried about data loss, and of course you _should_ be, you need a way to back up your Solr indexes so that you can recover quickly in case of catastrophic failure.
+
+Solr provides two approaches to backing up and restoring Solr cores or collections, depending on how you are running Solr. If you run in SolrCloud mode, you will use the Collections API. If you run Solr in standalone mode, you will use the replication handler.
+
+== [[cloud-backups]]SolrCloud
+
+Support for backups when running SolrCloud is provided with the <<collections-api.adoc#collections-api,Collections API>>. This allows the backups to be generated across multiple shards, and restored to the same number of shards and replicas as the original collection.
+
+Two commands are available:
+
+* `action=BACKUP`: This command backs up Solr indexes and configurations. More information is available in the section <<collections-api.adoc#CollectionsAPI-backup,Backup Collection>>.
+* `action=RESTORE`: This command restores Solr indexes and configurations. More information is available in the section <<collections-api.adoc#CollectionsAPI-restore,Restore Collection>>.
+
+== Standalone Mode
+
+Backups and restoration use Solr's replication handler. Out of the box, Solr includes implicit support for replication so this API can be used. Configuration of the replication handler can, however, be customized by defining your own replication handler in `solrconfig.xml`. For details on configuring the replication handler, see the section <<index-replication.adoc#IndexReplication-ConfiguringtheReplicationHandler,Configuring the ReplicationHandler>>.
+
+=== Backup API
+
+The backup API requires sending a command to the `/replication` handler to back up the system.
+
+You can trigger a back-up with an HTTP command like this (replace "gettingstarted" with the name of the core you are working with):
+
+.Backup API Example
+[source,text]
+----
+http://localhost:8983/solr/gettingstarted/replication?command=backup
+----
+
+The backup command is an asynchronous call, and it will represent data from the latest index commit point. All indexing and search operations will continue to be executed against the index as usual.
+
+Only one backup call can be made against a core at any point in time. While a backup operation is in progress, subsequent backup calls will throw an exception.
+
+The backup request can also take the following additional parameters:
+
+// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+
+[cols="30,70",options="header"]
+|===
+|Parameter |Description
+|location |The path where the backup will be created. If the path is not absolute then the backup path will be relative to Solr's instance directory.
+|name |The snapshot will be created in a directory called `snapshot.<name>`. If a name is not specified then the directory name would have the following format: `snapshot.<yyyyMMddHHmmssSSS>`
+|numberToKeep |The number of backups to keep. If `maxNumberOfBackups` has been specified on the replication handler in `solrconfig.xml`, `maxNumberOfBackups` is always used and attempts to use `numberToKeep` will cause an error. Also, this parameter is not taken into consideration if the backup name is specified. More information about `maxNumberOfBackups` can be found in the section <<index-replication.adoc#IndexReplication-ConfiguringtheReplicationHandler,Configuring the ReplicationHandler>>.
+|repository |The name of the repository to be used for the backup. If no repository is specified then the local filesystem repository will be used automatically.
+|commitName |The name of the commit which was used while taking a snapshot using the CREATESNAPSHOT command.
+|===
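The optional parameters above are passed as ordinary query-string parameters alongside `command=backup`. A small sketch of assembling such a request URL (the helper name is illustrative, not a Solr API):

```python
from urllib.parse import urlencode

def backup_url(base, core, **params):
    """Build a replication-handler backup URL.

    Extra keyword arguments become the optional request parameters
    (location, name, numberToKeep, repository, commitName).
    """
    query = urlencode({"command": "backup", **params})
    return f"{base}/solr/{core}/replication?{query}"

print(backup_url("http://localhost:8983", "gettingstarted", name="my_backup"))
# → http://localhost:8983/solr/gettingstarted/replication?command=backup&name=my_backup
```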
+
+=== Backup Status
+
+The backup operation can be monitored to see if it has completed by sending the `details` command to the `/replication` handler, as in this example:
+
+.Status API Example
+[source,text]
+----
+http://localhost:8983/solr/gettingstarted/replication?command=details
+----
+
+.Output Snippet
+[source,xml]
+----
+<lst name="backup">
+  <str name="startTime">Sun Apr 12 16:22:50 DAVT 2015</str>
+  <int name="fileCount">10</int>
+  <str name="status">success</str>
+  <str name="snapshotCompletedAt">Sun Apr 12 16:22:50 DAVT 2015</str>
+  <str name="snapshotName">my_backup</str>
+</lst>
+----
+
+If it failed then a `snapShootException` will be sent in the response.
+
+=== Restore API
+
+Restoring the backup requires sending the `restore` command to the `/replication` handler, followed by the name of the backup to restore.
+
+You can restore from a backup with a command like this:
+
+.Example Usage
+[source,text]
+----
+http://localhost:8983/solr/gettingstarted/replication?command=restore&name=backup_name
+----
+
+This will restore the named index snapshot into the current core. Searches will start reflecting the snapshot data once the restore is complete.
+
+The restore request can also take these additional parameters:
+
+// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+
+[cols="30,70",options="header"]
+|===
+|Parameter |Description
+|location |The location of the backup snapshot file. If not specified, it looks for backups in Solr's data directory.
+|name |The name of the backed up index snapshot to be restored. If the name is not provided it looks for backups with `snapshot.<timestamp>` format in the location directory. It picks the latest timestamp backup in that case.
+|repository |The name of the repository to be used for the backup. If no repository is specified then the local filesystem repository will be used automatically.
+|===
+
+The restore command is an asynchronous call. Once the restore is complete, the data reflected will be that of the restored backup index.
+
+Only one restore call can be made against a core at any point in time. While a restore operation is in progress, subsequent restore calls will throw an exception.
+
+=== Restore Status API
+
+You can also check the status of a restore operation by sending the `restorestatus` command to the `/replication` handler, as in this example:
+
+.Status API Example
+[source,text]
+----
+http://localhost:8983/solr/gettingstarted/replication?command=restorestatus
+----
+
+.Status API Output
+[source,xml]
+----
+<response>
+  <lst name="responseHeader">
+    <int name="status">0</int>
+    <int name="QTime">0</int>
+  </lst>
+  <lst name="restorestatus">
+    <str name="snapshotName">snapshot.<name></str>
+    <str name="status">success</str>
+  </lst>
+</response>
+----
+
+The status value can be "In Progress", "success" or "failed". If it failed then an "exception" will also be sent in the response.
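A client can extract that status value from the XML response with any XML parser. The sketch below (standard-library Python, for illustration; the snapshot name is a made-up placeholder so the sample is well-formed XML):

```python
import xml.etree.ElementTree as ET

# Sample restorestatus response, modeled on the output shown above.
response = """<response>
  <lst name="responseHeader"><int name="status">0</int></lst>
  <lst name="restorestatus">
    <str name="snapshotName">snapshot.my_backup</str>
    <str name="status">success</str>
  </lst>
</response>"""

def restore_status(xml_text):
    """Return the restore status string, or None if absent."""
    root = ET.fromstring(xml_text)
    node = root.find("./lst[@name='restorestatus']/str[@name='status']")
    return node.text if node is not None else None

print(restore_status(response))  # → success
```

Polling this value until it leaves "In Progress" is a simple way to wait for an asynchronous restore to finish.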
+
+=== Create Snapshot API
+
+The snapshot functionality is different from the backup functionality as the index files aren't copied anywhere. The index files are snapshotted in the same index directory and can be referenced while taking backups.
+
+You can trigger a snapshot command with an HTTP command like this (replace "techproducts" with the name of the core you are working with):
+
+.Create Snapshot API Example
+[source,text]
+----
+http://localhost:8983/solr/admin/cores?action=CREATESNAPSHOT&core=techproducts&commitName=commit1
+----
+
+The list snapshot request parameters are:
+
+// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+
+[cols="30,70",options="header"]
+|===
+|Parameter |Description
+|commitName |Specify the commit name to store the snapshot as
+|core |name of the core to perform the snapshot on
+|async |Request ID to track this action which will be processed asynchronously
+|===
+
+=== List Snapshot API
+
+The list snapshot functionality lists all the taken snapshots for a particular core.
+
+You can trigger a list snapshot command with an HTTP command like this (replace "techproducts" with the name of the core you are working with):
+
+.List Snapshot API
+[source,text]
+----
+http://localhost:8983/solr/admin/cores?action=LISTSNAPSHOTS&core=techproducts&commitName=commit1
+----
+
+The list snapshot request parameters are:
+
+// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+
+[cols="30,70",options="header"]
+|===
+|Parameter |Description
+|core |name of the core whose snapshots we want to list
+|async |Request ID to track this action which will be processed asynchronously
+|===
+
+=== Delete Snapshot API
+
+The delete snapshot functionality deletes a particular snapshot for a particular core.
+
+You can trigger a delete snapshot command with an HTTP command like this (replace "techproducts" with the name of the core you are working with):
+
+.Delete Snapshot API Example
+[source,text]
+----
+http://localhost:8983/solr/admin/cores?action=DELETESNAPSHOT&core=techproducts&commitName=commit1
+----
+
+The delete snapshot request parameters are:
+
+[width="100%",options="header",]
+|===
+|Parameter |Description
+|commitName |Specify the commit name to be deleted
+|core |name of the core whose snapshot we want to delete
+|async |Request ID to track this action which will be processed asynchronously
+|===
+
+== Backup/Restore Storage Repositories
+
+Solr provides interfaces to plug different storage systems for backing up and restoring. For example, you can have a Solr cluster running on a local filesystem like EXT3 but back up the indexes to an HDFS filesystem, or vice versa.
+
+The repository interfaces need to be configured in the `solr.xml` file. While running backup/restore commands, you can specify the repository to be used.
+
+If no repository is configured then the local filesystem repository will be used automatically.
+
+Example solr.xml section to configure a repository like <<running-solr-on-hdfs.adoc#running-solr-on-hdfs,HDFS>>:
+
+[source,xml]
+----
+<backup>
+  <repository name="hdfs" class="org.apache.solr.core.backup.repository.HdfsBackupRepository" default="false">
+    <str name="location">${solr.hdfs.default.backup.path}</str>
+    <str name="solr.hdfs.home">${solr.hdfs.home:}</str>
+    <str name="solr.hdfs.confdir">${solr.hdfs.confdir:}</str>
+  </repository>
+</backup>
+----

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/ccbc93b8/solr/solr-ref-guide/src/managed-resources.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/managed-resources.adoc b/solr/solr-ref-guide/src/managed-resources.adoc
new file mode 100644
index 0000000..7e91ee3
--- /dev/null
+++ b/solr/solr-ref-guide/src/managed-resources.adoc
@@ -0,0 +1,280 @@
+= Managed Resources
+:page-shortname: managed-resources
+:page-permalink: managed-resources.html
+
+Managed resources expose a REST API endpoint for performing Create-Read-Update-Delete (CRUD) operations on a Solr object.
+
+Any long-lived Solr object that has configuration settings and/or data is a good candidate to be a managed resource. Managed resources complement other programmatically manageable components in Solr, such as the RESTful schema API to add fields to a managed schema.
+
+Consider a Web-based UI that offers Solr-as-a-Service where users need to configure a set of stop words and synonym mappings as part of an initial setup process for their search application. This type of use case can easily be supported using the Managed Stop Filter & Managed Synonym Filter Factories provided by Solr, via the Managed resources REST API.
+
+Users can also write their own custom plugins that leverage the same internal hooks to make additional resources REST-managed.
+
+All of the examples in this section assume you are running the "techproducts" Solr example:
+
+[source,bash]
+----
+bin/solr -e techproducts
+----
+
+[[ManagedResources-Overview]]
+== Overview
+
+Let's begin learning about managed resources by looking at a couple of examples provided by Solr for managing stop words and synonyms using a REST API. After reading this section, you'll be ready to dig into the details of how managed resources are implemented in Solr so you can start building your own implementation.
+
+[[ManagedResources-Stopwords]]
+=== Stop Words
+
+To begin, you need to define a field type that uses the <<filter-descriptions.adoc#FilterDescriptions-ManagedStopFilter,ManagedStopFilterFactory>>, such as:
+
+[source,xml,subs="verbatim,callouts"]
+----
+<fieldType name="managed_en" positionIncrementGap="100">
+  <analyzer>
+    <tokenizer class="solr.StandardTokenizerFactory"/>
+    <filter class="solr.ManagedStopFilterFactory" <!--1-->
+            managed="english" /> <!--2-->
+  </analyzer>
+</fieldType>
+----
+
+There are two important things to notice about this field type definition:
+
+<1> The filter implementation class is `solr.ManagedStopFilterFactory`. This is a special implementation of the <<filter-descriptions.adoc#FilterDescriptions-StopFilter,StopFilterFactory>> that uses a set of stop words that are managed from a REST API.
+
+<2> The `managed="english"` attribute gives a name to the set of managed stop words, in this case indicating the stop words are for English text.
+
+The REST endpoint for managing the English stop words in the techproducts collection is: `/solr/techproducts/schema/analysis/stopwords/english`.
+
+The example resource path should be mostly self-explanatory. It should be noted that the ManagedStopFilterFactory implementation determines the `/schema/analysis/stopwords` part of the path, which makes sense because this is an analysis component defined by the schema.
+
+It follows that a field type that uses the following filter:
+
+[source,xml]
+----
+<filter class="solr.ManagedStopFilterFactory"
+        managed="french" />
+----
+
+would resolve to path: `/solr/techproducts/schema/analysis/stopwords/french`.
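The path convention can be expressed as a trivial helper (illustrative Python only; the function name is made up, not a Solr API):

```python
def stopwords_endpoint(collection, managed):
    """Resolve the REST path for a ManagedStopFilterFactory resource.

    The `/schema/analysis/stopwords` segment is fixed by the filter
    implementation; `managed` is the name from the filter's attribute.
    """
    return f"/solr/{collection}/schema/analysis/stopwords/{managed}"

print(stopwords_endpoint("techproducts", "english"))
# → /solr/techproducts/schema/analysis/stopwords/english
print(stopwords_endpoint("techproducts", "french"))
# → /solr/techproducts/schema/analysis/stopwords/french
```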
+
+So now let’s see this API in action, starting with a simple GET request:
+
+[source,bash]
+----
+curl "http://localhost:8983/solr/techproducts/schema/analysis/stopwords/english"
+----
+
+Assuming you sent this request to Solr, the response body is a JSON document:
+
+[source,json]
+----
+{
+  "responseHeader":{
+    "status":0,
+    "QTime":1
+  },
+  "wordSet":{
+    "initArgs":{"ignoreCase":true},
+    "initializedOn":"2014-03-28T20:53:53.058Z",
+    "managedList":[
+      "a",
+      "an",
+      "and",
+      "are",
+       ]
+  }
+}
+----
+
+The `sample_techproducts_configs` <<config-sets.adoc#config-sets,configset>> ships with a pre-built set of managed stop words; however, you should only interact with this file using the API and not edit it directly.
+
+One thing that should stand out to you in this response is that it contains a `managedList` of words as well as `initArgs`. This is an important concept in this framework -- managed resources typically have configuration and data. For stop words, the only configuration parameter is a boolean that determines whether to ignore the case of tokens during stop word filtering (ignoreCase=true|false). The data is a list of words, which is represented as a JSON array named `managedList` in the response.
+
+Now, let’s add a new word to the English stop word list using an HTTP PUT:
+
+[source,bash]
+----
+curl -X PUT -H 'Content-type:application/json' --data-binary '["foo"]' "http://localhost:8983/solr/techproducts/schema/analysis/stopwords/english"
+----
+
+Here we’re using cURL to PUT a JSON list containing a single word “foo” to the managed English stop words set. Solr will return 200 if the request was successful. You can also put multiple words in a single PUT request.
+
+You can test to see if a specific word exists by sending a GET request for that word as a child resource of the set, such as:
+
+[source,bash]
+----
+curl "http://localhost:8983/solr/techproducts/schema/analysis/stopwords/english/foo"
+----
+
+This request will return a status code of 200 if the child resource (foo) exists, or 404 if it does not exist in the managed list.
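+
+If you only need the status code, curl can print it directly:
+
+[source,bash]
+----
+curl -s -o /dev/null -w "%{http_code}\n" "http://localhost:8983/solr/techproducts/schema/analysis/stopwords/english/foo"
+----
+
+This prints `200` if “foo” is in the managed list, or `404` if it is not.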
+
+To delete a stop word, you would do:
+
+[source,bash]
+----
+curl -X DELETE "http://localhost:8983/solr/techproducts/schema/analysis/stopwords/english/foo"
+----
+
+NOTE: PUT/POST is used to add terms to an existing list rather than to replace the list entirely. Adding a term to an existing list is more common than replacing the whole list, so the API favors incremental additions, especially since deleting individual terms is also supported.
+
+[[ManagedResources-Synonyms]]
+=== Synonyms
+
+For the most part, the API for managing synonyms behaves similarly to the API for stop words, except that instead of working with a list of words, it uses a map in which the value for each entry is a set of synonyms for a term. As with stop words, the `sample_techproducts_configs` <<config-sets.adoc#config-sets,configset>> includes a pre-built set of synonym mappings suitable for the sample data, activated by the following field type definition in schema.xml:
+
+[source,xml]
+----
+<fieldType name="managed_en" positionIncrementGap="100">
+  <analyzer>
+    <tokenizer class="solr.StandardTokenizerFactory"/>
+    <filter class="solr.ManagedStopFilterFactory"
+            managed="english" />
+
+    <filter class="solr.ManagedSynonymFilterFactory"
+            managed="english" />
+
+  </analyzer>
+</fieldType>
+----
+
+To get the map of managed synonyms, send a GET request to:
+
+[source,bash]
+----
+curl "http://localhost:8983/solr/techproducts/schema/analysis/synonyms/english"
+----
+
+This request will return a response that looks like:
+
+[source,json]
+----
+{
+  "responseHeader":{
+    "status":0,
+    "QTime":3},
+  "synonymMappings":{
+    "initArgs":{
+      "ignoreCase":true,
+      "format":"solr"},
+    "initializedOn":"2014-12-16T22:44:05.33Z",
+    "managedMap":{
+      "GB":
+        ["GiB",
+         "Gigabyte"],
+      "TV":
+        ["Television"],
+      "happy":
+        ["glad",
+         "joyful"]}}}
+----
+
+Managed synonyms are returned under the `managedMap` property, which contains a JSON map in which the value of each entry is the set of synonyms for a term. In the example above, "happy" has the synonyms "glad" and "joyful".
+
+To add a new synonym mapping, you can PUT/POST a single mapping such as:
+
+[source,bash]
+----
+curl -X PUT -H 'Content-type:application/json' --data-binary '{"mad":["angry","upset"]}' "http://localhost:8983/solr/techproducts/schema/analysis/synonyms/english"
+----
+
+The API will return status code 200 if the PUT request was successful. To determine the synonyms for a specific term, send a GET request for the child resource; for example, a GET request for `/schema/analysis/synonyms/english/mad` would return `["angry","upset"]`.
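+
+As a complete curl command, the child-resource lookup for the mapping added above looks like this:
+
+[source,bash]
+----
+curl "http://localhost:8983/solr/techproducts/schema/analysis/synonyms/english/mad"
+----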
+
+You can also PUT a list of symmetric synonyms, which will be expanded into a mapping for each term in the list. For example, you could PUT the following list of symmetric synonyms using the JSON list syntax instead of a map:
+
+[source,bash]
+----
+curl -X PUT -H 'Content-type:application/json' --data-binary '["funny", "entertaining", "whimsical", "jocular"]' "http://localhost:8983/solr/techproducts/schema/analysis/synonyms/english"
+----
+
+Note that the expansion is performed while processing the PUT request, so the underlying persistent state is still a managed map. Consequently, if after sending the previous PUT request you did a GET for `/schema/analysis/synonyms/english/jocular`, you would receive a list containing `["funny", "entertaining", "whimsical"]`. Once you've created synonym mappings using a list, each term must be managed separately.
+
+Lastly, you can delete a mapping by sending a DELETE request to the managed endpoint.
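+
+For example, a DELETE request for a term's child resource removes that term's mapping:
+
+[source,bash]
+----
+curl -X DELETE "http://localhost:8983/solr/techproducts/schema/analysis/synonyms/english/mad"
+----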
+
+[[ManagedResources-ApplyingChanges]]
+== Applying Changes
+
+Changes made to managed resources via this REST API are not applied to the active Solr components until the Solr collection (or Solr core in single server mode) is reloaded.
+
+For example, after adding or deleting a stop word you must reload the core/collection before the change becomes active; see the <<coreadmin-api.adoc#coreadmin-api,CoreAdmin API>> and <<collections-api.adoc#collections-api,Collections API>>.
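+
+Concretely, in SolrCloud mode you can reload the collection with the Collections API, while in standalone mode you reload the core with the CoreAdmin API:
+
+[source,bash]
+----
+curl "http://localhost:8983/solr/admin/collections?action=RELOAD&name=techproducts"
+
+curl "http://localhost:8983/solr/admin/cores?action=RELOAD&core=techproducts"
+----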
+
+This approach is required in distributed mode to ensure that changes are applied to all cores in a collection at the same time, so that behavior is consistent and predictable; you clearly don’t want one replica working with a different set of stop words or synonyms than the others.
+
+One subtle outcome of this _apply-changes-at-reload_ approach is that once you make changes with the API, there is no way to read the currently active data. In other words, the API always returns the most up-to-date data from the API's perspective, which could differ from what the Solr components are actually using.
+
+However, the intent is that changes will be applied via a reload shortly after they are made, so the window during which the data returned by the API differs from what is active in the server should be negligible.
+
+[IMPORTANT]
+====
+Changing things like stop words and synonym mappings typically requires re-indexing existing documents if they are used by index-time analyzers. The RestManager framework does not guard you against this; it simply makes it possible to programmatically build up a set of stop words, synonyms, etc.
+====
+
+[[ManagedResources-RestManagerEndpoint]]
+== RestManager Endpoint
+
+Metadata about registered ManagedResources is available using the `/schema/managed` endpoint for each collection.
+
+Assuming you have the `managed_en` field type shown above defined in your schema.xml, sending a GET request to the following resource will return metadata about which schema-related resources are being managed by the RestManager:
+
+[source,bash]
+----
+curl "http://localhost:8983/solr/techproducts/schema/managed"
+----
+
+The response body is a JSON document containing metadata about managed resources under the /schema root:
+
+[source,json]
+----
+{
+  "responseHeader":{
+    "status":0,
+    "QTime":3
+  },
+  "managedResources":[
+    {
+      "resourceId":"/schema/analysis/stopwords/english",
+      "class":"org.apache.solr.rest.schema.analysis.ManagedWordSetResource",
+      "numObservers":"1"
+    },
+    {
+      "resourceId":"/schema/analysis/synonyms/english",
+      "class":"org.apache.solr.rest.schema.analysis.ManagedSynonymFilterFactory$SynonymManager",
+      "numObservers":"1"
+    }
+  ]
+}
+----
+
+You can also create a new managed resource using a PUT/POST request to the appropriate URL, before ever configuring anything that uses the resource.
+
+For example, imagine we want to build up a set of German stop words. Before we can start adding stop words, we need to create the endpoint:
+
+`/solr/techproducts/schema/analysis/stopwords/german`
+
+To create this endpoint, send the following PUT/POST request to the endpoint we wish to create:
+
+[source,bash]
+----
+curl -X PUT -H 'Content-type:application/json' --data-binary \
+'{"class":"org.apache.solr.rest.schema.analysis.ManagedWordSetResource"}' \
+"http://localhost:8983/solr/techproducts/schema/analysis/stopwords/german"
+----
+
+Solr will respond with status code 200 if the request is successful. Effectively, this action registers a new endpoint for a managed resource in the RestManager. From here you can start adding German stop words as we saw above:
+
+[source,bash]
+----
+curl -X PUT -H 'Content-type:application/json' --data-binary '["die"]' \
+"http://localhost:8983/solr/techproducts/schema/analysis/stopwords/german"
+----
+
+For most users, creating resources in this way should never be necessary, since managed resources are created automatically when configured.
+
+However, you may want to explicitly delete a managed resource if it is no longer used by any Solr component.
+
+For instance, the managed resource for German stop words that we created above can be deleted because no Solr component is using it, whereas the managed resource for English stop words cannot be deleted because a token filter declared in schema.xml is using it. To delete the German resource:
+
+[source,bash]
+----
+curl -X DELETE "http://localhost:8983/solr/techproducts/schema/analysis/stopwords/german"
+----

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/ccbc93b8/solr/solr-ref-guide/src/managing-solr.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/managing-solr.adoc b/solr/solr-ref-guide/src/managing-solr.adoc
new file mode 100644
index 0000000..ad13113
--- /dev/null
+++ b/solr/solr-ref-guide/src/managing-solr.adoc
@@ -0,0 +1,26 @@
+= Managing Solr
+:page-shortname: managing-solr
+:page-permalink: managing-solr.html
+:page-children: taking-solr-to-production, securing-solr, running-solr-on-hdfs, making-and-restoring-backups, configuring-logging, using-jmx-with-solr, mbean-request-handler, performance-statistics-reference, metrics-reporting, v2-api
+
+This section describes how to run Solr and how to look at Solr when it is running. It contains the following sections:
+
+<<taking-solr-to-production.adoc#taking-solr-to-production,Taking Solr to Production>>: Describes how to install Solr as a service on Linux for production environments.
+
+<<securing-solr.adoc#securing-solr,Securing Solr>>: How to use the Basic and Kerberos authentication and rule-based authorization plugins for Solr, and how to enable SSL.
+
+<<running-solr-on-hdfs.adoc#running-solr-on-hdfs,Running Solr on HDFS>>: How to use HDFS to store your Solr indexes and transaction logs.
+
+<<making-and-restoring-backups.adoc#making-and-restoring-backups,Making and Restoring Backups>>: Describes backup strategies for your Solr indexes.
+
+<<configuring-logging.adoc#configuring-logging,Configuring Logging>>: Describes how to configure logging for Solr.
+
+<<using-jmx-with-solr.adoc#using-jmx-with-solr,Using JMX with Solr>>: Describes how to use Java Management Extensions with Solr.
+
+<<mbean-request-handler.adoc#mbean-request-handler,MBean Request Handler>>: How to use Solr's MBeans for programmatic access to the system plugins and stats.
+
+<<performance-statistics-reference.adoc#performance-statistics-reference,Performance Statistics Reference>>: Additional information on statistics returned from JMX.
+
+<<metrics-reporting.adoc#metrics-reporting,Metrics Reporting>>: Details of Solr's metrics registries and Metrics API.
+
+<<v2-api.adoc#v2-api,v2 API>>: Describes a redesigned API framework covering most existing Solr APIs.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/ccbc93b8/solr/solr-ref-guide/src/mbean-request-handler.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/mbean-request-handler.adoc b/solr/solr-ref-guide/src/mbean-request-handler.adoc
new file mode 100644
index 0000000..0582b82
--- /dev/null
+++ b/solr/solr-ref-guide/src/mbean-request-handler.adoc
@@ -0,0 +1,44 @@
+= MBean Request Handler
+:page-shortname: mbean-request-handler
+:page-permalink: mbean-request-handler.html
+
+The MBean Request Handler offers programmatic access to the information provided on the <<plugins-stats-screen.adoc#plugins-stats-screen,Plugin/Stats>> page of the Admin UI.
+
+The MBean Request Handler accepts the following parameters:
+
+// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+
+[cols="10,20,10,60",options="header"]
+|===
+|Parameter |Type |Default |Description
+|key |multivalued |all |Restricts results by object key.
+|cat |multivalued |all |Restricts results by category name.
+|stats |boolean |false |Specifies whether statistics are returned with results. You can override the `stats` parameter on a per-field basis.
+|wt |multivalued |xml |The output format. This operates the same as the <<response-writers.adoc#response-writers,`wt` parameter in a query>>.
+|===
+
+[[MBeanRequestHandler-Examples]]
+== Examples
+
+The following examples assume you are running Solr's `techproducts` example configuration:
+
+[source,bash]
+----
+bin/solr start -e techproducts
+----
+
+To return information about the CACHE category only:
+
+`\http://localhost:8983/solr/techproducts/admin/mbeans?cat=CACHE`
+
+To return information and statistics about the CACHE category only, formatted in JSON:
+
+`\http://localhost:8983/solr/techproducts/admin/mbeans?stats=true&cat=CACHE&indent=true&wt=json`
+
+To return information for everything, and statistics for everything except the `fieldCache`:
+
+`\http://localhost:8983/solr/techproducts/admin/mbeans?stats=true&f.fieldCache.stats=false`
+
+To return information and statistics for the `fieldCache` only:
+
+`\http://localhost:8983/solr/techproducts/admin/mbeans?key=fieldCache&stats=true`

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/ccbc93b8/solr/solr-ref-guide/src/merging-indexes.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/merging-indexes.adoc b/solr/solr-ref-guide/src/merging-indexes.adoc
new file mode 100644
index 0000000..bf554aa
--- /dev/null
+++ b/solr/solr-ref-guide/src/merging-indexes.adoc
@@ -0,0 +1,35 @@
+= Merging Indexes
+:page-shortname: merging-indexes
+:page-permalink: merging-indexes.html
+
+If you need to combine indexes from two different projects or from multiple servers previously used in a distributed configuration, you can use either the IndexMergeTool included in `lucene-misc` or the `CoreAdminHandler`.
+
+To merge indexes, they must meet these requirements:
+
+* The two indexes must be compatible: their schemas should include the same fields and they should analyze fields the same way.
+* The indexes must not include duplicate data.
+
+Optimally, the two indexes should be built using the same schema.
+
+[[MergingIndexes-UsingIndexMergeTool]]
+== Using `IndexMergeTool`
+
+To merge the indexes, do the following:
+
+. Make sure that both indexes you want to merge are closed.
+. Issue this command:
++
+[source,bash]
+----
+java -cp $SOLR/server/solr-webapp/webapp/WEB-INF/lib/lucene-core-VERSION.jar:$SOLR/server/solr-webapp/webapp/WEB-INF/lib/lucene-misc-VERSION.jar org.apache.lucene.misc.IndexMergeTool /path/to/newindex /path/to/old/index1 /path/to/old/index2
+----
++
+This will create a new index at `/path/to/newindex` that contains both index1 and index2.
+. Copy this new directory to the location of your application's solr index (move the old one aside first, of course) and start Solr.
+
+[[MergingIndexes-UsingCoreAdmin]]
+== Using CoreAdmin
+
+The `MERGEINDEXES` command of the <<coreadmin-api.adoc#CoreAdminAPI-MERGEINDEXES,CoreAdminHandler>> can be used to merge indexes into a new core – either from one or more arbitrary `indexDir` directories or by merging from one or more existing `srcCore` core names.
+
+See the <<coreadmin-api.adoc#CoreAdminAPI-MERGEINDEXES,CoreAdminHandler>> section for details.
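+
+As a sketch (the core name and index paths below are placeholders), a merge from two index directories into an existing target core might look like:
+
+[source,bash]
+----
+curl "http://localhost:8983/solr/admin/cores?action=MERGEINDEXES&core=core0&indexDir=/opt/solr/core1/data/index&indexDir=/opt/solr/core2/data/index"
+----
+
+Alternatively, you can merge from existing cores by name by passing one or more `srcCore` parameters instead of `indexDir`.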