Posted to commits@ignite.apache.org by ab...@apache.org on 2020/07/31 16:15:53 UTC

[ignite] branch IGNITE-7595 updated: remove top level guides and add sql reference pages

This is an automated email from the ASF dual-hosted git repository.

abudnikov pushed a commit to branch IGNITE-7595
in repository https://gitbox.apache.org/repos/asf/ignite.git


The following commit(s) were added to refs/heads/IGNITE-7595 by this push:
     new ab6019a  remove top level guides and add sql reference pages
ab6019a is described below

commit ab6019a00ac43726890b1268c9126c36e021a65c
Author: abudnikov <ab...@gridgain.com>
AuthorDate: Fri Jul 31 19:14:53 2020 +0300

    remove top level guides and add sql reference pages
---
 docs/_data/toc.yaml                                | 535 +++++-------
 .../SQL/JDBC/error-codes.adoc                      |   2 +-
 .../SQL/JDBC/jdbc-client-driver.adoc               |   4 +-
 .../SQL/JDBC/jdbc-driver.adoc                      |  28 +-
 .../SQL/ODBC/connection-string-dsn.adoc            |   6 +-
 .../SQL/ODBC/data-types.adoc                       |   0
 .../SQL/ODBC/error-codes.adoc                      |   0
 .../SQL/ODBC/odbc-driver.adoc                      |   2 +-
 .../SQL/ODBC/querying-modifying-data.adoc          |   6 +-
 .../SQL/ODBC/specification.adoc                    |   0
 .../SQL/custom-sql-func.adoc                       |   0
 .../SQL/distributed-joins.adoc                     |   4 +-
 docs/_docs/{developers-guide => }/SQL/indexes.adoc |  45 +-
 docs/_docs/{developers-guide => }/SQL/schemas.adoc |   4 +-
 docs/_docs/{developers-guide => }/SQL/sql-api.adoc |   4 +-
 .../SQL/sql-introduction.adoc                      |  10 +-
 .../SQL/sql-transactions.adoc                      |   2 +-
 docs/_docs/administrators-guide/index.adoc         |   6 -
 docs/_docs/administrators-guide/introduction.adoc  |  18 -
 .../{developers-guide => }/baseline-topology.adoc  |  29 +-
 docs/_docs/clustering/clustering.adoc              |  37 +
 .../clustering/connect-client-nodes.adoc           |   2 +-
 .../clustering/discovery-in-the-cloud.adoc         |  18 +-
 docs/_docs/clustering/network-configuration.adoc   | 171 ++++
 .../clustering/tcp-ip-discovery.adoc               | 119 +--
 .../clustering/zookeeper-discovery.adoc            |   2 +-
 .../_docs/code-snippets/dotnet/BaselineTopology.cs |  33 +
 docs/_docs/code-snippets/dotnet/DefiningIndexes.cs |  98 ++-
 .../apache/ignite/snippets/ClusteringOverview.java |   5 +-
 .../ignite/snippets/NetworkConfiguration.java      |  36 +
 .../code-snippets/xml/discovery-multicast.xml      |  20 +
 .../xml/discovery-static-and-multicast.xml         |  29 +
 docs/_docs/code-snippets/xml/discovery-static.xml  |  32 +
 .../code-snippets/xml/network-configuration.xml    |  30 +
 .../collocated-computations.adoc                   |  10 +-
 .../configuring-caches/atomicity-modes.adoc        |  26 +-
 .../configuring-caches/cache-groups.adoc           |   2 +-
 .../configuring-caches/configuration-overview.adoc |  15 +-
 .../configuring-caches/configuring-backups.adoc    |   2 +-
 .../configuring-caches/expiry-policies.adoc        |   0
 .../configuring-caches/on-heap-caching.adoc        |   2 +-
 .../{administrators-guide => }/control-script.adoc |  10 +-
 .../data-modeling/affinity-collocation.adoc        |  18 +-
 .../data-modeling/data-modeling.adoc               |  12 +-
 .../data-modeling/data-partitioning.adoc           |  48 +-
 .../{developers-guide => }/data-rebalancing.adoc   |  12 +-
 .../{developers-guide => }/data-streaming.adoc     |   8 +-
 .../developers-guide/clustering/clustering.adoc    |  95 --
 .../distributed-computing/cluster-groups.adoc      |   0
 .../distributed-computing.adoc                     |  12 +-
 .../distributed-computing/executor-service.adoc    |   4 +-
 .../distributed-computing/fault-tolerance.adoc     |   6 +-
 .../distributed-computing/job-scheduling.adoc      |   8 +-
 .../distributed-computing/load-balancing.adoc      |  18 +-
 .../distributed-computing/map-reduce.adoc          |   8 +-
 .../{developers-guide => }/events/events.adoc      |  49 +-
 .../events/listening-to-events.adoc                |   6 +-
 docs/_docs/images/collocated_joins.png             | Bin 110276 -> 174755 bytes
 docs/_docs/images/data_streaming.png               | Bin 0 -> 159011 bytes
 docs/_docs/images/ignite_clustering.png            | Bin 0 -> 117282 bytes
 docs/_docs/images/jconsole.png                     | Bin 0 -> 97939 bytes
 docs/_docs/images/network_segmentation.png         | Bin 0 -> 37812 bytes
 docs/_docs/images/non_collocated_joins.png         | Bin 0 -> 190860 bytes
 docs/_docs/images/partitioned_cache.png            | Bin 0 -> 183181 bytes
 docs/_docs/images/replicated_cache.png             | Bin 0 -> 181143 bytes
 docs/_docs/images/segmentation_resolved.png        | Bin 0 -> 41915 bytes
 docs/_docs/images/split_brain.png                  | Bin 0 -> 15844 bytes
 docs/_docs/images/split_brain_resolved.png         | Bin 0 -> 15887 bytes
 docs/_docs/images/zookeeper.png                    | Bin 0 -> 139311 bytes
 docs/_docs/images/zookeeper_split.png              | Bin 0 -> 56004 bytes
 docs/_docs/includes/installggqsg.adoc              |   2 +-
 docs/_docs/includes/note-on-deactivation.adoc      |   2 +-
 docs/_docs/includes/thick-and-thin-clients.adoc    |   2 +-
 docs/_docs/installation-guide/index.adoc           |   6 -
 .../deb-rpm.adoc                                   |   0
 .../{developers-guide => installation}/index.adoc  |   2 +-
 .../installing-using-docker.adoc                   |   6 +-
 .../installing-using-zip.adoc                      |   0
 .../kubernetes/amazon-eks-deployment.adoc          |   6 +-
 .../kubernetes/azure-deployment.adoc               |   6 +-
 .../kubernetes/generic-configuration.adoc          |  19 +-
 .../kubernetes/gke-deployment.adoc                 |   6 +-
 .../key-value-api/basic-cache-operations.adoc      |   6 +-
 .../key-value-api/binary-objects.adoc              |  18 +-
 .../key-value-api/continuous-queries.adoc          |   6 +-
 .../key-value-api/transactions.adoc                |   6 +-
 .../key-value-api/using-scan-queries.adoc          |   4 +-
 .../key-value-api/with-expiry-policy.adoc          |   0
 docs/_docs/{developers-guide => }/logging.adoc     |  10 +-
 .../memory-architecture.adoc                       |   8 +-
 .../memory-configuration/data-regions.adoc         |   6 +-
 .../memory-configuration/eviction-policies.adoc    |  16 +-
 .../memory-configuration/index.adoc                |   0
 .../monitoring-metrics/configuring-metrics.adoc    |   4 +-
 .../monitoring-metrics/intro.adoc                  |  27 +-
 .../monitoring-metrics/metrics.adoc                |  24 +-
 .../monitoring-metrics/system-views.adoc           |  36 +-
 docs/_docs/{developers-guide => }/near-cache.adoc  |  12 +-
 .../partition-loss-policy.adoc                     |   6 +-
 .../{developers-guide => }/peer-class-loading.adoc |  24 +-
 .../persistence/custom-cache-store.adoc            |   6 +-
 .../persistence/external-storage.adoc              |  16 +-
 .../persistence/native-persistence.adoc            |  34 +-
 .../{developers-guide => }/persistence/swap.adoc   |  22 +-
 docs/_docs/{developers-guide => }/preface.adoc     |   0
 docs/_docs/{developers-guide => }/restapi.adoc     |  14 +-
 .../security/authentication.adoc                   |  10 +-
 .../{administrators-guide => }/security/index.adoc |   0
 .../security/ssl-tls.adoc                          |   2 +-
 .../{administrators-guide => }/security/tde.adoc   |   2 +-
 docs/_docs/{developers-guide => }/setup.adoc       |   2 +-
 docs/_docs/sql-reference/aggregate-functions.adoc  | 383 ++++++++
 docs/_docs/sql-reference/data-types.adoc           | 168 ++++
 docs/_docs/sql-reference/date-time-functions.adoc  | 385 ++++++++
 docs/_docs/sql-reference/ddl.adoc                  | 506 +++++++++++
 docs/_docs/sql-reference/dml.adoc                  | 349 ++++++++
 .../security => sql-reference}/index.adoc          |   2 +-
 docs/_docs/sql-reference/numeric-functions.adoc    | 967 +++++++++++++++++++++
 docs/_docs/sql-reference/operational-commands.adoc | 115 +++
 docs/_docs/sql-reference/sql-conformance.adoc      | 457 ++++++++++
 docs/_docs/sql-reference/string-functions.adoc     | 928 ++++++++++++++++++++
 docs/_docs/sql-reference/system-functions.adoc     | 211 +++++
 docs/_docs/sql-reference/transactions.adoc         |  52 ++
 .../{developers-guide => }/starting-nodes.adoc     |  10 +-
 .../thin-client-comparison.csv                     |   0
 .../thin-clients/cpp-thin-client.adoc              |   4 +-
 .../thin-clients/dotnet-thin-client.adoc           |   6 +-
 .../getting-started-with-thin-clients.adoc         |  24 +-
 .../thin-clients/java-thin-client.adoc             |  16 +-
 .../thin-clients/nodejs-thin-client.adoc           |   8 +-
 .../thin-clients/php-thin-client.adoc              |   6 +-
 .../thin-clients/python-thin-client.adoc           |  20 +-
 .../{developers-guide => }/transactions/mvcc.adoc  |  16 +-
 .../understanding-configuration.adoc               |   5 +-
 docs/_plugins/asciidoctor-extensions.rb            |  26 +
 135 files changed, 5717 insertions(+), 1033 deletions(-)

diff --git a/docs/_data/toc.yaml b/docs/_data/toc.yaml
index 1228759..2665b18 100644
--- a/docs/_data/toc.yaml
+++ b/docs/_data/toc.yaml
@@ -1,291 +1,213 @@
-#- title: Quick Start Guide
-#  items:
-#  - title: Quick Start Overview
-#    url: /getting-started/quickstart
-#  - title: Java
-#    url: /getting-started/quick-start/java
-#  - title: .NET/C#
-#    url: /getting-started/quick-start/dotnet
-#  - title: C++
-#    url: /getting-started/quick-start/cpp
-#  - title: Python
-#    url: /getting-started/quick-start/python
-#  - title: Node.JS
-#    url: /getting-started/quick-start/nodejs
-#  - title: SQL
-#    url: /getting-started/quick-start/sql
-#  - title: PHP
-#    url: /getting-started/quick-start/php
-#  - title: REST API
-#    url: /getting-started/quick-start/restapi
-- title: Installation and Upgrade
-  url: /installation-guide
+- title: Preface 
+  url: /preface
+- title: Installation
+  url: /installation
   items:
-#    - title: Deployment Modes
-#      url: /installation-guide/deployment-modes
-    - title: Installing Using ZIP Archive 
-      url: /installation-guide/installing-using-zip
-    - title: Installing Using Docker
-      url: /installation-guide/installing-using-docker
-    - title:  Installing DEB or RPM package
-      url: /installation-guide/dem-rpm
-    - title: Kubernetes
-      items: 
-        - title: Amazon EKS 
-          url: /installation-guide/kubernetes/amazon-eks-deployment
-        - title: Azure Kubernetes Service 
-          url: /installation-guide/kubernetes/azure-deployment
-        - title: Google Kubernetes Engine
-          url: /installation-guide/kubernetes/gke-deployment
-#    - title: AWS
-#      items: 
-#        - title: Manual Install on Amazon EC2 
-#          url:  /installation-guide/aws/manual-install-on-ec2
-#        - title: Amazon EKS Deployment
-#          url: /installation-guide/kubernetes/amazon-eks-deployment
-#    - title: Microsoft Azure
-#      items:
-#        - title: Azure Kubernetes Service Deployment
-#          url: /installation-guide/kubernetes/azure-deployment
-#    - title: Google Kubernetes Engine
-#      url: /installation-guide/kubernetes/gke-deployment
-- title: Developer's Guide
-  url: /developers-guide
+  - title: Installing Using ZIP Archive 
+    url: /installation/installing-using-zip
+  - title: Installing Using Docker
+    url: /installation/installing-using-docker
+  - title:  Installing DEB or RPM package
+    url: /installation/dem-rpm
+  - title: Kubernetes
+    items: 
+      - title: Amazon EKS 
+        url: /installation/kubernetes/amazon-eks-deployment
+      - title: Azure Kubernetes Service 
+        url: /installation/kubernetes/azure-deployment
+      - title: Google Kubernetes Engine
+        url: /installation/kubernetes/gke-deployment
+- title: Setting Up
+  url: /setup
+- title: Understanding Configuration
+  url: /understanding-configuration
+- title: Configuring Logging
+  url: /logging
+- title: Starting and Stopping Nodes
+  url: /starting-nodes
+- title: Clustering
+  items:
+    - title: Overview
+      url: /clustering/clustering
+    - title: TCP/IP Discovery
+      url: /clustering/tcp-ip-discovery
+    - title: ZooKeeper Discovery
+      url: /clustering/zookeeper-discovery
+    - title: Discovery in the Cloud
+      url: /clustering/discovery-in-the-cloud
+    - title: Network Configuration
+      url: /clustering/network-configuration
+    - title: Connecting Client Nodes 
+      url: /clustering/connect-client-nodes
+- title: Data Modeling 
+  items: 
+    - title: Introduction
+      url: /data-modeling/data-modeling
+    - title: Data Partitioning
+      url: /data-modeling/data-partitioning
+    - title: Affinity Colocation 
+      url: /data-modeling/affinity-collocation
+- title: Configuring Memory 
+  items:
+    - title: Memory Architecture
+      url: /memory-architecture
+    - title: Configuring Data Regions
+      url: /memory-configuration/data-regions
+    - title: Eviction Policies
+      url: /memory-configuration/eviction-policies        
+- title: Configuring Caches
+  items:
+    - title: Cache Configuration 
+      url: /configuring-caches/configuration-overview 
+    - title: Configuring Partition Backups
+      url: /configuring-caches/configuring-backups
+    - title: Atomicity Modes
+      url: /configuring-caches/atomicity-modes
+    - title: Expiry Policy
+      url: /configuring-caches/expiry-policies
+    - title: On-Heap Caching
+      url: /configuring-caches/on-heap-caching
+    - title: Cache Groups 
+      url: /configuring-caches/cache-groups
+- title: Persistence
+  items:
+    - title: Ignite Persistence
+      url: /persistence/native-persistence
+    - title: External Storage
+      url: /persistence/external-storage
+    - title: Swapping
+      url: /persistence/swap 
+    - title: Implementing Custom Cache Store
+      url: /persistence/custom-cache-store
+- title: Baseline Topology
+  url: /baseline-topology
+- title: Data Rebalancing
+  url: /data-rebalancing 
+- title: Partition Loss Policy
+  url: /partition-loss-policy
+- title: Peer Class Loading
+  url: /peer-class-loading
+- title: Data Streaming
+  url: /data-streaming
+- title: Using Key-Value Cache API
+  items:
+    - title: Basic Cache Operations 
+      url: /key-value-api/basic-cache-operations
+    - title: Working with Binary Objects
+      url: /key-value-api/binary-objects
+    - title: Using Scan Queries
+      url: /key-value-api/using-scan-queries
+- title: Using Continuous Queries
+  url: /key-value-api/continuous-queries
+- title: Performing Transactions
+  url: /key-value-api/transactions
+- title: Working with SQL
+  items:
+    - title: Introduction
+      url: /SQL/sql-introduction
+    - title: Understanding Schemas
+      url: /SQL/schemas
+    - title: Defining Indexes
+      url: /SQL/indexes          
+    - title: Using SQL API
+      url: /SQL/sql-api          
+    - title: Distributed Joins
+      url: /SQL/distributed-joins
+    - title: SQL Transactions
+      url: /SQL/sql-transactions
+    - title: Custom SQL Functions
+      url: /SQL/custom-sql-func
+    - title: JDBC Driver
+      url: /SQL/JDBC/jdbc-driver
+    - title: JDBC Client Driver
+      url: /SQL/JDBC/jdbc-client-driver
+    - title: Multiversion Concurrency Control
+      url: /transactions/mvcc
+- title: Distributed Computing 
   items:
-    - title: Preface 
-      url: /developers-guide/preface
-    - title: Setting Up
-      url: /developers-guide/setup
-    - title: Understanding Configuration
-      url: /developers-guide/understanding-configuration
-    - title: Configuring Logging
-      url: /developers-guide/logging
-    - title: Starting and Stopping Nodes
-      url: /developers-guide/starting-nodes
-    - title: Clustering
-      items:
-        - title: Overview
-          url: /developers-guide/clustering/clustering
-        - title: TCP/IP Discovery
-          url: /developers-guide/clustering/tcp-ip-discovery
-        - title: ZooKeeper Discovery
-          url: /developers-guide/clustering/zookeeper-discovery
-        - title: Discovery in the Cloud
-          url: /developers-guide/clustering/discovery-in-the-cloud
-        - title: Connecting Client Nodes 
-          url: /developers-guide/clustering/connect-client-nodes
-    - title: Data Modeling 
-      items: 
-        - title: Introduction
-          url: /developers-guide/data-modeling/data-modeling
-        - title: Data Partitioning
-          url: /developers-guide/data-modeling/data-partitioning
-        - title: Affinity Colocation 
-          url: /developers-guide/data-modeling/affinity-collocation
-    - title: Configuring Memory 
-      items:
-        - title: Memory Architecture
-          url: /developers-guide/memory-architecture
-        - title: Configuring Data Regions
-          url: /developers-guide/memory-configuration/data-regions
-        - title: Eviction Policies
-          url: /developers-guide/memory-configuration/eviction-policies        
-    - title: Configuring Caches
-      items:
-        - title: Cache Configuration 
-          url: /developers-guide/configuring-caches/configuration-overview 
-        - title: Configuring Partition Backups
-          url: /developers-guide/configuring-caches/configuring-backups
-        - title: Atomicity Modes
-          url: /developers-guide/configuring-caches/atomicity-modes
-        - title: Expiry Policy
-          url: /developers-guide/configuring-caches/expiry-policies
-        - title: On-Heap Caching
-          url: /developers-guide/configuring-caches/on-heap-caching
-        - title: Cache Groups 
-          url: /developers-guide/configuring-caches/cache-groups
-    - title: Persistence
-      items:
-        - title: Ignite Persistence
-          url: /developers-guide/persistence/native-persistence
-        - title: External Storage
-          url: /developers-guide/persistence/external-storage
-        - title: Swapping
-          url: /developers-guide/persistence/swap 
-        - title: Implementing Custom Cache Store
-          url: /developers-guide/persistence/custom-cache-store
-    - title: Baseline Topology
-      url: /developers-guide/baseline-topology
-    - title: Data Rebalancing
-      url: /developers-guide/data-rebalancing 
-    - title: Partition Loss Policy
-      url: /developers-guide/partition-loss-policy
-    - title: Data Streaming
-      url: /developers-guide/data-streaming
-    - title: Using Key-Value Cache API
-      items:
-        - title: Basic Cache Operations 
-          url: /developers-guide/key-value-api/basic-cache-operations
-        - title: Working with Binary Objects
-          url: /developers-guide/key-value-api/binary-objects
-        - title: Using Scan Queries
-          url: /developers-guide/key-value-api/using-scan-queries
-    - title: Using Continuous Queries
-      url: /developers-guide/key-value-api/continuous-queries
-    - title: Performing Transactions
-      url: /developers-guide/key-value-api/transactions
-    - title: Working with SQL
-      items:
-        - title: Introduction
-          url: /developers-guide/SQL/sql-introduction
-        - title: Understanding Schemas
-          url: /developers-guide/SQL/schemas
-        - title: Defining Indexes
-          url: /developers-guide/SQL/indexes          
-        - title: Using SQL API
-          url: /developers-guide/SQL/sql-api          
-        - title: Distributed Joins
-          url: /developers-guide/SQL/distributed-joins
-        - title: SQL Transactions
-          url: /developers-guide/SQL/sql-transactions
-        - title: Custom SQL Functions
-          url: /developers-guide/SQL/custom-sql-func
-        - title: JDBC Driver
-          url: /developers-guide/SQL/JDBC/jdbc-driver
-        - title: JDBC Client Driver
-          url: /developers-guide/SQL/JDBC/jdbc-client-driver
-        - title: Multiversion Concurrency Control
-          url: /developers-guide/transactions/mvcc
-    - title: Distributed Computing 
-      items:
-        - title: Distributed Computing API 
-          url: /developers-guide/distributed-computing/distributed-computing
-        - title: Cluster Groups
-          url: /developers-guide/distributed-computing/cluster-groups
-        - title: Executor Service
-          url: /developers-guide/distributed-computing/executor-service
-        - title: MapReduce API
-          url: /developers-guide/distributed-computing/map-reduce
-        - title: Load Balancing 
-          url: /developers-guide/distributed-computing/load-balancing
-        - title: Fault Tolerance
-          url: /developers-guide/distributed-computing/fault-tolerance
-        - title: Job Scheduling
-          url: /developers-guide/distributed-computing/job-scheduling
-    - title: Colocating Computations with Data
-      url: /developers-guide/collocated-computations
-    - title: Working with Events
-      items:
-        - title: Enabling and Listenting to Events 
-          url: /developers-guide/events/listening-to-events
-        - title: Events 
-          url: /developers-guide/events/events
-    - title: Near Caches
-      url: /developers-guide/near-cache
+    - title: Distributed Computing API 
+      url: /distributed-computing/distributed-computing
+    - title: Cluster Groups
+      url: /distributed-computing/cluster-groups
+    - title: Executor Service
+      url: /distributed-computing/executor-service
+    - title: MapReduce API
+      url: /distributed-computing/map-reduce
+    - title: Load Balancing 
+      url: /distributed-computing/load-balancing
+    - title: Fault Tolerance
+      url: /distributed-computing/fault-tolerance
+    - title: Job Scheduling
+      url: /distributed-computing/job-scheduling
+- title: Colocating Computations with Data
+  url: /collocated-computations
+- title: Working with Events
+  items:
+    - title: Enabling and Listening to Events
+      url: /events/listening-to-events
+    - title: Events 
+      url: /events/events
+- title: Near Caches
+  url: /near-cache
 #    - title: .NET Platform Cache
-#      url: /developers-guide/platform-cache
-    - title: Peer Class Loading
-      url: /developers-guide/peer-class-loading
-    - title: Thin Clients
-      items: 
-        - title: Thin Clients Overview 
-          url: /developers-guide/thin-clients/getting-started-with-thin-clients
-        - title: Java Thin Client
-          url: /developers-guide/thin-clients/java-thin-client
-        - title: .NET Thin Client
-          url: /developers-guide/thin-clients/dotnet-thin-client
-        - title: C++ Thin Client
-          url: /developers-guide/thin-clients/cpp-thin-client
-        - title: Python Thin Client
-          url: /developers-guide/thin-clients/python-thin-client
-        - title: PHP Thin Client
-          url: /developers-guide/thin-clients/php-thin-client
-        - title: Node.js Thin Client
-          url: /developers-guide/thin-clients/nodejs-thin-client
-    - title: ODBC Driver
-      items: 
-        - title: ODBC Driver
-          url: /developers-guide/SQL/ODBC/odbc-driver 
-        - title: Connection String and DSN
-          url:  /developers-guide/SQL/ODBC/connection-string-dsn 
-        - title: Querying and Modifying Data 
-          url: /developers-guide/SQL/ODBC/querying-modifying-data 
-        - title: Specification
-          url: /developers-guide/SQL/ODBC/specification  
-        - title: Data Types
-          url: /developers-guide/SQL/ODBC/data-types 
-        - title: Error Codes
-          url: /developers-guide/SQL/ODBC/error-codes
-    - title: REST API
-      url: /developers-guide/restapi
-#
-#    - title: Machine Learning
-#      items:
-#        - title: Machine Learning
-#          url: /developers-guide/machine-learning/ml
-#        - title: Preprocessing
-#          url: /developers-guide/machine-learning/preprocessing
-#        - title: Partition Based Dataset
-#          url: /developers-guide/machine-learning/part-based-dataset
-#        - title: Linear Regression
-#          url: /developers-guide/machine-learning/line-reg
-#        - title: K-Means Clustering
-#          url: /developers-guide/machine-learning/k-means
-#        - title: Genetic Algorithms
-#          url: /developers-guide/machine-learning/genetic-alg
-#        - title: Multilayer Perceptron
-#          url: /developers-guide/machine-learning/ml-percep
-#        - title: Decision Trees
-#          url: /developers-guide/machine-learning/decision-trees
-#        - title: k-NN Classification
-#          url: /developers-guide/machine-learning/knn-class
-#        - title: k-NN Regression
-#          url: /developers-guide/machine-learning/knn-reg
-#        - title: SVM Binary Classification
-#          url: /developers-guide/machine-learning/svm-binary
-#        - title: SVM Multi-class Classification
-#          url: /developers-guide/machine-learning/svm-multi
-#        - title: Model Cross Validation
-#          url: /developers-guide/machine-learning/model-cross
-#        - title: Logistic Regression
-#          url: /developers-guide/machine-learning/log-reg
-#        - title: Random Forest
-#          url: /developers-guide/machine-learning/random-forest
-#        - title: Gradient Boosting
-#          url: /developers-guide/machine-learning/grad-boost
-#        - title: ANN (Approximate Nearest Neighbor)
-#          url: /developers-guide/machine-learning/ann
-#        - title: Model updating
-#          url: /developers-guide/machine-learning/model-updating
-#        - title: Model Importing
-#          url: /developers-guide/machine-learning/model-importing
-- title: Adminstrator's Guide
-  url: /administrators-guide/
+#      url: /platform-cache
+- title: Metrics and Monitoring
   items:
-    - title: Control Script
-      url: /administrators-guide/control-script
-    - title: Metrics and Monitoring
-      items:
-        - title: Introduction
-          url: /administrators-guide/monitoring-metrics/intro
-        - title: Configuring Metrics
-          url: /administrators-guide/monitoring-metrics/configuring-metrics
-        - title: JMX Metrics 
-          url: /administrators-guide/monitoring-metrics/metrics
-        - title: System Views
-          url: /administrators-guide/monitoring-metrics/system-views
-    - title: Security
-      url: /administrators-guide/security
-      items: 
-        - title: Authentication
-          url: /administrators-guide/security/authentication
-        - title: SSL/TLS 
-          url: /administrators-guide/security/ssl-tls
-        - title: Transparent Data Encryption
-          url: /administrators-guide/security/tde
+    - title: Introduction
+      url: /monitoring-metrics/intro
+    - title: Configuring Metrics
+      url: /monitoring-metrics/configuring-metrics
+    - title: JMX Metrics 
+      url: /monitoring-metrics/metrics
+    - title: System Views
+      url: /monitoring-metrics/system-views
+- title: Security
+  url: /security
+  items: 
+    - title: Authentication
+      url: /security/authentication
+    - title: SSL/TLS 
+      url: /security/ssl-tls
+    - title: Transparent Data Encryption
+      url: /security/tde
+
+- title: Thin Clients
+  items: 
+    - title: Thin Clients Overview 
+      url: /thin-clients/getting-started-with-thin-clients
+    - title: Java Thin Client
+      url: /thin-clients/java-thin-client
+    - title: .NET Thin Client
+      url: /thin-clients/dotnet-thin-client
+    - title: C++ Thin Client
+      url: /thin-clients/cpp-thin-client
+    - title: Python Thin Client
+      url: /thin-clients/python-thin-client
+    - title: PHP Thin Client
+      url: /thin-clients/php-thin-client
+    - title: Node.js Thin Client
+      url: /thin-clients/nodejs-thin-client
+- title: ODBC Driver
+  items: 
+    - title: ODBC Driver
+      url: /SQL/ODBC/odbc-driver 
+    - title: Connection String and DSN
+      url:  /SQL/ODBC/connection-string-dsn 
+    - title: Querying and Modifying Data 
+      url: /SQL/ODBC/querying-modifying-data 
+    - title: Specification
+      url: /SQL/ODBC/specification  
+    - title: Data Types
+      url: /SQL/ODBC/data-types 
+    - title: Error Codes
+      url: /SQL/ODBC/error-codes
+- title: REST API
+  url: /restapi
+- title: Control Script
+  url: /control-script
 
 #    - title: Capacity Planning
-#      url: /administrators-guide/capacity-planning
+#      url: /capacity-planning
 #    - title: Performance and Troubleshooting Guide
 #      url: /perf-troubleshooting-guide/general-perf-tips
 #      items:
@@ -302,31 +224,28 @@
 #        - title: Troubleshooting and Debugging
 #          url: /perf-troubleshooting-guide/troubleshooting
 #
-#- title: SQL Reference
-#  url: /sql-reference/sql-reference-overview
-#  descripiton: SQL reference information
-#  items:
-#    - title: SQL Reference Overview
-#      url: /sql-reference/sql-reference-overview
+- title: SQL Reference
+  url: /sql-reference/sql-reference-overview
+  items:
 #    - title: SQL Conformance
 #      url: /sql-reference/sql-conformance
-#    - title: Data Definition Language (DDL)
-#      url: /sql-reference/ddl
-#    - title: Data Manipulation Language (DML)
-#      url: /sql-reference/dml
-#    - title: Transactions
-#      url: /sql-reference/transactions
-#    - title: Operational Commands
-#      url: /sql-reference/operational-commands
-#    - title: Aggregate functions
-#      url: /sql-reference/aggregate-functions
-#    - title: Numeric Functions
-#      url: /sql-reference/numeric-functions
-#    - title: String Functions
-#      url: /sql-reference/string-functions
-#    - title: Data and Time Functions
-#      url: /sql-reference/date-time-functions
-#    - title: System Functions
-#      url: /sql-reference/system-functions
-#    - title: Data Types
-#      url: /sql-reference/data-types
\ No newline at end of file
+    - title: Data Definition Language (DDL)
+      url: /sql-reference/ddl
+    - title: Data Manipulation Language (DML)
+      url: /sql-reference/dml
+    - title: Transactions
+      url: /sql-reference/transactions
+    - title: Operational Commands
+      url: /sql-reference/operational-commands
+    - title: Aggregate functions
+      url: /sql-reference/aggregate-functions
+    - title: Numeric Functions
+      url: /sql-reference/numeric-functions
+    - title: String Functions
+      url: /sql-reference/string-functions
+    - title: Date and Time Functions
+      url: /sql-reference/date-time-functions
+    - title: System Functions
+      url: /sql-reference/system-functions
+    - title: Data Types
+      url: /sql-reference/data-types
diff --git a/docs/_docs/developers-guide/SQL/JDBC/error-codes.adoc b/docs/_docs/SQL/JDBC/error-codes.adoc
similarity index 92%
rename from docs/_docs/developers-guide/SQL/JDBC/error-codes.adoc
rename to docs/_docs/SQL/JDBC/error-codes.adoc
index db950d4..b8f649c 100644
--- a/docs/_docs/developers-guide/SQL/JDBC/error-codes.adoc
+++ b/docs/_docs/SQL/JDBC/error-codes.adoc
@@ -36,7 +36,7 @@ The table below lists all the link:https://en.wikipedia.org/wiki/SQLSTATE[ANSI S
 
 |0A000|Requested operation is not supported.
 
-|40001|Concurrent update conflict. See link:developers-guide/transactions/mvcc#concurrent-updates[Concurrent Updates].
+|40001|Concurrent update conflict. See link:transactions/mvcc#concurrent-updates[Concurrent Updates].
 
 |42000|Query parsing exception.
 
diff --git a/docs/_docs/developers-guide/SQL/JDBC/jdbc-client-driver.adoc b/docs/_docs/SQL/JDBC/jdbc-client-driver.adoc
similarity index 97%
rename from docs/_docs/developers-guide/SQL/JDBC/jdbc-client-driver.adoc
rename to docs/_docs/SQL/JDBC/jdbc-client-driver.adoc
index 1c85546..5dab3c4 100644
--- a/docs/_docs/developers-guide/SQL/JDBC/jdbc-client-driver.adoc
+++ b/docs/_docs/SQL/JDBC/jdbc-client-driver.adoc
@@ -42,7 +42,7 @@ include::{javaFile}[tags=register, indent=0]
 [discrete]
 === Securing Connection
 
-For information on how to secure the JDBC client driver connection, you can refer to the link:administrators-guide/security/ssl-tls[Security documentation].
+For information on how to secure the JDBC client driver connection, you can refer to the link:security/ssl-tls[Security documentation].
 ====
 
 === Supported Parameters
@@ -182,7 +182,7 @@ Presently, streaming mode is supported only for INSERT operations. This is usefu
 Make sure you specify a target cache for streaming as an argument to the `cache=` parameter in the JDBC connection string. If a cache is not specified or does not match the table used in streaming DML statements, updates will be ignored.
 ====
 
-The parameters cover almost all of the settings of a general `IgniteDataStreamer` and allow you to tune the streamer according to your needs. Please refer to the link:developers-guide/data-streaming[Data Streaming] section for more information on how to configure the streamer.
+The parameters cover almost all of the settings of a general `IgniteDataStreamer` and allow you to tune the streamer according to your needs. Please refer to the link:data-streaming[Data Streaming] section for more information on how to configure the streamer.
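
As a rough illustration only (the exact connection URL syntax, the `streaming` flag, and the configuration file path below are assumptions to be checked against the Supported Parameters table on this page), a streaming INSERT session through the client driver could look like this:

[source,java]
----
// Hypothetical sketch: open a client-driver connection with streaming enabled
// for a target cache, then batch INSERTs through a prepared statement.
Class.forName("org.apache.ignite.IgniteJdbcDriver");

try (Connection conn = DriverManager.getConnection(
        "jdbc:ignite:cfg://cache=myCache:streaming=true@file:///path/to/ignite-config.xml");
     PreparedStatement stmt = conn.prepareStatement("INSERT INTO Person (id, name) VALUES (?, ?)")) {
    for (int i = 0; i < 1000; i++) {
        stmt.setInt(1, i);
        stmt.setString(2, "name-" + i);
        stmt.executeUpdate(); // updates are buffered by the underlying data streamer
    }
}
----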
 
 [NOTE]
 ====
diff --git a/docs/_docs/developers-guide/SQL/JDBC/jdbc-driver.adoc b/docs/_docs/SQL/JDBC/jdbc-driver.adoc
similarity index 94%
rename from docs/_docs/developers-guide/SQL/JDBC/jdbc-driver.adoc
rename to docs/_docs/SQL/JDBC/jdbc-driver.adoc
index 0b5b178..e468e9e 100644
--- a/docs/_docs/developers-guide/SQL/JDBC/jdbc-driver.adoc
+++ b/docs/_docs/SQL/JDBC/jdbc-driver.adoc
@@ -3,7 +3,7 @@
 
 Ignite is shipped with JDBC drivers that allow processing of distributed data using standard SQL statements like `SELECT`, `INSERT`, `UPDATE` or `DELETE` directly from the JDBC side.
 
-Presently, there are two drivers supported by Ignite: the lightweight and easy to use JDBC Thin Driver described in this document and link:developers-guide/SQL/JDBC/jdbc-client-driver[JDBC Client Driver] that interacts with the cluster by means of a link:developers-guide/clustering/clustering#servers-and-clients[client node].
+Presently, there are two drivers supported by Ignite: the lightweight and easy to use JDBC Thin Driver described in this document and link:SQL/JDBC/jdbc-client-driver[JDBC Client Driver] that interacts with the cluster by means of a link:clustering/clustering#servers-and-clients[client node].
 
 == JDBC Thin Driver
 
@@ -61,16 +61,16 @@ The following table lists all the parameters that are supported by the JDBC conn
 
 |`user`
 |Username for the SQL Connection. This parameter is required if authentication is enabled on the server.
-See the link:administrators-guide/security/authentication[Authentication] and link:sql-reference/ddl#create-user[CREATE user] documentation for more details.
+See the link:security/authentication[Authentication] and link:sql-reference/ddl#create-user[CREATE user] documentation for more details.
 |ignite
 
 |`password`
 |Password for SQL Connection. Required if authentication is enabled on the server.
-See the link:administrators-guide/security/authentication[Authentication] and link:sql-reference/ddl#create-user[CREATE user] documentation for more details.
+See the link:security/authentication[Authentication] and link:sql-reference/ddl#create-user[CREATE user] documentation for more details.
 |`ignite`
 
 |`distributedJoins`
-|Whether to execute distributed joins in link:developers-guide/SQL/distributed-joins#non-colocated-joins[non-colocated mode].
+|Whether to execute distributed joins in link:SQL/distributed-joins#non-colocated-joins[non-colocated mode].
 |false
 
 |`enforceJoinOrder`
@@ -107,7 +107,7 @@ See the link:administrators-guide/security/authentication[Authentication] and li
 | 1000
 
 |`partitionAwarenessPartitionDistributionsCacheSize` [[partitionAwarenessPartitionDistributionsCacheSize]]
-| The number of distinct objects that represent partition distribution that the driver keeps locally for optimization. See the description of the previous parameter for details. This local storage with partition distribution objects invalidates when the cluster topology changes. The optimal value for this parameter should equal the number of distinct tables (link:developers-guide/configuring-caches/cache-groups[cache groups]) you are going to use in your queries.
+| The number of distinct objects that represent partition distribution that the driver keeps locally for optimization. See the description of the previous parameter for details. This local storage with partition distribution objects invalidates when the cluster topology changes. The optimal value for this parameter should equal the number of distinct tables (link:configuring-caches/cache-groups[cache groups]) you are going to use in your queries.
 | 1000
 
 |`socketSendBuffer`
@@ -152,8 +152,8 @@ For the list of security parameters, refer to the <<Using SSL>> section.
 
 === Multiple Endpoints
 
-You can enable automatic failover if a current connection is broken by setting multiple connection endpoints in the connection string. 
-The JDBC Driver randomly picks an address from the list to connect to. If the connection fails, the JDBC Driver selects another address from the list until the connection is restored. 
+You can enable automatic failover if a current connection is broken by setting multiple connection endpoints in the connection string.
+The JDBC Driver randomly picks an address from the list to connect to. If the connection fails, the JDBC Driver selects another address from the list until the connection is restored.
 The Driver stops reconnecting and throws an exception if all the endpoints are unreachable.
 
 The example below shows how to pass three addresses via the connection string:
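
For instance (the addresses below are placeholders; 10800 is the default client connector port):

[source,java]
----
// Hypothetical addresses; the driver picks one at random and fails over to the others.
Connection conn = DriverManager.getConnection(
    "jdbc:ignite:thin://192.168.0.50:10800,192.168.0.51:10800,192.168.0.52:10800");
----
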
@@ -182,15 +182,15 @@ Without partition awareness, the JDBC driver connects to a single node, and all
 If the data is hosted on a different node, the query has to be rerouted within the cluster, which adds an additional network hop.
 Partition awareness eliminates that hop by sending the query to the right node.
 
-To make use of the partition awareness feature, provide the addresses of all the server nodes in the connection properties. 
+To make use of the partition awareness feature, provide the addresses of all the server nodes in the connection properties.
 The driver will route requests to the nodes that store the data requested by the query.
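
For example, a connection string that lists three hypothetical server nodes and turns the feature on might look like this (the `partitionAwareness` parameter itself is described just below):

[source,java]
----
// Hypothetical example: all server node addresses listed, partition awareness enabled.
Connection conn = DriverManager.getConnection(
    "jdbc:ignite:thin://node1.example.com:10800,node2.example.com:10800,node3.example.com:10800"
    + "?partitionAwareness=true");
----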
 
 [WARNING]
 ====
 [discrete]
-Note that presently you need to provide the addresses of all server nodes in the connection properties because the driver does not load them automatically after a connection is opened. 
-It also means that if a new server node joins the cluster, you are advised to reconnect the driver and add the node's address to the connection properties. 
-Otherwise, the driver will not be able to send direct requests to this node. 
+Note that presently you need to provide the addresses of all server nodes in the connection properties because the driver does not load them automatically after a connection is opened.
+It also means that if a new server node joins the cluster, you are advised to reconnect the driver and add the node's address to the connection properties.
+Otherwise, the driver will not be able to send direct requests to this node.
 ====
 
 To enable partition awareness, add the `partitionAwareness=true` parameter to the connection string and provide the
@@ -330,7 +330,7 @@ When this parameter is set to zero or a negative value, the idle timeout is disa
 
 |`sslContextFactory`
 
-|The class name that implements `Factory<SSLContext>` to provide node-side SSL. See link:administrators-guide/security/ssl-tls[this] for more information.
+|The class name that implements `Factory<SSLContext>` to provide node-side SSL. See link:security/ssl-tls[this] for more information.
 
 |`null`
 |=======================================================================
@@ -358,7 +358,7 @@ SQLSTATE="08006"
 
 You can configure the JDBC Thin Driver to use SSL to secure communication with the cluster.
 SSL must be configured both on the cluster side and in the JDBC Driver.
-Refer to the  link:administrators-guide/security/ssl-tls#ssl-for-clients[SSL for Thin Clients and JDBC/ODBC] section for the information about cluster configuration.
+Refer to the  link:security/ssl-tls#ssl-for-clients[SSL for Thin Clients and JDBC/ODBC] section for the information about cluster configuration.
 
 To enable SSL in the JDBC Driver, pass the `sslMode=require` parameter in the connection string and provide the key store and trust store parameters:
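
A sketch of such a connection string is shown below; the key store and trust store parameter names and paths are placeholders and should be checked against the SSL parameters described on this page:

[source,java]
----
// Hypothetical example: SSL-enabled connection with placeholder key store and trust store settings.
Connection conn = DriverManager.getConnection(
    "jdbc:ignite:thin://127.0.0.1?sslMode=require"
    + "&sslClientCertificateKeyStoreUrl=/path/to/client.jks"
    + "&sslClientCertificateKeyStorePassword=changeit"
    + "&sslTrustCertificateKeyStoreUrl=/path/to/trust.jks"
    + "&sslTrustCertificateKeyStorePassword=changeit");
----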
 
@@ -608,7 +608,7 @@ The table below lists all the link:https://en.wikipedia.org/wiki/SQLSTATE[ANSI S
 
 |0A000|Requested operation is not supported.
 
-|40001|Concurrent update conflict. See link:developers-guide/transactions/mvcc#concurrent-updates[Concurrent Updates].
+|40001|Concurrent update conflict. See link:transactions/mvcc#concurrent-updates[Concurrent Updates].
 
 |42000|Query parsing exception.
 
diff --git a/docs/_docs/developers-guide/SQL/ODBC/connection-string-dsn.adoc b/docs/_docs/SQL/ODBC/connection-string-dsn.adoc
similarity index 92%
rename from docs/_docs/developers-guide/SQL/ODBC/connection-string-dsn.adoc
rename to docs/_docs/SQL/ODBC/connection-string-dsn.adoc
index 67cb4fe..1ac9bbe 100644
--- a/docs/_docs/developers-guide/SQL/ODBC/connection-string-dsn.adoc
+++ b/docs/_docs/SQL/ODBC/connection-string-dsn.adoc
@@ -42,12 +42,12 @@ This argument value is ignored if `ADDRESS` argument is specified.
 
 |`USER`
 |Username for SQL Connection. This parameter is required if authentication is enabled on the server.
-See link:administrators-guide/security/authentication[Authentication] and link:sql-reference/ddl#create-user[CREATE] user docs for more details on how to enable authentication and create user, respectively.
+See link:security/authentication[Authentication] and link:sql-reference/ddl#create-user[CREATE] user docs for more details on how to enable authentication and create user, respectively.
 |Empty string
 
 |`PASSWORD`
 |Password for SQL Connection. This parameter is required if authentication is enabled on the server.
-See link:administrators-guide/security/authentication[Authentication] and link:sql-reference/ddl#create-user[CREATE] user docs for more details on how to enable authentication and create user, respectively.
+See link:security/authentication[Authentication] and link:sql-reference/ddl#create-user[CREATE] user docs for more details on how to enable authentication and create user, respectively.
 |Empty string
 
 |`SCHEMA`
@@ -63,7 +63,7 @@ See link:administrators-guide/security/authentication[Authentication] and link:s
 |`1024`
 
 |`DISTRIBUTED_JOINS`
-|Enables the link:developers-guide/SQL/distributed-joins#non-colocated-joins[non-colocated distributed joins] feature for all queries that are executed over the ODBC connection.
+|Enables the link:SQL/distributed-joins#non-colocated-joins[non-colocated distributed joins] feature for all queries that are executed over the ODBC connection.
 |`false`
 
 |`ENFORCE_JOIN_ORDER`
diff --git a/docs/_docs/developers-guide/SQL/ODBC/data-types.adoc b/docs/_docs/SQL/ODBC/data-types.adoc
similarity index 100%
rename from docs/_docs/developers-guide/SQL/ODBC/data-types.adoc
rename to docs/_docs/SQL/ODBC/data-types.adoc
diff --git a/docs/_docs/developers-guide/SQL/ODBC/error-codes.adoc b/docs/_docs/SQL/ODBC/error-codes.adoc
similarity index 100%
rename from docs/_docs/developers-guide/SQL/ODBC/error-codes.adoc
rename to docs/_docs/SQL/ODBC/error-codes.adoc
diff --git a/docs/_docs/developers-guide/SQL/ODBC/odbc-driver.adoc b/docs/_docs/SQL/ODBC/odbc-driver.adoc
similarity index 99%
rename from docs/_docs/developers-guide/SQL/ODBC/odbc-driver.adoc
rename to docs/_docs/SQL/ODBC/odbc-driver.adoc
index 5b5784a..4833c27 100644
--- a/docs/_docs/developers-guide/SQL/ODBC/odbc-driver.adoc
+++ b/docs/_docs/SQL/ODBC/odbc-driver.adoc
@@ -121,7 +121,7 @@ cfg.setClientConnectorConfiguration(clientConnectorCfg);
 ----
 --
 
-A connection that is established from the ODBC driver side to the cluster via `ClientListenerProcessor` is also configurable. Find more details on how to alter connection settings from the driver side link:developers-guide/SQL/ODBC/connection-string-dsn[here].
+A connection that is established from the ODBC driver side to the cluster via `ClientListenerProcessor` is also configurable. Find more details on how to alter connection settings from the driver side link:SQL/ODBC/connection-string-dsn[here].
 
 == Thread-Safety
 
diff --git a/docs/_docs/developers-guide/SQL/ODBC/querying-modifying-data.adoc b/docs/_docs/SQL/ODBC/querying-modifying-data.adoc
similarity index 96%
rename from docs/_docs/developers-guide/SQL/ODBC/querying-modifying-data.adoc
rename to docs/_docs/SQL/ODBC/querying-modifying-data.adoc
index f7c2be7..300b80d 100644
--- a/docs/_docs/developers-guide/SQL/ODBC/querying-modifying-data.adoc
+++ b/docs/_docs/SQL/ODBC/querying-modifying-data.adoc
@@ -5,7 +5,7 @@ This page elaborates on how to connect to a cluster and execute a variety of SQL
 
 
 At the implementation layer, the ODBC driver uses SQL Fields queries to retrieve data from the cluster.
-This means that from ODBC side you can access only those fields that are link:developers-guide/SQL/sql-api#configuring-queryable-fields[defined in the cluster configuration].
+This means that from ODBC side you can access only those fields that are link:SQL/sql-api#configuring-queryable-fields[defined in the cluster configuration].
 
 Moreover, the ODBC driver supports DML (Data Manipulation Language), which means that you can modify your data using an ODBC connection.
 
@@ -80,9 +80,9 @@ For both types, we listed specific fields and indexes that will be read or updat
 
 == Connecting to the Cluster
 
-After the cluster is configured and started, we can connect to it from the ODBC driver side. To do this, you need to prepare a valid connection string and pass it as a parameter to the ODBC driver at the connection time. Refer to the link:developers-guide/SQL/ODBC/connection-string-dsn[Connection String] page for more details.
+After the cluster is configured and started, we can connect to it from the ODBC driver side. To do this, you need to prepare a valid connection string and pass it as a parameter to the ODBC driver at the connection time. Refer to the link:SQL/ODBC/connection-string-dsn[Connection String] page for more details.
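
As an illustration only, a connection string assembled from the arguments described on the Connection String page might look like the following (driver name, address, and credentials are placeholders):

[source,text]
----
DRIVER={Apache Ignite};ADDRESS=192.168.0.50:10800;SCHEMA=PUBLIC;USER=ignite;PASSWORD=ignite
----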
 
-Alternatively, you can also use a link:developers-guide/SQL/ODBC/connection-string-dsn#configuring-dsn[pre-configured DSN] for connection purposes as shown in the example below.
+Alternatively, you can also use a link:SQL/ODBC/connection-string-dsn#configuring-dsn[pre-configured DSN] for connection purposes as shown in the example below.
 
 
 [source,c++]
diff --git a/docs/_docs/developers-guide/SQL/ODBC/specification.adoc b/docs/_docs/SQL/ODBC/specification.adoc
similarity index 100%
rename from docs/_docs/developers-guide/SQL/ODBC/specification.adoc
rename to docs/_docs/SQL/ODBC/specification.adoc
diff --git a/docs/_docs/developers-guide/SQL/custom-sql-func.adoc b/docs/_docs/SQL/custom-sql-func.adoc
similarity index 100%
rename from docs/_docs/developers-guide/SQL/custom-sql-func.adoc
rename to docs/_docs/SQL/custom-sql-func.adoc
diff --git a/docs/_docs/developers-guide/SQL/distributed-joins.adoc b/docs/_docs/SQL/distributed-joins.adoc
similarity index 96%
rename from docs/_docs/developers-guide/SQL/distributed-joins.adoc
rename to docs/_docs/SQL/distributed-joins.adoc
index 347f474..cbeb492 100644
--- a/docs/_docs/developers-guide/SQL/distributed-joins.adoc
+++ b/docs/_docs/SQL/distributed-joins.adoc
@@ -33,7 +33,7 @@ If the join is done on the primary or affinity key, the nodes send unicast reque
 
 Enable the non-colocated mode of query execution by setting a JDBC/ODBC parameter or, if you use SQL API, by calling `SqlFieldsQuery.setDistributedJoins(true)`.
 
-WARNING: If you use a non-collocated join on a column from a link:developers-guide/data-modeling/data-partitioning#replicated[replicated table], the column must have an index.
+WARNING: If you use a non-collocated join on a column from a link:data-modeling/data-partitioning#replicated[replicated table], the column must have an index.
 Otherwise, you will get an exception.
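
A minimal Java sketch of the SQL API route, assuming a node is already running in this JVM and using hypothetical `Person`/`Company` tables:

[source,java]
----
Ignite ignite = Ignition.ignite();                               // assumes an already running node
IgniteCache<Long, Person> personCache = ignite.cache("Person");  // hypothetical cache name

SqlFieldsQuery qry = new SqlFieldsQuery(
    "SELECT p.name, c.name FROM Person p JOIN Company c ON p.companyId = c.id");
qry.setDistributedJoins(true);                                   // execute this query in non-colocated mode

List<List<?>> rows = personCache.query(qry).getAll();
----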
 
 
@@ -68,7 +68,7 @@ Class.forName("org.apache.ignite.IgniteJdbcThinDriver");
 // Open the JDBC connection.
 Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1?enforceJoinOrder=true");
 ----
-tab:.NET/C#[]
+tab:C#/.NET[]
 [source,csharp]
 ----
 include::code-snippets/dotnet/SqlJoinOrder.cs[tag=sqlJoinOrder,indent=0]
diff --git a/docs/_docs/developers-guide/SQL/indexes.adoc b/docs/_docs/SQL/indexes.adoc
similarity index 89%
rename from docs/_docs/developers-guide/SQL/indexes.adoc
rename to docs/_docs/SQL/indexes.adoc
index 77e9ea9..f837435 100644
--- a/docs/_docs/developers-guide/SQL/indexes.adoc
+++ b/docs/_docs/SQL/indexes.adoc
@@ -1,12 +1,13 @@
 = Defining Indexes
 
 :javaFile: {javaCodeDir}/Indexes.java
+:csharpFile: {csharpCodeDir}/DefiningIndexes.cs
 
-In addition to common DDL commands, such as CREATE/DROP INDEX, developers can use Ignite's link:developers-guide/SQL/sql-api[SQL APIs] to define indexes.
+In addition to common DDL commands, such as CREATE/DROP INDEX, developers can use Ignite's link:SQL/sql-api[SQL APIs] to define indexes.
 
 [NOTE]
 ====
-Indexing capabilities are provided by the 'ignite-indexing' module. If you start Ignite from Java code, link:developers-guide/setup#enabling-modules[add this module to your classpath].
+Indexing capabilities are provided by the 'ignite-indexing' module. If you start Ignite from Java code, link:setup#enabling-modules[add this module to your classpath].
 ====
 
 Ignite automatically creates indexes for each primary key and affinity key field.
@@ -27,18 +28,18 @@ include::{javaFile}[tag=configuring-with-annotation,indent=0]
 tab:C#/.NET[]
 [source,csharp]
 ----
-include::code-snippets/dotnet/DefiningIndexes.cs[tag=idxAnnotationCfg,indent=0]
+include::{csharpFile}[tag=idxAnnotationCfg,indent=0]
 ----
 tab:C++[unsupported]
 --
 
-The type name is used as the table name in SQL queries. In this case, our table name will be `Person` (schema name usage and definition is explained in the link:developers-guide/SQL/schemas[Schemas] section).
+The type name is used as the table name in SQL queries. In this case, our table name will be `Person` (schema name usage and definition is explained in the link:SQL/schemas[Schemas] section).
 
 Both `id` and `salary` are indexed fields. `id` will be sorted in ascending order (default) and `salary` in descending order.
 
 If you do not want to index a field, but you still need to use it in SQL queries, then the field must be annotated without the `index = true` parameter.
 Such a field is called a _queryable field_.
-In the example above, `name` is defined as a link:developers-guide/SQL/sql-api#configuring-queryable-fields[queryable field].
+In the example above, `name` is defined as a link:SQL/sql-api#configuring-queryable-fields[queryable field].
 
 The `age` field is neither queryable nor is it an indexed field, and thus it will not be accessible from SQL queries.
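
For reference, a plain Java sketch matching the description above might look roughly like this (field types are assumptions; the authoritative version lives in the included snippet files):

[source,java]
----
import java.io.Serializable;
import org.apache.ignite.cache.query.annotations.QuerySqlField;

public class Person implements Serializable {
    /** Indexed in ascending order (the default). */
    @QuerySqlField(index = true)
    private long id;

    /** Indexed in descending order. */
    @QuerySqlField(index = true, descending = true)
    private double salary;

    /** Queryable from SQL but not indexed. */
    @QuerySqlField
    private String name;

    /** Neither queryable nor indexed: invisible to SQL queries. */
    private int age;
}
----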
 
@@ -119,10 +120,20 @@ After indexed and queryable fields are defined, they have to be registered in th
 
 To specify which types should be indexed, pass the corresponding key-value pairs in the `CacheConfiguration.setIndexedTypes()` method as shown in the example below.
 
+[tabs]
+--
+tab:Java[]
 [source,java]
 ----
 include::{javaFile}[tag=register-indexed-types,indent=0]
 ----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::{csharpFile}[tag=register-indexed-types,indent=0]
+----
+tab:C++[unsupported]
+--
 
 This method accepts only pairs of types: one for key class and another for value class. Primitives are passed as boxed types.
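
A minimal sketch, assuming the `Person` value class above and a `Long` key:

[source,java]
----
// Key class first, value class second; primitive keys are passed as their boxed types.
CacheConfiguration<Long, Person> cacheCfg = new CacheConfiguration<>("Person");
cacheCfg.setIndexedTypes(Long.class, Person.class);

Ignite ignite = Ignition.start();
IgniteCache<Long, Person> cache = ignite.getOrCreateCache(cacheCfg);
----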
 
@@ -133,7 +144,7 @@ This method accepts only pairs of types: one for key class and another for value
 In addition to all the fields marked with a `@QuerySqlField` annotation, each table will have two special predefined fields: `pass:[_]key` and `pass:[_]val`, which represent links to whole key and value objects. This is useful, for instance, when one of them is of a primitive type and you want to filter by its value. To do this, run a query like: `SELECT * FROM Person WHERE pass:[_]key = 100`.
 ====
 
-NOTE: Since Ignite supports link:developers-guide/key-value-api/binary-objects[Binary Objects], there is no need to add classes of indexed types to the classpath of cluster nodes. The SQL query engine can detect values of indexed and queryable fields, avoiding object deserialization.
+NOTE: Since Ignite supports link:key-value-api/binary-objects[Binary Objects], there is no need to add classes of indexed types to the classpath of cluster nodes. The SQL query engine can detect values of indexed and queryable fields, avoiding object deserialization.
 
 === Group Indexes
 
@@ -148,7 +159,7 @@ tab:Java[]
 ----
 include::{javaCodeDir}/Indexes_groups.java[tag=group-indexes,indent=0]
 ----
-tab:.NET/C#[]
+tab:C#/.NET[]
 [source,csharp]
 ----
 include::code-snippets/dotnet/DefiningIndexes.cs[tag=groupIdx,indent=0]
@@ -181,7 +192,7 @@ tab:Java[]
 include::{javaFile}[tag=index-using-queryentity,indent=0]
 ----
 
-tab:.NET/C#[]
+tab:C#/.NET[]
 [source,csharp]
 ----
 include::code-snippets/dotnet/DefiningIndexes.cs[tag=queryEntity,indent=0]
@@ -190,7 +201,7 @@ include::code-snippets/dotnet/DefiningIndexes.cs[tag=queryEntity,indent=0]
 tab:C++[unsupported]
 --
 
-A short name of the `valueType` is used as a table name in SQL queries. In this case, our table name will be `Person` (schema name usage and definition is explained on the link:developers-guide/SQL/schemas[Schemas] page).
+A short name of the `valueType` is used as a table name in SQL queries. In this case, our table name will be `Person` (schema name usage and definition is explained on the link:SQL/schemas[Schemas] page).
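
A rough programmatic sketch of such a `QueryEntity` (field names, index order, and the cache name are illustrative only):

[source,java]
----
QueryEntity qryEntity = new QueryEntity(Long.class.getName(), Person.class.getName());

qryEntity.addQueryField("id", Long.class.getName(), null)
         .addQueryField("salary", Double.class.getName(), null)
         .addQueryField("name", String.class.getName(), null)
         .setIndexes(Arrays.asList(
             new QueryIndex("id"),
             new QueryIndex("salary", false)));  // false = descending sort order

CacheConfiguration<Long, Person> cacheCfg = new CacheConfiguration<>("Person");
cacheCfg.setQueryEntities(Collections.singletonList(qryEntity));
----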
 
 Once the `QueryEntity` is defined, you can execute the SQL query as follows:
 
@@ -244,6 +255,10 @@ tab:Java[]
 include::{javaFile}[tag=annotation-with-inline-size,indent=0]
 ----
 tab:C#/.NET[]
+[source,java]
+----
+include::{csharpFile}[tag=annotation-with-inline-size,indent=0]
+----
 tab:C++[unsupported]
 --
 
@@ -257,6 +272,10 @@ tab:Java[]
 include::{javaFile}[tag=query-entity-with-inline-size,indent=0]
 ----
 tab:C#/.NET[]
+[source,csharp]
+----
+include::{csharpFile}[tag=query-entity-with-inline-size,indent=0]
+----
 
 tab:C++[unsupported]
 --
@@ -302,7 +321,11 @@ tab:Java[]
 ----
 include::{javaFile}[tag=custom-key,indent=0]
 ----
-tab:.NET/C#[unsupported]
+tab:C#/.NET[]
+[source,csharp]
+----
+include::{csharpFile}[tag=custom-key,indent=0]
+----
 tab:C++[unsupported]
 --
 
@@ -314,7 +337,7 @@ tab:C++[unsupported]
 
 If a custom key can be serialized into a binary form, then Ignite calculates its hash code and implements the `equals()` method automatically.
 
-However, if the key's type is `Externalizable`, and if it cannot be serialized into the binary form, then you are required to implement the `hashCode` and `equals` methods manually. See the link:developers-guide/advanced-topics/binaryobject[Binary Objects] page for more details.
+However, if the key's type is `Externalizable`, and if it cannot be serialized into the binary form, then you are required to implement the `hashCode` and `equals` methods manually. See the link:advanced-topics/binaryobject[Binary Objects] page for more details.
 ====
 
 
diff --git a/docs/_docs/developers-guide/SQL/schemas.adoc b/docs/_docs/SQL/schemas.adoc
similarity index 85%
rename from docs/_docs/developers-guide/SQL/schemas.adoc
rename to docs/_docs/SQL/schemas.adoc
index 339a55d..86860c0 100644
--- a/docs/_docs/developers-guide/SQL/schemas.adoc
+++ b/docs/_docs/SQL/schemas.adoc
@@ -6,7 +6,7 @@ Ignite has a number of default schemas and supports creating custom schemas.
 
 There are two schemas that are available by default:
 
-- The SYS schema, which contains a number of system views with information about cluster nodes. You can't create tables in this schema. Refer to the link:administrators-guide/monitoring-metrics/system-views[System Views] page for further information.
+- The SYS schema, which contains a number of system views with information about cluster nodes. You can't create tables in this schema. Refer to the link:monitoring-metrics/system-views[System Views] page for further information.
 - The <<PUBLIC Schema,PUBLIC schema>>, which is used by default whenever a schema is not specified.
 
 Custom schemas are created in the following cases:
@@ -55,7 +55,7 @@ jdbc:ignite:thin://127.0.0.1/MY_SCHEMA
 ----
 
 == Cache and Schema Names
-When you create a cache with link:developers-guide/SQL/sql-api#configuring-queryable-fields[queryable fields], you can manipulate the cached data using the link:developers-guide/SQL/sql-api[SQL API]. In SQL terms, each such cache corresponds to a separate schema whose name equals the name of the cache.
+When you create a cache with link:SQL/sql-api#configuring-queryable-fields[queryable fields], you can manipulate the cached data using the link:SQL/sql-api[SQL API]. In SQL terms, each such cache corresponds to a separate schema whose name equals the name of the cache.
 
 Similarly, when you create a table via a DDL statement, you can access it as a key-value cache via Ignite's supported programming interfaces. The name of the corresponding cache can be specified by providing the `CACHE_NAME` parameter in the `WITH` part of the `CREATE TABLE` statement.
 
diff --git a/docs/_docs/developers-guide/SQL/sql-api.adoc b/docs/_docs/SQL/sql-api.adoc
similarity index 97%
rename from docs/_docs/developers-guide/SQL/sql-api.adoc
rename to docs/_docs/SQL/sql-api.adoc
index e670a43..0e58a67 100644
--- a/docs/_docs/developers-guide/SQL/sql-api.adoc
+++ b/docs/_docs/SQL/sql-api.adoc
@@ -13,7 +13,7 @@ NOTE: If you create tables using JDBC or SQL tools, you do not need to define qu
 
 [NOTE]
 ====
-Indexing capabilities are provided by the 'ignite-indexing' module. If you start Ignite from java code, link:developers-guide/setup#enabling-modules[add this module to the classpath of your application].
+Indexing capabilities are provided by the 'ignite-indexing' module. If you start Ignite from Java code, link:setup#enabling-modules[add this module to the classpath of your application].
 ====
 
 In Java, queryable fields can be configured in two ways:
@@ -108,7 +108,7 @@ To force local execution of a query, use `SqlFieldsQuery.setLocal(true)`. In thi
 
 === Subqueries in WHERE Clause
 
-`SELECT` queries used in `INSERT` and `MERGE` statements as well as `SELECT` queries generated by `UPDATE` and `DELETE` operations are distributed and executed in either link:developers-guide/SQL/distributed-joins[colocated or non-colocated distributed modes].
+`SELECT` queries used in `INSERT` and `MERGE` statements as well as `SELECT` queries generated by `UPDATE` and `DELETE` operations are distributed and executed in either link:SQL/distributed-joins[colocated or non-colocated distributed modes].
 
 However, if there is a subquery that is executed as part of a `WHERE` clause, then it can be executed in the colocated mode only.
 
diff --git a/docs/_docs/developers-guide/SQL/sql-introduction.adoc b/docs/_docs/SQL/sql-introduction.adoc
similarity index 59%
rename from docs/_docs/developers-guide/SQL/sql-introduction.adoc
rename to docs/_docs/SQL/sql-introduction.adoc
index 9907f8d..35f8903 100644
--- a/docs/_docs/developers-guide/SQL/sql-introduction.adoc
+++ b/docs/_docs/SQL/sql-introduction.adoc
@@ -4,15 +4,15 @@ Ignite comes with ANSI-99 compliant, horizontally scalable and fault-tolerant di
 
 As a SQL database, Ignite supports all DML commands including SELECT, UPDATE, INSERT, and DELETE queries and also implements a subset of DDL commands relevant for distributed systems.
 
-You can interact with Ignite as you would with any other SQL enabled storage by connecting with link:developers-guide/SQL/JDBC/jdbc-driver/[JDBC] or link:developers-guide/SQL/sql-introduction/[ODBC] drivers from both external tools and applications. Java, .NET and C++ developers can leverage native  link:developers-guide/SQL/sql-api[SQL APIs].
+You can interact with Ignite as you would with any other SQL-enabled storage by connecting with the link:SQL/JDBC/jdbc-driver/[JDBC] or link:SQL/ODBC/odbc-driver[ODBC] drivers from both external tools and applications. Java, .NET, and C++ developers can leverage the native link:SQL/sql-api[SQL APIs].
 
-Internally, SQL tables have the same data structure as link:developers-guide/data-modeling/data-modeling#key-value-cache-vs-sql-table[key-value caches]. It means that you can change partition distribution of your data and leverage link:developers-guide/data-modeling/affinity-collocation[affinity colocation techniques] for better performance.
+Internally, SQL tables have the same data structure as link:data-modeling/data-modeling#key-value-cache-vs-sql-table[key-value caches]. This means that you can change the partition distribution of your data and leverage link:data-modeling/affinity-collocation[affinity colocation techniques] for better performance.
 
 Ignite's SQL engine uses H2 Database to parse and optimize queries and generate execution plans.
 
 == Distributed Queries
 
-Queries against link:developers-guide/data-modeling/data-partitioning#partitioned[partitioned] tables are executed in a distributed manner:
+Queries against link:data-modeling/data-partitioning#partitioned[partitioned] tables are executed in a distributed manner:
 
 - The query is parsed and split into multiple “map” queries and a single “reduce” query.
 - All the map queries are executed on all the nodes where required data resides.
@@ -22,11 +22,11 @@ You can force a query to be processed locally, i.e. on the subset of data that i
 
 == Local Queries
 
-If a query is executed over a link:developers-guide/data-modeling/data-partitioning#replicated[replicated] table, it will be run against the local data.
+If a query is executed over a link:data-modeling/data-partitioning#replicated[replicated] table, it will be run against the local data.
 
 Queries over partitioned tables are executed in a distributed manner.
 However, you can force local execution of a query over a partitioned table.
-See link:developers-guide/SQL/sql-api#local-execution[Local Execution] for details.
+See link:SQL/sql-api#local-execution[Local Execution] for details.
 
 
 ////
diff --git a/docs/_docs/developers-guide/SQL/sql-transactions.adoc b/docs/_docs/SQL/sql-transactions.adoc
similarity index 96%
rename from docs/_docs/developers-guide/SQL/sql-transactions.adoc
rename to docs/_docs/SQL/sql-transactions.adoc
index 4276209..8a9938f 100644
--- a/docs/_docs/developers-guide/SQL/sql-transactions.adoc
+++ b/docs/_docs/SQL/sql-transactions.adoc
@@ -6,7 +6,7 @@ IMPORTANT: Support for SQL transactions is currently in the beta stage. For prod
 
 
 == Overview
-SQL Transactions are supported for caches that use the `TRANSACTIONAL_SNAPSHOT` atomicity mode. The `TRANSACTIONAL_SNAPSHOT` mode is the implementation of multiversion concurrency control (MVCC) for Ignite caches. For more information about MVCC and current limitations, visit the link:developers-guide/transactions/mvcc[Multiversion Concurrency Control] page.
+SQL Transactions are supported for caches that use the `TRANSACTIONAL_SNAPSHOT` atomicity mode. The `TRANSACTIONAL_SNAPSHOT` mode is the implementation of multiversion concurrency control (MVCC) for Ignite caches. For more information about MVCC and current limitations, visit the link:transactions/mvcc[Multiversion Concurrency Control] page.
 
 See the link:sql-reference/transactions[Transactions] page for the transaction syntax supported by Ignite.
 
diff --git a/docs/_docs/administrators-guide/index.adoc b/docs/_docs/administrators-guide/index.adoc
deleted file mode 100644
index 5c20baf..0000000
--- a/docs/_docs/administrators-guide/index.adoc
+++ /dev/null
@@ -1,6 +0,0 @@
----
-layout: toc
----
-
-= Administrators Guide
-
diff --git a/docs/_docs/administrators-guide/introduction.adoc b/docs/_docs/administrators-guide/introduction.adoc
deleted file mode 100644
index b4ede3e..0000000
--- a/docs/_docs/administrators-guide/introduction.adoc
+++ /dev/null
@@ -1,18 +0,0 @@
-= Introduction
-Welcome to the Ignite Administrators Guide.
-
-This guide is designed for people tasked with Ignite cluster administration.
-
-Once you've installed Ignite, you will want or need to perform many different administrative tasks — everything from migration and security, to deploying, monitoring, and upgrading your clusters. Some of the topics listed in the table of contents are useful right away, and others you may not need until later (or not at all, depending on your use case).
-
-== Programming Languages
-
-include::includes/intro-languages.adoc[]
-
-== Related Documentation
-
-If you're looking for information about installing the product or getting started, see the link:getting-started[Getting Started Guide] and the link:installation-guide[Installation Guide].
-
-If you're looking for information on how to build an application, see the link:developers-guide/preface[Developers Guide].
-
-If you're looking to learn more about performance or troubleshooting, see the link:perf-troubleshooting-guide/general-perf-tips[Performance and Troubleshooting Guide].
diff --git a/docs/_docs/developers-guide/baseline-topology.adoc b/docs/_docs/baseline-topology.adoc
similarity index 84%
rename from docs/_docs/developers-guide/baseline-topology.adoc
rename to docs/_docs/baseline-topology.adoc
index e7428f6..91fcb1e 100644
--- a/docs/_docs/developers-guide/baseline-topology.adoc
+++ b/docs/_docs/baseline-topology.adoc
@@ -1,10 +1,11 @@
 = Baseline Topology
 
 :javaFile: {javaCodeDir}/BaselineTopology.java
+:csharpFile: {csharpCodeDir}/BaselineTopology.cs
 
 The _baseline topology_ is a set of nodes meant to hold data.
 The concept of baseline topology was introduced to give you the ability to control when you want to
-link:developers-guide/data-modeling/data-partitioning#rebalancing[rebalance the data in the cluster]. For example, if
+link:data-modeling/data-partitioning#rebalancing[rebalance the data in the cluster]. For example, if
 you have a cluster of 3 nodes where the data is distributed between the nodes, and you add 2 more nodes, the rebalancing
 process re-distributes the data between all 5 nodes. The rebalancing process happens when the
 baseline topology changes, which can either happen automatically or be triggered manually.
@@ -20,12 +21,12 @@ occasional network failures or scheduled server maintenance.
 Baseline topology changes automatically when <<Baseline Topology Autoadjustment>> is enabled. This is the default
 behavior for pure in-memory clusters. For persistent clusters, the baseline topology autoadjustment feature must be enabled
 manually. By default, it is disabled and you have to change the baseline topology manually. You can change the baseline
-topology using the link:administrators-guide/control-script#activation-deactivation-and-topology-management[control script].
+topology using the link:control-script#activation-deactivation-and-topology-management[control script].
 
 [CAUTION]
 ====
 Any attempt to create a cache while the baseline topology is being changed results in an exception.
-For more details, see link:developers-guide/key-value-api/basic-cache-operations#creating-caches-dynamically[Creating Caches Dynamically].
+For more details, see link:key-value-api/basic-cache-operations#creating-caches-dynamically[Creating Caches Dynamically].
 ====
 
 == Baseline Topology in Pure In-Memory Clusters
@@ -48,8 +49,8 @@ However, if some nodes do not join after a restart, you must activate the clu
 
 You can activate the cluster using one of the following tools:
 
-* link:administrators-guide/control-script#activating-cluster[Control script]
-* link:developers-guide/restapi#activate[REST API command]
+* link:control-script#activating-cluster[Control script]
+* link:restapi#activate[REST API command]
 * Programmatically:
 +
 [tabs]
@@ -62,6 +63,10 @@ include::{javaFile}[tags=activate,indent=0]
 ----
 
 tab:C#/.NET[]
+[source, csharp]
+----
+include::{csharpFile}[tags=activate,indent=0]
+----
 tab:C++[]
 --
 
@@ -90,7 +95,7 @@ the baseline topology.
 Baseline topology is autoadjusted only if the cluster is in the active state.
 
 To enable automatic baseline adjustment, you can use the
-link:administrators-guide/control-script#enabling-baseline-topology-autoadjustment[control script] or the
+link:control-script#enabling-baseline-topology-autoadjustment[control script] or the
 programmatic API methods shown below:
 
 [tabs]
@@ -103,6 +108,10 @@ include::{javaFile}[tags=enable-autoadjustment,indent=0]
 ----
 
 tab:C#/.NET[]
+[source, csharp]
+----
+include::{csharpFile}[tags=enable-autoadjustment,indent=0]
+----
 tab:C++[]
 --
 
@@ -119,6 +128,10 @@ include::{javaFile}[tags=disable-autoadjustment,indent=0]
 ----
 
 tab:C#/.NET[]
+[source, csharp]
+----
+include::{csharpFile}[tags=disable-autoadjustment,indent=0]
+----
 tab:C++[]
 --
 
@@ -127,6 +140,6 @@ tab:C++[]
 
 You can use the following tools to monitor and/or manage the baseline topology:
 
-* link:administrators-guide/control-script[Control Script]
-* link:administrators-guide/monitoring-metrics/metrics#monitoring-topology[JMX Beans]
+* link:control-script[Control Script]
+* link:monitoring-metrics/metrics#monitoring-topology[JMX Beans]
 
diff --git a/docs/_docs/clustering/clustering.adoc b/docs/_docs/clustering/clustering.adoc
new file mode 100644
index 0000000..7869d7f
--- /dev/null
+++ b/docs/_docs/clustering/clustering.adoc
@@ -0,0 +1,37 @@
+= Clustering
+
+== Overview
+
+In this chapter, we discuss different ways nodes can discover each other to form a cluster.
+
+On start-up, a node is assigned one of two roles: _server node_ or _client node_.
+Server nodes are the workhorses of the cluster; they cache data, execute compute tasks, etc.
+Client nodes join the topology as regular nodes, but they do not store data. Client nodes are used to stream data into the cluster and to execute user queries.
+
+To form a cluster, each node must be able to connect to all other nodes. To ensure that, a proper <<Discovery Mechanisms,discovery mechanism>> must be configured.
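+
+As a minimal sketch (assuming the standard `IgniteConfiguration` API), a client node can be started as follows; without `setClientMode(true)` the node starts as a server node:
+
+[source, java]
+----
+IgniteConfiguration cfg = new IgniteConfiguration();
+
+// Join the topology as a client node: no data is stored on this node.
+cfg.setClientMode(true);
+
+Ignite client = Ignition.start(cfg);
+----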
+
+
+NOTE: In addition to client nodes, you can use Thin Clients to define and manipulate data in the cluster.
+Learn more about thin clients in the link:thin-clients/getting-started-with-thin-clients[Thin Clients] section.
+
+
+image::images/ignite_clustering.png[Ignite Cluster]
+
+
+
+== Discovery Mechanisms
+
+Nodes can automatically discover each other and form a cluster.
+This allows you to scale out when needed without having to restart the whole cluster.
+Developers can also leverage Ignite's hybrid cloud support, which allows establishing connections between private and public clouds such as Amazon Web Services, providing them with the best of both worlds.
+
+Ignite provides two implementations of the discovery mechanism intended for different usage scenarios:
+
+* link:clustering/tcp-ip-discovery[TCP/IP Discovery] is designed and optimized for 100s of nodes.
+* link:clustering/zookeeper-discovery[ZooKeeper Discovery] allows scaling Ignite clusters to 100s and 1000s of nodes while preserving linear scalability and performance.
+
+
+
+
+
+
diff --git a/docs/_docs/developers-guide/clustering/connect-client-nodes.adoc b/docs/_docs/clustering/connect-client-nodes.adoc
similarity index 96%
rename from docs/_docs/developers-guide/clustering/connect-client-nodes.adoc
rename to docs/_docs/clustering/connect-client-nodes.adoc
index 38e74de..f857c36 100644
--- a/docs/_docs/developers-guide/clustering/connect-client-nodes.adoc
+++ b/docs/_docs/clustering/connect-client-nodes.adoc
@@ -62,7 +62,7 @@ There are two discovery events that are triggered on the client node when it is
 * `EVT_CLIENT_NODE_RECONNECTED`
 
 You can listen to these events and execute custom actions in response.
-Refer to the link:developers-guide/events/listening-to-events[Listening to events] section for a code example.
+Refer to the link:events/listening-to-events[Listening to events] section for a code example.
 
 == Managing Slow Client Nodes
 
diff --git a/docs/_docs/developers-guide/clustering/discovery-in-the-cloud.adoc b/docs/_docs/clustering/discovery-in-the-cloud.adoc
similarity index 94%
rename from docs/_docs/developers-guide/clustering/discovery-in-the-cloud.adoc
rename to docs/_docs/clustering/discovery-in-the-cloud.adoc
index 120aa23..3c0776e 100644
--- a/docs/_docs/developers-guide/clustering/discovery-in-the-cloud.adoc
+++ b/docs/_docs/clustering/discovery-in-the-cloud.adoc
@@ -26,11 +26,11 @@ TIP: Cloud-based IP Finders allow you to create your configuration once and reus
 
 == Apache jclouds IP Finder
 
-To mitigate the constantly changing IP addresses problem, Ignite supports automatic node discovery by utilizing Apache jclouds multi-cloud toolkit via `TcpDiscoveryCloudIpFinder`. 
+To mitigate the problem of constantly changing IP addresses, Ignite supports automatic node discovery by utilizing the Apache jclouds multi-cloud toolkit via `TcpDiscoveryCloudIpFinder`.
 For information about Apache jclouds please refer to https://jclouds.apache.org[jclouds.apache.org].
 
+The IP finder forms node addresses by getting the private and public IP addresses of all virtual machines running in the cloud and adding a port number to them.
-The port is the one that is set with either `TcpDiscoverySpi.setLocalPort(int)` or `TcpDiscoverySpi.DFLT_PORT`. 
+The IP finder forms nodes addresses by getting the private and public IP addresses of all virtual machines running on the cloud and adding a port number to them.
+The port is the one that is set with either `TcpDiscoverySpi.setLocalPort(int)` or `TcpDiscoverySpi.DFLT_PORT`.
 This way all the nodes can try to connect to any formed IP address and initiate automatic grid node discovery.
 
 Refer to https://jclouds.apache.org/reference/providers/#compute[Apache jclouds providers section] to get the list of supported cloud platforms.
@@ -79,11 +79,11 @@ tab:C++[unsupported]
 
 == Amazon S3 IP Finder
 
-Amazon S3-based discovery allows Ignite nodes to register their IP addresses on start-up in an Amazon S3 store. 
-This way other nodes can try to connect to any of the IP addresses stored in S3 and initiate automatic node discovery. 
+Amazon S3-based discovery allows Ignite nodes to register their IP addresses on start-up in an Amazon S3 store.
+This way other nodes can try to connect to any of the IP addresses stored in S3 and initiate automatic node discovery.
 To use S3-based automatic node discovery, you need to configure the `TcpDiscoveryS3IpFinder` type of `ipFinder`.
 
-CAUTION: You must link:developers-guide/setup#enabling-modules[enable the 'ignite-aws' module].
+CAUTION: You must link:setup#enabling-modules[enable the 'ignite-aws' module].
 
 Here is an example of how to configure Amazon S3 based IP finder:
 
@@ -216,11 +216,11 @@ tab:C++[unsupported]
 
 == Google Compute Discovery
 
-Ignite supports automatic node discovery by utilizing Google Cloud Storage store. 
-This mechanism is implemented in `TcpDiscoveryGoogleStorageIpFinder`. 
+Ignite supports automatic node discovery by utilizing a Google Cloud Storage store.
+This mechanism is implemented in `TcpDiscoveryGoogleStorageIpFinder`.
 On start-up, each node registers its IP address in the storage and discovers other nodes by reading the storage.
 
-IMPORTANT: To use `TcpDiscoveryGoogleStorageIpFinder`, enable the `ignite-gce` link:developers-guide/setup#enabling-modules[module] in your application.
+IMPORTANT: To use `TcpDiscoveryGoogleStorageIpFinder`, enable the `ignite-gce` link:setup#enabling-modules[module] in your application.
 
 Here is an example of how to configure Google Cloud Storage based IP finder:
 
diff --git a/docs/_docs/clustering/network-configuration.adoc b/docs/_docs/clustering/network-configuration.adoc
new file mode 100644
index 0000000..aabb55b
--- /dev/null
+++ b/docs/_docs/clustering/network-configuration.adoc
@@ -0,0 +1,171 @@
+= Network Configuration
+:javaFile: {javaCodeDir}/NetworkConfiguration.java
+:xmlFile: code-snippets/xml/network-configuration.xml
+
+== IPv4 vs IPv6
+
+Ignite supports both IPv4 and IPv6, but mixing the two stacks can sometimes lead to issues where the cluster becomes detached. A possible solution, unless you require IPv6, is to restrict Ignite to IPv4 by setting the `-Djava.net.preferIPv4Stack=true` JVM parameter.
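+
+A minimal, hypothetical sketch of setting the property programmatically is shown below; passing the flag on the JVM command line (or through the startup script's JVM options) remains the preferred and most reliable approach, and a programmatic call only works if it runs before any networking classes are initialized:
+
+[source, java]
+----
+// Must run before Ignite (or any other networking code) starts.
+System.setProperty("java.net.preferIPv4Stack", "true");
+
+Ignite ignite = Ignition.start(new IgniteConfiguration());
+----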
+
+
+== Discovery
+This section describes the network parameters of the default discovery mechanism, which uses the TCP/IP protocol to exchange discovery messages and is implemented in the `TcpDiscoverySpi` class.
+
+You can change the properties of the discovery mechanism as follows:
+
+[tabs]
+--
+tab:XML[]
+[source, xml]
+----
+include::{xmlFile}[tags=!*;ignite-config;discovery, indent=0]
+----
+tab:Java[]
+[source, java]
+----
+include::{javaFile}[tags=discovery, indent=0]
+
+----
+
+tab:C#/.NET[]
+
+tab:C++[unsupported]
+
+--
+
+The following table describes the most important properties of `TcpDiscoverySpi`.
+You can find the complete list of properties in the javadoc:org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi[] javadoc.
+
+[cols="1,2,1",opts="header"]
+|===
+|Property | Description| Default Value
+| `localAddress`| Local host IP address used for discovery. | By default, the node uses the first non-loopback address it finds. If there is no non-loopback address available, then `java.net.InetAddress.getLocalHost()` is used.
+| `localPort`  | The port that the node binds to. If set to a non-default value, other cluster nodes must know this port to be able to discover the node. | `47500`
+| `localPortRange`| If the `localPort` is busy, the node attempts to bind to the next port (incremented by 1) and continues this process until it finds a free port. The `localPortRange` property defines the number of ports the node will try (starting from `localPort`).
+   | `100`
+| `reconnectCount` | The number of times the node tries to (re)establish connection to another node. |`10`
+| `networkTimeout` |  The maximum network timeout in milliseconds for network operations. |`5000`
+| `socketTimeout` |  The socket operations timeout. This timeout is used to limit connection time and write-to-socket time. |`5000`
+| `ackTimeout`| The acknowledgement timeout for discovery messages.
+If an acknowledgement is not received within this timeout, the discovery SPI tries to resend the message.  |  `5000`
+| `joinTimeout` |  The join timeout defines how much time the node waits to join a cluster. If a non-shared IP finder is used and the node fails to connect to any address from the IP finder, the node keeps trying to join within this timeout. If all addresses are unresponsive, an exception is thrown and the node terminates.
+`0` means waiting indefinitely.  | `0`
+| `statisticsPrintFrequency` | Defines how often the node prints discovery statistics to the log.
+`0` indicates no printing. If the value is greater than 0 and quiet mode is disabled, statistics are printed at INFO level at the specified frequency. | `0`
+
+|===
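+
+As an illustrative sketch (the values below are arbitrary and only meant to show the API), several of these properties can be set programmatically on `TcpDiscoverySpi`:
+
+[source, java]
+----
+IgniteConfiguration cfg = new IgniteConfiguration();
+
+TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
+
+// Bind discovery to port 47500 and try up to 20 consecutive ports if it is busy.
+discoverySpi.setLocalPort(47500);
+discoverySpi.setLocalPortRange(20);
+
+// Wait at most 10 seconds to join the cluster (0 means wait indefinitely).
+discoverySpi.setJoinTimeout(10_000);
+
+cfg.setDiscoverySpi(discoverySpi);
+----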
+
+
+
+== Communication
+
+After the nodes discover each other and the cluster is formed, the nodes exchange messages via the communication SPI.
+The messages represent distributed cluster operations, such as task execution, data modification operations, queries, etc.
+The default implementation of the communication SPI uses the TCP/IP protocol to exchange messages (`TcpCommunicationSpi`).
+This section describes the properties of `TcpCommunicationSpi`.
+
+Each node opens a local communication port and address to which other nodes connect and send messages.
+At startup, the node tries to bind to the specified communication port (default is 47100).
+If the port is already used, the node increments the port number until it finds a free port.
+The number of attempts is defined by the `localPortRange` property (defaults to 100).
+
+[tabs]
+--
+tab:XML[]
+[source, xml]
+----
+include::{xmlFile}[tags=!*;ignite-config;communication-spi, indent=0]
+----
+
+tab:Java[]
+[source, java]
+----
+include::{javaCodeDir}/ClusteringOverview.java[tag=commSpi,indent=0]
+----
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/ClusteringOverview.cs[tag=CommunicationSPI,indent=0]
+----
+tab:C++[unsupported]
+--
+
+Below is a list of some important properties of `TcpCommunicationSpi`.
+You can find the list of all properties in the javadoc:org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi[] javadoc.
+
+[cols="1,2,1",opts="header"]
+|===
+|Property | Description| Default Value
+| `localAddress` | The local address for the communication SPI to bind to. |
+
+| `localPort` | The local port that the node uses for communication.  | `47100`
+
+| `localPortRange` | The range of ports the node tries to bind to sequentially until it finds a free one. |  `100`
+
+|`tcpNoDelay` | Sets the value for the `TCP_NODELAY` socket option. Each socket accepted or created will use the provided value.
+
+The option should be set to `true` (default) to reduce request/response time during communication over TCP. In most cases we do not recommend changing this option.| `true`
+
+|`idleConnectionTimeout` | The maximum idle connection timeout (in milliseconds) after which the connection is closed. |  `600000`
+
+|`usePairedConnections` | Whether a dual socket connection between the nodes should be enforced. If set to `true`, two separate connections are established between the communicating nodes: one for outgoing messages and one for incoming messages. When set to `false`, a single TCP connection is used for both directions.
+This flag is useful on some operating systems when messages take too long to be delivered. | `false`
+
+| `directBuffer` | A boolean flag that indicates whether to allocate NIO direct buffer instead of NIO heap allocation buffer. Although direct buffers perform better, in some cases (especially on Windows) they may cause JVM crashes. If that happens in your environment, set this property to `false`.   | `true`
+
+|`directSendBuffer` | Whether to use NIO direct buffer instead of NIO heap allocation buffer when sending messages.   | `false`
+
+|`socketReceiveBuffer`| Receive buffer size for sockets created or accepted by the communication SPI. If set to `0`, the operating system's default value is used. | `0`
+
+|`socketSendBuffer` | Send buffer size for sockets created or accepted by the communication SPI. If set to `0`, the operating system's default value is used. | `0`
+
+|===
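+
+A similar illustrative sketch (the values are arbitrary) for tuning the communication SPI programmatically:
+
+[source, java]
+----
+IgniteConfiguration cfg = new IgniteConfiguration();
+
+TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
+
+// Bind communication to port 47100 and close connections idle for more than 5 minutes.
+commSpi.setLocalPort(47100);
+commSpi.setIdleConnectionTimeout(300_000);
+
+// Use two separate TCP connections per node pair: one for incoming, one for outgoing messages.
+commSpi.setUsePairedConnections(true);
+
+cfg.setCommunicationSpi(commSpi);
+----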
+
+
+== Connection Timeouts
+
+////
+//Connection timeout is a period of time a cluster node waits before a connection to another node is considered "failed".
+
+Every node in a cluster is connected to every other node.
+When node A sends a message to node B, and node B does not reply in `failureDetectionTimeout` (in milliseconds), then node B will be removed from the cluster.
+////
+
+There are several properties that define connection timeouts:
+
+[cols="",opts="header"]
+|===
+|Property | Description | Default Value
+| `IgniteConfiguration.failureDetectionTimeout` | A timeout for basic network operations for server nodes. | `10000`
+
+| `IgniteConfiguration.clientFailureDetectionTimeout` | A timeout for basic network operations for client nodes.  | `30000`
+
+|===
+
+//CAUTION: The timeout automatically controls configuration parameters of `TcpDiscoverySpi`, such as socket timeout, message acknowledgment timeout and others. If any of these parameters is set explicitly, then the failure timeout setting will be ignored.
+
+:ths: &#8239;
+
+You can set the failure detection timeout in the node configuration as shown in the example below.
+//The default value is 10{ths}000 ms for server nodes and 30{ths}000 ms for client nodes.
+The default values allow the discovery SPI to work reliably on most on-premise and containerized deployments.
+However, in stable low-latency networks, you can set the parameter to {tilde}200 milliseconds in order to detect and react to failures more quickly.
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::code-snippets/xml/network-configuration.xml[tags=!*;ignite-config;failure-detection-timeout, indent=0]
+----
+
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=failure-detection-timeout, indent=0]
+----
+tab:C#/.NET[]
+
+tab:C++[unsupported]
+
+--
diff --git a/docs/_docs/developers-guide/clustering/tcp-ip-discovery.adoc b/docs/_docs/clustering/tcp-ip-discovery.adoc
similarity index 70%
rename from docs/_docs/developers-guide/clustering/tcp-ip-discovery.adoc
rename to docs/_docs/clustering/tcp-ip-discovery.adoc
index 1ed9e8f..30c8727 100644
--- a/docs/_docs/developers-guide/clustering/tcp-ip-discovery.adoc
+++ b/docs/_docs/clustering/tcp-ip-discovery.adoc
@@ -19,17 +19,7 @@ this finder via a Spring XML file or programmatically:
 tab:XML[]
 [source,xml]
 ----
-<bean class="org.apache.ignite.configuration.IgniteConfiguration">
-  <property name="discoverySpi">
-    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
-      <property name="ipFinder">
-        <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">
-          <property name="multicastGroup" value="228.10.10.157"/>
-        </bean>
-      </property>
-    </bean>
-  </property>
-</bean>
+include::code-snippets/xml/discovery-multicast.xml[tags=ignite-config, indent=0]
 ----
 tab:Java[]
 [source,java]
@@ -63,7 +53,8 @@ a port range.
 
 [TIP]
 ====
-By default, the `TcpDiscoveryVmIpFinder` is used in the 'non-shared' mode. If you plan to start a server node, then in this mode the list of IP addresses should contain an address of the local node as well. In this case, the node will not wait until other nodes join the cluster; instead, it will become the first cluster node and start to operate normally.
+By default, the `TcpDiscoveryVmIpFinder` is used in the 'non-shared' mode.
+If you plan to start a server node, then in this mode the list of IP addresses should contain the address of the local node as well. In this case, the node will not wait until other nodes join the cluster; instead, it will become the first cluster node and start to operate normally.
 ====
 
 You can configure the static IP finder via XML configuration or programmatically:
@@ -73,32 +64,7 @@ You can configure the static IP finder via XML configuration or programmatically
 tab:XML[]
 [source,xml]
 ----
-<bean class="org.apache.ignite.configuration.IgniteConfiguration">
-  <property name="discoverySpi">
-    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
-      <property name="ipFinder">
-        <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
-          <property name="addresses">
-            <list>
-              <!--
-              Explicitly specifying address of a local node to let it start and
-              operate normally even if there is no more nodes in the cluster.
-              You can also optionally specify an individual port or port range.
-              -->
-              <value>1.2.3.4</value>
-
-              <!--
-              IP Address and optional port range of a remote node.
-              You can also optionally specify an individual port.
-              -->
-              <value>1.2.3.5:47500..47509</value>
-            </list>
-          </property>
-        </bean>
-      </property>
-    </bean>
-  </property>
-</bean>
+include::code-snippets/xml/discovery-static.xml[tags=ignite-config, indent=0]
 ----
 
 tab:Java[]
@@ -139,30 +105,7 @@ static IP addresses:
 tab:XML[]
 [source,xml]
 ----
-<bean class="org.apache.ignite.configuration.IgniteConfiguration">
-  <property name="discoverySpi">
-    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
-      <property name="ipFinder">
-        <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">
-          <property name="multicastGroup" value="228.10.10.157"/>
-
-          <!-- list of static IP addresses-->
-          <property name="addresses">
-            <list>
-              <value>1.2.3.4</value>
-
-              <!--
-                  IP Address and optional port range.
-                  You can also optionally specify an individual port.
-              -->
-              <value>1.2.3.5:47500..47509</value>
-            </list>
-          </property>
-        </bean>
-      </property>
-    </bean>
-  </property>
-</bean>
+include::code-snippets/xml/discovery-static-and-multicast.xml[tags=ignite-config, indent=0]
 ----
 
 tab:Java[]
@@ -343,7 +286,7 @@ configuration above.
 
 If the isolated clusters use Native Persistence, then every
 cluster has to store its persistence files under different paths in the
-file system. Refer to the link:developers-guide/persistence/native-persistence[Native Persistence documentation] to learn how you can change persistence related directories.
+file system. Refer to the link:persistence/native-persistence[Native Persistence documentation] to learn how you can change persistence related directories.
 ====
 
 
@@ -382,7 +325,7 @@ tab:Java[]
 include::{javaFile}[tag=jdbc,indent=0]
 ----
 
-tab:.NET/C#[unsupported]
+tab:C#/.NET[unsupported]
 
 tab:C++[unsupported]
 
@@ -417,7 +360,7 @@ tab:Java[]
 ----
 include::{javaFile}[tag=sharedFS,indent=0]
 ----
-tab:.NET/C#[unsupported]
+tab:C#/.NET[unsupported]
 tab:C++[unsupported]
 --
 
@@ -452,50 +395,8 @@ tab:Java[]
 include::{javaFile}[tag=zk,indent=0]
 ----
 
---
-
-== Failure Detection Timeout
-
-Failure detection timeout is used to determine how long a cluster node will wait before a connection to another node is considered "failed".
-
-Every node in a cluster is connected to every other node.
-When node A sends heartbeats and other system messages to node B, and node B does not reply in `failureDetectionTimeout` (in milliseconds), then node B will be removed from the cluster.
-This timeout is the easiest way to tune discovery SPI's failure detection feature depending on the network and hardware conditions of your environment.
-
-CAUTION: The timeout automatically controls configuration parameters of `TcpDiscoverySpi`, such as socket timeout, message acknowledgment timeout and others. If any of these parameters is set explicitly, then the failure timeout setting will be ignored.
-
-:ths: &#8239;
-
-The failure detection timeout can be set in the node configuration as shown in the example below.
-The default value is 10{ths}000 ms for server nodes and 30{ths}000 ms for client nodes.
-These values allow the discovery SPI to work reliably on most on-premise and containerized deployments.
-However, in stable low-latency networks, the parameter can be set to {tilde}200 milliseconds in order to detect and react to​ failures more quickly.
-
-[tabs]
---
-tab:XML[]
-[source,xml]
-----
-include::code-snippets/xml/tcp-ip-discovery.xml[tags=ignite-config;!discovery;failure-detection-timeout, indent=0]
-----
-
-tab:Java[]
-[source,java]
-----
-include::{javaFile}[tag=failure-detection-timeout, indent=0]
-----
-
-tab:C#/.NET[]
-[source,csharp]
-----
-not ready yet
-----
-
-tab:C++[]
-[source,cpp]
-----
-not ready yet
-----
+tab:C#/.NET[unsupported]
+tab:C++[unsupported]
 
 --
 
diff --git a/docs/_docs/developers-guide/clustering/zookeeper-discovery.adoc b/docs/_docs/clustering/zookeeper-discovery.adoc
similarity index 98%
rename from docs/_docs/developers-guide/clustering/zookeeper-discovery.adoc
rename to docs/_docs/clustering/zookeeper-discovery.adoc
index fa9313a..9a7252e 100644
--- a/docs/_docs/developers-guide/clustering/zookeeper-discovery.adoc
+++ b/docs/_docs/clustering/zookeeper-discovery.adoc
@@ -12,7 +12,7 @@ need to preserve ease of scalability and linear performance.
 However, using both Ignite and ZooKeeper requires configuring and managing two
 distributed systems, which can be challenging.
 Therefore, we recommend that you use ZooKeeper Discovery only if you plan to scale to 100s or 1000s of nodes.
-Otherwise, it is best to use link:developers-guide/clustering/tcp-ip-discovery[TCP/IP Discovery].
+Otherwise, it is best to use link:clustering/tcp-ip-discovery[TCP/IP Discovery].
 
 ZooKeeper Discovery uses ZooKeeper as a single point of synchronization
 and to organize the cluster into a star-shaped topology where a
diff --git a/docs/_docs/code-snippets/dotnet/BaselineTopology.cs b/docs/_docs/code-snippets/dotnet/BaselineTopology.cs
new file mode 100644
index 0000000..c992fd6
--- /dev/null
+++ b/docs/_docs/code-snippets/dotnet/BaselineTopology.cs
@@ -0,0 +1,33 @@
+using System;
+using Apache.Ignite.Core;
+
+namespace dotnet_helloworld
+{
+    public static class BaselineTopology
+    {
+        public static void Activate()
+        {
+            // tag::activate[]
+            IIgnite ignite = Ignition.Start();
+            ignite.GetCluster().SetActive(true);
+            // end::activate[]
+        }
+
+        public static void EnableAutoAdjust()
+        {
+            // tag::enable-autoadjustment[]
+            IIgnite ignite = Ignition.Start();
+            ignite.GetCluster().SetBaselineAutoAdjustEnabledFlag(true);
+            ignite.GetCluster().SetBaselineAutoAdjustTimeout(30000);
+            // end::enable-autoadjustment[]
+        }
+
+        public static void DisableAutoAdjust()
+        {
+            IIgnite ignite = Ignition.Start();
+            // tag::disable-autoadjustment[]
+            ignite.GetCluster().SetBaselineAutoAdjustEnabledFlag(false);
+            // end::disable-autoadjustment[]
+        }
+    }
+}
diff --git a/docs/_docs/code-snippets/dotnet/DefiningIndexes.cs b/docs/_docs/code-snippets/dotnet/DefiningIndexes.cs
index 679d45f..dfd5f02 100644
--- a/docs/_docs/code-snippets/dotnet/DefiningIndexes.cs
+++ b/docs/_docs/code-snippets/dotnet/DefiningIndexes.cs
@@ -6,9 +6,9 @@ namespace dotnet_helloworld
     //todo discuss about "Indexing Nested Objects"
     public class DefiningIndexes
     {
+        // tag::idxAnnotationCfg[]
         class Person
         {
-            // tag::idxAnnotationCfg[]
             // Indexed field. Will be visible to the SQL engine.
             [QuerySqlField(IsIndexed = true)] public long Id;
 
@@ -17,22 +17,27 @@ namespace dotnet_helloworld
 
             //Will NOT be visible to the SQL engine.
             public int Age;
-            /**
-              * Indexed field sorted in descending order.
-              * Will be visible to the SQL engine.
-            */
+
+            /** Indexed field sorted in descending order.
+              * Will be visible to the SQL engine. */
             [QuerySqlField(IsIndexed = true, IsDescending = true)]
             public float Salary;
-            // end::idxAnnotationCfg[]
         }
-        
+        // end::idxAnnotationCfg[]
+
         //todo indexing nested objects - will be deprecated, discuss with Artem
 
         public static void RegisteringIndexedTypes()
         {
-            // tag::registeringIndexedTypes[]
-            //looks like it's unsupported in dotnet
-            // end::registeringIndexedTypes[]
+            // tag::register-indexed-types[]
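+            // Register Person as a queryable type with a long key; members annotated with [QuerySqlField] become SQL columns.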
+            var ccfg = new CacheConfiguration
+            {
+                QueryEntities = new[]
+                {
+                    new QueryEntity(typeof(long), typeof(Person))
+                }
+            };
+            // end::register-indexed-types[]
         }
 
         public class GroupIndexes
@@ -69,17 +74,17 @@ namespace dotnet_helloworld
                             {
                                 Name = "id",
                                 FieldType = typeof(long)
-                            },   
+                            },
                             new QueryField
                             {
                                 Name = "name",
                                 FieldType = typeof(string)
-                            },   
+                            },
                             new QueryField
                             {
                                 Name = "salary",
                                 FieldType = typeof(long)
-                            },   
+                            },
                         },
                         Indexes = new[]
                         {
@@ -95,5 +100,72 @@ namespace dotnet_helloworld
             });
             // end::queryEntity[]
         }
+
+        private static void QueryEntityInlineSize()
+        {
+            // tag::query-entity-with-inline-size[]
+            var qe = new QueryEntity
+            {
+                Indexes = new[]
+                {
+                    new QueryIndex
+                    {
+                        InlineSize = 13
+                    }
+                }
+            };
+            // end::query-entity-with-inline-size[]
+        }
+
+        private static void QueryEntityKeyFields()
+        {
+            // tag::custom-key[]
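+            // Define a cache with a composite key: fields marked with IsKeyField belong to the key, the remaining fields to the value.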
+            var ccfg = new CacheConfiguration
+            {
+                Name = "personCache",
+                QueryEntities = new[]
+                {
+                    new QueryEntity
+                    {
+                        KeyTypeName = "CustomKey",
+                        ValueTypeName = "Person",
+                        Fields = new[]
+                        {
+                            new QueryField
+                            {
+                                Name = "intKeyField",
+                                FieldType = typeof(int),
+                                IsKeyField = true
+                            },
+                            new QueryField
+                            {
+                                Name = "strKeyField",
+                                FieldType = typeof(string),
+                                IsKeyField = true
+                            },
+                            new QueryField
+                            {
+                                Name = "firstName",
+                                FieldType = typeof(string)
+                            },
+                            new QueryField
+                            {
+                                Name = "lastName",
+                                FieldType = typeof(string)
+                            }
+                        }
+                    },
+                }
+            };
+            // end::custom-key[]
+        }
+
+        private class InlineSize
+        {
+            // tag::annotation-with-inline-size[]
+            [QuerySqlField(IsIndexed = true, IndexInlineSize = 13)]
+            public string Country { get; set; }
+            // end::annotation-with-inline-size[]
+        }
     }
 }
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/ClusteringOverview.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/ClusteringOverview.java
index d048f9e..5096d77 100644
--- a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/ClusteringOverview.java
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/ClusteringOverview.java
@@ -48,17 +48,16 @@ public class ClusteringOverview {
         // tag::commSpi[]
         TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
 
-        // Override local port.
+        // Set the local port.
         commSpi.setLocalPort(4321);
 
         IgniteConfiguration cfg = new IgniteConfiguration();
         // end::commSpi[]
         // tag::commSpi[]
 
-        // Override default communication SPI.
         cfg.setCommunicationSpi(commSpi);
 
-        // Start grid.
+        // Start the node.
         Ignition.start(cfg);
         // end::commSpi[]
         serverNode.close();
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/NetworkConfiguration.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/NetworkConfiguration.java
new file mode 100644
index 0000000..61bedad
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/NetworkConfiguration.java
@@ -0,0 +1,36 @@
+package org.apache.ignite.snippets;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.configuration.IgniteConfiguration;
+import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
+import org.junit.jupiter.api.Test;
+
+public class NetworkConfiguration {
+
+    @Test
+    void discoveryConfigExample() {
+        //tag::discovery[]
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi().setLocalPort(8300);
+
+        cfg.setDiscoverySpi(discoverySpi);
+        Ignite ignite = Ignition.start(cfg);
+        //end::discovery[]
+        ignite.close();
+    }
+
+    @Test
+    void failureDetectionTimeout() {
+        //tag::failure-detection-timeout[]
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        cfg.setFailureDetectionTimeout(5_000);
+
+        cfg.setClientFailureDetectionTimeout(10_000);
+        //end::failure-detection-timeout[]
+        Ignition.start(cfg).close();
+    }
+
+}
diff --git a/docs/_docs/code-snippets/xml/discovery-multicast.xml b/docs/_docs/code-snippets/xml/discovery-multicast.xml
new file mode 100644
index 0000000..4f52a81
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/discovery-multicast.xml
@@ -0,0 +1,20 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+
+        <!-- tag::discovery[] -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">
+                        <property name="multicastGroup" value="228.10.10.157"/>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
\ No newline at end of file
diff --git a/docs/_docs/code-snippets/xml/discovery-static-and-multicast.xml b/docs/_docs/code-snippets/xml/discovery-static-and-multicast.xml
new file mode 100644
index 0000000..4d4c2d7
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/discovery-static-and-multicast.xml
@@ -0,0 +1,29 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+        <!-- tag::discovery[] -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">
+                        <property name="multicastGroup" value="228.10.10.157"/>
+                        <!-- list of static IP addresses-->
+                        <property name="addresses">
+                            <list>
+                                <value>1.2.3.4</value>
+                                <!--
+                                  IP Address and optional port range.
+                                  You can also optionally specify an individual port.
+                                 -->
+                                <value>1.2.3.5:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
\ No newline at end of file
diff --git a/docs/_docs/code-snippets/xml/discovery-static.xml b/docs/_docs/code-snippets/xml/discovery-static.xml
new file mode 100644
index 0000000..e43f855
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/discovery-static.xml
@@ -0,0 +1,32 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+        <!-- tag::discovery[] -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <property name="addresses">
+                            <list>
+                                <!--
+                                  Explicitly specifying address of a local node to let it start and
+                                  operate normally even if there is no more nodes in the cluster.
+                                  You can also optionally specify an individual port or port range.
+                                  -->
+                                <value>1.2.3.4</value>
+                                <!--
+                                  IP Address and optional port range of a remote node.
+                                  You can also optionally specify an individual port.
+                                  -->
+                                <value>1.2.3.5:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
\ No newline at end of file
diff --git a/docs/_docs/code-snippets/xml/network-configuration.xml b/docs/_docs/code-snippets/xml/network-configuration.xml
new file mode 100644
index 0000000..99fb1e9
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/network-configuration.xml
@@ -0,0 +1,30 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+        <!-- tag::failure-detection-timeout[] -->
+
+        <property name="failureDetectionTimeout" value="5000"/>
+
+        <property name="clientFailureDetectionTimeout" value="10000"/>
+        <!-- end::failure-detection-timeout[] -->
+        <!-- tag::discovery[] -->
+
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="localPort" value="8300"/>  
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+        <!-- tag::communication-spi[] -->
+
+        <property name="communicationSpi">
+            <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
+                <property name="localPort" value="4321"/> 
+            </bean>
+        </property>
+        <!-- end::communication-spi[] -->
+
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
\ No newline at end of file
diff --git a/docs/_docs/developers-guide/collocated-computations.adoc b/docs/_docs/collocated-computations.adoc
similarity index 95%
rename from docs/_docs/developers-guide/collocated-computations.adoc
rename to docs/_docs/collocated-computations.adoc
index 9a3f116..e04f43a 100644
--- a/docs/_docs/developers-guide/collocated-computations.adoc
+++ b/docs/_docs/collocated-computations.adoc
@@ -19,7 +19,7 @@ The class definitions of the task to be executed on remote nodes must be availab
 You can ensure this in two ways:
 
 * Add the classes to the classpath of the nodes;
-* Enable link:developers-guide/peer-class-loading[peer class loading].
+* Enable link:peer-class-loading[peer class loading].
 ====
 
 == Colocating by Key
@@ -38,7 +38,7 @@ tab:Java[]
 ----
 include::{javaSourceFile}[tag=collocating-by-key,indent=0]
 ----
-tab:.NET/C#[]
+tab:C#/.NET[]
 [source,csharp]
 ----
 include::{dotnetSourceFile}[tag=affinityRun,indent=0]
@@ -68,7 +68,7 @@ include::{javaSourceFile}[tag=calculate-average,indent=0]
 ----
 tab:C#/.NET[unsupported]
 affinityCall(..) method with partition id as parameter is unsupported in Ignite .NET
-----
+
 tab:C++[unsupported]
 --
 
@@ -82,7 +82,7 @@ tab:Java[]
 include::{javaSourceFile}[tag=sum-by-partition,indent=0]
 ----
 tab:C#/.NET[unsupported]
-----
+
 tab:C++[unsupported]
 --
 
@@ -91,7 +91,7 @@ tab:C++[unsupported]
 ====
 [discrete]
 === Performance Considerations
-Colocated computations yield performance benefits when the amount of the data you want to process is sufficiently large. In some cases, when the amount of data is small, a link:developers-guide/key-value-api/using-scan-queries[scan query] may perform better.
+Colocated computations yield performance benefits when the amount of the data you want to process is sufficiently large. In some cases, when the amount of data is small, a link:key-value-api/using-scan-queries[scan query] may perform better.
 
 ====
 
diff --git a/docs/_docs/developers-guide/configuring-caches/atomicity-modes.adoc b/docs/_docs/configuring-caches/atomicity-modes.adoc
similarity index 75%
rename from docs/_docs/developers-guide/configuring-caches/atomicity-modes.adoc
rename to docs/_docs/configuring-caches/atomicity-modes.adoc
index 1c09fd8..4fe43c0 100644
--- a/docs/_docs/developers-guide/configuring-caches/atomicity-modes.adoc
+++ b/docs/_docs/configuring-caches/atomicity-modes.adoc
@@ -7,7 +7,7 @@ There is no partial execution of the operations.
 
 To enable support for transactions for a cache, set the `atomicityMode` parameter in the cache configuration to `TRANSACTIONAL`.
 
-CAUTION: If you configure multiple caches within one link:developers-guide/configuring-caches/cache-groups[cache group], the caches must be either all atomic, or all transactional. You cannot have both TRANSACTIONAL and ATOMIC caches in one cache group.
+CAUTION: If you configure multiple caches within one link:configuring-caches/cache-groups[cache group], the caches must be either all atomic, or all transactional. You cannot have both TRANSACTIONAL and ATOMIC caches in one cache group.
 
 Ignite supports 3 atomicity modes, which are described in the following table.
 
@@ -15,18 +15,18 @@ Ignite supports 3 atomicity modes, which are described in the following table.
 |===
 | Atomicity Mode | Description
 
-| ATOMIC | The default mode. 
-All operations are performed atomically, one at a time. 
-Transactions are not supported. 
-The `ATOMIC` mode provides better performance by avoiding transactional locks, whilst providing data atomicity and consistency for each single operation. 
-Bulk writes, such as the `putAll(...)` and `removeAll(...)` methods, are not executed in one transaction and can partially fail. 
-If this happens, a `CachePartialUpdateException` is thrown and contains a list of keys for which the update failed. 
+| ATOMIC | The default mode.
+All operations are performed atomically, one at a time.
+Transactions are not supported.
+The `ATOMIC` mode provides better performance by avoiding transactional locks, whilst providing data atomicity and consistency for each single operation.
+Bulk writes, such as the `putAll(...)` and `removeAll(...)` methods, are not executed in one transaction and can partially fail.
+If this happens, a `CachePartialUpdateException` is thrown and contains a list of keys for which the update failed.
 | TRANSACTIONAL
-a| Enables support for ACID-compliant transactions executed via the key-value API. 
-SQL transactions are not supported. 
-Transactions in this mode can have different link:developers-guide/key-value-api/transactions#concurrency-modes-and-isolation-levels[concurrency modes and isolation levels]. 
-Enable this mode only if you need support for ACID-compliant operations. 
-For more information about transactions, see link:developers-guide/key-value-api/transactions[Performing Transactions].
+a| Enables support for ACID-compliant transactions executed via the key-value API.
+SQL transactions are not supported.
+Transactions in this mode can have different link:key-value-api/transactions#concurrency-modes-and-isolation-levels[concurrency modes and isolation levels].
+Enable this mode only if you need support for ACID-compliant operations.
+For more information about transactions, see link:key-value-api/transactions[Performing Transactions].
 
 [NOTE]
 ====
@@ -37,7 +37,7 @@ The `TRANSACTIONAL` mode adds a performance cost to cache operations and should
 
 | TRANSACTIONAL_SNAPSHOT
 
-a| An experimental mode that implements multiversion concurrency control (MVCC) and supports both key-value transactions and SQL transactions. See link:developers-guide/transactions/mvcc[Multiversion Concurrency Control] for details about and limitations of this mode.
+a| An experimental mode that implements multiversion concurrency control (MVCC) and supports both key-value transactions and SQL transactions. See link:transactions/mvcc[Multiversion Concurrency Control] for details about and limitations of this mode.
 
 [WARNING]
 ====
diff --git a/docs/_docs/developers-guide/configuring-caches/cache-groups.adoc b/docs/_docs/configuring-caches/cache-groups.adoc
similarity index 92%
rename from docs/_docs/developers-guide/configuring-caches/cache-groups.adoc
rename to docs/_docs/configuring-caches/cache-groups.adoc
index 4231dea..6d79df3 100644
--- a/docs/_docs/developers-guide/configuring-caches/cache-groups.adoc
+++ b/docs/_docs/configuring-caches/cache-groups.adoc
@@ -2,7 +2,7 @@
 
 For each cache deployed in the cluster, there is always overhead: the cache is split into partitions whose state must be tracked on every cluster node.
 
-If link:developers-guide/persistence/native-persistence[Native Persistence] is enabled, then for every partition there is an open file on the disk that Ignite actively writes to and reads from. Thus, the more caches and partitions you have:
+If link:persistence/native-persistence[Native Persistence] is enabled, then for every partition there is an open file on the disk that Ignite actively writes to and reads from. Thus, the more caches and partitions you have:
 
 * The more Java heap is occupied by partition maps. Every cache has its own partition map.
 * The longer it might take for a new node to join the cluster.
diff --git a/docs/_docs/developers-guide/configuring-caches/configuration-overview.adoc b/docs/_docs/configuring-caches/configuration-overview.adoc
similarity index 84%
rename from docs/_docs/developers-guide/configuring-caches/configuration-overview.adoc
rename to docs/_docs/configuring-caches/configuration-overview.adoc
index b05d4d0..a55a705 100644
--- a/docs/_docs/developers-guide/configuring-caches/configuration-overview.adoc
+++ b/docs/_docs/configuring-caches/configuration-overview.adoc
@@ -21,7 +21,7 @@ include::code-snippets/xml/cache-configuration.xml[tags=ignite-config;!discovery
 ----
 
 //tag::params[]
-For the full list of parameters, refer to the link:{javadoc_base_url}/org/apache/ignite/configuration/CacheConfiguration.html[CacheConfiguration,window=_blank] javadoc.
+For the full list of parameters, refer to the javadoc:org.apache.ignite.configuration.CacheConfiguration[] javadoc.
 
 [cols="1,3,1",options="header",separator=|]
 |===
@@ -36,10 +36,10 @@ In the `PARTITIONED` mode (default), the overall data set is divided into partit
 
 In the `REPLICATED` mode, all the data is replicated to every node in the cluster.
 
-See the link:developers-guide/data-modeling/data-partitioning#partitionedreplicated-mode[Partitioned/Replicated Mode] section for more details.
+See the link:data-modeling/data-partitioning#partitionedreplicated-mode[Partitioned/Replicated Mode] section for more details.
 | `PARTITIONED`
 
-| `writeSynchronizationMode` | Write synchronization mode. Refer to the link:developers-guide/configuring-caches/configuring-backups[Configuring Partition Backups] section. | `PRIMARY_SYNC`
+| `writeSynchronizationMode` | Write synchronization mode. Refer to the link:configuring-caches/configuring-backups[Configuring Partition Backups] section. | `PRIMARY_SYNC`
 
 |`rebalanceMode`
 a| This parameter controls the way the rebalancing process is performed. Possible values include:
@@ -50,11 +50,11 @@ a| This parameter controls the way the rebalancing process is performed. Possibl
 | `ASYNC`
 
 |`backups`
-|The number of link:developers-guide/data-modeling/data-partitioning#backup-partitions[backup partitions] for the cache.
+|The number of link:data-modeling/data-partitioning#backup-partitions[backup partitions] for the cache.
 | `0`
 
 |`partitionLossPolicy`
-| link:developers-guide/partition-loss-policy[Partition loss policy].
+| link:partition-loss-policy[Partition loss policy].
 | `IGNORE`
 
 |`readFromBackup`
@@ -73,7 +73,7 @@ tab:Java[]
 include::{javaCodeDir}/ConfiguringCaches.java[tag=cfg,indent=0]
 ----
 
-include::developers-guide/configuring-caches/configuration-overview.adoc[tag=params]
+include::configuring-caches/configuration-overview.adoc[tag=params]
 
 tab:C#/.NET[]
 [source,csharp]
@@ -130,6 +130,3 @@ tab:C++[unsupported]
 --
 
 Once the cache template is registered in the cluster, as shown in the code snippet above, you can use it to create another cache with the same configuration.
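
For illustration, a possible sketch of registering a template and then creating a cache from it (the template name `myCacheTemplate*` and the cache name are assumptions; check the template-matching rules for your Ignite version):

[source, java]
----
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;

Ignite ignite = Ignition.start();

// The trailing '*' turns this configuration into a template.
CacheConfiguration<Object, Object> template = new CacheConfiguration<>("myCacheTemplate*");
template.setBackups(1);

ignite.addCacheConfiguration(template);

// A cache whose name matches the template pattern picks up the template's settings.
ignite.createCache("myCacheTemplate-users");
----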
-
-
-
diff --git a/docs/_docs/developers-guide/configuring-caches/configuring-backups.adoc b/docs/_docs/configuring-caches/configuring-backups.adoc
similarity index 96%
rename from docs/_docs/developers-guide/configuring-caches/configuring-backups.adoc
rename to docs/_docs/configuring-caches/configuring-backups.adoc
index 50996b1..517aaa7 100644
--- a/docs/_docs/developers-guide/configuring-caches/configuring-backups.adoc
+++ b/docs/_docs/configuring-caches/configuring-backups.adoc
@@ -1,6 +1,6 @@
 = Configuring Partition Backups
 
-include::developers-guide/data-modeling/data-partitioning.adoc[tag=partition-backups]
+include::data-modeling/data-partitioning.adoc[tag=partition-backups]
 
 == Configuring Backups
 
diff --git a/docs/_docs/developers-guide/configuring-caches/expiry-policies.adoc b/docs/_docs/configuring-caches/expiry-policies.adoc
similarity index 100%
rename from docs/_docs/developers-guide/configuring-caches/expiry-policies.adoc
rename to docs/_docs/configuring-caches/expiry-policies.adoc
diff --git a/docs/_docs/developers-guide/configuring-caches/on-heap-caching.adoc b/docs/_docs/configuring-caches/on-heap-caching.adoc
similarity index 94%
rename from docs/_docs/developers-guide/configuring-caches/on-heap-caching.adoc
rename to docs/_docs/configuring-caches/on-heap-caching.adoc
index f53e978..7c93bd5 100644
--- a/docs/_docs/developers-guide/configuring-caches/on-heap-caching.adoc
+++ b/docs/_docs/configuring-caches/on-heap-caching.adoc
@@ -2,7 +2,7 @@
 
 Ignite uses off-heap memory to allocate memory regions outside of Java heap. However, you can enable on-heap caching  by setting `CacheConfiguration.setOnheapCacheEnabled(true)`.
 
-On-heap caching is useful in scenarios when you do a lot of cache reads on server nodes that work with cache entries in link:developers-guide/data-modeling/data-modeling#binary-object-format[binary form] or invoke cache entries' deserialization. For instance, this might happen when a distributed computation or deployed service gets some data from caches for further processing.
+On-heap caching is useful in scenarios when you do a lot of cache reads on server nodes that work with cache entries in link:data-modeling/data-modeling#binary-object-format[binary form] or invoke cache entries' deserialization. For instance, this might happen when a distributed computation or deployed service gets some data from caches for further processing.
 
 
 [tabs]
diff --git a/docs/_docs/administrators-guide/control-script.adoc b/docs/_docs/control-script.adoc
similarity index 96%
rename from docs/_docs/administrators-guide/control-script.adoc
rename to docs/_docs/control-script.adoc
index d853fc2..e713ae3 100644
--- a/docs/_docs/administrators-guide/control-script.adoc
+++ b/docs/_docs/control-script.adoc
@@ -52,14 +52,14 @@ If you want to connect to a node that is running on a remote machine, specify th
 
 == Activation, Deactivation and Topology Management
 
-You can use the control script to activate or deactivate your cluster, and manage the link:developers-guide/baseline-topology[Baseline Topology].
+You can use the control script to activate or deactivate your cluster, and manage the link:baseline-topology[Baseline Topology].
 
 include::includes/note-on-deactivation.adoc[]
 
 === Activating Cluster
 
 Activation sets the baseline topology of the cluster to the set of nodes available at the moment of activation.
-Activation is required only if you use link:developers-guide/persistence/native-persistence[native persistence].
+Activation is required only if you use link:persistence/native-persistence[native persistence].
 
 To activate the cluster, run the following command:
 
@@ -168,7 +168,7 @@ Execution time: 333 ms
 === Adding Nodes to Baseline Topology
 
 To add a node to the baseline topology, run the command given below.
-After the node is added, the link:developers-guide/data-rebalancing[rebalancing process] will start.
+After the node is added, the link:data-rebalancing[rebalancing process] will start.
 
 [tabs]
 --
@@ -245,7 +245,7 @@ control.bat --baseline version _topologyVersion_ [--yes]
 
 === Enabling Baseline Topology Autoadjustment
 
-link:developers-guide/baseline-topology#baseline-topology-autoadjustment[Baseline topology autoadjustment] refers to automatic update of baseline topology after the topology has been stable for a specific amount of time.
+link:baseline-topology#baseline-topology-autoadjustment[Baseline topology autoadjustment] refers to automatic update of baseline topology after the topology has been stable for a specific amount of time.
 
 For in-memory clusters, autoadjustment is enabled by default with the timeout set to 0. It means that baseline topology changes immediately after server nodes join or leave the cluster.
 For clusters with persistence, the automatic baseline adjustment is disabled by default.
@@ -413,7 +413,7 @@ control.sh|bat --cache list counter-.* --seq
 == Resetting Lost Partitions
 
 You can use the control script to reset lost partitions for specific caches.
-Refer to link:developers-guide/partition-loss-policy[Partition Loss Policy] for details.
+Refer to link:partition-loss-policy[Partition Loss Policy] for details.
 
 [source, shell]
 ----
diff --git a/docs/_docs/developers-guide/data-modeling/affinity-collocation.adoc b/docs/_docs/data-modeling/affinity-collocation.adoc
similarity index 91%
rename from docs/_docs/developers-guide/data-modeling/affinity-collocation.adoc
rename to docs/_docs/data-modeling/affinity-collocation.adoc
index f0d1d91..87bedbf 100644
--- a/docs/_docs/developers-guide/data-modeling/affinity-collocation.adoc
+++ b/docs/_docs/data-modeling/affinity-collocation.adoc
@@ -1,16 +1,16 @@
 = Affinity Colocation
 
-In many cases it is beneficial to colocate different entries if they are often accessed together. 
-In this way, multi-entry queries are executed on one node (where the objects are stored). 
+In many cases it is beneficial to colocate different entries if they are often accessed together.
+In this way, multi-entry queries are executed on one node (where the objects are stored).
 This concept is known as _affinity colocation_.
 
-Entries are assigned to partitions by the affinity function. 
-The objects that have the same affinity keys go to the same partitions. 
-This allows you to design your data model in such a way that related entries are stored together. 
+Entries are assigned to partitions by the affinity function.
+The objects that have the same affinity keys go to the same partitions.
+This allows you to design your data model in such a way that related entries are stored together.
 "Related" here refers to the objects that are in a parent-child relationship or objects that are often queried together.
 
-For example, let's say you have `Person` and `Company` objects, and each person has the `companyId` field that indicates the company the person works for. 
-By specifying the `Person.companyId` and `Company.ID` as affinity keys, you ensure that all the persons working for the same company are stored on the same node, where the company object is stored as well. 
+For example, let's say you have `Person` and `Company` objects, and each person has the `companyId` field that indicates the company the person works for.
+By specifying the `Person.companyId` and `Company.ID` as affinity keys, you ensure that all the persons working for the same company are stored on the same node, where the company object is stored as well.
 Queries that request persons working for a specific company are processed on a single node.
 
 ////
@@ -23,13 +23,13 @@ And here is how data is distributed when you colocate persons with the companies
 *TODO image*
 ////
 
-You can also colocate a computation task with the data. See link:developers-guide/collocated-computations[Colocating Computations With Data].
+You can also colocate a computation task with the data. See link:collocated-computations[Colocating Computations With Data].
 ////
 *TODO: add examples and use cases*
 ////
 == Configuring Affinity Key
 
-If you do not specify the affinity key explicitly, the cache key is used as the default affinity key. 
+If you do not specify the affinity key explicitly, the cache key is used as the default affinity key.
 If you create your caches as SQL tables using SQL statements, the PRIMARY KEY is the default affinity key.
 
 If you want to colocate data from two caches by a different field, you have to use a complex object as the key. That object usually contains a field that uniquely identifies the object in that cache and a field that you want to use for colocation.
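
Below is a sketch of such a key class for the `Person`/`Company` example above (the class and field names are illustrative):

[source, java]
----
import org.apache.ignite.cache.affinity.AffinityKeyMapped;

public class PersonKey {
    // Uniquely identifies the person within the cache.
    private long personId;

    // Colocation field: persons with the same companyId land in the same
    // partition as the Company object with that ID.
    @AffinityKeyMapped
    private long companyId;

    public PersonKey(long personId, long companyId) {
        this.personId = personId;
        this.companyId = companyId;
    }

    // Key classes should also implement equals() and hashCode().
}
----
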
diff --git a/docs/_docs/developers-guide/data-modeling/data-modeling.adoc b/docs/_docs/data-modeling/data-modeling.adoc
similarity index 88%
rename from docs/_docs/developers-guide/data-modeling/data-modeling.adoc
rename to docs/_docs/data-modeling/data-modeling.adoc
index bb4fbfc..603a3a9 100644
--- a/docs/_docs/developers-guide/data-modeling/data-modeling.adoc
+++ b/docs/_docs/data-modeling/data-modeling.adoc
@@ -9,10 +9,10 @@ In this chapter, we discuss important components of the Ignite data distribution
 
 To understand how data is stored and used in Ignite, it is useful to draw a distinction between the physical organization of data in a cluster and the logical representation of data, i.e. how users are going to view their data in their applications.
 
-On the physical level, each data entry (either cache entry or table row) is stored in the form of a <<Binary Object Format,binary object>>, and the entire data set is divided into smaller sets called _partitions_. The partitions are evenly distributed between all the nodes. The way data is divided into partitions and partitions into nodes is controlled by the  link:developers-guide/data-modeling/affinity-collocation[affinity function].
+On the physical level, each data entry (either cache entry or table row) is stored in the form of a <<Binary Object Format,binary object>>, and the entire data set is divided into smaller sets called _partitions_. The partitions are evenly distributed between all the nodes. The way data is divided into partitions and partitions into nodes is controlled by the  link:data-modeling/affinity-collocation[affinity function].
 
-On the logical level, data should be represented in a way that is easy to work with and convenient for end users to use in their applications. 
-Ignite provides two distinct logical representations of data: _key-value cache_ and _SQL tables (schema)_. 
+On the logical level, data should be represented in a way that is easy to work with and convenient for end users to use in their applications.
+Ignite provides two distinct logical representations of data: _key-value cache_ and _SQL tables (schema)_.
 Although these two representations may seem different, in reality they are equivalent and can represent the same set of data.
 
 IMPORTANT: Keep in mind that, in Ignite, the concepts of a SQL table and a key-value cache are two equivalent representations of the same (internal) data structure. You can access your data using either the key-value API or SQL statements, or both.
@@ -34,7 +34,7 @@ Cache API supports the following features:
 * Continuous Queries
 * Events
 
-NOTE: Even after you get your cluster up and running, you can create both key-value caches and SQL tables link:developers-guide/key-value-api/basic-cache-operations#creating-caches-dynamically[dynamically].
+NOTE: Even after you get your cluster up and running, you can create both key-value caches and SQL tables link:key-value-api/basic-cache-operations#creating-caches-dynamically[dynamically].
 
 == Binary Object Format
 
@@ -47,12 +47,12 @@ Ignite stores data entries in a specific format called _binary objects_. This se
 
 Binary objects can be used only when the default binary marshaller is used (i.e., no other marshaller is set in the configuration).
 
-For more information on how to configure and use binary objects, refer to the link:developers-guide/key-value-api/binary-objects[Working with Binary Objects] page.
+For more information on how to configure and use binary objects, refer to the link:key-value-api/binary-objects[Working with Binary Objects] page.
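
As a quick illustration, a cache can be accessed in binary form without deserializing the stored values (the cache name and field name below are assumptions):

[source, java]
----
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.binary.BinaryObject;

Ignite ignite = Ignition.start();

// Access an existing cache in binary form.
IgniteCache<Integer, BinaryObject> binaryCache = ignite.cache("personCache").withKeepBinary();

BinaryObject person = binaryCache.get(1);

// Read a single field without deserializing the whole object.
String name = person.field("name");
----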
 
 
 == Data Partitioning
 
-Data partitioning is a method of subdividing large sets of data into smaller chunks and distributing them between all server nodes in a balanced manner. Data partitioning is discussed at length in the link:developers-guide/data-modeling/data-partitioning[Data Partitioning] section.
+Data partitioning is a method of subdividing large sets of data into smaller chunks and distributing them between all server nodes in a balanced manner. Data partitioning is discussed at length in the link:data-modeling/data-partitioning[Data Partitioning] section.
 
 
 
diff --git a/docs/_docs/developers-guide/data-modeling/data-partitioning.adoc b/docs/_docs/data-modeling/data-partitioning.adoc
similarity index 86%
rename from docs/_docs/developers-guide/data-modeling/data-partitioning.adoc
rename to docs/_docs/data-modeling/data-partitioning.adoc
index 18680d6..0b249a6 100644
--- a/docs/_docs/developers-guide/data-modeling/data-partitioning.adoc
+++ b/docs/_docs/data-modeling/data-partitioning.adoc
@@ -2,25 +2,25 @@
 
 Data partitioning is a method of subdividing large sets of data into smaller chunks and distributing them between all server nodes in a balanced manner.
 
-Partitioning is controlled by the _affinity function_. 
-The affinity function determines the mapping between keys and partitions. 
-Each partition is identified by a number from a limited set (0 to 1023 by default). 
-The set of partitions is distributed between the server nodes available at the moment. 
-Thus, each key is mapped to a specific node and is stored on that node. 
+Partitioning is controlled by the _affinity function_.
+The affinity function determines the mapping between keys and partitions.
+Each partition is identified by a number from a limited set (0 to 1023 by default).
+The set of partitions is distributed between the server nodes available at the moment.
+Thus, each key is mapped to a specific node and is stored on that node.
 When the number of nodes in the cluster changes, the partitions are re-distributed — through a process called <<rebalancing,rebalancing>> — between the new set of nodes.
 
 image:images/partitioning.png[Data Partitioning]
 
-The affinity function takes the _affinity key_ as an argument. 
-The affinity key can be any field of the objects stored in the cache (any column in the SQL table). 
+The affinity function takes the _affinity key_ as an argument.
+The affinity key can be any field of the objects stored in the cache (any column in the SQL table).
 If the affinity key is not specified, the default key is used (in case of SQL tables, it is the PRIMARY KEY column).
 
-Partitioning boosts performance by distributing both read and write operations. 
-Moreover, you can design your data model in such a way that the data entries that are used together are stored together (i.e., in one partition). 
-When you request that data, only a small number of partitions is scanned. 
-This technique is called link:developers-guide/data-modeling/affinity-collocation[Affinity Colocation].
+Partitioning boosts performance by distributing both read and write operations.
+Moreover, you can design your data model in such a way that the data entries that are used together are stored together (i.e., in one partition).
+When you request that data, only a small number of partitions is scanned.
+This technique is called link:data-modeling/affinity-collocation[Affinity Colocation].
 
-Partitioning helps achieve linear scalability at virtually any scale. 
+Partitioning helps achieve linear scalability at virtually any scale.
 You can add more nodes to the cluster as your data set grows, and Ignite makes sure that the data is distributed "equally" among all the nodes.
 
 == Affinity Function
@@ -36,7 +36,7 @@ No data exchange happens between the remaining nodes.
 
 TODO:
 You can implement a custom affinity function if you want to control the way data is distributed in the cluster.
-See the link:developers-guide/advanced-topics/affinity-function[Affinity Function] section in Advanced Topics.
+See the link:advanced-topics/affinity-function[Affinity Function] section in Advanced Topics.
 
 ////////////////////////////////////////////////////////////////////////////////
 
@@ -47,8 +47,8 @@ When creating a cache or SQL table, you can choose between partitioned and repli
 
 === PARTITIONED
 
-In this mode, all partitions are split equally between all server nodes. 
-This mode is the most scalable distributed cache mode and allows you to store as much data as fits in the total memory (RAM and disk) available across all nodes. 
+In this mode, all partitions are split equally between all server nodes.
+This mode is the most scalable distributed cache mode and allows you to store as much data as fits in the total memory (RAM and disk) available across all nodes.
 Essentially, the more nodes you have, the more data you can store.
 
 Unlike the `REPLICATED` mode, where updates are expensive because every node in the cluster needs to be updated, with `PARTITIONED` mode, updates become cheap because only one primary node (and optionally 1 or more backup nodes) need to be updated for every key. However, reads are somewhat more expensive because only certain nodes have the data cached.
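
As a rough sketch, the mode can be chosen per cache (the cache names below are hypothetical):

[source, java]
----
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;

// Large, frequently updated data set: spread partitions across the cluster.
CacheConfiguration<Long, String> ordersCfg = new CacheConfiguration<>("orders");
ordersCfg.setCacheMode(CacheMode.PARTITIONED); // the default

// Small, mostly read-only dictionary: keep a full copy on every node.
CacheConfiguration<Integer, String> countriesCfg = new CacheConfiguration<>("countries");
countriesCfg.setCacheMode(CacheMode.REPLICATED);
----
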
@@ -80,16 +80,16 @@ By default, Ignite keeps a single copy of each partition (a single copy of the e
 
 IMPORTANT: By default, backups are disabled.
 
-Backup copies are configured per cache (table). 
-If you configure 2 backup copies, the cluster maintains 3 copies of each partition. 
-One of the partitions is called the _primary_ partition, and the other two are called _backup_ partitions. 
-By extension, the node that has the primary partition is called the _primary node for the keys stored in the partition_. 
+Backup copies are configured per cache (table).
+If you configure 2 backup copies, the cluster maintains 3 copies of each partition.
+One of the partitions is called the _primary_ partition, and the other two are called _backup_ partitions.
+By extension, the node that has the primary partition is called the _primary node for the keys stored in the partition_.
 The node with backup partitions is called the _backup node_.
 
-When a node with the primary partition for some key leaves the cluster, Ignite triggers the partition map exchange (PME) process. 
+When a node with the primary partition for some key leaves the cluster, Ignite triggers the partition map exchange (PME) process.
 PME labels one of the backup partitions (if they are configured) for the key as primary.
 
-Backup partitions increase the availability of your data, and in some cases, the speed of read operations, since Ignite reads data from backed-up partitions if they are available on the local node (this is the default behavior that can be disabled. See link:developers-guide/configuring-caches/configuration-overview#readfrombackup[Cache Configuration] for details.). However, they also increase memory consumption or the size of the persistent storage (if enabled).
+Backup partitions increase the availability of your data and, in some cases, the speed of read operations, since Ignite reads data from backup partitions if they are available on the local node (this is the default behavior, which can be disabled; see link:configuring-caches/configuration-overview#readfrombackup[Cache Configuration] for details). However, they also increase memory consumption or the size of the persistent storage (if enabled).
 
 //end::partition-backups[]
 
@@ -97,7 +97,7 @@ Backup partitions increase the availability of your data, and in some cases, the
 *TODO: draw a diagram that illustrates backup partition distribution*
 ////////////////////////////////////////////////////////////////////////////////
 
-NOTE: Backup partitions can be configured in PARTITIONED mode only. Refer to the link:developers-guide/configuring-caches/configuring-backups[Configuring Partition Backups] section.
+NOTE: Backup partitions can be configured in PARTITIONED mode only. Refer to the link:configuring-caches/configuring-backups[Configuring Partition Backups] section.
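
For example, a minimal sketch of keeping two backup copies for a hypothetical `accounts` cache:

[source, java]
----
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<Long, Double> accountsCfg = new CacheConfiguration<>("accounts");
accountsCfg.setCacheMode(CacheMode.PARTITIONED);

// One primary and two backup copies of every partition (three copies in total).
accountsCfg.setBackups(2);
----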
 
 == Partition Map Exchange
 Partition map exchange (PME) is a process of sharing information about partition distribution (partition map) across the cluster so that every node knows where to look for specific keys. PME is required whenever the partition distribution for any cache changes, for example, when new nodes are added to the topology or old nodes leave the topology (whether on user request or due to a failure).
@@ -117,10 +117,10 @@ The PME process works in the following way: The coordinator node requests from a
 *TODO: the information from the https://apacheignite.readme.io/docs/rebalancing[data rebalancing] page can be useful*
 ////
 
-Refer to the link:developers-guide/data-rebalancing[Data Rebalancing] page for details.
+Refer to the link:data-rebalancing[Data Rebalancing] page for details.
 
 == Partition Loss Policy
 
-It may happen that throughout the cluster’s lifecycle, some of the data partitions are lost due to the failure of some primary node and backup nodes that held a copy of the partitions. Such a situation leads to a partial data loss and needs to be addressed according to your use case. For detailed information about partition loss policies, see link:developers-guide/partition-loss-policy[Partition Loss Policy].
+It may happen that throughout the cluster’s lifecycle some data partitions are lost because both the primary node and all backup nodes that held a copy of those partitions have failed. Such a situation leads to partial data loss and needs to be addressed according to your use case. For detailed information about partition loss policies, see link:partition-loss-policy[Partition Loss Policy].
 
 
diff --git a/docs/_docs/developers-guide/data-rebalancing.adoc b/docs/_docs/data-rebalancing.adoc
similarity index 91%
rename from docs/_docs/developers-guide/data-rebalancing.adoc
rename to docs/_docs/data-rebalancing.adoc
index c38d4e2..703adbc 100644
--- a/docs/_docs/developers-guide/data-rebalancing.adoc
+++ b/docs/_docs/data-rebalancing.adoc
@@ -4,14 +4,14 @@
 
 When a new node joins the cluster, some of the partitions are relocated to the new node so that the data remains distributed equally in the cluster. This process is called _data rebalancing_.
 
-If an existing node permanently leaves the cluster and backups are not configured, you lose the partitions stored on this node. 
+If an existing node permanently leaves the cluster and backups are not configured, you lose the partitions stored on this node.
 When backups are configured, one of the backup copies of the lost partitions becomes a primary partition and the rebalancing process is initiated.
 
 [CAUTION]
 ====
-Data rebalancing is triggered by changes in the link:developers-guide/baseline-topology[Baseline Topology].
+Data rebalancing is triggered by changes in the link:baseline-topology[Baseline Topology].
 In pure in-memory clusters, the default behavior is to start rebalancing immediately when a node leaves or joins the cluster (the baseline topology changes automatically).
-In clusters with persistence, the baseline topology has to be changed manually (default behavior), or can be changed automatically when link:developers-guide/baseline-topology#baseline-topology-autoadjustment[automatic baseline adjustment] is enabled.
+In clusters with persistence, the baseline topology has to be changed manually (default behavior), or can be changed automatically when link:baseline-topology#baseline-topology-autoadjustment[automatic baseline adjustment] is enabled.
 ====
 
 Rebalancing is configured per cache.
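
For instance, a minimal sketch of making rebalancing synchronous for a hypothetical cache:

[source, java]
----
import org.apache.ignite.cache.CacheRebalanceMode;
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>("myCache");

// SYNC: the cache is not considered ready until rebalancing of its data is finished.
cfg.setRebalanceMode(CacheRebalanceMode.SYNC);
----
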
@@ -61,8 +61,8 @@ For example, if the cluster has two nodes and a single cache, all the cache's pa
 If the cluster has two nodes and two caches, then the caches will be re-balanced in-parallel *TODO*
 ////
 
-You can increase the number of threads that are taken from the system thread pool and used for rebalancing. 
-A system thread is taken from the pool every time a node needs to send a batch of data to a remote node or needs to process a batch that came from a remote node. 
+You can increase the number of threads that are taken from the system thread pool and used for rebalancing.
+A system thread is taken from the pool every time a node needs to send a batch of data to a remote node or needs to process a batch that came from a remote node.
 The thread is relinquished after the batch is processed.
 
 [tabs]
@@ -134,4 +134,4 @@ The following table lists the properties of `CacheConfiguration` related to reba
 
 == Monitoring Rebalancing Process
 
-You can monitor the link:administrators-guide/monitoring-metrics/metrics#monitoring-rebalancing[rebalancing process for specific caches using JMX].
+You can monitor the link:monitoring-metrics/metrics#monitoring-rebalancing[rebalancing process for specific caches using JMX].
diff --git a/docs/_docs/developers-guide/data-streaming.adoc b/docs/_docs/data-streaming.adoc
similarity index 96%
rename from docs/_docs/developers-guide/data-streaming.adoc
rename to docs/_docs/data-streaming.adoc
index 3e6a5a0..42530d6 100644
--- a/docs/_docs/developers-guide/data-streaming.adoc
+++ b/docs/_docs/data-streaming.adoc
@@ -4,7 +4,7 @@
 
 == Overview
 
-Ignite provides a Data Streaming API that can be used to inject large amounts of continuous streams of data into an Ignite cluster. 
+Ignite provides a Data Streaming API that can be used to inject large amounts of continuous streams of data into an Ignite cluster.
 The Data Streaming API is designed to be scalable and fault-tolerant, and provides _at-least-once_ delivery semantics for the data streamed into Ignite, meaning each entry is processed at least once.
 
 Data is streamed into a cache via a <<Data Streamers, data streamer>> associated with the cache. Data streamers automatically buffer the data and group it into batches for better performance and send it in parallel to multiple nodes.
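
A minimal sketch of basic streamer usage (the cache name and generated data are placeholders):

[source, java]
----
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

public class StreamerExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            ignite.getOrCreateCache("wordCache");

            // The streamer buffers entries and sends them to the cluster in batches.
            try (IgniteDataStreamer<Integer, String> streamer = ignite.dataStreamer("wordCache")) {
                for (int i = 0; i < 1_000_000; i++)
                    streamer.addData(i, Integer.toString(i));
            } // close() flushes any remaining buffered data.
        }
    }
}
----
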
@@ -15,7 +15,7 @@ The Data Streaming API provides the following features:
 * You can process the data concurrently in a colocated fashion.
 * Clients can perform concurrent SQL queries on the data as it is being streamed in.
 
-image:images/data-streaming.png[Data Streaming]
+image:images/data_streaming.png[Data Streaming]
 
 == Data Streamers
 A data streamer is associated with a specific cache and provides an interface for streaming data into the cache.
@@ -64,7 +64,7 @@ include::code-snippets/dotnet/DataStreaming.cs[tag=dataStreamer2,indent=0]
 tab:C++[unsupported]
 --
 
-NOTE: When `allowOverwrite` is set to `false` (default), the updates are not propagated to the link:developers-guide/persistence/external-storage[external storage] (if it is used).
+NOTE: When `allowOverwrite` is set to `false` (default), the updates are not propagated to the link:persistence/external-storage[external storage] (if it is used).
 
 == Processing Data
 In cases when you need to execute custom logic before adding new data, you can use a stream receiver.
@@ -95,7 +95,7 @@ NOTE: Note that a stream receiver does not put data into the cache automatically
 The class definitions of the stream receivers to be executed on remote nodes must be available on the nodes. This can be achieved in two ways:
 
 * Add the classes to the classpath of the nodes;
-* Enable link:developers-guide/peer-class-loading[peer class loading].
+* Enable link:peer-class-loading[peer class loading].
 ====
 
 === Stream Transformer
diff --git a/docs/_docs/developers-guide/clustering/clustering.adoc b/docs/_docs/developers-guide/clustering/clustering.adoc
deleted file mode 100644
index 3b72a5f..0000000
--- a/docs/_docs/developers-guide/clustering/clustering.adoc
+++ /dev/null
@@ -1,95 +0,0 @@
-= Clustering
-
-== Overview
-
-In this chapter, we discuss different ways nodes can discover each other to form a cluster.
-
-On start-up, a node is assigned either one of the two roles: _server node_ or _client node_.
-Server nodes are the workhorses of the cluster; they cache data, execute compute tasks, etc.
-Client nodes join the topology as regular nodes but they do not store data. Client nodes are used to stream data into the cluster and execute user queries.
-
-To form a cluster, each node must be able to connect to all other nodes. To ensure that, a proper <<Discovery Mechanisms,discovery mechanism>> must be configured.
-
-
-NOTE: In addition to client nodes, you can use Thin Clients to define and manipulate data in the cluster.
-Learn more about the thin clients in the link:developers-guide/thin-clients/getting-started-with-thin-clients[Thin Clients] section.
-
-
-image::images/ignite_clustering.png[Ignite Cluster]
-
-
-
-== Discovery Mechanisms
-
-Nodes can automatically discover each other and form a cluster.
-This allows you to scale out when needed without having to restart the whole cluster.
-Developers can also leverage Ignite's hybrid cloud support that allows establishing connection between private and public clouds such as Amazon Web Services, providing them with the best of both worlds.
-
-The discovery mechanism goes with two implementations intended for different usage scenarios:
-
-* link:developers-guide/clustering/tcp-ip-discovery[TCP/IP Discovery] is designed and optimized for 100s of nodes.
-* link:developers-guide/clustering/zookeeper-discovery[ZooKeeper Discovery] that allows scaling Ignite clusters to 100s and 1000s of nodes preserving linear scalability and performance.
-
-== Communication SPI
-
-`CommunicationSpi` provides basic plumbing to send and receive messages and is utilized for all distributed operations, such as task execution, monitoring data exchange, distributed event querying, and others.
-Ignite provides `TcpCommunicationSpi` as the default implementation of `CommunicationSpi`, that uses the TCP/IP to communicate with other nodes.
-
-To enable communication with other nodes, `TcpCommunicationSpi` adds `TcpCommunicationSpi.ATTR_ADDRS` and `TcpCommunicationSpi.ATTR_PORT` local node attributes.
-On start-up, this SPI tries to start listening to the local port specified by the `TcpCommunicationSpi.setLocalPort(int)` method.
-If the local port is occupied, then the SPI automatically increments the port number until it can successfully bind it.
-The `TcpCommunicationSpi.setLocalPortRange(int)` configuration parameter controls the maximum number of ports that the SPI tries before it fails.
-
-[TIP]
-====
-[discrete]
-=== Local port range
-
-Port range comes in very handy when starting multiple nodes on the
-same machine or even in the same VM. In this case, all nodes can be
-brought up without a single change in the configuration.
-====
-
-[TIP]
-====
-[discrete]
-=== IPv4 vs IPv6
-
-Ignite tries to support IPv4 and IPv6 but this can sometimes lead to issues where the cluster becomes detached. A possible solution — unless you require IPv6 — is to restrict Ignite to IPv4 via the `-Djava.net.preferIPv4Stack=true` JVM parameter.
-====
-
-Below is an example of `TcpCommunicationSpi` configuration. Refer to the
-link:{javadoc_base_url}/org/apache/ignite/spi/communication/tcp/TcpCommunicationSpi.html[TcpCommunicationSpi javadoc,window=_blank] for the complete list of parameters.
-
-[tabs]
---
-tab:XML[]
-[source,xml]
-----
-<bean class="org.apache.ignite.configuration.IgniteConfiguration">
-
-  <property name="communicationSpi">
-    <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
-      <!-- Override local port. -->
-      <property name="localPort" value="4321"/>
-    </bean>
-  </property>
-
-</bean>
-----
-tab:Java[]
-[source,java]
-----
-include::{javaCodeDir}/ClusteringOverview.java[tag=commSpi,indent=0]
-----
-tab:C#/.NET[]
-[source,csharp]
-----
-include::code-snippets/dotnet/ClusteringOverview.cs[tag=CommunicationSPI,indent=0]
-----
-tab:C++[unsupported]
---
-
-
-
-
diff --git a/docs/_docs/developers-guide/distributed-computing/cluster-groups.adoc b/docs/_docs/distributed-computing/cluster-groups.adoc
similarity index 100%
rename from docs/_docs/developers-guide/distributed-computing/cluster-groups.adoc
rename to docs/_docs/distributed-computing/cluster-groups.adoc
diff --git a/docs/_docs/developers-guide/distributed-computing/distributed-computing.adoc b/docs/_docs/distributed-computing/distributed-computing.adoc
similarity index 94%
rename from docs/_docs/developers-guide/distributed-computing/distributed-computing.adoc
rename to docs/_docs/distributed-computing/distributed-computing.adoc
index b74fa5d..983a713 100644
--- a/docs/_docs/developers-guide/distributed-computing/distributed-computing.adoc
+++ b/docs/_docs/distributed-computing/distributed-computing.adoc
@@ -2,7 +2,7 @@
 
 :javaFile: {javaCodeDir}/DistributedComputing.java
 
-Ignite provides an API for distributing computations across cluster nodes in a balanced and fault-tolerant manner. You can submit individual tasks for execution as well as implement the MapReduce pattern with automatic task splitting. The API provides fine-grained control over the link:developers-guide/distributed-computing/load-balancing[job distribution strategy].
+Ignite provides an API for distributing computations across cluster nodes in a balanced and fault-tolerant manner. You can submit individual tasks for execution as well as implement the MapReduce pattern with automatic task splitting. The API provides fine-grained control over the link:distributed-computing/load-balancing[job distribution strategy].
 
 
 ////
@@ -34,11 +34,11 @@ include::code-snippets/cpp/src/compute_get.cpp[tag=compute-get,indent=0]
 ----
 --
 
-The compute interface provides methods for distributing different types of tasks over cluster nodes and running link:developers-guide/collocated-computations[colocated computations].
+The compute interface provides methods for distributing different types of tasks over cluster nodes and running link:collocated-computations[colocated computations].
 
 == Specifying the Set of Nodes for Computations
 
-Each instance of the compute interface is associated with a link:developers-guide/distributed-computing/cluster-groups[set of nodes] on which the tasks are executed.
+Each instance of the compute interface is associated with a link:distributed-computing/cluster-groups[set of nodes] on which the tasks are executed.
 When called without arguments, `ignite.compute()` returns the compute interface that is associated with all server nodes.
 To obtain an instance for a specific subset of nodes, use `Ignite.compute(ClusterGroup group)`.
 In the following example, the compute interface is bound to the remote nodes only, i.e. all nodes except for the one that runs this code.
@@ -76,7 +76,7 @@ In order to run tasks on the remote nodes, make sure the class definitions of th
 You can do this in two ways:
 
 - Add the classes to the classpath of the nodes;
-- Enable link:developers-guide/peer-class-loading[peer class loading].
+- Enable link:peer-class-loading[peer class loading].
 ====
 
 === Executing a Runnable Task
@@ -283,7 +283,7 @@ include::code-snippets/cpp/src/compute_acessing_data.cpp[tag=compute-acessing-da
 
 Note that the example shown above may not be the most effective way.
 The reason is that the person object that corresponds to key `1` may be located on a node that is different from the node where the task is executed.
-In this case, the object is fetched via network. This can be avoided by link:developers-guide/collocated-computations[colocating the task with the data].
+In this case, the object is fetched via network. This can be avoided by link:collocated-computations[colocating the task with the data].
 
 [CAUTION]
 ====
@@ -297,7 +297,7 @@ If you want to use the key and value objects inside `IgniteCallable` and `Ignite
 
 In the cases where you do not need to colocate computations with data but simply want to process all data remotely, you can run local cache queries inside the `call()` method. Consider the following example.
 
-Let's say we have a cache that stores information about persons and we want to calculate the average age of all persons. One way to accomplish this is to run a link:developers-guide/key-value-api/querying[scan query] that will fetch the ages of all persons to the local node, where you can calculate the average age.
+Let's say we have a cache that stores information about persons and we want to calculate the average age of all persons. One way to accomplish this is to run a link:key-value-api/querying[scan query] that will fetch the ages of all persons to the local node, where you can calculate the average age.
 
 A more efficient way, however, is to avoid network calls to other nodes by running the query locally on each remote node and aggregating the result on the local node.
 
diff --git a/docs/_docs/developers-guide/distributed-computing/executor-service.adoc b/docs/_docs/distributed-computing/executor-service.adoc
similarity index 86%
rename from docs/_docs/developers-guide/distributed-computing/executor-service.adoc
rename to docs/_docs/distributed-computing/executor-service.adoc
index 37b236c..3151c1f 100644
--- a/docs/_docs/developers-guide/distributed-computing/executor-service.adoc
+++ b/docs/_docs/distributed-computing/executor-service.adoc
@@ -2,7 +2,7 @@
 
 :javaFile: {javaCodeDir}/IgniteExecutorService.java
 
-Ignite provides a distributed implementation of `java.util.concurrent.ExecutorService` that submits tasks to a cluster's server nodes for execution. 
+Ignite provides a distributed implementation of `java.util.concurrent.ExecutorService` that submits tasks to a cluster's server nodes for execution.
 The tasks are load balanced across the cluster nodes and are guaranteed to be executed as long as there is at least one node in the cluster.
 
 ////
@@ -15,7 +15,7 @@ An executor service can be obtained from an instance of `Ignite`:
 include::{javaFile}[tag=execute,indent=0]
 ----
 
-You can also limit the set of nodes available for the executor service by specifying a link:developers-guide/distributed-computing/cluster-groups[cluster group]:
+You can also limit the set of nodes available for the executor service by specifying a link:distributed-computing/cluster-groups[cluster group]:
 
 [source, java]
 -------------------------------------------------------------------------------
diff --git a/docs/_docs/developers-guide/distributed-computing/fault-tolerance.adoc b/docs/_docs/distributed-computing/fault-tolerance.adoc
similarity index 88%
rename from docs/_docs/developers-guide/distributed-computing/fault-tolerance.adoc
rename to docs/_docs/distributed-computing/fault-tolerance.adoc
index 9eed733..f0c545d 100644
--- a/docs/_docs/developers-guide/distributed-computing/fault-tolerance.adoc
+++ b/docs/_docs/distributed-computing/fault-tolerance.adoc
@@ -1,8 +1,8 @@
 = Fault Tolerance
 :javaFile: {javaCodeDir}/FaultTolerance.java
 
-Ignite supports automatic job failover. 
-In case of a node crash, jobs are automatically transferred to other available nodes for re-execution. 
+Ignite supports automatic job failover.
+In case of a node crash, jobs are automatically transferred to other available nodes for re-execution.
 As long as there is at least one node standing, no job is ever lost.
 
 The global failover strategy is controlled by the `IgniteConfiguration.failoverSpi` property.
@@ -48,4 +48,4 @@ tab:C#/.NET[unsupported]
 tab:C++[unsupported]
 --
 
-* `JobStealingFailoverSpi` — This implementation must be used only if you want to enable link:developers-guide/distributed-computing/load-balancing#job-stealing[job stealing].
+* `JobStealingFailoverSpi` — This implementation must be used only if you want to enable link:distributed-computing/load-balancing#job-stealing[job stealing].
diff --git a/docs/_docs/developers-guide/distributed-computing/job-scheduling.adoc b/docs/_docs/distributed-computing/job-scheduling.adoc
similarity index 81%
rename from docs/_docs/developers-guide/distributed-computing/job-scheduling.adoc
rename to docs/_docs/distributed-computing/job-scheduling.adoc
index 8058816..cf9cadd 100644
--- a/docs/_docs/developers-guide/distributed-computing/job-scheduling.adoc
+++ b/docs/_docs/distributed-computing/job-scheduling.adoc
@@ -2,15 +2,15 @@
 
 :javaFile: {javaCodeDir}/JobScheduling.java
 
-When jobs arrive at the destination node, they are submitted to a thread pool and scheduled for execution in random order. 
-However, you can change job ordering by configuring `CollisionSpi`. 
+When jobs arrive at the destination node, they are submitted to a thread pool and scheduled for execution in random order.
+However, you can change job ordering by configuring `CollisionSpi`.
 The `CollisionSpi` interface provides a way to control how jobs are scheduled for processing on each node.
 
 Ignite provides several implementations of the `CollisionSpi` interface:
 
 - `FifoQueueCollisionSpi` — simple FIFO ordering in multiple threads. This implementation is used by default;
 - `PriorityQueueCollisionSpi` — priority ordering;
-- `JobStealingFailoverSpi` — use this implementation to enable link:developers-guide/distributed-computing/load-balancing#job-stealing[job stealing].
+- `JobStealingFailoverSpi` — use this implementation to enable link:distributed-computing/load-balancing#job-stealing[job stealing].
 
 To enable a specific collision spi, change the `IgniteConfiguration.collisionSpi` property.
 
@@ -54,7 +54,7 @@ tab:C#/.NET[unsupported]
 tab:C++[unsupported]
 --
 
-Task priorities are set in the link:developers-guide/distributed-computing/map-reduce#distributed-task-session[task session] via the `grid.task.priority` attribute. If no priority is assigned to a task, then the default priority of 0 is used.
+Task priorities are set in the link:distributed-computing/map-reduce#distributed-task-session[task session] via the `grid.task.priority` attribute. If no priority is assigned to a task, then the default priority of 0 is used.
 
 
 [source, java]
diff --git a/docs/_docs/developers-guide/distributed-computing/load-balancing.adoc b/docs/_docs/distributed-computing/load-balancing.adoc
similarity index 78%
rename from docs/_docs/developers-guide/distributed-computing/load-balancing.adoc
rename to docs/_docs/distributed-computing/load-balancing.adoc
index 8899e0b..a357296 100644
--- a/docs/_docs/developers-guide/distributed-computing/load-balancing.adoc
+++ b/docs/_docs/distributed-computing/load-balancing.adoc
@@ -2,7 +2,7 @@
 
 :javaFile: {javaCodeDir}/LoadBalancing.java
 
-Ignite automatically load balances jobs produced by a link:developers-guide/distributed-computing/map-reduce[compute task] as well as individual tasks submitted via the distributed computing API. Individual tasks submitted via `IgniteCompute.run(...)` and other compute methods are treated as tasks producing a single job.
+Ignite automatically load balances jobs produced by a link:distributed-computing/map-reduce[compute task] as well as individual tasks submitted via the distributed computing API. Individual tasks submitted via `IgniteCompute.run(...)` and other compute methods are treated as tasks producing a single job.
 
 ////////////////////////////////////////////////////////////////////////////////
 
@@ -18,14 +18,14 @@ By default, Ignite uses a round-robin algorithm (`RoundRobinLoadBalancingSpi`),
 
 [NOTE]
 ====
-Load balancing does not apply to link:developers-guide/collocated-computations[colocated computations].
+Load balancing does not apply to link:collocated-computations[colocated computations].
 ====
 
 The load balancing algorithm is controlled by the `IgniteConfiguration.loadBalancingSpi` property.
 
 == Round-Robin Load Balancing
 
-`RoundRobinLoadBalancingSpi` iterates through the available nodes in a round-robin fashion and picks the next sequential node. The available nodes are defined when you link:developers-guide/distributed-computing/distributed-computing#getting-the-compute-interface[get the compute instance] through which you execute your tasks.
+`RoundRobinLoadBalancingSpi` iterates through the available nodes in a round-robin fashion and picks the next sequential node. The available nodes are defined when you link:distributed-computing/distributed-computing#getting-the-compute-interface[get the compute instance] through which you execute your tasks.
 
 Round-Robin load balancing supports two modes of operation: per-task and global.
 
@@ -80,12 +80,18 @@ tab:C++[unsupported]
 
 == Job Stealing
 
-Quite often grids are deployed across many computers some of which may be more powerful or under-utilized than others. Enabling `JobStealingCollisionSpi` helps avoid jobs being stuck at an over-utilized node, as they will be stolen by an under-utilized node.
+Quite often clusters are deployed across many computers, some of which may be more powerful than others or under-utilized. Enabling `JobStealingCollisionSpi` helps avoid jobs being stuck at an over-utilized node, as they will be stolen by an under-utilized node.
 
 `JobStealingCollisionSpi` supports job stealing from over-utilized nodes to under-utilized nodes. This SPI is especially useful if you have some jobs that complete quickly, while others are sitting in the waiting queue on over-utilized nodes. In such a case, the waiting jobs will be stolen from the slower node and moved to the fast/under-utilized node.
 
 `JobStealingCollisionSpi` adopts a "late" load balancing technique, which allows reassigning a job from node A to node B after the job has been scheduled for execution on node A.
 
+[IMPORTANT]
+====
+If you want to enable job stealing, you have to configure `JobStealingFailoverSpi` as the failover SPI. See link:distributed-computing/fault-tolerance[Fault Tolerance] for details.
+====
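
A rough sketch of wiring both SPIs together in the node configuration (default settings assumed for both):

[source, java]
----
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.collision.jobstealing.JobStealingCollisionSpi;
import org.apache.ignite.spi.failover.jobstealing.JobStealingFailoverSpi;

IgniteConfiguration cfg = new IgniteConfiguration();

// Let under-utilized nodes steal queued jobs from over-utilized ones.
cfg.setCollisionSpi(new JobStealingCollisionSpi());

// Required for job stealing: re-routes the stolen jobs to the new node.
cfg.setFailoverSpi(new JobStealingFailoverSpi());
----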
+
+
 Here is an example of how to configure `JobStealingCollisionSpi`:
 
 [tabs]
@@ -105,7 +111,3 @@ tab:C++[unsupported]
 --
 
 
-[IMPORTANT]
-====
-If you want to enable job stealing, you have to configure `org.apache.ignite.spi.failover.jobstealing.JobStealingFailoverSpi`.
-====
diff --git a/docs/_docs/developers-guide/distributed-computing/map-reduce.adoc b/docs/_docs/distributed-computing/map-reduce.adoc
similarity index 91%
rename from docs/_docs/developers-guide/distributed-computing/map-reduce.adoc
rename to docs/_docs/distributed-computing/map-reduce.adoc
index 428cd99..55df1c6 100644
--- a/docs/_docs/developers-guide/distributed-computing/map-reduce.adoc
+++ b/docs/_docs/distributed-computing/map-reduce.adoc
@@ -11,7 +11,7 @@ with each job executed separately. The results produced by each job are
 aggregated into the final results (the reducing phase).
 
 In a distributed system such as Ignite, the jobs are distributed between
-the nodes according to the preconfigured link:developers-guide/distributed-computing/load-balancing[load balancing strategy] and the results are aggregated on the node that submitted the task.
+the nodes according to the preconfigured link:distributed-computing/load-balancing[load balancing strategy] and the results are aggregated on the node that submitted the task.
 
 The MapReduce pattern is provided by the `ComputeTask` interface.
 
@@ -19,7 +19,7 @@ The MapReduce pattern is provided by the `ComputeTask` interface.
 ====
 Use `ComputeTask` only when you need fine-grained control over the
 job-to-node mapping, or custom fail-over logic. For all other cases you
-should use link:developers-guide/distributed-computing/distributed-computing#executing-an-igniteclosure[simple closures].
+should use link:distributed-computing/distributed-computing#executing-an-igniteclosure[simple closures].
 ====
 
 == Understanding Compute Task Interface
@@ -32,7 +32,7 @@ The `result()` method is called after completion of each job and returns an inst
 
 - `WAIT` - wait for all remaining jobs to complete (if any);
 - `REDUCE` - immediately move to the reduce step, discarding all the remaining jobs and results not yet received;
-- `FAILOVER` - failover the job to another node (see link:developers-guide/distributed-computing/fault-tolerance[Fault Tolerance]).
+- `FAILOVER` - failover the job to another node (see link:distributed-computing/fault-tolerance[Fault Tolerance]).
 
 The `reduce()` method is called during the reduce step, when all the jobs have completed (or the `result()` method returned the `REDUCE` result policy for a particular job). The method receives a list with all completed results and returns the final result of the computation.
 
@@ -59,7 +59,7 @@ include::code-snippets/dotnet/MapReduceApi.cs[tag=mapReduceComputeTask,indent=0]
 tab:C++[unsupported]
 --
 
-You can limit the execution of jobs to a subset of nodes by using a link:developers-guide/distributed-computing/cluster-groups[cluster group].
+You can limit the execution of jobs to a subset of nodes by using a link:distributed-computing/cluster-groups[cluster group].
 
 
 == Handling Job Failures
diff --git a/docs/_docs/developers-guide/events/events.adoc b/docs/_docs/events/events.adoc
similarity index 88%
rename from docs/_docs/developers-guide/events/events.adoc
rename to docs/_docs/events/events.adoc
index 6dab863..6c20477 100644
--- a/docs/_docs/developers-guide/events/events.adoc
+++ b/docs/_docs/events/events.adoc
@@ -7,8 +7,7 @@
 
 This page describes different event types, when and where they are generated, and how you can use them.
 
-You can always find the most complete and up to date list of events in the javadocs: link:{javadoc_base_url}/org/apache/ignite/events/EventType.html[Ignite events, window=_blank].
-
+You can always find the most complete and up to date list of events in the javadoc:org.apache.ignite.events.EventType[] javadoc.
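
A minimal sketch of how events can be consumed locally (event types must be enabled explicitly; the cache name is an example):

[source, java]
----
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.events.CacheEvent;
import org.apache.ignite.events.EventType;
import org.apache.ignite.lang.IgnitePredicate;

public class EventListenExample {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Events are disabled by default; enable only the types you need.
        cfg.setIncludeEventTypes(EventType.EVT_CACHE_OBJECT_PUT);

        try (Ignite ignite = Ignition.start(cfg)) {
            // Return true from the predicate to keep the listener subscribed.
            IgnitePredicate<CacheEvent> listener = evt -> {
                System.out.println("Put: key=" + evt.key());
                return true;
            };

            ignite.events().localListen(listener, EventType.EVT_CACHE_OBJECT_PUT);

            ignite.getOrCreateCache("test").put(1, "value");
        }
    }
}
----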
 
 == General Information
 
@@ -49,30 +48,6 @@ Some events contain the `subjectID` field, which represents the ID of the entity
 
 Check the specific event class to learn if the `subjectID` field is present.
 
-This capability can be used for link:administrators-guide/security/auditing-events[auditing purposes].
-
-
-////
-== Node Lifecycle Events
-
-Refer to the lifecycle section.
-
-[cols="2,5",opts="header"]
-|===
-|Event Type | Event Description
-|BEFORE_NODE_START
-|Invoked before Ignite node startup routine is initiated.
-
-|AFTER_NODE_START
-|Invoked right after Ignite node has started.
-
-|BEFORE_NODE_STOP
-|Invoked right before Ignite stop routine is initiated.
-
-|AFTER_NODE_STOP
-|Invoked right after Ignite node has stopped.
-|===
-////
 
 == Cluster Activation Events
 
@@ -138,9 +113,9 @@ Cache events are also generated when you use DML commands.
 
 | EVT_CACHE_OBJECT_READ
 | An object is read from a cache.
-This event is not emitted when you use link:developers-guide/key-value-api/using-scan-queries[scan queries] (use <<Cache Query Events>> to monitor scan queries).
+This event is not emitted when you use link:key-value-api/using-scan-queries[scan queries] (use <<Cache Query Events>> to monitor scan queries).
 | The node where read operation is executed.
-It can be either the primary or backup node (the latter case is only possible when link:developers-guide/configuring-caches/configuration-overview#readfrombackup[reading from backups is enabled]).
+It can be either the primary or backup node (the latter case is only possible when link:configuring-caches/configuration-overview#readfrombackup[reading from backups is enabled]).
 In transactional caches, the event can be generated on both the primary and backup nodes depending on the concurrency and isolation levels.
 
 | EVT_CACHE_OBJECT_REMOVED | An object is removed from a cache. |The primary and backup nodes for the entry.
@@ -153,13 +128,13 @@ User actions that acquire a lock include the following cases:
 * The user explicitly acquires a lock by calling `IgniteCache.lock()` or `IgniteCache.lockAll()`.
 * A lock is acquired for every atomic (non-transactional) data modifying operation (put, update, remove).
 In this case, the event is triggered on both primary and backup nodes for the key.
-* Locks are acquired on the keys accessed within a transaction (depending on the link:developers-guide/key-value-api/transactions#concurrency-modes-and-isolation-levels[concurrency and isolation levels]).
+* Locks are acquired on the keys accessed within a transaction (depending on the link:key-value-api/transactions#concurrency-modes-and-isolation-levels[concurrency and isolation levels]).
 
-| The primary or/and backup nodes for the entry depending on the link:developers-guide/key-value-api/transactions#concurrency-modes-and-isolation-levels[concurrency and isolation levels].
+| The primary or/and backup nodes for the entry depending on the link:key-value-api/transactions#concurrency-modes-and-isolation-levels[concurrency and isolation levels].
 
 | EVT_CACHE_OBJECT_UNLOCKED | A lock on a key is released. | The primary node for the entry.
 
-| EVT_CACHE_OBJECT_EXPIRED | The event is fired when a cache entry expires. This happens only if an link:developers-guide/configuring-caches/expiry-policies[expiry policy] is configured.  | The primary and backup nodes for the entry.
+| EVT_CACHE_OBJECT_EXPIRED | The event is fired when a cache entry expires. This happens only if an link:configuring-caches/expiry-policies[expiry policy] is configured.  | The primary and backup nodes for the entry.
 | EVT_CACHE_ENTRY_CREATED | This event is triggered when Ignite creates an internal entry for working with a specific object from a cache. We don't recommend using this event. If you want to monitor cache put operations, the `EVT_CACHE_OBJECT_PUT` event should be enough for most cases. | The primary and backup nodes for the entry.
 
 | EVT_CACHE_ENTRY_DESTROYED
@@ -181,7 +156,7 @@ There are two types of events that are related to cache queries:
 [cols="2,5,3",opts="header"]
 |===
 | Event Type | Event Description | Where Event Is Fired
-| EVT_CACHE_QUERY_OBJECT_READ | An object is read as part of a query execution. This event is generated for every object that matches the link:developers-guide/key-value-api/using-scan-queries#executing-scan-queries[query filter]. | The primary node of the object that is read.
+| EVT_CACHE_QUERY_OBJECT_READ | An object is read as part of a query execution. This event is generated for every object that matches the link:key-value-api/using-scan-queries#executing-scan-queries[query filter]. | The primary node of the object that is read.
 | EVT_CACHE_QUERY_EXECUTED  |  This event is generated when a query is executed. | All server nodes that host the cache.
 |===
 
@@ -235,8 +210,8 @@ Discovery events are instances of the link:{events_url}/DiscoveryEvent.html[Disc
 
 == Task Execution Events
 
-Task execution events are associated with different stages of link:developers-guide/distributed-computing/map-reduce[task execution].
-They are also generated when you execute link:developers-guide/distributed-computing/distributed-computing[simple closures] because internally a closure is treated as a task that produces a single job.
+Task execution events are associated with different stages of link:distributed-computing/map-reduce[task execution].
+They are also generated when you execute link:distributed-computing/distributed-computing[simple closures] because internally a closure is treated as a task that produces a single job.
 
 ////
 This is what happens when you execute a task through the compute interface:
@@ -257,7 +232,7 @@ Task Execution events are instances of the link:{events_url}/TaskEvent.html[Task
 | EVT_TASK_FINISHED | The execution of the task finishes. | The node where the task was started.
 | EVT_TASK_FAILED | The task failed  | The node where the task was started.
 | EVT_TASK_TIMEDOUT |  The execution of the task timed out. This can happen when `Ignite.compute().withTimeout(...)` is used to execute tasks. When a task times out, it cancels all jobs that are being executed. It also generates the `EVT_TASK_FAILED` event.| The node where the task was started.
-| EVT_TASK_SESSION_ATTR_SET | A job sets an attribute in the link:developers-guide/distributed-computing/map-reduce#distributed-task-session[session]. | The node where the job is executed.
+| EVT_TASK_SESSION_ATTR_SET | A job sets an attribute in the link:distributed-computing/map-reduce#distributed-task-session[session]. | The node where the job is executed.
 |===
 
 {sp}+
@@ -286,7 +261,7 @@ The event contains information about the task that produced the job (task name,
 
 | EVT_JOB_TIMEDOUT | The job timed out. |
 
-| EVT_JOB_REJECTED | The job is rejected. The job can be rejected if a link:developers-guide/distributed-computing/job-scheduling[collision spi] is configured. | The node where the job is rejected.
+| EVT_JOB_REJECTED | The job is rejected. The job can be rejected if a link:distributed-computing/job-scheduling[collision spi] is configured. | The node where the job is rejected.
 
 | EVT_JOB_CANCELLED | The job was cancelled. | The node where the job is being executed.
 |===
@@ -339,7 +314,7 @@ They allow you to get notification about different stages of transaction executi
 == Management Task Events
 
 Management task events represent the tasks that are executed by Visor or Web Console.
-This event type can be used to monitor a link:administrators-guide/security/cluster-monitor-audit[Web Console activity].
+This event type can be used to monitor link:security/cluster-monitor-audit[Web Console activity].
 
 [cols="2,5,3",opts="header"]
 |===
diff --git a/docs/_docs/developers-guide/events/listening-to-events.adoc b/docs/_docs/events/listening-to-events.adoc
similarity index 90%
rename from docs/_docs/developers-guide/events/listening-to-events.adoc
rename to docs/_docs/events/listening-to-events.adoc
index 33dfa1a..df5758c 100644
--- a/docs/_docs/developers-guide/events/listening-to-events.adoc
+++ b/docs/_docs/events/listening-to-events.adoc
@@ -5,7 +5,7 @@
 == Overview
 Ignite can generate events for a variety of operations happening in the cluster and notify your application about those operations. There are many types of events, including cache events, node discovery events, distributed task execution events, and many more.
 
-The list of events is available in the link:developers-guide/events/events[Events] section.
+The list of events is available in the link:events/events[Events] section.
 
 == Enabling Events
 By default, events are disabled, and you have to enable each event type explicitly if you want to use it in your application.
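
For instance, cache put and remove events can be enabled through `IgniteConfiguration.setIncludeEventTypes(...)` before the node starts (a minimal sketch, not taken from this commit):

[source, java]
----
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.events.EventType;

public class EnableEvents {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Only the event types listed here are recorded by the node.
        cfg.setIncludeEventTypes(
            EventType.EVT_CACHE_OBJECT_PUT,
            EventType.EVT_CACHE_OBJECT_REMOVED);

        Ignite ignite = Ignition.start(cfg);
    }
}
----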
@@ -51,7 +51,7 @@ include::code-snippets/dotnet/WorkingWithEvents.cs[tag=gettingEventsInterface1,i
 tab:C++[unsupported]
 --
 
-The events interface can be associated with a link:developers-guide/distributed-computing/cluster-groups[set of nodes]. This means that you can access events that happen on a given set of nodes. In the following example, the events interface is obtained for the set of nodes that host the data for the Person cache.
+The events interface can be associated with a link:distributed-computing/cluster-groups[set of nodes]. This means that you can access events that happen on a given set of nodes. In the following example, the events interface is obtained for the set of nodes that host the data for the Person cache.
 
 [tabs]
 --
@@ -75,7 +75,7 @@ You can listen to either local or remote events. Local events are events that ar
 
 Note that some events may be fired on multiple nodes even if the corresponding real-world event happens only once. For example, when a node leaves the cluster, the `EVT_NODE_LEFT` event is generated on every remaining node.
 
-Another example is when you put an object into a cache. In this case, the `EVT_CACHE_OBJECT_PUT` event occurs on the node that hosts the link:developers-guide/data-modeling/data-partitioning#backup-partitions[primary partition] into which the object is actually written, which may be different from the node where the `put(...)` method is called. In addition, the event is fired on all nodes that hold the link:developers-guide/data-modeling/data-partitioning#backup-partitions[backup partiti [...]
+Another example is when you put an object into a cache. In this case, the `EVT_CACHE_OBJECT_PUT` event occurs on the node that hosts the link:data-modeling/data-partitioning#backup-partitions[primary partition] into which the object is actually written, which may be different from the node where the `put(...)` method is called. In addition, the event is fired on all nodes that hold the link:data-modeling/data-partitioning#backup-partitions[backup partitions] for the cache if they are con [...]
 
 The events interface provides methods for listening to local events only, and for listening to both local and remote events.
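
As a rough illustration of local listening (a sketch only; it assumes a node has already been started with `EVT_CACHE_OBJECT_PUT` enabled, as shown earlier):

[source, java]
----
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.events.CacheEvent;
import org.apache.ignite.events.EventType;
import org.apache.ignite.lang.IgnitePredicate;

public class LocalEventListener {
    public static void main(String[] args) {
        // Assumes the default Ignite instance is already running in this JVM.
        Ignite ignite = Ignition.ignite();

        // The listener is invoked on the local node for every matching event.
        IgnitePredicate<CacheEvent> localListener = evt -> {
            System.out.println("Put event: key=" + evt.key() + ", newValue=" + evt.newValue());
            return true; // return true to keep listening
        };

        ignite.events().localListen(localListener, EventType.EVT_CACHE_OBJECT_PUT);
    }
}
----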
 
diff --git a/docs/_docs/images/collocated_joins.png b/docs/_docs/images/collocated_joins.png
index 6568f9d..04fffaa 100644
Binary files a/docs/_docs/images/collocated_joins.png and b/docs/_docs/images/collocated_joins.png differ
diff --git a/docs/_docs/images/data_streaming.png b/docs/_docs/images/data_streaming.png
new file mode 100644
index 0000000..c407447
Binary files /dev/null and b/docs/_docs/images/data_streaming.png differ
diff --git a/docs/_docs/images/ignite_clustering.png b/docs/_docs/images/ignite_clustering.png
new file mode 100644
index 0000000..25edce7
Binary files /dev/null and b/docs/_docs/images/ignite_clustering.png differ
diff --git a/docs/_docs/images/jconsole.png b/docs/_docs/images/jconsole.png
new file mode 100644
index 0000000..120a309
Binary files /dev/null and b/docs/_docs/images/jconsole.png differ
diff --git a/docs/_docs/images/network_segmentation.png b/docs/_docs/images/network_segmentation.png
new file mode 100644
index 0000000..26876b0
Binary files /dev/null and b/docs/_docs/images/network_segmentation.png differ
diff --git a/docs/_docs/images/non_collocated_joins.png b/docs/_docs/images/non_collocated_joins.png
new file mode 100644
index 0000000..7e30eb2
Binary files /dev/null and b/docs/_docs/images/non_collocated_joins.png differ
diff --git a/docs/_docs/images/partitioned_cache.png b/docs/_docs/images/partitioned_cache.png
new file mode 100644
index 0000000..0dab468
Binary files /dev/null and b/docs/_docs/images/partitioned_cache.png differ
diff --git a/docs/_docs/images/replicated_cache.png b/docs/_docs/images/replicated_cache.png
new file mode 100644
index 0000000..89f19aa
Binary files /dev/null and b/docs/_docs/images/replicated_cache.png differ
diff --git a/docs/_docs/images/segmentation_resolved.png b/docs/_docs/images/segmentation_resolved.png
new file mode 100644
index 0000000..b28d6d2
Binary files /dev/null and b/docs/_docs/images/segmentation_resolved.png differ
diff --git a/docs/_docs/images/split_brain.png b/docs/_docs/images/split_brain.png
new file mode 100644
index 0000000..a49c986
Binary files /dev/null and b/docs/_docs/images/split_brain.png differ
diff --git a/docs/_docs/images/split_brain_resolved.png b/docs/_docs/images/split_brain_resolved.png
new file mode 100644
index 0000000..ef9635f
Binary files /dev/null and b/docs/_docs/images/split_brain_resolved.png differ
diff --git a/docs/_docs/images/zookeeper.png b/docs/_docs/images/zookeeper.png
new file mode 100644
index 0000000..8db3997
Binary files /dev/null and b/docs/_docs/images/zookeeper.png differ
diff --git a/docs/_docs/images/zookeeper_split.png b/docs/_docs/images/zookeeper_split.png
new file mode 100644
index 0000000..9cb643a
Binary files /dev/null and b/docs/_docs/images/zookeeper_split.png differ
diff --git a/docs/_docs/includes/installggqsg.adoc b/docs/_docs/includes/installggqsg.adoc
index 54fd474..a638ede 100644
--- a/docs/_docs/includes/installggqsg.adoc
+++ b/docs/_docs/includes/installggqsg.adoc
@@ -3,7 +3,7 @@ To get started with the Apache Ignite binary distribution:
 .  Download the https://ignite.apache.org/download.cgi#binaries[Ignite binary, window="_blank"]
 as a zip archive.
 .  Unzip the zip archive into the installation folder in your system.
-. (Optional) Enable required link:developers-guide/setup#enabling-modules[modules].
+. (Optional) Enable required link:setup#enabling-modules[modules].
 . (Optional) Set the `IGNITE_HOME` environment variable or Windows PATH to
 point to the installation folder and make sure there is no trailing `/` (or
 `\` for Windows) in the path.
diff --git a/docs/_docs/includes/note-on-deactivation.adoc b/docs/_docs/includes/note-on-deactivation.adoc
index 771a581..57f71ad 100644
--- a/docs/_docs/includes/note-on-deactivation.adoc
+++ b/docs/_docs/includes/note-on-deactivation.adoc
@@ -1,5 +1,5 @@
 [WARNING]
 ====
 Deactivation deallocates all memory resources, including your application data, on all cluster nodes and disables public cluster API.
-If you have in-memory caches that are not backed up by a persistent storage (neither link:developers-guide/persistence/native-persistence[native persistent storage] nor link:developers-guide/persistence/external-storage[external storage]), you will lose the data and will have to repopulate these caches.
+If you have in-memory caches that are not backed up by a persistent storage (neither link:persistence/native-persistence[native persistent storage] nor link:persistence/external-storage[external storage]), you will lose the data and will have to repopulate these caches.
 ====
diff --git a/docs/_docs/includes/thick-and-thin-clients.adoc b/docs/_docs/includes/thick-and-thin-clients.adoc
index 7d3688e..b907a2e 100644
--- a/docs/_docs/includes/thick-and-thin-clients.adoc
+++ b/docs/_docs/includes/thick-and-thin-clients.adoc
@@ -1,5 +1,5 @@
 Ignite clients come in several different flavors, each with various capabilities.
-link:developers-guide/SQL/JDBC/jdbc-driver[JDBC] and link:developers-guide/SQL/ODBC/odbc-driver[ODBC] drivers
+link:SQL/JDBC/jdbc-driver[JDBC] and link:SQL/ODBC/odbc-driver[ODBC] drivers
 are useful for SQL-only applications and SQL-based tools. Thick and thin clients go beyond SQL capabilities and
 support many more APIs. Finally, ORM frameworks like Spring Data or Hibernate are also integrated with Ignite and
 can be used as an access point to your cluster.
diff --git a/docs/_docs/installation-guide/index.adoc b/docs/_docs/installation-guide/index.adoc
deleted file mode 100644
index 2b682cb..0000000
--- a/docs/_docs/installation-guide/index.adoc
+++ /dev/null
@@ -1,6 +0,0 @@
----
-layout: toc
----
-
-= Installation and Upgrade
-
diff --git a/docs/_docs/installation-guide/deb-rpm.adoc b/docs/_docs/installation/deb-rpm.adoc
similarity index 100%
rename from docs/_docs/installation-guide/deb-rpm.adoc
rename to docs/_docs/installation/deb-rpm.adoc
diff --git a/docs/_docs/developers-guide/index.adoc b/docs/_docs/installation/index.adoc
similarity index 50%
rename from docs/_docs/developers-guide/index.adoc
rename to docs/_docs/installation/index.adoc
index e1c90c1..0392ffa 100644
--- a/docs/_docs/developers-guide/index.adoc
+++ b/docs/_docs/installation/index.adoc
@@ -1,4 +1,4 @@
 ---
 layout: toc
 ---
-= Developer's Guide
+
diff --git a/docs/_docs/installation-guide/installing-using-docker.adoc b/docs/_docs/installation/installing-using-docker.adoc
similarity index 90%
rename from docs/_docs/installation-guide/installing-using-docker.adoc
rename to docs/_docs/installation/installing-using-docker.adoc
index f1e6c92..21125b9 100644
--- a/docs/_docs/installation-guide/installing-using-docker.adoc
+++ b/docs/_docs/installation/installing-using-docker.adoc
@@ -64,7 +64,7 @@ sudo docker run -d apacheignite/ignite:{version}
 
 == Running Persistent Cluster
 
-If you use link:developers-guide/persistence/native-persistence[Native Persistence], Ignite stores the user data under the default work directory (`{IGNITE_HOME}/work`) in the file system of the container. This directory will be erased if you restart the docker container. To avoid this, you can:
+If you use link:persistence/native-persistence[Native Persistence], Ignite stores the user data under the default work directory (`{IGNITE_HOME}/work`) in the file system of the container. This directory will be erased if you restart the docker container. To avoid this, you can:
 
 - Use a persistent volume to store the data; or
 - Mount a local directory
@@ -163,7 +163,7 @@ docker run -e "EXTERNAL_LIBS=http://url_to_your_jar" apacheignite/ignite
 
 == Enabling Modules
 
-To enable specific link:developers-guide/setup#enabling-modules[modules], specify their names in the "OPTION_LIBS" system variable as follows:
+To enable specific link:setup#enabling-modules[modules], specify their names in the "OPTION_LIBS" system variable as follows:
 
 [source, shell]
 ----
@@ -188,7 +188,7 @@ The following parameters can be passed as environment variables in the docker co
 | `CONFIG_URI` | URL to the Ignite configuration file (can also be relative to the META-INF folder on the class path).
 The downloaded config file is saved to ./ignite-config.xml | N/A
 
-| `OPTION_LIBS` | A list of link:developers-guide/setup#enabling-modules[modules] that will be enabled for the node. | ignite-log4j, ignite-spring, ignite-indexing
+| `OPTION_LIBS` | A list of link:setup#enabling-modules[modules] that will be enabled for the node. | ignite-log4j, ignite-spring, ignite-indexing
 
 | `JVM_OPTS` | JVM arguments passed to the Ignite instance.| N/A
 
diff --git a/docs/_docs/installation-guide/installing-using-zip.adoc b/docs/_docs/installation/installing-using-zip.adoc
similarity index 100%
rename from docs/_docs/installation-guide/installing-using-zip.adoc
rename to docs/_docs/installation/installing-using-zip.adoc
diff --git a/docs/_docs/installation-guide/kubernetes/amazon-eks-deployment.adoc b/docs/_docs/installation/kubernetes/amazon-eks-deployment.adoc
similarity index 88%
rename from docs/_docs/installation-guide/kubernetes/amazon-eks-deployment.adoc
rename to docs/_docs/installation/kubernetes/amazon-eks-deployment.adoc
index 5bc2e30..1eb6a51 100644
--- a/docs/_docs/installation-guide/kubernetes/amazon-eks-deployment.adoc
+++ b/docs/_docs/installation/kubernetes/amazon-eks-deployment.adoc
@@ -9,9 +9,9 @@
 
 This page is a step-by-step guide on how to deploy an Ignite cluster on Amazon EKS.
 
-include::installation-guide/kubernetes/generic-configuration.adoc[tag=intro]
+include::installation/kubernetes/generic-configuration.adoc[tag=intro]
 
-include::installation-guide/kubernetes/generic-configuration.adoc[tag=kube-version]
+include::installation/kubernetes/generic-configuration.adoc[tag=kube-version]
 
 In this guide, we will use the `eksctl` command line tool to create a Kubernetes cluster.
 Please follow link:https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html[this guide,window=_blank] to install the required resources and get familiar with the tool.
@@ -51,4 +51,4 @@ kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   6m49s
 
 == Kubernetes Configuration
 
-include::installation-guide/kubernetes/generic-configuration.adoc[tag=kubernetes-config]
+include::installation/kubernetes/generic-configuration.adoc[tag=kubernetes-config]
diff --git a/docs/_docs/installation-guide/kubernetes/azure-deployment.adoc b/docs/_docs/installation/kubernetes/azure-deployment.adoc
similarity index 90%
rename from docs/_docs/installation-guide/kubernetes/azure-deployment.adoc
rename to docs/_docs/installation/kubernetes/azure-deployment.adoc
index 7be98d0..7cd71f5 100644
--- a/docs/_docs/installation-guide/kubernetes/azure-deployment.adoc
+++ b/docs/_docs/installation/kubernetes/azure-deployment.adoc
@@ -9,9 +9,9 @@
 
 This page is a step-by-step guide on how to deploy an Ignite  cluster on Microsoft Azure Kubernetes Service.
 
-include::installation-guide/kubernetes/generic-configuration.adoc[tag=intro]
+include::installation/kubernetes/generic-configuration.adoc[tag=intro]
 
-include::installation-guide/kubernetes/generic-configuration.adoc[tag=kube-version]
+include::installation/kubernetes/generic-configuration.adoc[tag=kube-version]
 
 == Creating the AKS Cluster
 
@@ -66,5 +66,5 @@ Now you can start creating Kubernetes resources.
 
 == Kubernetes Configuration
 
-include::installation-guide/kubernetes/generic-configuration.adoc[tag=kubernetes-config]
+include::installation/kubernetes/generic-configuration.adoc[tag=kubernetes-config]
 
diff --git a/docs/_docs/installation-guide/kubernetes/generic-configuration.adoc b/docs/_docs/installation/kubernetes/generic-configuration.adoc
similarity index 90%
rename from docs/_docs/installation-guide/kubernetes/generic-configuration.adoc
rename to docs/_docs/installation/kubernetes/generic-configuration.adoc
index c7190b3..7c440f8 100644
--- a/docs/_docs/installation-guide/kubernetes/generic-configuration.adoc
+++ b/docs/_docs/installation/kubernetes/generic-configuration.adoc
@@ -6,9 +6,6 @@ published: false
 :command: kubectl
 :soft_name: Kubernetes
 :serviceName:
-//:configDir: /code-snippets/k8s
-//:script: /code-snippets/k8s/setup.sh
-//javaFile: /{javaCodeDir}/k8s/K8s.java
 
 
 //tag::kube-version[]
@@ -22,7 +19,7 @@ We will consider two deployment modes: stateful and stateless.
 Stateless deployments are suitable for in-memory use cases where your cluster keeps the application data in RAM for better performance.
 A stateful deployment differs from a stateless deployment in that it includes setting up persistent volumes for the cluster's storage.
 
-CAUTION: This guide focuses on deploying server nodes on Kubernetes. If you want to run client nodes on Kubernetes while your cluster is deployed elsewhere, you need to enable the communication mode designed for client nodes running behind a NAT. Refer to link:developers-guide/clustering/running-client-nodes-behind-nat[this section].
+CAUTION: This guide focuses on deploying server nodes on Kubernetes. If you want to run client nodes on Kubernetes while your cluster is deployed elsewhere, you need to enable the communication mode designed for client nodes running behind a NAT. Refer to link:clustering/running-client-nodes-behind-nat[this section].
 
 //end::intro[]
 
@@ -122,7 +119,7 @@ include::{configDir}/stateless/node-configuration.xml[]
 tab:Configuration with persistence[]
 In the configuration file, we will:
 
-* Enable link:developers-guide/persistence/native-persistence[native persistence] and specify the `workDirectory`, `walPath`, and `walArchivePath`. These directories will be mounted in each pod that runs an Ignite node. Volume configuration is part of the <<Creating Pod Configuration,pod configuration>>.
+* Enable link:persistence/native-persistence[native persistence] and specify the `workDirectory`, `walPath`, and `walArchivePath`. These directories will be mounted in each pod that runs an Ignite node. Volume configuration is part of the <<Creating Pod Configuration,pod configuration>>.
 * Use the `TcpDiscoveryKubernetesIpFinder` IP finder. {kubernetes-ip-finder-description}
 
 The file will look like this:
@@ -162,7 +159,7 @@ Our Deployment configuration will deploy a ReplicaSet with two pods running Igni
 
 In the container's configuration, we will:
 
-* Enable the “ignite-kubernetes” and “ignite-rest-http” link:installation-guide/installing-using-docker#enabling-modules[modules].
+* Enable the “ignite-kubernetes” and “ignite-rest-http” link:installing-using-docker#enabling-modules[modules].
 * Use the configuration file from the ConfigMap we created earlier.
 * Open a number of ports:
 ** 47100 — the communication port
@@ -192,7 +189,7 @@ Our StatefulSet configuration will deploy 2 pods running Ignite {version}.
 
 In the container's configuration we will:
 
-* Enable the “ignite-kubernetes” and “ignite-rest-http” link:installation-guide/installing-using-docker#enabling-modules[modules].
+* Enable the “ignite-kubernetes” and “ignite-rest-http” link:installing-using-docker#enabling-modules[modules].
 * Use the configuration file from the ConfigMap we created earlier.
 * Mount volumes for the work directory (where application data is stored), WAL files, and WAL archive.
 * Open a number of ports:
@@ -294,7 +291,7 @@ Execute the following command:
 /opt/gridgain/bin/control.sh --activate
 ----
 
-You can also activate the cluster using the link:developers-guide/restapi#activate[REST API]. Refer to the <<Connecting to the Cluster>> section for details about connection to the cluster's REST API.
+You can also activate the cluster using the link:restapi#activate[REST API]. Refer to the <<Connecting to the Cluster>> section for details about connection to the cluster's REST API.
 
 
 == Scaling the Cluster
@@ -323,11 +320,11 @@ To scale your StatefulSet, run the following command:
 {command} scale sts ignite-cluster --replicas=3 -n ignite
 ----
 
-After scaling the cluster, link:administrators-guide/control-script#activation-deactivation-and-topology-management[change the baseline topology] accordingly.
+After scaling the cluster, link:control-script#activation-deactivation-and-topology-management[change the baseline topology] accordingly.
 
 --
 
-CAUTION: If you reduce the number of nodes by more than the link:developers-guide/configuring-caches/configuring-backups[number of partition backups], you may lose data. The proper way to scale down is to redistribute the data after removing a node by changing the link:administrators-guide/control-script#removing-nodes-from-baseline-topology[baseline topology].
+CAUTION: If you reduce the number of nodes by more than the link:configuring-caches/configuring-backups[number of partition backups], you may lose data. The proper way to scale down is to redistribute the data after removing a node by changing the link:control-script#removing-nodes-from-baseline-topology[baseline topology].
 
 == Connecting to the Cluster
 
@@ -371,7 +368,7 @@ You will need to configure the discovery mechanism to use `TcpDiscoveryKubernete
 
 === Connecting with Thin Clients
 
-The following code snippet illustrates how to connect to your cluster using the link:developers-guide/thin-clients/java-thin-client[java thin client]. You can use other thin clients in the same way.
+The following code snippet illustrates how to connect to your cluster using the link:thin-clients/java-thin-client[Java thin client]. You can use other thin clients in the same way.
 Note that we use the external IP address (LoadBalancer Ingress) of the service.
 
 [source, java]
diff --git a/docs/_docs/installation-guide/kubernetes/gke-deployment.adoc b/docs/_docs/installation/kubernetes/gke-deployment.adoc
similarity index 91%
rename from docs/_docs/installation-guide/kubernetes/gke-deployment.adoc
rename to docs/_docs/installation/kubernetes/gke-deployment.adoc
index 45ffba8..f90fc77 100644
--- a/docs/_docs/installation-guide/kubernetes/gke-deployment.adoc
+++ b/docs/_docs/installation/kubernetes/gke-deployment.adoc
@@ -10,9 +10,9 @@
 
 This page explains how to deploy an Ignite  cluster on Google Kubernetes Engine.
 
-include::installation-guide/kubernetes/generic-configuration.adoc[tag=intro]
+include::installation/kubernetes/generic-configuration.adoc[tag=intro]
 
-include::installation-guide/kubernetes/generic-configuration.adoc[tag=kube-version]
+include::installation/kubernetes/generic-configuration.adoc[tag=kube-version]
 
 == Creating a GKE Cluster
 A cluster in GKE is a set of nodes that provision resources for the applications that are deployed in the cluster.
@@ -57,7 +57,7 @@ Now you are ready to create Kubernetes resources.
 
 == Kubernetes Configuration
 
-include::installation-guide/kubernetes/generic-configuration.adoc[tag=kubernetes-config]
+include::installation/kubernetes/generic-configuration.adoc[tag=kubernetes-config]
 
 
 
diff --git a/docs/_docs/developers-guide/key-value-api/basic-cache-operations.adoc b/docs/_docs/key-value-api/basic-cache-operations.adoc
similarity index 97%
rename from docs/_docs/developers-guide/key-value-api/basic-cache-operations.adoc
rename to docs/_docs/key-value-api/basic-cache-operations.adoc
index 86ee21d..da57673 100644
--- a/docs/_docs/developers-guide/key-value-api/basic-cache-operations.adoc
+++ b/docs/_docs/key-value-api/basic-cache-operations.adoc
@@ -47,7 +47,7 @@ tab:Java[]
 include::{javaFile}[tag=createCache,indent=0]
 ----
 
-Refer to the link:developers-guide/configuring-caches/configuration-overview[Cache Configuration] section for the list of cache parameters.
+Refer to the link:configuring-caches/configuration-overview[Cache Configuration] section for the list of cache parameters.
 tab:C#/.NET[]
 [source,csharp]
 ----
@@ -129,10 +129,10 @@ include::code-snippets/cpp/src/cache_get_put.cpp[tag=cache-get-put,indent=0]
 
 [NOTE]
 ====
-Bulk operations such as `putAll()` or `removeAll()` are executed as a sequence of atomic operations and can partially fail. 
+Bulk operations such as `putAll()` or `removeAll()` are executed as a sequence of atomic operations and can partially fail.
 If this happens, a `CachePartialUpdateException` is thrown and contains a list of keys for which the update failed.
 
-To update a collection of entries within a single operation, consider using link:developers-guide/key-value-api/transactions[transactions].
+To update a collection of entries within a single operation, consider using link:key-value-api/transactions[transactions].
 ====
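
A minimal sketch of catching such a partial failure (the cache name and key range are illustrative, not part of this commit; an atomic cache is assumed):

[source, java]
----
import java.util.Map;
import java.util.TreeMap;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CachePartialUpdateException;

public class BulkUpdate {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache("myCache");

            // A sorted map keeps the key order stable, which helps avoid deadlocks
            // when concurrent bulk updates touch overlapping key sets.
            Map<Integer, String> entries = new TreeMap<>();
            for (int i = 0; i < 100; i++)
                entries.put(i, "value-" + i);

            try {
                cache.putAll(entries);
            }
            catch (CachePartialUpdateException e) {
                // Only some keys were updated; the exception reports the failed ones.
                System.err.println("Update failed for keys: " + e.failedKeys());
            }
        }
    }
}
----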
 
 Below are more examples of basic atomic operations:
diff --git a/docs/_docs/developers-guide/key-value-api/binary-objects.adoc b/docs/_docs/key-value-api/binary-objects.adoc
similarity index 93%
rename from docs/_docs/developers-guide/key-value-api/binary-objects.adoc
rename to docs/_docs/key-value-api/binary-objects.adoc
index d32d39c..6a53c1d 100644
--- a/docs/_docs/developers-guide/key-value-api/binary-objects.adoc
+++ b/docs/_docs/key-value-api/binary-objects.adoc
@@ -1,7 +1,7 @@
 = Working with Binary Objects
 
 == Overview
-In Ignite, data is stored in link:developers-guide/data-modeling/data-modeling#binary-object-format[binary format] and is deserialized into objects every time you call cache methods. However, you can work directly with the binary objects avoiding deserialization.
+In Ignite, data is stored in link:data-modeling/data-modeling#binary-object-format[binary format] and is deserialized into objects every time you call cache methods. However, you can work directly with the binary objects avoiding deserialization.
 
 ////
 *TODO* ARTEM, should we explain why we'd want to avoid deserialization?
@@ -33,7 +33,7 @@ There are several restrictions that are implied by the binary object format impl
 By default, when you request entries from a cache, they are returned in the deserialized format.
 To work with the binary format, obtain an instance of the cache using the `withKeepBinary()` method.
 This instance returns objects in the binary format (when possible).
-//and also passes binary objects to link:developers-guide/collocated-computations#entry-processor[entry processors], if any are used.
+//and also passes binary objects to link:collocated-computations#entry-processor[entry processors], if any are used.
 // and cache interceptors.
 
 
@@ -57,7 +57,7 @@ The following classes are never converted (e.g., the `toBinary(Object)` method r
 * `Enums` and array of enums
 * Maps, collections and arrays of objects (but the objects inside them are reconverted if they are binary)
 
-tab:.NET/C#[]
+tab:C#/.NET[]
 [source,csharp]
 ----
 ICache<int, IBinaryObject> binaryCache = cache.WithKeepBinary<int, IBinaryObject>();
@@ -101,7 +101,7 @@ tab:Java[]
 include::{javaCodeDir}/WorkingWithBinaryObjects.java[tag=binaryBuilder,indent=0]
 ----
 
-tab:.NET/C#[]
+tab:C#/.NET[]
 [source,csharp]
 ----
 IIgnite ignite = Ignition.Start();
@@ -133,7 +133,7 @@ tab:Java[]
 include::{javaCodeDir}/WorkingWithBinaryObjects.java[tag=cacheEntryProc,indent=0]
 ----
 
-tab:.NET/C#[]
+tab:C#/.NET[]
 [source,csharp]
 ----
 include::code-snippets/dotnet/WorkingWithBinaryObjects.cs[tag=entryProcessor,indent=0]
@@ -173,10 +173,10 @@ We strongly recommend you should always add fields to binary objects in the same
 
 A null field would normally take five bytes to store — four bytes for the field ID plus one byte for the field length.
 Memory-wise, it's preferable to not include a field, rather than include a null field.
-However, if you do not include a field, Ignite creates a new schema for this object, and that schema is different from the schema of the objects that do include the field. 
+However, if you do not include a field, Ignite creates a new schema for this object, and that schema is different from the schema of the objects that do include the field.
 If you have multiple fields that are set to `null` in random combinations, Ignite maintains a different Binary Object schema for each combination, and your heap may be exhausted by the total size of the Binary Object schemas.
-It is better to have a few schemas for your Binary Objects, with the same set of fields of same types, set in the same order. 
-Choose one of them when creating Binary Object by supplying the same set of fields, even with null value. 
+It is better to have a few schemas for your Binary Objects, each with the same set of fields of the same types, set in the same order.
+Choose one of them when creating a Binary Object by supplying the same set of fields, even with a null value.
 This is also the reason you need to supply field type for null field.
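
A hedged builder sketch of that recommendation (the `Person` type and its fields are illustrative, not part of this commit):

[source, java]
----
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.binary.BinaryObject;
import org.apache.ignite.binary.BinaryObjectBuilder;

public class StableBinarySchema {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Always set the same fields in the same order, even when a value is null,
            // so that all objects of this type share a single binary schema.
            BinaryObjectBuilder builder = ignite.binary().builder("Person");

            builder.setField("id", 1L);
            builder.setField("name", "John");
            // The field type must be supplied explicitly for a null value.
            builder.setField("salary", null, Double.class);

            BinaryObject person = builder.build();
            System.out.println(person);
        }
    }
}
----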
 
 You can also nest your Binary Objects if you have a subset of fields which are optional but either all absent or all present.
@@ -212,7 +212,7 @@ tab:Java[]
 include::{javaCodeDir}/WorkingWithBinaryObjects.java[tag=cfg,indent=0]
 ----
 
-tab:.NET/C#[]
+tab:C#/.NET[]
 [source,csharp]
 ----
 include::code-snippets/dotnet/WorkingWithBinaryObjects.cs[tag=binaryCfg,indent=0]
diff --git a/docs/_docs/developers-guide/key-value-api/continuous-queries.adoc b/docs/_docs/key-value-api/continuous-queries.adoc
similarity index 94%
rename from docs/_docs/developers-guide/key-value-api/continuous-queries.adoc
rename to docs/_docs/key-value-api/continuous-queries.adoc
index 04995d0..4677670 100644
--- a/docs/_docs/developers-guide/key-value-api/continuous-queries.adoc
+++ b/docs/_docs/key-value-api/continuous-queries.adoc
@@ -14,7 +14,7 @@ You can also specify a remote filter to narrow down the range of entries that ar
 ====
 [discrete]
 === Continuous Queries and MVCC
-Continuous queries have a number of link:developers-guide/transactions/mvcc[functional limitations] when used with MVCC-enabled caches.
+Continuous queries have a number of link:transactions/mvcc[functional limitations] when used with MVCC-enabled caches.
 ====
 
 
@@ -106,7 +106,7 @@ In order to use remote filters, make sure the class definitions of the filters a
 You can do this in two ways:
 
 * Add the classes to the classpath of every server node;
-* link:developers-guide/peer-class-loading[Enable peer class loading].
+* link:peer-class-loading[Enable peer class loading].
 ====
 
 
@@ -133,7 +133,7 @@ In order to use transformers, make sure the class definitions of the transformer
 You can do this in two ways:
 
 * Add the classes to the classpath of every server node;
-* link:developers-guide/peer-class-loading[Enable peer class loading].
+* link:peer-class-loading[Enable peer class loading].
 ====
 
 
diff --git a/docs/_docs/developers-guide/key-value-api/transactions.adoc b/docs/_docs/key-value-api/transactions.adoc
similarity index 97%
rename from docs/_docs/developers-guide/key-value-api/transactions.adoc
rename to docs/_docs/key-value-api/transactions.adoc
index 241d2ff..44ce0c3 100644
--- a/docs/_docs/developers-guide/key-value-api/transactions.adoc
+++ b/docs/_docs/key-value-api/transactions.adoc
@@ -5,7 +5,7 @@
 == Overview
 
 To enable transactional support for a specific cache, set the `atomicityMode` parameter in the cache configuration to `TRANSACTIONAL`.
-See link:developers-guide/configuring-caches/atomicity-modes[Atomicity Modes] for details.
+See link:configuring-caches/atomicity-modes[Atomicity Modes] for details.
 
 Transactions allow you to group multiple cache operations, on one or more keys, into a single atomic transaction.
 These operations are executed without any other interleaved operations on the specified keys, and either all succeed or all fail.
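
A minimal sketch combining the `TRANSACTIONAL` atomicity mode with an explicit transaction (the cache name and key values are illustrative, not part of this commit):

[source, java]
----
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.transactions.Transaction;

public class TransferExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<String, Double> cacheCfg = new CacheConfiguration<>("accounts");
            cacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);

            IgniteCache<String, Double> cache = ignite.getOrCreateCache(cacheCfg);
            cache.put("A", 100.0);
            cache.put("B", 0.0);

            // Both updates either commit together or roll back together.
            try (Transaction tx = ignite.transactions().txStart()) {
                cache.put("A", cache.get("A") - 50.0);
                cache.put("B", cache.get("B") + 50.0);
                tx.commit();
            }
        }
    }
}
----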
@@ -309,6 +309,6 @@ tab:C++[unsupported]
 
 == Monitoring Transactions
 
-Refer to the link:administrators-guide/monitoring-metrics/metrics#monitoring-transactions[Monitoring Transactions] section for the list of metrics that expose some transaction-related information.
+Refer to the link:monitoring-metrics/metrics#monitoring-transactions[Monitoring Transactions] section for the list of metrics that expose some transaction-related information.
 
-You can also use the link:administrators-guide/control-script#transaction-management[control script] to get information about, or cancel, specific transactions being executed in the cluster.
+You can also use the link:control-script#transaction-management[control script] to get information about, or cancel, specific transactions being executed in the cluster.
diff --git a/docs/_docs/developers-guide/key-value-api/using-scan-queries.adoc b/docs/_docs/key-value-api/using-scan-queries.adoc
similarity index 94%
rename from docs/_docs/developers-guide/key-value-api/using-scan-queries.adoc
rename to docs/_docs/key-value-api/using-scan-queries.adoc
index ec2af62..1636d97 100644
--- a/docs/_docs/developers-guide/key-value-api/using-scan-queries.adoc
+++ b/docs/_docs/key-value-api/using-scan-queries.adoc
@@ -106,5 +106,5 @@ include::code-snippets/cpp/src/scan_query.cpp[tag=set-local,indent=0]
 
 == Related Topics
 
-* link:developers-guide/restapi#sql-scan-query-execute[Execute scan query via REST API]
-* link:developers-guide/events/events#cache-query-events[Cache Query Events]
+* link:restapi#sql-scan-query-execute[Execute scan query via REST API]
+* link:events/events#cache-query-events[Cache Query Events]
diff --git a/docs/_docs/developers-guide/key-value-api/with-expiry-policy.adoc b/docs/_docs/key-value-api/with-expiry-policy.adoc
similarity index 100%
rename from docs/_docs/developers-guide/key-value-api/with-expiry-policy.adoc
rename to docs/_docs/key-value-api/with-expiry-policy.adoc
diff --git a/docs/_docs/developers-guide/logging.adoc b/docs/_docs/logging.adoc
similarity index 91%
rename from docs/_docs/developers-guide/logging.adoc
rename to docs/_docs/logging.adoc
index 86366de..571f162 100644
--- a/docs/_docs/developers-guide/logging.adoc
+++ b/docs/_docs/logging.adoc
@@ -37,7 +37,7 @@ You can provide a custom configuration file via the `java.util.logging.config.fi
 
 == Using Log4j
 
-NOTE: Before using Log4j, enable the link:developers-guide/setup#enabling-modules[ignite-log4j] module.
+NOTE: Before using Log4j, enable the link:setup#enabling-modules[ignite-log4j] module.
 
 To enable Log4j logger, set the `gridLogger` property of `IgniteConfiguration`, as shown in the following example:
 
@@ -62,7 +62,7 @@ tab:C++[unsupported]
 In the above example, the path to `log4j-config.xml` can be either an absolute path, a local path relative to META-INF in classpath or to `IGNITE_HOME`. An example log4j configuration file can be found in the distribution package (`$IGNITE_HOME/config/ignite-log4j.xml`).
 
 == Using Log4j2
-NOTE: Before using Log4j2, enable the link:developers-guide/setup#enabling-modules[ignite-log4j2] module.
+NOTE: Before using Log4j2, enable the link:setup#enabling-modules[ignite-log4j2] module.
 
 To enable Log4j2 logger, set the `gridLogger` property of `IgniteConfiguration`, as shown below:
 
@@ -91,7 +91,7 @@ In the above example, the path to `log4j2-config.xml` can be either an absolute
 NOTE: Log4j2 supports runtime reconfiguration, i.e., changes in the configuration file are applied without the need to restart the application.
 
 == Using JCL
-NOTE: Before using JCL, enable the link:developers-guide/setup#enabling-modules[ignite-jcl] module.
+NOTE: Before using JCL, enable the link:setup#enabling-modules[ignite-jcl] module.
 
 NOTE: JCL simply forwards logging messages to an underlying logging system, which needs to be properly configured. Refer to the link:https://commons.apache.org/proper/commons-logging/guide.html#Configuration[JCL official documentation] for more information. For example, if you want to use Log4j, make sure you add the required libraries to your classpath.
 
@@ -118,7 +118,7 @@ tab:C++[unsupported]
 
 == Using SLF4J
 
-NOTE: Before using SLF4J, enable the link:developers-guide/setup#enabling-modules[ignite-slf4j] module.
+NOTE: Before using SLF4J, enable the link:setup#enabling-modules[ignite-slf4j] module.
 
 To enable the SLF4J logger, set the `gridLogger` property of `IgniteConfiguration`, as shown below:
 
@@ -154,7 +154,7 @@ You can prevent such information from being written to the log by setting the `I
 ./ignite.sh -J-DIGNITE_TO_STRING_INCLUDE_SENSITIVE=false
 ----
 
-See link:developers-guide/starting-nodes#setting-jvm-options[Setting JVM Options] to learn about different ways to set system properties.
+See link:starting-nodes#setting-jvm-options[Setting JVM Options] to learn about different ways to set system properties.
 
 == Logging Configuration Example
 
diff --git a/docs/_docs/developers-guide/memory-architecture.adoc b/docs/_docs/memory-architecture.adoc
similarity index 93%
rename from docs/_docs/developers-guide/memory-architecture.adoc
rename to docs/_docs/memory-architecture.adoc
index 328f7f6..5af1a99 100644
--- a/docs/_docs/developers-guide/memory-architecture.adoc
+++ b/docs/_docs/memory-architecture.adoc
@@ -6,7 +6,7 @@ Ignite memory architecture allows storing and processing data and indexes both i
 
 image::images/durable-memory-overview.png[Memory architecture]
 
-The multi-tiered storage operates in a way similar to the virtual memory of operating systems, such as Linux. 
+The multi-tiered storage operates in a way similar to the virtual memory of operating systems, such as Linux.
 However, one significant difference between these two types of architecture is that the multi-tiered storage always treats the disk as the superset of the data (if persistence is enabled), capable of surviving crashes and restarts, while the traditional virtual memory uses the disk only as a swap extension, which gets erased once the process stops.
 
 == Memory Architecture
@@ -48,8 +48,8 @@ If during an update an entry size expands beyond the free space available in its
 
 Ignite performs memory defragmentation automatically and does not require any explicit action from a user.
 
-Over time, an individual data page might be updated multiple times by different CRUD operations. 
-This can lead to the page and overall memory fragmentation. 
+Over time, an individual data page might be updated multiple times by different CRUD operations.
+This can lead to the page and overall memory fragmentation.
 To minimize memory fragmentation, Ignite uses _page compaction_ whenever a page becomes too fragmented.
 
 A compacted data page looks like the one in the picture below:
@@ -75,5 +75,5 @@ However, when the whole free space available in the page is needed or some fragm
 
 Ignite provides a number of features that let you persist your data on disk with consistency guarantees.
 You can restart the cluster without losing the data, be resilient to crashes, and provide storage for data when the amount of RAM is not sufficient. When native persistence is enabled, Ignite always stores all the data on disk, and loads as much data as
-it can into RAM for processing. Refer to the link:developers-guide/persistence/native-persistence[Ignite Persistence] section for further information.
+it can into RAM for processing. Refer to the link:persistence/native-persistence[Ignite Persistence] section for further information.
 
diff --git a/docs/_docs/developers-guide/memory-configuration/data-regions.adoc b/docs/_docs/memory-configuration/data-regions.adoc
similarity index 87%
rename from docs/_docs/developers-guide/memory-configuration/data-regions.adoc
rename to docs/_docs/memory-configuration/data-regions.adoc
index bc3d5a7..fb68b1c 100644
--- a/docs/_docs/developers-guide/memory-configuration/data-regions.adoc
+++ b/docs/_docs/memory-configuration/data-regions.adoc
@@ -1,13 +1,13 @@
 = Configuring Data Regions
 
 == Overview
-Ignite uses the concept of _data regions_ to control the amount of RAM available to a cache or a group of caches. A data region is a logical extendable area in RAM in which cached data resides. You can control the initial size of the region and the maximum size it can occupy. In addition to the size, data regions control link:developers-guide/persistence/native-persistence[persistence settings] for caches.
+Ignite uses the concept of _data regions_ to control the amount of RAM available to a cache or a group of caches. A data region is a logical extendable area in RAM in which cached data resides. You can control the initial size of the region and the maximum size it can occupy. In addition to the size, data regions control link:persistence/native-persistence[persistence settings] for caches.
 
 By default, there is one data region that can take up to 20% of RAM available to the node, and all caches you create are placed in that region; but you can add as many regions as you want. There are a couple of reasons why you may want to have multiple regions:
 
 * Regions allow you to configure the amount of RAM available to a cache or number of caches.
 * Persistence parameters are configured per region. If you want to have both in-memory only caches and the caches that store their content to disk, you need to configure two (or more) data regions with different persistence settings: one for in-memory caches and one for persistent caches.
-* Some memory parameters, such as link:developers-guide/memory-configuration/eviction-policies[eviction policies], are configured per data region.
+* Some memory parameters, such as link:memory-configuration/eviction-policies[eviction policies], are configured per data region.
 
 See the following section to learn how to change the parameters of the default data region or configure multiple data regions.
 
@@ -43,7 +43,7 @@ tab:C++[unsupported]
 == Adding Custom Data Regions
 
 In addition to the default data region, you can add more data regions with custom settings.
-In the following example, we configure a data region that can take up to 40 MB and uses the link:developers-guide/memory-configuration/eviction-policies#random-2-lru[Random-2-LRU] eviction policy.
+In the following example, we configure a data region that can take up to 40 MB and uses the link:memory-configuration/eviction-policies#random-2-lru[Random-2-LRU] eviction policy.
 Note that further below in the configuration, we create a cache that resides in the new data region.
 
 [tabs]
diff --git a/docs/_docs/developers-guide/memory-configuration/eviction-policies.adoc b/docs/_docs/memory-configuration/eviction-policies.adoc
similarity index 85%
rename from docs/_docs/developers-guide/memory-configuration/eviction-policies.adoc
rename to docs/_docs/memory-configuration/eviction-policies.adoc
index b60a0d1..b20235b 100644
--- a/docs/_docs/developers-guide/memory-configuration/eviction-policies.adoc
+++ b/docs/_docs/memory-configuration/eviction-policies.adoc
@@ -1,15 +1,15 @@
 = Eviction Policies
 
-When link:developers-guide/persistence/native-persistence[Native Persistence] is off, Ignite holds all cache entries in the off-heap memory and allocates pages as new data comes in.
+When link:persistence/native-persistence[Native Persistence] is off, Ignite holds all cache entries in the off-heap memory and allocates pages as new data comes in.
 When a memory limit is reached and Ignite cannot allocate a page, some of the data must be purged from memory to avoid OutOfMemory errors.
 This process is called _eviction_. Eviction prevents the system from running out of memory but at the cost of losing data and having to reload it when you need it again.
 
 Eviction is used in following cases:
 
-* for off-heap memory when link:developers-guide/persistence/native-persistence[Native Persistence] is off;
-* for off-heap memory when Ignite is used with an link:developers-guide/persistence/external-storage[external storage];
-* for link:developers-guide/configuring-caches/on-heap-caching[on-heap caches];
-* for link:developers-guide/near-cache[near caches] if configured.
+* for off-heap memory when link:persistence/native-persistence[Native Persistence] is off;
+* for off-heap memory when Ignite is used with an link:persistence/external-storage[external storage];
+* for link:configuring-caches/on-heap-caching[on-heap caches];
+* for link:near-cache[near caches] if configured.
 
 When Native Persistence is on, a similar process — called _page replacement_ — is used to free up off-heap memory when Ignite cannot allocate a new page.
 The difference is that the data is not lost (because it is stored in the persistent storage), and therefore you are less concerned about losing data than about efficiency.
@@ -27,8 +27,8 @@ Thus, either the entire page or a large chunk of it is emptied and is ready to b
 image::images/off_heap_memory_eviction.png[Off-Heap Memory Eviction Mechanism]
 
 By default, off-heap memory eviction is disabled, which means that the used memory constantly grows until it reaches its limit.
-To enable eviction, specify the page eviction mode in the link:developers-guide/memory-configuration/data-regions/[data region configuration].
-Note that off-heap memory eviction is configured per link:developers-guide/memory-configuration/data-regions[data region].
+To enable eviction, specify the page eviction mode in the link:memory-configuration/data-regions/[data region configuration].
+Note that off-heap memory eviction is configured per link:memory-configuration/data-regions[data region].
 If you don't use data regions, you have to explicitly add default data region parameters in your configuration to be able to configure eviction.
 
 By default, eviction starts when the overall RAM consumption by a region gets to 90%.
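
A configuration sketch that enables Random-2-LRU eviction for a custom data region (the region name and size are illustrative, not part of this commit; the 90% threshold shown is the default):

[source, java]
----
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataPageEvictionMode;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class EvictionConfig {
    public static void main(String[] args) {
        DataRegionConfiguration region = new DataRegionConfiguration();
        region.setName("40MB_Region");
        region.setMaxSize(40L * 1024 * 1024);

        // Purge pages using the Random-2-LRU algorithm when the region fills up.
        region.setPageEvictionMode(DataPageEvictionMode.RANDOM_2_LRU);
        // Eviction starts when 90% of the region is in use (default value).
        region.setEvictionThreshold(0.9);

        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
        storageCfg.setDataRegionConfigurations(region);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDataStorageConfiguration(storageCfg);

        Ignition.start(cfg);
    }
}
----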
@@ -160,4 +160,4 @@ Random-LRU-2 outperforms LRU by resolving the "one-hit wonder" problem: if a dat
 
 == On-Heap Cache Eviction
 
-Refer to the link:developers-guide/configuring-caches/on-heap-caching#configuring-eviction-policy[Configuring Eviction Policy for On-Heap Caches] section for the instruction on how to configure eviction policy for on-heap caches.
+Refer to the link:configuring-caches/on-heap-caching#configuring-eviction-policy[Configuring Eviction Policy for On-Heap Caches] section for instructions on how to configure an eviction policy for on-heap caches.
diff --git a/docs/_docs/developers-guide/memory-configuration/index.adoc b/docs/_docs/memory-configuration/index.adoc
similarity index 100%
rename from docs/_docs/developers-guide/memory-configuration/index.adoc
rename to docs/_docs/memory-configuration/index.adoc
diff --git a/docs/_docs/administrators-guide/monitoring-metrics/configuring-metrics.adoc b/docs/_docs/monitoring-metrics/configuring-metrics.adoc
similarity index 92%
rename from docs/_docs/administrators-guide/monitoring-metrics/configuring-metrics.adoc
rename to docs/_docs/monitoring-metrics/configuring-metrics.adoc
index f96c505..a73a820 100644
--- a/docs/_docs/administrators-guide/monitoring-metrics/configuring-metrics.adoc
+++ b/docs/_docs/monitoring-metrics/configuring-metrics.adoc
@@ -50,7 +50,7 @@ group=<Cache_Name>,name="org.apache.ignite.internal.processors.cache.CacheLocalM
 group=<Cache_Name>,name="org.apache.ignite.internal.processors.cache.CacheClusterMetricsMXBeanImpl"
 ----
 
-//See link:administrators-guide/monitoring-metrics/monitoring-with-jconsole[Monitoring with JConsole] for the information on how to access JMX beans.
+//See link:monitoring-metrics/monitoring-with-jconsole[Monitoring with JConsole] for the information on how to access JMX beans.
 
 
 == Enabling Data Region Metrics
@@ -59,7 +59,7 @@ Enable data region metrics for every region you want to collect the metrics for.
 
 Data region metrics can be enabled in two ways:
 
-* in the link:developers-guide/memory-configuration/data-regions[configuration of the region]
+* in the link:memory-configuration/data-regions[configuration of the region]
 * via JMX Beans
 
 The following example illustrates how to enable metrics for the default data region and one custom data region.
diff --git a/docs/_docs/administrators-guide/monitoring-metrics/intro.adoc b/docs/_docs/monitoring-metrics/intro.adoc
similarity index 70%
rename from docs/_docs/administrators-guide/monitoring-metrics/intro.adoc
rename to docs/_docs/monitoring-metrics/intro.adoc
index ebd860a..760b433 100644
--- a/docs/_docs/administrators-guide/monitoring-metrics/intro.adoc
+++ b/docs/_docs/monitoring-metrics/intro.adoc
@@ -5,34 +5,11 @@ This chapter covers monitoring and metrics for Ignite. We'll start with an overv
 == Overview
 The basic task of monitoring in Ignite involves metrics. You have several approaches for accessing metrics:
 
--  via link:administrators-guide/monitoring-metrics/metrics[JMX]
+-  via link:monitoring-metrics/metrics[JMX]
 -  Programmatically
--  link:administrators-guide/monitoring-metrics/system-views[System views]
+-  link:monitoring-metrics/system-views[System views]
 
 
-[NOTE]
-====
-[discrete]
-=== Monitoring vs Auditing
-
-Auditing/Event information (Who did what? When? etc.), while not covered in this chapter, is often used in conjunction with metrics and monitoring. For more information on Auditing, see link:administrators-guide/security/auditing-events[Security and Auditing].
-====
-
-////
-== Information Display and Gathering
-Any good monitoring approach includes reactive and proactive measures. You can rely on dashboards to provide a summary of the current status, and you can proactively monitor and search the finer details in the logs to get a deeper understanding of what is happening and what might happen.
-
-Dashboard:
-
-- Shows the current status.
-- Helps prevent upcoming issues.
-- Reactionary (discover and react to issues that have already happened).
-
-Logging:
-
-- Focus on mitigation, find the reason/root cause, and prevent it from happening again.
-////
-
 == What to Monitor
 You can start by monitoring:
 
diff --git a/docs/_docs/administrators-guide/monitoring-metrics/metrics.adoc b/docs/_docs/monitoring-metrics/metrics.adoc
similarity index 88%
rename from docs/_docs/administrators-guide/monitoring-metrics/metrics.adoc
rename to docs/_docs/monitoring-metrics/metrics.adoc
index d6fb7a4..a8db872 100644
--- a/docs/_docs/administrators-guide/monitoring-metrics/metrics.adoc
+++ b/docs/_docs/monitoring-metrics/metrics.adoc
@@ -4,13 +4,13 @@
 
 == Overview
 
-Ignite exposes a large number of metrics useful for monitoring your cluster or application. 
-You can use JMX and a monitoring tool, such as JConsole to access these metrics via JMX. 
+Ignite exposes a large number of metrics useful for monitoring your cluster or application.
+You can access these metrics via JMX using a monitoring tool such as JConsole.
 You can also access them programmatically.
 
 On this page, we've collected the most useful metrics and grouped them into various common categories based on the monitoring task.
 
-// link:administrators-guide/monitoring-metrics/configuring-metrics[Configuring Metrics]
+// link:monitoring-metrics/configuring-metrics[Configuring Metrics]
 
 == Understanding MBean's ObjectName
 
@@ -36,13 +36,13 @@ image::images/jconsole.png[]
 
 == Monitoring the Amount of Data
 
-If you do not use link:developers-guide/persistence/native-persistence[Native persistence] (i.e., all your data is kept in memory), you would want to monitor RAM usage.
+If you do not use link:persistence/native-persistence[Native persistence] (i.e., all your data is kept in memory), you would want to monitor RAM usage.
 If you use Native persistence, in addition to RAM, you should monitor the size of the data storage on disk.
 
 The size of the data loaded into a node is available at different levels of aggregation. You can monitor for:
 
 * The total size of the data the node keeps on disk or in RAM. This amount is the sum of the size of each configured data region (in the simplest case, only the default data region) plus the sizes of the system data regions.
-* The size of a specific link:developers-guide/memory-configuration/data-regions[data region] on that node. The data region size is the sum of the sizes of all cache groups.
+* The size of a specific link:memory-configuration/data-regions[data region] on that node. The data region size is the sum of the sizes of all cache groups.
 * The size of a specific cache/cache group on that node, including the backup partitions.
 
 These metrics can be enabled/disabled for each level separately and are exposed via different JMX beans listed below.
@@ -60,7 +60,7 @@ It is reused when new entries need to be added to the storage on subsequent writ
 The allocated size is available at the level of data storage, data region, and cache group metrics.
 The metric is called `TotalAllocatedSize`.
 
-You can also get an estimate of the actual size of data by multiplying the number of link:developers-guide/memory-centric-storage#data-pages[data pages] in use by the fill factor. The fill factor is the ratio of the size of data in a page to the page size, averaged over all pages. The number of pages in use and the fill factor are available at the level of data <<Data Region Size,region metrics>>.
+You can also get an estimate of the actual size of data by multiplying the number of link:memory-centric-storage#data-pages[data pages] in use by the fill factor. The fill factor is the ratio of the size of data in a page to the page size, averaged over all pages. The number of pages in use and the fill factor are available at the level of data <<Data Region Size,region metrics>>.
 
 Add up the estimated size of all data regions to get the estimated total amount of data on the node.
 
@@ -92,7 +92,7 @@ If you have multiple data regions, add up the sizes of all data regions to get t
 === Monitoring Storage Size
 
 Persistent storage, when enabled, saves all application data on disk.
-The total amount of data each node keeps on disk consists of the persistent storage (application data), the link:developers-guide/persistence/native-persistence#write-ahead-log[WAL files], and link:developers-guide/persistence/native-persistence#wal-archive[WAL Archive] files.
+The total amount of data each node keeps on disk consists of the persistent storage (application data), the link:persistence/native-persistence#write-ahead-log[WAL files], and link:persistence/native-persistence#wal-archive[WAL Archive] files.
 
 ==== Persistent Storage Size
 To monitor the size of the persistent storage on disk, use the following metrics:
@@ -121,7 +121,7 @@ group="Persistent Store",name=DataStorageMetrics
 
 ==== Data Region Size
 
-For each configured data region, Ignite creates a separate JMX Bean that exposes specific information about the region. Metrics collection for data regions are disabled by default. You can link:administrators-guide/monitoring-metrics/configuring-metrics#enabling-data-region-metrics[enable it in the data region configuration, or via JMX at runtime] (see the Bean's operations below).
+For each configured data region, Ignite creates a separate JMX Bean that exposes specific information about the region. Metrics collection for data regions is disabled by default. You can link:monitoring-metrics/configuring-metrics#enabling-data-region-metrics[enable it in the data region configuration, or via JMX at runtime] (see the Bean's operations below).
 
 The size of the data region on a node comprises the size of all partitions (including backup partitions) that this node owns for all caches in that data region.
 
@@ -155,7 +155,7 @@ group=DataRegionMetrics,name=<Data Region name>
 
 ==== Cache Group Size
 
-If you don't use link:developers-guide/configuring-caches/cache-groups[cache groups], each cache will be its own group.
+If you don't use link:configuring-caches/cache-groups[cache groups], each cache will be its own group.
 There is a separate JMX bean for each cache group.
 The name of the bean corresponds to the name of the group.
 
@@ -194,7 +194,7 @@ Mbean's Object Name: ::
 
 
 == Monitoring Rebalancing
-link:developers-guide/data-rebalancing[Rebalancing] is the process of moving partitions between the cluster nodes so that the data is always distributed in a balanced manner. Rebalancing is triggered when a new node joins, or an existing node leaves the cluster.
+link:data-rebalancing[Rebalancing] is the process of moving partitions between the cluster nodes so that the data is always distributed in a balanced manner. Rebalancing is triggered when a new node joins, or an existing node leaves the cluster.
 
 If you have multiple caches, they will be rebalanced sequentially.
 There are several metrics that you can use to monitor the progress of the rebalancing process for a specific cache.
@@ -230,7 +230,7 @@ group=Kernal,name=ClusterMetricsMXBeanImpl
 | Attribute | Type | Description | Scope
 | TotalServerNodes| long  |The number of server nodes in the cluster.| Global
 | TotalClientNodes| long |The number of client nodes in the cluster. | Global
-| TotalBaselineNodes | long | The number of nodes that are registered in the link:developers-guide/baseline-topology[baseline topology]. When a node goes down, it remains registered in the baseline topology and you need to remote it manually. |  Global
+| TotalBaselineNodes | long | The number of nodes that are registered in the link:baseline-topology[baseline topology]. When a node goes down, it remains registered in the baseline topology and you need to remove it manually. |  Global
 | ActiveBaselineNodes | long | The number of nodes that are currently active in the baseline topology.  |  Global
 |===
 --
@@ -381,7 +381,7 @@ group=TODO ,name= TODO
 
 == Monitoring Data Center Replication
 
-Refer to the link:administrators-guide/data-center-replication/managing-and-monitoring#dr_jmx[Managing and Monitoring Replication] page.
+Refer to the link:data-center-replication/managing-and-monitoring#dr_jmx[Managing and Monitoring Replication] page.
 
 
 ////
diff --git a/docs/_docs/administrators-guide/monitoring-metrics/system-views.adoc b/docs/_docs/monitoring-metrics/system-views.adoc
similarity index 86%
rename from docs/_docs/administrators-guide/monitoring-metrics/system-views.adoc
rename to docs/_docs/monitoring-metrics/system-views.adoc
index cbb46e5..41f1df4 100644
--- a/docs/_docs/administrators-guide/monitoring-metrics/system-views.adoc
+++ b/docs/_docs/monitoring-metrics/system-views.adoc
@@ -4,7 +4,7 @@ WARNING: The system views are an experimental feature and can be changed in futu
 
 Ignite provides a number of built-in SQL views that contain information about cluster nodes and node metrics.
 These views are contained in the SYS schema.
-See the link:developers-guide/SQL/schemas[Understanding Schemas] page for the information on how to access a non-default schema.
+See the link:SQL/schemas[Understanding Schemas] page for the information on how to access a non-default schema.
 
 [IMPORTANT]
 ====
@@ -217,7 +217,7 @@ The TABLES view contains information about the SQL tables.
 
 CAUTION: Experimental
 
-The CACHE_GROUPS view contains information about the link:developers-guide/configuring-caches/cache-groups[cache groups].
+The CACHE_GROUPS view contains information about the link:configuring-caches/cache-groups[cache groups].
 
 [cols="2,1,4",opts="header,stretch"]
 |===
@@ -228,18 +228,18 @@ The CACHE_GROUPS view contains information about the link:developers-guide/confi
 |IS_SHARED|BOOLEAN | If this group contains more than one cache.
 |CACHE_COUNT|INT | The number of caches in the cache group.
 |CACHE_MODE | VARCHAR | The cache mode.
-|ATOMICITY_MODE | VARCHAR | The link:developers-guide/configuring-caches/atomicity-modes[atomicity mode] of the cache group.
+|ATOMICITY_MODE | VARCHAR | The link:configuring-caches/atomicity-modes[atomicity mode] of the cache group.
 |AFFINITY| VARCHAR | The string representation (as returned by the `toString()` method) of the affinity function defined for the cache group.
 |PARTITIONS_COUNT|INT | The number of partitions.
 |NODE_FILTER | VARCHAR | The string representation (as returned by the `toString()` method) of the node filter defined for the cache group.
 
-|DATA_REGION_NAME | VARCHAR | The name of the link:developers-guide/memory-configuration/data-regions[data region].
+|DATA_REGION_NAME | VARCHAR | The name of the link:memory-configuration/data-regions[data region].
 |TOPOLOGY_VALIDATOR | VARCHAR |  The string representation (as returned by the `toString()` method) of the topology validator defined for the cache group.
-|PARTITION_LOSS_POLICY | VARCHAR | link:developers-guide/partition-loss-policy[Partition loss policy].
-|REBALANCE_MODE | VARCHAR  | link:developers-guide/data-rebalancing#configuring-rebalancing-mode[Rebalancing mode].
-|REBALANCE_DELAY|LONG | link:developers-guide/data-rebalancing#other-properties[Rebalancing delay].
-|REBALANCE_ORDER|INT | link:developers-guide/data-rebalancing#other-properties[Rebalancing order].
-|BACKUPS|INT | The number of link:developers-guide/configuring-caches/configuring-backups[backup partitions] configured for the cache group.
+|PARTITION_LOSS_POLICY | VARCHAR | link:partition-loss-policy[Partition loss policy].
+|REBALANCE_MODE | VARCHAR  | link:data-rebalancing#configuring-rebalancing-mode[Rebalancing mode].
+|REBALANCE_DELAY|LONG | link:data-rebalancing#other-properties[Rebalancing delay].
+|REBALANCE_ORDER|INT | link:data-rebalancing#other-properties[Rebalancing order].
+|BACKUPS|INT | The number of link:configuring-caches/configuring-backups[backup partitions] configured for the cache group.
 |===
 
 == LOCAL_SQL_RUNNING_QUERIES
@@ -253,20 +253,20 @@ This view contains information about the SQL queries currently executing on the
 | QUERY_ID| VARCHAR   | The ID of the query (generated internally).
 | SQL| VARCHAR | The SQL query.
 | SCHEMA_NAME| VARCHAR | The name of the schema.
-| LOCAL| BOOLEAN | Indicates whether the query is link:developers-guide/SQL/sql-api#local-execution[local] or not.
+| LOCAL| BOOLEAN | Indicates whether the query is link:SQL/sql-api#local-execution[local] or not.
 | START_TIME| TIMESTAMP| A timestamp identifying when the query was started.
 | DURATION| BIGINT | The duration of the query up to the current moment.
 | MEMORY_CURRENT| BIGINT | The amount of memory the query uses at the current moment, in bytes.
 | MEMORY_MAX| BIGINT | The maximum amount of memory the query has used during its execution, in bytes.
-| DISK_ALLOCATION_CURRENT| BIGINT | The amount of disk space the query uses at the moment, in bytes. See link:developers-guide/memory-configuration/memory-quotas#query-offloading[Query Offloading] for details.
+| DISK_ALLOCATION_CURRENT| BIGINT | The amount of disk space the query uses at the moment, in bytes. See link:memory-configuration/memory-quotas#query-offloading[Query Offloading] for details.
 | DISK_ALLOCATION_MAX| BIGINT | Maximum amount of disk space the query has used during its execution, in bytes.
 | DISK_ALLOCATION_TOTAL| BIGINT | The query can allocate different amount of disk space at different stages of execution (because query execution follows the map-reduce pattern). This column returns the sum of those amounts, in bytes.
 | INITIATOR_ID| VARCHAR a| A string identifying the entity that started the query. By default, the `initiator_id` has different format depending on where the query was started:
 
-* link:developers-guide/SQL/JDBC/jdbc-driver[JDBC thin driver]: `jdbc-thin:<client_IP_host>:<client_IP_port>@<user_name>`
+* link:SQL/JDBC/jdbc-driver[JDBC thin driver]: `jdbc-thin:<client_IP_host>:<client_IP_port>@<user_name>`
 * Thin clients: `cli:<client_IP_host>:<client_IP_port>@<user_name>`
-* link:developers-guide/SQL/JDBC/jdbc-client-driver[JDBC client driver]: `jdbc-v2:<client_IP_host>:sqlGrid-ignite-jdbc-driver-<UUID>`
-* link:developers-guide/distributed-computing/distributed-computing#executing-tasks[Task]: `<job_class_name>:<job_ID>`
+* link:SQL/JDBC/jdbc-client-driver[JDBC client driver]: `jdbc-v2:<client_IP_host>:sqlGrid-ignite-jdbc-driver-<UUID>`
+* link:distributed-computing/distributed-computing#executing-tasks[Task]: `<job_class_name>:<job_ID>`
 
 The query initiator can be set via SQL API: link:{javadoc_base_url}/org/apache/ignite/cache/query/SqlFieldsQuery.html[SqlFieldsQuery].
 
@@ -283,7 +283,7 @@ This system view contains the list of SQL queries executed on the node where the
 
 | SCHEMA_NAME| VARCHAR| The name of the schema.
 | SQL| VARCHAR| The SQL query.
-| LOCAL| BOOLEAN| Indicates whether the query is link:developers-guide/SQL/sql-api#local-execution[local].
+| LOCAL| BOOLEAN| Indicates whether the query is link:SQL/sql-api#local-execution[local].
 | EXECUTIONS| BIGINT| How many times the query was executed since the start of the cluster.
 | FAILURES| BIGINT| How many times the query failed.
 | DURATION_MIN| BIGINT| Minimum duration of the query.
@@ -291,7 +291,7 @@ This system view contains the list of SQL queries executed on the node where the
 | LAST_START_TIME| TIMESTAMP| The timestamp when the query was started last time.
 | MEMORY_MIN| BIGINT| Minimum amount of memory the query has used, in bytes.
 | MEMORY_MAX| BIGINT| Maximum amount of memory the query has used, in bytes.
-| DISK_ALLOCATION_MIN| BIGINT| Minimum allocated disk space the query has used. See link:developers-guide/memory-configuration/memory-quotas#query-offloading[Query Offloading] for details.
+| DISK_ALLOCATION_MIN| BIGINT| Minimum allocated disk space the query has used. See link:memory-configuration/memory-quotas#query-offloading[Query Offloading] for details.
 | DISK_ALLOCATION_MAX| BIGINT| Maximum allocated disk space the query has used.
 | DISK_ALLOCATION_TOTAL_MIN| BIGINT| Minimum total allocated disk space the query has used. See the explanation for the `DISK_ALLOCATION_TOTAL` column in <<LOCAL_SQL_RUNNING_QUERIES>>.
 | DISK_ALLOCATION_TOTAL_MAX| BIGINT |Maximum total allocated disk space the query has used.
@@ -325,7 +325,7 @@ The INDEXES view contains information about SQL indexes.
 
 == Examples
 
-To query the system views using the link:administrators-guide/tools-analytics/sqlline[SQLLine] tool, connect to the SYS schema as follows:
+To query the system views using the link:tools-analytics/sqlline[SQLLine] tool, connect to the SYS schema as follows:
 
 [source, shell]
 ----
@@ -346,7 +346,7 @@ select CUR_CPU_LOAD * 100 from NODE_METRICS where NODE_ID = 'a1b77663-b37f-4ddf-
 
 ----
 
-The same example using link:developers-guide/thin-clients/java-thin-client[Java Thin Client]:
+The same example using link:thin-clients/java-thin-client[Java Thin Client]:
 
 [source, java]
 ----
diff --git a/docs/_docs/developers-guide/near-cache.adoc b/docs/_docs/near-cache.adoc
similarity index 87%
rename from docs/_docs/developers-guide/near-cache.adoc
rename to docs/_docs/near-cache.adoc
index 8bb537d..74be447 100644
--- a/docs/_docs/developers-guide/near-cache.adoc
+++ b/docs/_docs/near-cache.adoc
@@ -38,7 +38,7 @@ include::code-snippets/dotnet/NearCaches.cs[tag=nearCacheConf,indent=0]
 tab:C++[unsupported]
 --
 
-Once configured in this way, the near cache is created on any node that requests data from the underlying cache, including both server nodes and client nodes. 
+Once configured in this way, the near cache is created on any node that requests data from the underlying cache, including both server nodes and client nodes.
 When you get an instance of the cache, as shown in the following example, the data requests go through the near cache.
 
 [tabs]
@@ -52,21 +52,21 @@ int value = cache.get(1);
 ----
 --
 
-Most configuration parameters available in the cache configuration that make sense for the near cache are inherited from the underlying cache configuration. 
-For example, if the underlying cache has an link:developers-guide/configuring-caches/expiry-policies[expiry policy] configured, entries in the near cache are expired based on the same policy.
+Most configuration parameters available in the cache configuration that make sense for the near cache are inherited from the underlying cache configuration.
+For example, if the underlying cache has an link:configuring-caches/expiry-policies[expiry policy] configured, entries in the near cache are expired based on the same policy.
 
 The parameters listed in the table below are not inherited from the underlying cache configuration.
 
 [cols="1,3,1",opts="autowidth.stretch,header"]
 |===
 |Parameter | Description | Default Value
-|nearEvictionPolicy| The eviction policy for the near cache. See the link:developers-guide/memory-configuration/eviction-policies[Eviction policies] page for details. | none
+|nearEvictionPolicy| The eviction policy for the near cache. See the link:memory-configuration/eviction-policies[Eviction policies] page for details. | none
 |nearStartSize| The initial capacity of the near cache (the number of entries it can hold). | 375,000
 |===
 
 == Creating Near Cache Dynamically On Client Nodes
-When making request from a client node to a cache that hasn't been configured to use a near cache, you can create a near cache for that cache dynamically. 
-This increases performance by storing "hot" data locally on the client side. 
+When making a request from a client node to a cache that hasn't been configured to use a near cache, you can create a near cache for that cache dynamically.
+This increases performance by storing "hot" data locally on the client side.
 This cache is operable only on the node where it was created.
 
 To do this, create a near cache configuration and pass it as an argument to the method that gets the instance of the cache.
diff --git a/docs/_docs/developers-guide/partition-loss-policy.adoc b/docs/_docs/partition-loss-policy.adoc
similarity index 95%
rename from docs/_docs/developers-guide/partition-loss-policy.adoc
rename to docs/_docs/partition-loss-policy.adoc
index 38e7357..6317f37 100644
--- a/docs/_docs/developers-guide/partition-loss-policy.adoc
+++ b/docs/_docs/partition-loss-policy.adoc
@@ -5,7 +5,7 @@ Throughout the cluster’s lifecycle, it may happen that some data partitions ar
 Such a situation leads to a partial data loss and needs to be addressed according to your use case.
 
 A partition is lost when both the primary copy and all backup copies of the partition are not available to the cluster, i.e. when the primary and backup nodes for the partition become unavailable. It means that for a given cache, you cannot afford to lose more than `number_of_backups` nodes.
-You can set the number of backup partitions for a cache in the link:developers-guide/configuring-caches/configuring-backups[cache configuration].
+You can set the number of backup partitions for a cache in the link:configuring-caches/configuring-backups[cache configuration].
 
 When the cluster topology changes, Ignite checks if the change resulted in a partition loss, and, depending on the configured partition loss policy and baseline autoadjustment settings, allows or prohibits operations on caches.
 See the description of each policy in the next section.
@@ -63,7 +63,7 @@ This event is fired for every partition that is lost and contains the number of
 Partition loss events are triggered only when either `READ_WRITE_SAFE` or `READ_ONLY_SAFE` policy is used.
 
 Enable the event in the cluster configuration first.
-See link:developers-guide/events/listening-to-events#enabling-events[Enabling Events].
+See link:events/listening-to-events#enabling-events[Enabling Events].
 
 [tabs]
 --
@@ -79,7 +79,7 @@ tab:C#/.NET[]
 tab:C++[]
 --
 
-See link:developers-guide/events/events#cache-rebalancing-events[Cache Rebalancing Events] for the information about other events related to rebalancing of partitions.
+See link:events/events#cache-rebalancing-events[Cache Rebalancing Events] for the information about other events related to rebalancing of partitions.
 
 == Handling Partition Loss
 
diff --git a/docs/_docs/developers-guide/peer-class-loading.adoc b/docs/_docs/peer-class-loading.adoc
similarity index 88%
rename from docs/_docs/developers-guide/peer-class-loading.adoc
rename to docs/_docs/peer-class-loading.adoc
index 5582d86..e9f5953 100644
--- a/docs/_docs/developers-guide/peer-class-loading.adoc
+++ b/docs/_docs/peer-class-loading.adoc
@@ -6,14 +6,14 @@ Peer class loading refers to loading classes from a local node where they are de
 With peer class loading enabled, you don't have to manually deploy your Java code on each node in the cluster and re-deploy it each time it changes.
 Ignite automatically loads the classes from the node where they are defined to the nodes where they are required.
 
-For example, when link:developers-guide/key-value-api/using-scan-queries[querying data] with a custom transformer, you only need to define your tasks on the client node that initiates the computation, and Ignite loads the classes to the server nodes.
+For example, when link:key-value-api/using-scan-queries[querying data] with a custom transformer, you only need to define your tasks on the client node that initiates the computation, and Ignite loads the classes to the server nodes.
 
 When enabled, peer class loading is used to deploy the following classes:
 
-* Tasks and jobs submitted via the link:developers-guide/distributed-computing/distributed-computing[compute interface].
-* Transformers and filters used with link:developers-guide/key-value-api/using-scan-queries[scan queries] and link:developers-guide/key-value-api/continuous-queries[continuous queries].
-* Stream transformers, receivers and visitors used with link:developers-guide/data-streaming#data-streamers[data streamers].
-* link:developers-guide/collocated-computations#entry-processor[Entry processors].
+* Tasks and jobs submitted via the link:distributed-computing/distributed-computing[compute interface].
+* Transformers and filters used with link:key-value-api/using-scan-queries[scan queries] and link:key-value-api/continuous-queries[continuous queries].
+* Stream transformers, receivers and visitors used with link:data-streaming#data-streamers[data streamers].
+* link:collocated-computations#entry-processor[Entry processors].
 
 When defining the classes listed above, we recommend creating each class as either a separate class or a static inner class, not as a lambda or anonymous inner class. Non-static inner classes are serialized together with their enclosing class. If some fields of the enclosing class cannot be serialized, you will get serialization exceptions.
 
@@ -37,9 +37,9 @@ This is what happens when a class is required on remote nodes:
 ====
 [discrete]
 === Deploying 3rd Party Libraries
-When utilizing peer class loading, you should be aware of the libraries that get loaded from peer nodes vs. libraries that are already available locally in the class path. 
-We suggest you should include all 3rd party libraries into the class path of every node. 
-This can be achieved by copying your JAR files into the `{IGNITE_HOME}/libs` folder. 
+When utilizing peer class loading, you should be aware of the libraries that get loaded from peer nodes vs. libraries that are already available locally in the class path.
+We suggest that you include all 3rd party libraries in the class path of every node.
+This can be achieved by copying your JAR files into the `{IGNITE_HOME}/libs` folder.
 This way you do not transfer megabytes of 3rd party classes to remote nodes every time you change a line of code.
 ====
 
@@ -126,8 +126,8 @@ In this mode, classes do not get un-deployed even if all the master nodes leave
 
 The classes deployed with peer class loading have their own lifecycle. On certain events (when the master node leaves or the user version changes, depending on deployment mode), the class information is un-deployed from the cluster: the class definition is erased from all nodes and the user resources linked with that class definition are also optionally erased (again, depending on deployment mode).
 
-User version comes into play whenever you want to redeploy classes deployed in `SHARED` or `CONTINUOUS` modes. 
-By default, Ignite automatically detects if the class loader has changed or a node is restarted. 
+User version comes into play whenever you want to redeploy classes deployed in `SHARED` or `CONTINUOUS` modes.
+By default, Ignite automatically detects if the class loader has changed or a node is restarted.
 However, if you would like to change and redeploy the code on a subset of nodes, or in the case of `CONTINUOUS` mode, kill every living deployment, you should change the user version.
 User version is specified in the `META-INF/ignite.xml` file of your class path as follows:
 
@@ -139,6 +139,6 @@ User version is specified in the `META-INF/ignite.xml` file of your class path a
 </bean>
 -------------------------------------------------------------------------------
 
-By default, all Ignite startup scripts (ignite.sh or ignite.bat) pick up the user version from the `IGNITE_HOME/config/userversion` folder. 
-Usually, you just need to update the user version under that folder. 
+By default, all Ignite startup scripts (ignite.sh or ignite.bat) pick up the user version from the `IGNITE_HOME/config/userversion` folder.
+Usually, you just need to update the user version under that folder.
 However, in case of GAR or JAR deployment, you should remember to provide the `META-INF/ignite.xml` file with the desired user version in it.
diff --git a/docs/_docs/developers-guide/persistence/custom-cache-store.adoc b/docs/_docs/persistence/custom-cache-store.adoc
similarity index 87%
rename from docs/_docs/developers-guide/persistence/custom-cache-store.adoc
rename to docs/_docs/persistence/custom-cache-store.adoc
index b4a60c4..c2a01be 100644
--- a/docs/_docs/developers-guide/persistence/custom-cache-store.adoc
+++ b/docs/_docs/persistence/custom-cache-store.adoc
@@ -25,11 +25,11 @@ To load the data on a single node, call `IgniteCache.localLoadCache()` on that n
 Cache store sessions are used to hold the context between multiple operations on the store and mainly employed to provide transactional support. The operations within one transaction are executed using the same database connection, and the connection is committed when the transaction commits.
 Cache store session is represented by an object of the `CacheStoreSession` class, which can be injected into your `CacheStore` implementation via the `@GridCacheStoreSessionResource` annotation.
 
-An example of how to implement a transactional cache store can be found on https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/datagrid/store/jdbc/CacheJdbcPersonStore.java[GitHub].
+An example of how to implement a transactional cache store can be found on link:{githubUrl}/examples/src/main/java/org/apache/ignite/examples/datagrid/store/jdbc/CacheJdbcPersonStore.java[GitHub].
 
 == Example
 
-Below is an example of a non-transactional implementation of `CacheStore`. For an example of the implementation with support for transactions, please refer to the https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/datagrid/store/jdbc/CacheJdbcPersonStore.java[CacheJdbcPersonStore.java] file on GitHub.
+Below is an example of a non-transactional implementation of `CacheStore`. For an example of the implementation with support for transactions, please refer to the link:{githubUrl}/examples/src/main/java/org/apache/ignite/examples/datagrid/store/jdbc/CacheJdbcPersonStore.java[CacheJdbcPersonStore.java] file on GitHub.
 
 
 
@@ -52,7 +52,7 @@ The need for this section is questionable
 
 === Partition-Aware Data Loading
 
-When you call `IgniteCache.loadCache()`, it delegates to the underlying `CacheStore.loadCache()`, which is called on all server nodes. The default implementation of that method simply iterates over all records and skips those keys that do not link:developers-guide/data-modeling/data-partitioning[belong to the node]. This is not very efficient because every node loads *TODO*
+When you call `IgniteCache.loadCache()`, it delegates to the underlying `CacheStore.loadCache()`, which is called on all server nodes. The default implementation of that method simply iterates over all records and skips those keys that do not link:data-modeling/data-partitioning[belong to the node]. This is not very efficient because every node loads *TODO*
 
 
 
diff --git a/docs/_docs/developers-guide/persistence/external-storage.adoc b/docs/_docs/persistence/external-storage.adoc
similarity index 93%
rename from docs/_docs/developers-guide/persistence/external-storage.adoc
rename to docs/_docs/persistence/external-storage.adoc
index 7117720..adea41b 100644
--- a/docs/_docs/developers-guide/persistence/external-storage.adoc
+++ b/docs/_docs/persistence/external-storage.adoc
@@ -7,13 +7,13 @@ You can use Ignite as a caching layer on top of an existing database such as an
 This use case accelerates the underlying database by employing in-memory processing.
 
 Ignite provides an out-of-the-box integration with Apache Cassandra.
-For other NoSQL databases for which integration is not available off-the-shelf, you can provide your own link:developers-guide/persistence/custom-cache-store[implementation of the `CacheStore` interface].
+For other NoSQL databases for which integration is not available off-the-shelf, you can provide your own link:persistence/custom-cache-store[implementation of the `CacheStore` interface].
 
 The two main use cases where an external storage can be used include:
 
 * A caching layer to an existing database. In this scenario, you can dramatically improve the processing speed by loading data into memory. You can also bring SQL support to a database that does not have it (when all data is loaded into memory).
 
-* You want to persist the data in an external database (instead of using the link:developers-guide/persistence/native-persistence[native persistence]).
+* You want to persist the data in an external database (instead of using the link:persistence/native-persistence[native persistence]).
 
 image:images/3rd_party_persistence.png[]
 
@@ -99,14 +99,14 @@ With `CacheJdbcPojoStore`, you can store objects as a set of fields and can conf
 +
 --
 * `dataSourceBean` -- database connection credentials: URL, user, password.
-* `dialect` -- the class that implements the SQL dialect compatible with your database. 
-Ignite provides out-of-the-box implementations for MySQL, Oracle, H2, SQLServer, and DB2 databases. 
+* `dialect` -- the class that implements the SQL dialect compatible with your database.
+Ignite provides out-of-the-box implementations for MySQL, Oracle, H2, SQLServer, and DB2 databases.
 These dialects can be found in the `org.apache.ignite.cache.store.jdbc.dialect` package of the Ignite distribution.
 * `types` -- this property is required to define mappings between the database table and the corresponding POJO (see POJO configuration example below).
 --
-. Optionally, configure link:developers-guide/SQL/sql-api#query-entities[query entities] if you want to execute SQL queries on the cache.
+. Optionally, configure link:SQL/sql-api#query-entities[query entities] if you want to execute SQL queries on the cache.
 
-The following example demonstrates how to configure an Ignite cache on top of a MySQL table. 
+The following example demonstrates how to configure an Ignite cache on top of a MySQL table.
 The table has 2 columns: `id` (INTEGER) and `name` (VARCHAR), which are mapped to objects of the `Person` class.
 
 
@@ -139,7 +139,7 @@ include::{javaFile}[tag=person,indent=0]
 It creates a table named 'ENTRIES', with the 'akey' and 'val' columns (both have the `binary` type).
 
 You can change the default table definition by providing a custom create table query and DML queries used to load, delete, and update the data.
-Refer to link:{javadoc_base_url}/org/apache/ignite/cache/store/jdbc/CacheJdbcBlobStore.html[CacheJdbcBlobStore] for details.
+Refer to javadoc:org.apache.ignite.cache.store.jdbc.CacheJdbcBlobStore[] for details.
 
 In the example below, the objects of the Person class are stored as an array of bytes in a single column.
 
@@ -206,5 +206,5 @@ Refer to the dedicated section on Cassandra integration for more information.
 ////
 == Implementing Custom CacheStore
 
-See link:developers-guide/advanced-topics/custom-cache-store[Implementing Custom Cache Store].
+See link:advanced-topics/custom-cache-store[Implementing Custom Cache Store].
 ////
diff --git a/docs/_docs/developers-guide/persistence/native-persistence.adoc b/docs/_docs/persistence/native-persistence.adoc
similarity index 93%
rename from docs/_docs/developers-guide/persistence/native-persistence.adoc
rename to docs/_docs/persistence/native-persistence.adoc
index ec65a91..a28dfd4 100644
--- a/docs/_docs/developers-guide/persistence/native-persistence.adoc
+++ b/docs/_docs/persistence/native-persistence.adoc
@@ -4,13 +4,13 @@
 
 == Overview
 
-Ignite Persistence, or Native Persistence, is a set of features designed to provide persistent storage. 
-When it is enabled, Ignite always stores all the data on disk, and loads as much data as it can into RAM for processing. 
+Ignite Persistence, or Native Persistence, is a set of features designed to provide persistent storage.
+When it is enabled, Ignite always stores all the data on disk, and loads as much data as it can into RAM for processing.
 For example, if there are 100 entries and RAM has the capacity to store only 20, then all 100 are stored on disk and only 20 are cached in RAM for better performance.
 
 When Native persistence is turned off and no external storage is used, Ignite behaves as a pure in-memory store.
 
-When persistence is enabled, every server node persists a subset of the data that only includes the partitions that are assigned to that node (including link:developers-guide/data-modeling/data-partitioning#backup-partitions[backup partitions] if backups are enabled).
+When persistence is enabled, every server node persists a subset of the data that only includes the partitions that are assigned to that node (including link:data-modeling/data-partitioning#backup-partitions[backup partitions] if backups are enabled).
 
 The Native Persistence functionality is based on the following features:
 
@@ -22,8 +22,8 @@ The Native Persistence functionality is based on the following features:
 *TODO: diagram: update operation + wal + checkpointing*
 ////
 
-When persistence is enabled, Ignite stores each partition in a separate file on disk. 
-The data format of the partition files is the same as that of the data when it is kept in memory. 
+When persistence is enabled, Ignite stores each partition in a separate file on disk.
+The data format of the partition files is the same as that of the data when it is kept in memory.
 If partition backups are enabled, they are also saved on disk.
 In addition to data partitions, Ignite stores indexes and metadata.
 
@@ -38,7 +38,7 @@ To avoid unnecessary data transfer, you can decide when you want to start rebala
 
 ////
 
-Because persistence is configured per link:developers-guide/memory-configuration/data-regions[data region], in-memory data regions differ from regions with persistence with respect to data rebalancing:
+Because persistence is configured per link:memory-configuration/data-regions[data region], in-memory data regions differ from regions with persistence with respect to data rebalancing:
 
 [cols="1,1",options="header"]
 |===
@@ -58,8 +58,8 @@ Because persistence is configured per link:developers-guide/memory-configuration
 
 == Enabling Persistent Storage
 
-Native Persistence is configured per link:developers-guide/memory-configuration/data-regions[data region].
-To enable persistent storage, set the `persisteceEnabled` property to `true` in the data region configuration.
+Native Persistence is configured per link:memory-configuration/data-regions[data region].
+To enable persistent storage, set the `persistenceEnabled` property to `true` in the data region configuration.
 You can have in-memory data regions and data regions with persistence at the same time.
 
 The following example shows how to enable persistent storage for the default data region.
@@ -79,7 +79,7 @@ tab:Java[]
 include::{javaFile}[tags=cfg;!storage-path,indent=0]
 ----
 
-tab:.NET/C#[]
+tab:C#/.NET[]
 [source,csharp]
 ----
 include::code-snippets/dotnet/PersistenceIgnitePersistence.cs[tags=cfg;!storage-path,indent=0]
@@ -89,7 +89,7 @@ tab:C++[unsupported]
 
 == Configuring Persistent Storage Directory
 
-When persistence is enabled, the node stores user data, indexes and WAL files in the `{IGNITE_WORK_DIR}/db` directory. 
+When persistence is enabled, the node stores user data, indexes and WAL files in the `{IGNITE_WORK_DIR}/db` directory.
 This directory is referred to as the storage directory.
 You can change the storage directory by setting the `storagePath` property of the `DataStorageConfiguration` object, as shown below.
 
@@ -128,7 +128,7 @@ tab:Java[]
 include::{javaFile}[tags=cfg,indent=0]
 ----
 
-tab:.NET/C#[]
+tab:C#/.NET[]
 [source,csharp]
 ----
 include::code-snippets/dotnet/PersistenceIgnitePersistence.cs[tags=cfg,indent=0]
@@ -258,9 +258,9 @@ tab:C++[unsupported]
 WARNING: If WAL is disabled and you restart a node, all data is removed from the persistent storage on that node. This is implemented because without WAL data consistency cannot be guaranteed in case of node crash or restart.
 
 === WAL Archive Compaction
-You can enable WAL Archive compaction to reduce the space occupied by the WAL Archive. 
-By default, WAL Archive contains segments for the last 20 checkpoints (this number is configurable). 
-If compaction is enabled, all archived segments that are 1 checkpoint old are compressed in ZIP format. 
+You can enable WAL Archive compaction to reduce the space occupied by the WAL Archive.
+By default, WAL Archive contains segments for the last 20 checkpoints (this number is configurable).
+If compaction is enabled, all archived segments that are 1 checkpoint old are compressed in ZIP format.
 If the segments are needed (for example, to re-balance data between nodes), they are uncompressed to RAW format.
 
 See the <<Configuration Properties>> section below to learn how to enable WAL archive compaction.
@@ -277,8 +277,8 @@ It is safe to disable WAL archiving because a cluster without the WAL archive pr
 *TODO: Artem, should we mention why someone would want to use WAL Archiving, if it can impact performance and a cluster without the archive has the same guarantees?*
 ////
 
-To disable archiving, set the WAL path and the WAL archive path to the same value. 
-In this case, Ignite does not copy segments to the archive; instead, it creates new segments in the WAL folder. 
+To disable archiving, set the WAL path and the WAL archive path to the same value.
+In this case, Ignite does not copy segments to the archive; instead, it creates new segments in the WAL folder.
 Old segments are deleted as the WAL grows, based on the WAL Archive size setting.
 
 
@@ -296,7 +296,7 @@ This process helps to utilize disk space frugally by keeping pages in the most u
 
 See the following related documentation:
 
-* link:administrators-guide/monitoring-metrics/metrics#monitoring-checkpointing-operations[Monitoring Checkpointing Operations].
+* link:monitoring-metrics/metrics#monitoring-checkpointing-operations[Monitoring Checkpointing Operations].
 * link:perf-troubleshooting-guide/persistence-tuning#adjusting-checkpointing-buffer-size[Adjusting Checkpointing Buffer Size]
 
 == Configuration Properties
diff --git a/docs/_docs/developers-guide/persistence/swap.adoc b/docs/_docs/persistence/swap.adoc
similarity index 81%
rename from docs/_docs/developers-guide/persistence/swap.adoc
rename to docs/_docs/persistence/swap.adoc
index 86bf5c7..5df80da 100644
--- a/docs/_docs/developers-guide/persistence/swap.adoc
+++ b/docs/_docs/persistence/swap.adoc
@@ -2,29 +2,29 @@
 
 == Overview
 
-When using a pure in-memory storage, it is possible that the size of data loaded into a node exceeds the physical RAM size, leading to out of memory errors (OOMEs). 
-If you do not want to use the native persistence or an external storage, you can enable swapping, in which case the in-memory data is moved to the swap space located on disk. 
-Please note that Ignite does not provide its own implementation of swap space. 
+When using a pure in-memory storage, it is possible that the size of data loaded into a node exceeds the physical RAM size, leading to out of memory errors (OOMEs).
+If you do not want to use the native persistence or an external storage, you can enable swapping, in which case the in-memory data is moved to the swap space located on disk.
+Please note that Ignite does not provide its own implementation of swap space.
 Instead, it takes advantage of the swapping functionality provided by the operating system (OS).
 
-When swap space is enabled, Ignite stores data in memory-mapped files (MMF) whose content is swapped to disk by the OS according to the current RAM consumption; 
-however, in that scenario the data access time is longer. 
-Moreover, there are no data durability guarantees. 
-Which means that the data from the swap space is available only as long as the node is alive. 
-Once the node where the swap space exists shuts down, the data is lost. 
+When swap space is enabled, Ignite stores data in memory-mapped files (MMF) whose content is swapped to disk by the OS according to the current RAM consumption;
+however, in that scenario the data access time is longer.
+Moreover, there are no data durability guarantees:
+the data from the swap space is available only as long as the node is alive.
+Once the node where the swap space exists shuts down, the data is lost.
 Therefore, you should use swap space as an extension to RAM only to give yourself enough time to add more nodes to the cluster in order to re-distribute data and avoid OOMEs which might happen if the cluster is not scaled in time.
 
 [CAUTION]
 ====
 Since swap space is located on disk, it should not be considered as a replacement to native persistence.
 Data from the swap space is available as long as the node is active. Once the node shuts down, the data is lost.
-To ensure that data is available at all times, you should either enable link:developers-guide/persistence/native-persistence/[native persistence] or use an link:developers-guide/persistence/external-storage[external storage].
+To ensure that data is available at all times, you should either enable link:persistence/native-persistence/[native persistence] or use an link:persistence/external-storage[external storage].
 ====
 
 == Enabling Swapping
 
+The data region `maxSize` property defines the total size of the region.
-You will get out of memory errors if your data size exceeds `maxSize` and neither native persistence nor an external database is used. 
+Data Region `maxSize` defines the total `maxSize` of the region.
+You will get out of memory errors if your data size exceeds `maxSize` and neither native persistence nor an external database is used.
 To avoid this situation with the swapping capabilities, you need to:
 
 * Set `maxSize` to a value that is bigger than the total RAM size. In this case, the OS takes care of the swapping.
diff --git a/docs/_docs/developers-guide/preface.adoc b/docs/_docs/preface.adoc
similarity index 100%
rename from docs/_docs/developers-guide/preface.adoc
rename to docs/_docs/preface.adoc
diff --git a/docs/_docs/developers-guide/restapi.adoc b/docs/_docs/restapi.adoc
similarity index 98%
rename from docs/_docs/developers-guide/restapi.adoc
rename to docs/_docs/restapi.adoc
index f3d8b54..eca5728 100644
--- a/docs/_docs/developers-guide/restapi.adoc
+++ b/docs/_docs/restapi.adoc
@@ -10,7 +10,7 @@ Internally, Ignite uses Jetty to provide HTTP server features. See <<Configurati
 
 To enable HTTP connectivity, make sure that the `ignite-rest-http` module is enabled.
 If you use the binary distribution, copy the `ignite-rest-http` module from `IGNITE_HOME/libs/optional/` to the `IGNITE_HOME/libs` folder.
-See link:developers-guide/setup#enabling-modules[Enabling modules] for details.
+See link:setup#enabling-modules[Enabling modules] for details.
 
 Explicit configuration is not required; the connector starts up automatically and listens on port `8080`. You can check if it works with curl:
 
@@ -90,7 +90,7 @@ include::code-snippets/xml/jetty.xml[tags=, indent=0]
 
 //NOTE: Refer to the link:https://www.gridgain.com/docs/tutorials/security/ssl-guide[SSL Guide] for a comprehensive instruction on SSL.
 
-When link:administrators-guide/security/authentication[authentication] is configured in the cluster, all applications that use REST API request authentication by providing security credentials.
+When link:security/authentication[authentication] is configured in the cluster, all applications that use the REST API must authenticate by providing security credentials.
 The authentication request returns a session token that can be used with any command within that session.
 
 There are two ways to request authorization:
@@ -955,7 +955,7 @@ http://host:port/ignite?cmd=rep&key=repKey&val=newValue&cacheName={cacheName}&de
 | `exp`
 | long
 | Yes
-| Expiration time in milliseconds for the entry. When the parameter is set, the operation is executed with link:developers-guide/configuring-caches/expiry-policies[ModifiedExpiryPolicy].
+| Expiration time in milliseconds for the entry. When the parameter is set, the operation is executed with link:configuring-caches/expiry-policies[ModifiedExpiryPolicy].
 | 60000
 
 |=======
@@ -1676,7 +1676,7 @@ http://host:port/ignite?cmd=add&key=newKey&val=newValue&cacheName={cacheName}&de
 |`exp`
 | long
 | Yes
-| Expiration time in milliseconds for the entry. When the parameter is set, the operation is executed with link:developers-guide/configuring-caches/expiry-policies[ModifiedExpiryPolicy].
+| Expiration time in milliseconds for the entry. When the parameter is set, the operation is executed with link:configuring-caches/expiry-policies[ModifiedExpiryPolicy].
 | 60000
 
 |=======
@@ -1754,7 +1754,7 @@ http://host:port/ignite?cmd=put&key=newKey&val=newValue&cacheName={cacheName}&de
 |`exp`
 | long
 | Yes
-|Expiration time in milliseconds for the entry. When the parameter is set, the operation is executed with link:developers-guide/configuring-caches/expiry-policies[ModifiedExpiryPolicy].
+|Expiration time in milliseconds for the entry. When the parameter is set, the operation is executed with link:configuring-caches/expiry-policies[ModifiedExpiryPolicy].
 | 60000
 
 |=======
@@ -1903,7 +1903,7 @@ http://host:port/ignite?cmd=putifabs&key={getKey}&val={newVal}&cacheName={cacheN
 |`exp`
 | long
 | Yes
-| Expiration time in milliseconds for the entry. When the parameter is set, the operation is executed with link:developers-guide/configuring-caches/expiry-policies[ModifiedExpiryPolicy].
+| Expiration time in milliseconds for the entry. When the parameter is set, the operation is executed with link:configuring-caches/expiry-policies[ModifiedExpiryPolicy].
 | 60000
 
 |=======
@@ -2088,7 +2088,7 @@ http://host:port/ignite?cmd=getorcreate&cacheName={cacheName}
 |`templateName`
 | String
 | Yes
-| Name of the cache template registered in Ignite to use as a configuration for the distributed cache. See the link:developers-guide/configuring-caches/configuration-overview#cache-templates[Cache Template, window=_blank] section for more information.
+| Name of the cache template registered in Ignite to use as a configuration for the distributed cache. See the link:configuring-caches/configuration-overview#cache-templates[Cache Template, window=_blank] section for more information.
 
 |`cacheGroup`
 | String
diff --git a/docs/_docs/administrators-guide/security/authentication.adoc b/docs/_docs/security/authentication.adoc
similarity index 71%
rename from docs/_docs/administrators-guide/security/authentication.adoc
rename to docs/_docs/security/authentication.adoc
index 0bebd1e..36940fe 100644
--- a/docs/_docs/administrators-guide/security/authentication.adoc
+++ b/docs/_docs/security/authentication.adoc
@@ -6,7 +6,7 @@
 
 
 You can enable Ignite Authentication by setting the `authenticationEnabled` property to `true` in the node's configuration.
-This type of authentication requires link:developers-guide/persistence/native-persistence[persistent storage] be enabled for at least one data region.
+This type of authentication requires link:persistence/native-persistence[persistent storage] be enabled for at least one data region.
 
 [tabs]
 --
@@ -43,9 +43,9 @@ You can manage users using the following SQL commands:
 
 When authentication is configured in the cluster, all client applications must provide user credentials. Refer to the following pages for the information about specific clients:
 
-* link:developers-guide/thin-clients/getting-started-with-thin-clients#authentication[Thin clients]
-* link:developers-guide/SQL/JDBC/jdbc-driver#parameters[JDBC driver]
-* link:developers-guide/SQL/ODBC/connection-string-dsn#supported-arguments[ODBC driver]
-* link:developers-guide/restapi#security[REST API]
+* link:thin-clients/getting-started-with-thin-clients#authentication[Thin clients]
+* link:SQL/JDBC/jdbc-driver#parameters[JDBC driver]
+* link:SQL/ODBC/connection-string-dsn#supported-arguments[ODBC driver]
+* link:restapi#security[REST API]
 
 
diff --git a/docs/_docs/administrators-guide/security/index.adoc b/docs/_docs/security/index.adoc
similarity index 100%
copy from docs/_docs/administrators-guide/security/index.adoc
copy to docs/_docs/security/index.adoc
diff --git a/docs/_docs/administrators-guide/security/ssl-tls.adoc b/docs/_docs/security/ssl-tls.adoc
similarity index 97%
rename from docs/_docs/administrators-guide/security/ssl-tls.adoc
rename to docs/_docs/security/ssl-tls.adoc
index 31fe46a..270f40f 100644
--- a/docs/_docs/administrators-guide/security/ssl-tls.adoc
+++ b/docs/_docs/security/ssl-tls.adoc
@@ -58,7 +58,7 @@ Security status [authentication=off, *tls/ssl=on*]
 ////
 == SSL/TLS for Thin Clients
 
-To enable SSL/TLS for thin clients, refer to the link:developers-guide/thin-clients/getting-started-with-thin-clients#enabling-ssltls-for-thin-clients[thin client documentation].
+To enable SSL/TLS for thin clients, refer to the link:thin-clients/getting-started-with-thin-clients#enabling-ssltls-for-thin-clients[thin client documentation].
 ////
 
 == SSL/TLS for Thin Clients and JDBC/ODBC [[ssl-for-clients]]
diff --git a/docs/_docs/administrators-guide/security/tde.adoc b/docs/_docs/security/tde.adoc
similarity index 94%
rename from docs/_docs/administrators-guide/security/tde.adoc
rename to docs/_docs/security/tde.adoc
index 0bf4b96..0522f5e 100644
--- a/docs/_docs/administrators-guide/security/tde.adoc
+++ b/docs/_docs/security/tde.adoc
@@ -5,7 +5,7 @@ WARNING: This feature is in beta and not recommended for use in production envir
 == Overview
 Transparent data encryption (TDE) allows users to encrypt their data at rest.
 
-When link:developers-guide/persistence/native-persistence[Ignite persistence] is turned on, encryption can be enabled per cache/table, in which case the following data will be encrypted:
+When link:persistence/native-persistence[Ignite persistence] is turned on, encryption can be enabled per cache/table, in which case the following data will be encrypted:
 
 - Data on disk
 - WAL records
diff --git a/docs/_docs/developers-guide/setup.adoc b/docs/_docs/setup.adoc
similarity index 98%
rename from docs/_docs/developers-guide/setup.adoc
rename to docs/_docs/setup.adoc
index 70fd986..01506d3 100644
--- a/docs/_docs/developers-guide/setup.adoc
+++ b/docs/_docs/setup.adoc
@@ -27,7 +27,7 @@ The easiest way to start developing with Ignite is to use Maven.
 
 == Using Docker
 
-If you want to run Ignite in Docker, refer to the link:installation-guide/on-premises-deployment#installing-using-docker[Docker Deployment] section.
+If you want to run Ignite in Docker, refer to the link:on-premises-deployment#installing-using-docker[Docker Deployment] section.
 
 == Configuring Work Directory
 
diff --git a/docs/_docs/sql-reference/aggregate-functions.adoc b/docs/_docs/sql-reference/aggregate-functions.adoc
new file mode 100644
index 0000000..0662790
--- /dev/null
+++ b/docs/_docs/sql-reference/aggregate-functions.adoc
@@ -0,0 +1,383 @@
+= Aggregate Functions
+
+== AVG
+
+
+[source,sql]
+----
+AVG ([DISTINCT] expression)
+----
+
+The average (mean) value. If no rows are selected, the result is `NULL`. Aggregates are only allowed in select statements. The returned value is of the same data type as the parameter.
+
+=== Parameters
+
+- `DISTINCT` - optional keyword. If present, only unique (distinct) values are averaged.
+
+
+=== Examples
+Calculate the average age of the players:
+
+
+[source,sql]
+----
+SELECT AVG(age) "AverageAge" FROM Players;
+----
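+
+With the optional `DISTINCT` keyword, repeated ages are averaged only once; a minimal sketch against the same table:
+
+[source,sql]
+----
+SELECT AVG(DISTINCT age) FROM Players;
+----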
+
+
+== BIT_AND
+
+
+[source,sql]
+----
+BIT_AND (expression)
+----
+
+The bitwise AND of all non-null values. If no rows are selected, the result is NULL. Aggregates are only allowed in select statements.
+
+A logical AND operation is performed on each pair of corresponding bits of two binary expressions of equal length.
+
+In each pair, the result is 1 if both bits are 1; otherwise, the result is 0.
+
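+=== Example
+A minimal sketch, assuming the `Players` table also has an integer `flags` column in which each bit encodes a player attribute; the query returns, for every city, the attribute bits shared by all of its players:
+
+[source,sql]
+----
+-- flags is a hypothetical bit-mask column used only for illustration
+SELECT city_id, BIT_AND(flags) FROM Players GROUP BY city_id;
+----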
+
+== BIT_OR
+
+
+[source,sql]
+----
+BIT_OR (expression)
+----
+
+The bitwise OR of all non-null values. If no rows are selected, the result is NULL. Aggregates are only allowed in select statements.
+
+A logical OR operation is performed on each pair of corresponding bits of two binary expressions of equal length.
+
+In each pair, the result is 1 if at least one of the two bits is 1; otherwise, the result is 0.
+
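+=== Example
+Under the same assumed `flags` column, a sketch that returns, for every city, every attribute bit set by at least one of its players:
+
+[source,sql]
+----
+-- flags is a hypothetical bit-mask column used only for illustration
+SELECT city_id, BIT_OR(flags) FROM Players GROUP BY city_id;
+----
+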
+////
+== BOOL_AND
+
+[source,sql]
+----
+BOOL_AND (boolean)
+----
+
+Returns true if all expressions are true. If no entries are selected, the result is NULL. Aggregates are only allowed in select statements.
+
+=== Example
+
+[source,sql]
+----
+SELECT item, BOOL_AND(price > 10) FROM Items GROUP BY item;
+----
+
+== BOOL_OR
+
+[source,sql]
+----
+BOOL_OR (boolean)
+----
+
+Returns true if any expression is true. If no entries are selected, the result is NULL. Aggregates are only allowed in select statements.
+
+=== Example
+
+[source,sql]
+----
+SELECT BOOL_OR(CITY LIKE 'W%') FROM Users;
+----
+////
+
+== COUNT
+
+[source,sql]
+----
+COUNT (* | [DISTINCT] expression)
+----
+
+The count of all entries or of the non-null values. This method returns a long. If no entries are selected, the result is 0. Aggregates are only allowed in select statements.
+
+=== Example
+Calculate the number of players in every city:
+
+[source,sql]
+----
+SELECT city_id, COUNT(*) FROM Players GROUP BY city_id;
+----
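+
+With the `DISTINCT` keyword, the same table can be used to count how many distinct cities have players; a minimal sketch:
+
+[source,sql]
+----
+SELECT COUNT(DISTINCT city_id) FROM Players;
+----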
+
+== FIRSTVALUE
+
+[source, sql]
+----
+FIRSTVALUE ([DISTINCT] <expression1>, <expression2>)
+----
+
+Returns the value of `expression1` associated with the smallest value of `expression2` for each group defined by the `group by` expression in the query.
+This function can only be used with colocated data and you have to use the `collocated` flag when executing the query.
+
+The colocated hint can be set as follows:
+
+* `SqlFieldsQuery.collocated = true` if you use the link:SQL/sql-api[SQL API] to execute queries.
+* link:SQL/JDBC/jdbc-driver#parameters[JDBC connection string parameter]
+* link:SQL/ODBC/connection-string-dsn#supported-arguments[ODBC connection string argument]
+
+
+=== Example
+The following example returns, for each company, the name of its youngest person:
+[source, sql]
+----
+select company_id, firstvalue(name, age) as youngest from person group by company_id;
+----
+
+== GROUP_CONCAT
+
+[source,sql]
+----
+GROUP_CONCAT([DISTINCT] expression || [expression || [expression ...]]
+  [ORDER BY expression [ASC|DESC], [[ORDER BY expression [ASC|DESC]]]
+  [SEPARATOR expression])
+----
+
+Concatenates strings with a separator. The default separator is a ',' (without whitespace). This method returns a string. If no entries are selected, the result is NULL. Aggregates are only allowed in select statements.
+
+The `expression` can be a concatenation of columns and strings using the `||` operator, for example: `column1 || "=" || column2`.
+
+=== Parameters
+- `DISTINCT` - filters the result set for unique sets of expressions.
+- `expression` - specifies an expression that may be a column name, a result of another function, or a math operation.
+- `ORDER BY` - orders rows by expression.
+- `SEPARATOR` - overrides a string separator. By default, the separator character is the comma ','.
+
+NOTE: The `DISTINCT` and `ORDER BY` expressions inside the GROUP_CONCAT function are only supported if you group the results by the primary or affinity key (i.e. use `GROUP BY`). Moreover, you have to tell Ignite that your data is colocated by specifying the `collocated=true` property in the connection string or by calling `SqlFieldsQuery.setCollocated(true)` if you use the link:{javadoc_base_url}/org/apache/ignite/cache/query/SqlFieldsQuery.html#setCollocated-boolean-[Java API, window=_blank].
+
+
+=== Example
+Group all players' names in one row:
+
+
+[source,sql]
+----
+SELECT GROUP_CONCAT(name ORDER BY id SEPARATOR ', ') FROM Players;
+----
+
+
+== LASTVALUE
+
+[source, sql]
+----
+LASTVALUE ([DISTINCT] <expression1>, <expression2>)
+----
+
+Returns the value of `expression1` associated with the largest value of `expression2` for each group defined by the `group by` expression.
+This function can only be used with colocated data and you have to use the `collocated` flag when executing the query.
+
+The colocated hint can be set as follows:
+
+* `SqlFieldsQuery.collocated = true` if you use the link:SQL/sql-api[SQL API] to execute queries.
+* link:SQL/JDBC/jdbc-driver#parameters[JDBC connection string parameter]
+* link:SQL/ODBC/connection-string-dsn#supported-arguments[ODBC connection string argument]
+
+=== Example
+
+[source, sql]
+----
+select company_id, lastvalue(name, age) as oldest from person group by company_id;
+----
+
+
+
+== MAX
+
+[source,sql]
+----
+MAX (expression)
+----
+
+Returns the highest value. If no entries are selected, the result is NULL. Aggregates are only allowed in select statements. The returned value is of the same data type as the parameter.
+
+
+=== Parameters
+- `expression` - may be a column name, a result of another function, or a math operation.
+
+
+=== Example
+Return the height of the tallest player:
+
+
+[source,sql]
+----
+SELECT MAX(height) FROM Players;
+----
+
+
+== MIN
+
+[source,sql]
+----
+MIN (expression)
+----
+
+Returns the lowest value. If no entries are selected, the result is NULL. Aggregates are only allowed in select statements. The returned value is of the same data type as the parameter.
+
+
+
+=== Parameters
+- `expression` - may be a column name, the result of another function, or a math operation.
+
+=== Example
+Return the age of the youngest player:
+
+
+[source,sql]
+----
+SELECT MIN(age) FROM Players;
+----
+
+
+== SUM
+
+[source,sql]
+----
+SUM ([DISTINCT] expression)
+----
+
+Returns the sum of all values. If no entries are selected, the result is NULL. Aggregates are only allowed in select statements. The data type of the returned value depends on the parameter data.
+
+
+=== Parameters
+- `DISTINCT` - accumulate unique values only.
+- `expression` - may be a column name, the result of another function, or a math operation.
+
+=== Example
+Get the total number of goals scored by all players:
+
+
+[source,sql]
+----
+SELECT SUM(goal) FROM Players;
+----
+
+////
+this function is not supported
+== SELECTIVITY
+
+[source,sql]
+----
+SELECTIVITY (expression)
+----
+Estimates the selectivity (0-100) of a value. The value is defined as `(100 * distinctCount / rowCount)`. The selectivity of 0 rows is 0 (unknown). Aggregates are only allowed in select statements.
+
+
+=== Parameters
+- `expression` - may be a column name.
+
+
+=== Example
+Calculate the selectivity of the `first_name` and `second_name` columns:
+
+
+[source,sql]
+----
+SELECT SELECTIVITY(first_name), SELECTIVITY(second_name) FROM Player
+  WHERE ROWNUM() < 20000;
+----
+
+
+== STDDEV_POP
+
+[source,sql]
+----
+STDDEV_POP ([DISTINCT] expression)
+----
+Returns the population standard deviation. This method returns a `double`. If no entries are selected, the result is NULL. Aggregates are only allowed in select statements.
+
+
+=== Parameters
+- `DISTINCT` - calculate unique value only.
+- `expression` - may be a column name.
+
+
+=== Example
+Calculate the standard deviation for Players' age:
+
+
+[source,sql]
+----
+SELECT STDDEV_POP(age) from Players;
+----
+
+
+== STDDEV_SAMP
+
+[source,sql]
+----
+STDDEV_SAMP ([DISTINCT] expression)
+----
+
+Calculates the sample standard deviation. This method returns a `double`. If no entries are selected, the result is NULL. Aggregates are only allowed in select statements.
+
+=== Parameters
+- `DISTINCT` - calculate unique values only.
+- `expression` - may be a column name.
+
+
+=== Example
+Calculates the sample standard deviation for Players' age:
+
+
+[source,sql]
+----
+SELECT STDDEV_SAMP(age) from Players;
+----
+
+
+== VAR_POP
+
+[source,sql]
+----
+VAR_POP ([DISTINCT] expression)
+----
+
+Calculates the _population variance_ (square of the population standard deviation). This method returns a `double`. If no entries are selected, the result is NULL. Aggregates are only allowed in select statements.
+
+
+=== Parameters
+- `DISTINCT` - calculate unique values only.
+- `expression` - may be a column name.
+
+
+=== Example
+Calculate the variance of Players' age:
+
+
+[source,sql]
+----
+SELECT VAR_POP (age) from Players;
+----
+
+
+
+== VAR_SAMP
+
+[source,sql]
+----
+VAR_SAMP ([DISTINCT] expression)
+----
+
+Calculates the _sample variance_ (square of the sample standard deviation). This method returns a `double`. If no entries are selected, the result is NULL. Aggregates are only allowed in select statements.
+
+
+=== Parameters
+- `DISTINCT` - calculate unique values only.
+- `expression` - may be a column name.
+
+
+=== Example
+Calculate the variance of Players' age:
+
+
+[source,sql]
+----
+SELECT VAR_SAMP(age) FROM Players;
+----
+////
diff --git a/docs/_docs/sql-reference/data-types.adoc b/docs/_docs/sql-reference/data-types.adoc
new file mode 100644
index 0000000..5b80daf
--- /dev/null
+++ b/docs/_docs/sql-reference/data-types.adoc
@@ -0,0 +1,168 @@
+= Data Types
+
+
+This page lists the SQL data types available in Ignite, such as string, numeric, and date/time types.
+
+Every SQL type is mapped to a programming language or driver-specific type that is natively supported by Ignite.
+
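+For illustration, the following sketch defines a hypothetical table that combines several of the types listed below:
+
+[source,sql]
+----
+-- A hypothetical table using several of the SQL data types described on this page.
+CREATE TABLE IF NOT EXISTS TypeDemo (
+  id UUID PRIMARY KEY,
+  name VARCHAR,
+  balance DECIMAL,
+  height DOUBLE,
+  created TIMESTAMP,
+  active BOOLEAN
+);
+----
+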
+== BOOLEAN
+Possible values: TRUE and FALSE.
+
+Mapped to:
+
+- Java/JDBC: `java.lang.Boolean`
+- .NET/C#: `bool`
+- C/C++: `bool`
+- ODBC: `SQL_BIT`
+
+== BIGINT
+Possible values: [`-9223372036854775808`, `9223372036854775807`].
+
+Mapped to:
+
+- Java/JDBC: `java.lang.Long`
+- .NET/C#: `long`
+- C/C++: `int64_t`
+- ODBC: `SQL_BIGINT`
+
+== DECIMAL
+Possible values: Data type with fixed precision and scale.
+
+Mapped to:
+
+- Java/JDBC: `java.math.BigDecimal`
+- .NET/C#: `decimal`
+- C/C++: `ignite::Decimal`
+- ODBC: `SQL_DECIMAL`
+
+== DOUBLE
+Possible values: A floating point number.
+
+Mapped to:
+
+- Java/JDBC: `java.lang.Double`
+- .NET/C#: `double`
+- C/C++: `double`
+- ODBC: `SQL_DOUBLE`
+
+== INT
+Possible values: [`-2147483648`, `2147483647`].
+
+Mapped to:
+
+- Java/JDBC: `java.lang.Integer`
+- .NET/C#: `int`
+- C/C++: `int32_t`
+- ODBC: `SQL_INTEGER`
+
+== REAL
+Possible values: A single precision floating point number.
+
+Mapped to:
+
+- Java/JDBC: `java.lang.Float`
+- .NET/C#: `float`
+- C/C++: `float`
+- ODBC: `SQL_FLOAT`
+
+== SMALLINT
+Possible values: [`-32768`, `32767`].
+
+Mapped to:
+
+- Java/JDBC: `java.lang.Short`
+- .NET/C#: `short`
+- C/C++: `int16_t`
+- ODBC: `SQL_SMALLINT`
+
+== TINYINT
+Possible values: [`-128`, `127`].
+
+Mapped to:
+
+- Java/JDBC: `java.lang.Byte`
+- .NET/C#: `sbyte`
+- C/C++: `int8_t`
+- ODBC: `SQL_TINYINT`
+
+== CHAR
+Possible values: A Unicode String. This type is supported for compatibility with other databases and older applications.
+
+Mapped to:
+
+- Java/JDBC: `java.lang.String`
+- .NET/C#: `string`
+- C/C++: `std::string`
+- ODBC: `SQL_CHAR`
+
+== VARCHAR
+Possible values: A Unicode String.
+
+Mapped to:
+
+- Java/JDBC: `java.lang.String`
+- .NET/C#: `string`
+- C/C++: `std::string`
+- ODBC: `SQL_VARCHAR`
+
+== DATE
+Possible values: The date data type. The format is `yyyy-MM-dd`.
+
+Mapped to:
+
+- Java/JDBC: `java.sql.Date`
+- .NET/C#: N/A
+- C/C++: `ignite::Date`
+- ODBC: `SQL_TYPE_DATE`
+
+NOTE: Use the <<TIMESTAMP>> type instead of DATE whenever possible. The DATE type is serialized/deserialized very inefficiently, resulting in performance degradation.
+
+== TIME
+Possible values: The time data type. The format is `hh:mm:ss`.
+
+Mapped to:
+
+- Java/JDBC: `java.sql.Time`
+- .NET/C#: N/A
+- C/C++: `ignite::Time`
+- ODBC: `SQL_TYPE_TIME`
+
+== TIMESTAMP
+Possible values: The timestamp data type. The format is `yyyy-MM-dd hh:mm:ss[.nnnnnnnnn]`.
+
+Mapped to:
+
+- Java/JDBC: `java.sql.Timestamp`
+- .NET/C#: `System.DateTime`
+- C/C++: `ignite::Timestamp`
+- ODBC: `SQL_TYPE_TIMESTAMP`
+
+== BINARY
+Possible values: Represents a byte array.
+
+Mapped to:
+
+- Java/JDBC: `byte[]`
+- .NET/C#: `byte[]`
+- C/C++: `int8_t[]`
+- ODBC: `SQL_BINARY`
+
+== GEOMETRY
+Possible values: A spatial geometry type, based on the `com.vividsolutions.jts` library. Normally represented in a textual format using the WKT (well-known text) format.
+
+Mapped to:
+
+- Java/JDBC: Types from the `com.vividsolutions.jts` package.
+- .NET/C#: N/A
+- C/C++: N/A
+- ODBC: N/A
+
+== UUID
+Possible values: Universally unique identifier. This is a 128-bit value.
+
+Mapped to:
+
+- Java/JDBC: `java.util.UUID`
+- .NET/C#: `System.Guid`
+- C/C++: `ignite::Guid`
+- ODBC: `SQL_GUID`
diff --git a/docs/_docs/sql-reference/date-time-functions.adoc b/docs/_docs/sql-reference/date-time-functions.adoc
new file mode 100644
index 0000000..da6f464
--- /dev/null
+++ b/docs/_docs/sql-reference/date-time-functions.adoc
@@ -0,0 +1,385 @@
+= Date and Time Functions
+
+
+== CURRENT_DATE
+
+[source,sql]
+----
+{CURRENT_DATE [()] | CURDATE() | SYSDATE | TODAY}
+----
+
+Returns the current date.
+When called multiple times within a transaction, this function returns the same value.
+
+Example: ::
+
+[source,sql]
+----
+CURRENT_DATE()
+----
+
+
+== CURRENT_TIME
+
+[source,sql]
+----
+{CURRENT_TIME [ () ] | CURTIME()}
+----
+
+Returns the current time.
+When called multiple times within a transaction, this function returns the same value.
+
+Example: ::
+
+[source,sql]
+----
+CURRENT_TIME()
+----
+
+
+
+== CURRENT_TIMESTAMP
+
+[source,sql]
+----
+{CURRENT_TIMESTAMP [([int])] | NOW([int])}
+----
+
+
+
+Returns the current timestamp. The optional parameter specifies the precision of the fractional seconds (up to nanosecond precision). This method always returns the same value within a transaction.
+
+Example: ::
+
+[source,sql]
+----
+CURRENT_TIMESTAMP()
+----
+
+
+== DATEADD
+
+[source,sql]
+----
+{DATEADD| TIMESTAMPADD} (unitString, addIntLong, timestamp)
+----
+
+
+
+Adds units to a timestamp. The string indicates the unit. Use negative values to subtract units. `addIntLong` may be a long value when manipulating milliseconds, otherwise its range is restricted to `int`. The same units as in the EXTRACT function are supported. The DATEADD method returns a timestamp. The TIMESTAMPADD method returns a long.
+
+Example: ::
+
+[source,sql]
+----
+DATEADD('MONTH', 1, DATE '2001-01-31')
+----
+
+
+== DATEDIFF
+
+[source,sql]
+----
+{DATEDIFF | TIMESTAMPDIFF} (unitString, aTimestamp, bTimestamp)
+----
+
+
+
+Returns the number of crossed unit boundaries between two timestamps. This method returns a `long`. The string indicates the unit. The same units as in the EXTRACT function are supported.
+
+Example: ::
+
+[source,sql]
+----
+DATEDIFF('YEAR', T1.CREATED, T2.CREATED)
+----
+
+
+== DAYNAME
+
+[source,sql]
+----
+DAYNAME(date)
+----
+
+
+
+Returns the name of the day (in English).
+
+Example: ::
+
+[source,sql]
+----
+DAYNAME(CREATED)
+----
+
+
+== DAY_OF_MONTH
+
+[source,sql]
+----
+DAY_OF_MONTH(date)
+----
+
+
+
+Returns the day of the month (1-31).
+
+Example: ::
+
+[source,sql]
+----
+DAY_OF_MONTH(CREATED)
+----
+
+
+== DAY_OF_WEEK
+
+[source,sql]
+----
+DAY_OF_WEEK(date)
+----
+
+
+
+Returns the day of the week (1 means Sunday).
+
+Example: ::
+
+[source,sql]
+----
+DAY_OF_WEEK(CREATED)
+----
+
+
+== DAY_OF_YEAR
+
+[source,sql]
+----
+DAY_OF_YEAR(date)
+----
+
+
+
+Returns the day of the year (1-366).
+
+Example: ::
+
+[source,sql]
+----
+DAY_OF_YEAR(CREATED)
+----
+
+
+== EXTRACT
+
+[source,sql]
+----
+EXTRACT ({EPOCH | YEAR | YY | QUARTER | MONTH | MM | WEEK | ISO_WEEK
+| DAY | DD | DAY_OF_YEAR | DOY | DAY_OF_WEEK | DOW | ISO_DAY_OF_WEEK
+| HOUR | HH | MINUTE | MI | SECOND | SS | MILLISECOND | MS
+| MICROSECOND | MCS | NANOSECOND | NS}
+FROM timestamp)
+----
+
+
+
+Returns a specific field from a timestamp. This method returns an `int`.
+
+Example: ::
+
+[source,sql]
+----
+EXTRACT(SECOND FROM CURRENT_TIMESTAMP)
+----
+
+
+== FORMATDATETIME
+
+[source,sql]
+----
+FORMATDATETIME (timestamp, formatString [,localeString [,timeZoneString]])
+----
+
+
+
+Formats a date, time, or timestamp as a string. The most important format characters are: `y` year, `M` month, `d` day, `H` hour, `m` minute, `s` second. For details about the format, see `java.text.SimpleDateFormat`. This method returns a `string`.
+
+Example: ::
+
+[source,sql]
+----
+FORMATDATETIME(TIMESTAMP '2001-02-03 04:05:06', 'EEE, d MMM yyyy HH:mm:ss z', 'en', 'GMT')
+----
+
+
+== HOUR
+
+[source,sql]
+----
+HOUR(timestamp)
+----
+
+
+
+Returns the hour (0-23) from a timestamp.
+
+Example: ::
+
+[source,sql]
+----
+HOUR(CREATED)
+----
+
+
+== MINUTE
+
+[source,sql]
+----
+MINUTE(timestamp)
+----
+
+
+
+Returns the minute (0-59) from a timestamp.
+
+Example: ::
+
+[source,sql]
+----
+MINUTE(CREATED)
+----
+
+
+== MONTH
+
+[source,sql]
+----
+MONTH(timestamp)
+----
+
+
+
+Returns the month (1-12) from a timestamp.
+
+Example: ::
+
+[source,sql]
+----
+MONTH(CREATED)
+----
+
+
+== MONTHNAME
+
+[source,sql]
+----
+MONTHNAME(date)
+----
+
+
+
+Returns the name of the month (in English).
+
+Example: ::
+
+[source,sql]
+----
+MONTHNAME(CREATED)
+----
+
+
+== PARSEDATETIME
+
+[source,sql]
+----
+PARSEDATETIME(string, formatString [, localeString [, timeZoneString]])
+----
+
+
+
+Parses a string and returns a `timestamp`. The most important format characters are: `y` year, `M` month, `d` day, `H` hour, `m` minute, `s` second. For details about the format, see `java.text.SimpleDateFormat`.
+
+Example: ::
+
+[source,sql]
+----
+PARSEDATETIME('Sat, 3 Feb 2001 03:05:06 GMT', 'EEE, d MMM yyyy HH:mm:ss z', 'en', 'GMT')
+----
+
+
+== QUARTER
+
+[source,sql]
+----
+QUARTER(timestamp)
+----
+
+
+
+Returns the quarter (1-4) from a timestamp.
+
+Example: ::
+
+[source,sql]
+----
+QUARTER(CREATED)
+----
+
+
+== SECOND
+
+[source,sql]
+----
+SECOND(timestamp)
+----
+
+
+
+Returns the second (0-59) from a timestamp.
+
+Example: ::
+
+[source,sql]
+----
+SECOND(CREATED)
+----
+
+
+== WEEK
+
+[source,sql]
+----
+WEEK(timestamp)
+----
+
+
+
+Returns the week (1-53) from a timestamp. This method uses the current system locale.
+
+Example: ::
+
+[source,sql]
+----
+WEEK(CREATED)
+----
+
+
+== YEAR
+
+[source,sql]
+----
+YEAR(timestamp)
+----
+
+
+
+Returns the year from a timestamp.
+
+Example: ::
+
+[source,sql]
+----
+YEAR(CREATED)
+----
+
diff --git a/docs/_docs/sql-reference/ddl.adoc b/docs/_docs/sql-reference/ddl.adoc
new file mode 100644
index 0000000..ca53cd8
--- /dev/null
+++ b/docs/_docs/sql-reference/ddl.adoc
@@ -0,0 +1,506 @@
+= Data Definition Language (DDL)
+
+:toclevels:
+
+This page encompasses all data definition language (DDL) commands supported by Ignite.
+
+== CREATE TABLE
+
+Create a new table and an underlying cache.
+
+[source,sql]
+----
+CREATE TABLE [IF NOT EXISTS] tableName (tableColumn [, tableColumn]...
+[, PRIMARY KEY (columnName [,columnName]...)])
+[WITH "paramName=paramValue [,paramName=paramValue]..."]
+
+tableColumn := columnName columnType [DEFAULT defaultValue] [PRIMARY KEY]
+----
+
+
+Parameters:
+
+* `tableName` - name of the table.
+* `tableColumn` - name and type of a column to be created in the new table.
+* `columnName` - name of a previously defined column.
+* `DEFAULT` - specifies a default value for the column. Only constant values are accepted.
+* `IF NOT EXISTS` - create the table only if a table with the same name does not exist.
+* `PRIMARY KEY` - specifies a primary key for the table that can consist of a single column or multiple columns.
+* `WITH` - accepts additional parameters not defined by ANSI-99 SQL:
+
+** `TEMPLATE=<cache's template name>` - case-sensitive​ name of a link:configuring-caches/configuration-overview#cache-templates[cache template]. A template is an instance of the `CacheConfiguration` class registered by calling `Ignite.addCacheConfiguration()`. Use predefined `TEMPLATE=PARTITIONED` or `TEMPLATE=REPLICATED` templates to create the cache with the corresponding replication mode. The rest of the parameters will be those that are defined in the `CacheConfiguration` object. By [...]
+** `BACKUPS=<number of backups>` - sets the number of link:configuring-caches/configuring-backups[partition backups]. If neither this nor the `TEMPLATE` parameter is set, then the cache is created with `0` backup copies.
+** `ATOMICITY=<ATOMIC | TRANSACTIONAL | TRANSACTIONAL_SNAPSHOT>` - sets link:key-value-api/transactions[atomicity mode] for the underlying cache. If neither this nor the `TEMPLATE` parameter is set, then the cache is created with the `ATOMIC` mode enabled. If `TRANSACTIONAL_SNAPSHOT` is specified, the table will link:transactions/mvcc[support transactions].
+** `WRITE_SYNCHRONIZATION_MODE=<PRIMARY_SYNC | FULL_SYNC | FULL_ASYNC>` -
+sets the write synchronization mode for the underlying cache. If neither this nor the `TEMPLATE` parameter is set, then the cache is created with `FULL_SYNC` mode enabled.
+** `CACHE_GROUP=<group name>` - specifies the link:configuring-caches/cache-groups[group name] the underlying cache belongs to.
+** `AFFINITY_KEY=<affinity key column name>` - specifies an link:data-modeling/affinity-collocation[affinity key] name which is a column of the `PRIMARY KEY` constraint.
+** `CACHE_NAME=<custom name of the new cache>` - the name of the underlying cache created by the command.
+** `DATA_REGION=<existing data region name>` - name of the link:memory-configuration/data-regions[data region] where table entries should be stored. By default, Ignite stores all the data in a default region.
+** `KEY_TYPE=<custom name of the key type>` - sets the name of the custom key type that is used from the key-value APIs in Ignite. The name should correspond to a Java, .NET, or C++ class, or it can be a random one if link:data-modeling/data-modeling#binary-object-format[BinaryObjects] is used instead of a custom class. The number of fields and their types in the custom key type has to correspond to the `PRIMARY KEY`. Refer to the <<Description>> section below for more details.
+** `VALUE_TYPE=<custom name of the value type of the new cache>` - sets the name of a custom value type that is used from the key-value and other non-SQL APIs in Ignite. The name should correspond to a Java, .NET, or C++ class, or it can be a random one if
+link:data-modeling/data-modeling#binary-object-format[BinaryObjects] is used instead of a custom class. The value type should include all the columns defined in the CREATE TABLE command except for those listed in the `PRIMARY KEY` constraint. Refer to the <<Description>> section below for more details.
+** `WRAP_KEY=<true | false>` - this flag controls whether a _single column_ `PRIMARY KEY` should be wrapped in the link:data-modeling/data-modeling#binary-object-format[BinaryObjects] format or not. By default, this flag is set to false. This flag does not have any effect on the `PRIMARY KEY` with multiple columns; it always gets wrapped regardless of the value of the parameter.
+** `WRAP_VALUE=<true | false>` - this flag controls whether a single column value of a primitive type should be wrapped in the link:data-modeling/data-modeling#binary-object-format[BinaryObjects] format or not. By default, this flag is set to true. This flag does not have any effect on the value with multiple columns; it always gets wrapped regardless of the value of the parameter. Set this parameter to false if you have a single column value and do not plan to add additional columns to  [...]
+
+The CREATE TABLE command creates a new Ignite cache and defines a SQL table on top of it. The cache stores the data in the form of key-value pairs while the table allows processing the data with SQL queries.
+
+The table will reside in the schema specified in the connection parameters. If no schema is specified, the PUBLIC schema will be used. See link:SQL/schemas[Schemas] for more information about schemas in Ignite.
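+
+For example, a table created with no schema specified in the connection can be referenced with an explicit schema qualifier (a sketch, using the `Person` table from the examples below):
+
+[source,sql]
+----
+-- The table lives in the PUBLIC schema unless the connection specifies another one.
+SELECT * FROM PUBLIC.Person;
+----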
+
+Note that the CREATE TABLE operation is synchronous and blocks the execution of other DDL commands that are issued while CREATE TABLE is still in progress. The execution of DML commands is not affected and can be performed in parallel.
+
+If you wish to access the data using the key-value APIs, then setting the `CACHE_NAME`, `KEY_TYPE`, and `VALUE_TYPE` parameters may be useful for the following reasons:
+
+- When the CREATE TABLE command is executed, the name of the cache is generated in the following format: `SQL_{SCHEMA_NAME}_{TABLE}`. Use the `CACHE_NAME` parameter to override the default name.
+- Additionally, the command creates two new binary types - for the key and value respectively. Ignite generates the names of the types randomly including a UUID string. This complicates the usage of these 'types' from a non-SQL API. Use KEY_TYPE and VALUE_TYPE to override the names with custom ones corresponding to your business model objects.
+
+Read more about the database architecture on the link:SQL/sql-introduction[SQL Introduction] page.
+
+
+Examples:
+
+Create Person table:
+
+[source,sql]
+----
+CREATE TABLE IF NOT EXISTS Person (
+  id int,
+  city_id int,
+  name varchar,
+  age int,
+  company varchar,
+  PRIMARY KEY (id, city_id)
+) WITH "template=partitioned,backups=1,affinity_key=city_id, key_type=PersonKey, value_type=MyPerson";
+----
+
+Once the CREATE TABLE command gets executed, the following happens:
+
+- A new distributed cache is created and named SQL_PUBLIC_PERSON. This cache stores objects of the `Person` type that corresponds to a specific Java, .NET, C++ class or BinaryObject. Furthermore, the key type (`PersonKey`) and value type (`MyPerson`) are defined explicitly assuming the data is to be processed by key-value and other non-SQL APIs.
+- A SQL table with all the specified columns and parameters will be defined.
+- The data will be stored in the form of key-value pairs. The `PRIMARY KEY` columns will be used as the object's key; the rest of the columns will belong to the value.
+- Distributed cache related parameters are passed in the `WITH` clause of the statement. If the `WITH` clause is omitted, then the cache will be created with default parameters set in the CacheConfiguration object.
+
+The example below shows how to create the same table with the `PRIMARY KEY` specified in the column definition and override some cache-related parameters:
+
+[source,sql]
+----
+CREATE TABLE Person (
+  id int PRIMARY KEY,
+  city_id int,
+  name varchar,
+  age int,
+  company varchar
+) WITH "atomicity=transactional,cachegroup=somegroup";
+----
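+
+A column default can also be declared with the `DEFAULT` clause shown in the syntax above. A minimal sketch (the `City` table here is hypothetical):
+
+[source,sql]
+----
+CREATE TABLE City (
+  id int PRIMARY KEY,
+  name varchar,
+  population int DEFAULT 0
+) WITH "template=replicated";
+----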
+
+
+== ALTER TABLE
+
+Modify the structure of an existing table.
+
+[source,sql]
+----
+ALTER TABLE [IF EXISTS] tableName {alter_specification}
+
+alter_specification:
+    ADD [COLUMN] {[IF NOT EXISTS] tableColumn | (tableColumn [,...])}
+  | DROP [COLUMN] {[IF EXISTS] columnName | (columnName [,...])}
+  | {LOGGING | NOLOGGING}
+
+tableColumn := columnName columnType
+----
+
+[NOTE]
+====
+[discrete]
+=== Scope of ALTER TABLE
+Presently, Ignite only supports addition and removal of columns.
+====
+
+Parameters:
+
+- `tableName` - the name of the table.
+- `tableColumn` - the name and type of the column to be added to the table.
+- `columnName` - the name of the column to be added or removed.
+- `IF EXISTS` - if applied to TABLE, do not throw an error if a table with the specified table name does not exist. If applied to COLUMN, do not throw an error if a column with the specified name does not exist.
+- `IF NOT EXISTS` - do not throw an error if a column with the same name already exists.
+- `LOGGING` - enable link:persistence/native-persistence#write-ahead-log[write-ahead logging] for the table. Write-ahead logging is enabled by default. The command is relevant only if Ignite persistence is used.
+- `NOLOGGING` - disable write-ahead logging for the table. The command is relevant only if Ignite persistence is used.
+
+
+`ALTER TABLE ADD` adds a new column or several columns to a previously created table. Once a column is added, it can be accessed using link:sql-reference/dml[DML commands] and indexed with the <<CREATE INDEX>> statement.
+
+`ALTER TABLE DROP` removes an existing column or multiple columns from a table. Once a column is removed, it cannot be accessed within queries. Consider the following notes and limitations:
+
+- The command does not remove the actual data from the cluster, which means that if the column 'name' is dropped, the values of the 'name' column are still stored in the cluster. This limitation is to be addressed in future releases.
+- If the column was indexed, the index has to be dropped manually using the 'DROP INDEX' command.
+- It is not possible to remove a column that is a primary key or a part of such a key.
+- It is not possible to remove a column if it represents the whole value stored in the cluster. The limitation is relevant for primitive values.
+Ignite stores data in the form of key-value pairs and all the new columns will belong to the value. It's not possible to change a set of columns of the key (`PRIMARY KEY`).
+
+Both DDL and DML commands targeting the same table are blocked for a short time while `ALTER TABLE` is in progress.
+
+Schema changes applied by this command are persisted on disk if link:persistence/native-persistence[Ignite persistence] is enabled. Thus, the changes can survive full cluster restarts.
+
+
+Examples:
+
+Add a column to the table:
+
+[source,sql]
+----
+ALTER TABLE Person ADD COLUMN city varchar;
+----
+
+
+Add a new column to the table only if a column with the same name does not exist:
+
+[source,sql]
+----
+ALTER TABLE City ADD COLUMN IF NOT EXISTS population int;
+----
+
+
+Add a column​ only if the table exists:
+
+[source,sql]
+----
+ALTER TABLE IF EXISTS Missing ADD number long;
+----
+
+
+Add several columns to the table at once:
+
+
+[source,sql]
+----
+ALTER TABLE Region ADD COLUMN (code varchar, gdp double);
+----
+
+
+Drop a column from the table:
+
+
+[source,sql]
+----
+ALTER TABLE Person DROP COLUMN city;
+----
+
+
+Drop a column from the table only if a column with the same name does exist:
+
+
+[source,sql]
+----
+ALTER TABLE Person DROP COLUMN IF EXISTS population;
+----
+
+
+Drop a column only if the table exists:
+
+
+[source,sql]
+----
+ALTER TABLE IF EXISTS Person DROP COLUMN number;
+----
+
+
+Drop several columns from the table at once:
+
+
+[source,sql]
+----
+ALTER TABLE Person DROP COLUMN (code, gdp);
+----
+
+
+Disable write-ahead logging:
+
+
+[source,sql]
+----
+ALTER TABLE Person NOLOGGING
+----
+
+
+== DROP TABLE
+
+The `DROP TABLE` command drops an existing table.
+The underlying cache with all the data in it is destroyed, too.
+
+
+[source,sql]
+----
+DROP TABLE [IF EXISTS] tableName
+----
+
+Parameters:
+
+- `tableName` - the name of the table.
+- `IF EXISTS` - do not throw an error if a table with the specified name does not exist.
+
+
+Both DDL and DML commands targeting the same table are blocked while the `DROP TABLE` is in progress.
+Once the table is dropped, all pending commands will fail with appropriate errors.
+
+Schema changes applied by this command are persisted on disk if link:persistence/native-persistence[Ignite persistence] is enabled. Thus, the changes can survive full cluster restarts.
+
+Examples:
+
+Drop the Person table if it exists:
+
+[source,sql]
+----
+DROP TABLE IF EXISTS "Person";
+----
+
+== CREATE INDEX
+
+Create an index on the specified table.
+
+[source,sql]
+----
+CREATE [SPATIAL] INDEX [[IF NOT EXISTS] indexName] ON tableName
+    (columnName [ASC|DESC] [,...]) [(index_option [...])]
+
+index_option := {INLINE_SIZE size | PARALLEL parallelism_level}
+----
+
+Parameters:
+
+* `indexName` - the name of the index to be created.
+* `ASC` - specifies ascending sort order (default).
+* `DESC` - specifies descending sort order.
+* `SPATIAL` - create the spatial index. Presently, only geometry types are supported.
+* `IF NOT EXISTS` - do not throw an error if an index with the same name already exists. The database checks index names only and does not consider column types or count.
+* `index_option` - additional options for index creation:
+** `INLINE_SIZE` - specifies index inline size in bytes. Depending on the size, Ignite will place the whole indexed value or a part of it directly into index pages, thus omitting extra calls to data pages and increasing queries' performance. Index inlining is enabled by default and the size is pre-calculated automatically based on the table structure. To disable inlining, set the size to 0 (not recommended). Refer to the link:perf-troubleshooting-guide/sql-tuning#increasing-index-inline- [...]
+** `PARALLEL` - specifies the number of threads to be used in parallel for index creation. The greater the number, the faster the index is created and built. If the value exceeds the number of CPUs, it will be decreased to the number of cores. If the parameter is not specified, the number of threads is calculated as 25% of the available CPU cores.
+
+
+`CREATE INDEX` creates a new index on the specified table. Regular indexes are stored in the internal B+tree data structures. The B+tree gets distributed across the cluster along with the actual data. A cluster node stores a part of the index for the data it owns.
+
+If `CREATE INDEX` is executed at runtime on live data, the database iterates over the specified columns synchronously, indexing them. Other DDL commands targeting the same table are blocked while `CREATE INDEX` is in progress. DML command execution is not affected and can be performed in parallel.
+
+Schema changes applied by this command are persisted on disk if link:persistence/native-persistence[Ignite persistence] is enabled. Thus, the changes can survive full cluster restarts.
+
+
+
+=== Indexes Tradeoffs
+There are multiple things you should consider when choosing indexes for your application.
+
+- Indexes are not free. They consume memory, and each index needs to be updated separately, thus the performance of write operations might drop if too many indexes are created. On top of that, if a lot of indexes are defined, the optimizer might make more mistakes by choosing the wrong index while building the execution plan.
++
+WARNING: It is a poor strategy to index everything.
+
+- Indexes are just sorted data structures (B+tree). If you define an index for the fields (a,b,c) then the records will be sorted first by a, then by b and only then by c.
++
+[NOTE]
+====
+[discrete]
+=== Example of Sorted Index
+[width="25%" cols="33l, 33l, 33l"]
+|=====
+| A | B | C
+| 1 | 2 | 3
+| 1 | 4 | 2
+| 1 | 4 | 4
+| 2 | 3 | 5
+| 2 | 4 | 4
+| 2 | 4 | 5
+|=====
+
+Any condition like `a = 1 and b > 3` can be viewed as a bounded range, both bounds can be quickly looked up in *log(N)* time, the result will be everything between.
+
+The following conditions will be able to use the index:
+
+- `a = ?`
+- `a = ? and b = ?`
+- `a = ? and b = ? and c = ?`
+
+Condition `a = ? and c = ?` is no better than `a = ?` from the index point of view.
+Obviously half-bounded ranges like `a > ?` can be used as well.
+====
+
+- Indexes on single fields are no better than group indexes on multiple fields starting with the same field (index on (a) is no better than (a,b,c)). Thus it is preferable to use group indexes.
+
+- When the `INLINE_SIZE` option is specified, indexes hold a prefix of the field data in the B+tree pages. This improves search performance by requiring fewer row data retrievals; however, it substantially increases the size of the tree (with a moderate increase in tree height) and reduces data insertion and removal performance due to excessive page splits and merges. It's a good idea to consider the page size when choosing the inline size for the tree: each B+tree entry requires `16 + inline-size` bytes in the p [...]
+
+
+Examples:
+
+Create a regular index:
+
+[source,sql]
+----
+CREATE INDEX title_idx ON books (title);
+----
+
+Create a descending index only if it does not exist:
+
+[source,sql]
+----
+CREATE INDEX IF NOT EXISTS name_idx ON persons (firstName DESC);
+----
+
+Create a composite index:
+
+[source,sql]
+----
+CREATE INDEX city_idx ON sales (country, city);
+----
+
+Create an index specifying data inline size:
+
+[source,sql]
+----
+CREATE INDEX fast_city_idx ON sales (country, city) INLINE_SIZE 60;
+----
+
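+Create an index using several threads, via the `PARALLEL` option described above (a sketch against the same hypothetical `sales` table):
+
+[source,sql]
+----
+CREATE INDEX country_idx ON sales (country) PARALLEL 8;
+----
+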
+Create a geospatial​ index:
+
+[source,sql]
+----
+CREATE SPATIAL INDEX idx_person_address ON Person (address);
+----
+
+
+== DROP INDEX
+
+`DROP INDEX` deletes an existing index.
+
+
+[source,sql]
+----
+DROP INDEX [IF EXISTS] indexName
+----
+
+Parameters:
+
+* `indexName` - the name of the index to drop.
+* `IF EXISTS` - do not throw an error if an index with the specified name does not exist. The database checks index names only, not considering column types or count.
+
+
+DDL commands targeting the same table are blocked while `DROP INDEX` is in progress. DML command execution is not affected and can be performed in parallel.
+
+Schema changes applied by this command are persisted on disk if link:persistence/native-persistence[Ignite persistence] is enabled. Thus, the changes can survive full cluster restarts.
+
+
+[discrete]
+=== Examples
+Drop an index:
+
+
+[source,sql]
+----
+DROP INDEX idx_person_name;
+----
+
+
+== CREATE USER
+
+The command creates a user with a given name and password.
+
+A new user can only be created using a superuser account when authentication for thin clients is enabled. Ignite creates the superuser account under the name `ignite` and password `ignite` on the first cluster start-up. Presently, you can't rename the superuser account nor grant its privileges to any other account.
+
+
+
+[source,sql]
+----
+CREATE USER userName WITH PASSWORD 'password';
+----
+
+Parameters:
+
+* `userName` - new user's name. The name cannot be longer than 60 bytes in UTF8 encoding.
+* `password` - new user's password. An empty password is not allowed.
+
+To create a _case-sensitive_ username, enclose it in double quotation marks (") as a SQL identifier.
+
+[NOTE]
+====
+[discrete]
+=== When Are Case-Sensitive Names Preferred?
+The case-insensitivity property of the usernames is supported for JDBC and ODBC interfaces only. If it's planned to access Ignite from Java, .NET, or other programming language APIs then the username has to be passed either in all upper-case letters or enclosed in double quotes (") from those interfaces.
+
+For instance, if `Test` was set as a username then:
+
+- You can use `Test`, `TEst`, `TEST` and other combinations from JDBC and ODBC.
+- You can use either `TEST` or `"Test"` as the username from Ignite's native SQL APIs designed for Java, .NET and other programming languages.
+
+Alternatively, use the case-sensitive username at all times to ensure name consistency across all the SQL interfaces.
+====
+
+Examples:
+
+Create a new user using test as a name and password:
+
+
+[source,sql]
+----
+CREATE USER test WITH PASSWORD 'test';
+----
+
+Create a case-sensitive username:
+
+
+[source,sql]
+----
+CREATE USER "TeSt" WITH PASSWORD 'test'
+----
+
+
+== ALTER USER
+
+The command changes an existing user's password.
+The password can be updated by the superuser (`ignite`, see <<CREATE USER>> for more details) or by the user themselves.
+
+
+[source,sql]
+----
+ALTER USER userName WITH PASSWORD 'newPassword';
+----
+
+
+Parameters:
+
+* `userName` - existing user's name.
+* `newPassword` - the new password to set for the user's account.
+
+
+Examples:
+
+Update a user's password:
+
+
+[source,sql]
+----
+ALTER USER test WITH PASSWORD 'test123';
+----
+
+
+== DROP USER
+
+The command removes an existing user.
+
+The user can be removed only by the superuser (`ignite`, see <<CREATE USER>> for more details).
+
+
+[source,sql]
+----
+DROP USER userName;
+----
+
+
+Parameters:
+
+* `userName` - a name of the user to remove.
+
+
+Examples:
+
+[source,sql]
+----
+DROP USER test;
+----
+
diff --git a/docs/_docs/sql-reference/dml.adoc b/docs/_docs/sql-reference/dml.adoc
new file mode 100644
index 0000000..a231d43
--- /dev/null
+++ b/docs/_docs/sql-reference/dml.adoc
@@ -0,0 +1,349 @@
+= Data Manipulation Language (DML)
+
+
+This page includes all data manipulation language (DML) commands supported by Ignite.
+
+== SELECT
+
+Retrieve data from a table or multiple tables.
+
+[source,sql]
+----
+SELECT
+    [TOP term] [DISTINCT | ALL] selectExpression [,...]
+    FROM tableExpression [,...] [WHERE expression]
+    [GROUP BY expression [,...]] [HAVING expression]
+    [{UNION [ALL] | MINUS | EXCEPT | INTERSECT} select]
+    [ORDER BY order [,...]]
+    [{ LIMIT expression [OFFSET expression]
+    [SAMPLE_SIZE rowCountInt]} | {[OFFSET expression {ROW | ROWS}]
+    [{FETCH {FIRST | NEXT} expression {ROW | ROWS} ONLY}]}]
+----
+
+=== Parameters
+- `DISTINCT` - removes duplicate rows from a result set.
+- `GROUP BY` - groups the result by the given expression(s).
+- `HAVING` - filters rows after grouping.
+- `ORDER BY` - sorts the result by the given column(s) or expression(s).
+- `LIMIT and FETCH FIRST/NEXT ROW(S) ONLY` - limits the number of rows returned by the query (no limit if null or smaller than zero).
+- `OFFSET` - specifies how many rows to skip.
+- `UNION, INTERSECT, MINUS, EXCEPT` - combines the result of this query with the results of another query.
+- `tableExpression` - Joins a table. The join expression is not supported for cross and natural joins. A natural join is an inner join, where the condition is automatically on the columns with the same name.
+
+[source,sql]
+----
+tableExpression = [{LEFT | RIGHT} [OUTER] | INNER | CROSS | NATURAL]
+JOIN tableExpression
+[ON expression]
+----
+
+- `LEFT` - LEFT JOIN performs a join starting with the first (left-most) table and then any matching second (right-most) table records.
+- `RIGHT` - RIGHT JOIN performs a join starting with the second (right-most) table and then any matching first (left-most) table records.
+- `OUTER` - Outer joins subdivide further into left outer joins, right outer joins, and full outer joins, depending on which table's rows are retained (left, right, or both).
+- `INNER` - An inner join requires each row in the two joined tables to have matching column values.
+- `CROSS` - CROSS JOIN returns the Cartesian product of rows from tables in the join.
+- `NATURAL` - The natural join is a special case of equi-join.
+- `ON` - Value or condition to join on.
+
+=== Description
+`SELECT` queries can be executed against both link:data-modeling/data-partitioning#replicated[replicated] and link:data-modeling/data-partitioning#partitioned[partitioned] data.
+
+When queries are executed against fully replicated data, Ignite sends the query to a single cluster node and runs it over the local data there.
+
+On the other hand, if a query is executed over partitioned data, then the execution flow will be the following:
+
+- The query will be parsed and split into multiple map queries and a single reduce query.
+- All the map queries are executed on all the nodes where required data resides.
+- All the nodes provide result sets of local execution to the query initiator (reducer) that, in turn, will accomplish the reduce phase by properly merging provided result sets.
+
+=== JOINs
+Ignite supports colocated and non-colocated distributed SQL joins. Furthermore, if the data resides in different tables (aka. caches in Ignite), Ignite allows for cross-table joins as well.
+
+Joins between partitioned and replicated data sets always work without any limitations.
+
+However, if you join partitioned data sets, you have to make sure that the keys you are joining on are either colocated or that the non-colocated joins parameter is enabled for the query.
+
+Refer to the Distributed Joins page for more details.
+
+=== Group By and Order By Optimizations
+SQL queries with `ORDER BY` clauses do not require loading the whole result set to a query initiator (reducer) node in order to complete the sorting. Instead, every node to which a query will be mapped will sort its own part of the overall result set and the reducer will do the merge in a streaming fashion.
+
+The same optimization is implemented for sorted `GROUP BY` queries - there is no need to load the whole result set to the reducer in order to do the grouping before giving it to an application. In Ignite, partial result sets from the individual nodes can be streamed, merged, aggregated, and returned to the application gradually.
+
+[discrete]
+=== Examples
+
+Retrieve all rows from the `Person` table:
+
+[source,sql]
+----
+SELECT * FROM Person;
+----
+
+
+Get all rows in alphabetical order:
+
+[source,sql]
+----
+SELECT * FROM Person ORDER BY name;
+----
+
+
+Calculate the number of `Persons` from a specific city:
+
+
+[source,sql]
+----
+SELECT city_id, COUNT(*) FROM Person GROUP BY city_id;
+----
+
+
+
+Join data stored in the `Person` and `City` tables:
+
+
+[source,sql]
+----
+SELECT p.name, c.name
+	FROM Person p, City c
+	WHERE p.city_id = c.id;
+----
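+
+Page through the result set using the `OFFSET` and `FETCH` clauses described above (a sketch against the same `Person` table):
+
+[source,sql]
+----
+SELECT * FROM Person ORDER BY id OFFSET 20 ROWS FETCH NEXT 10 ROWS ONLY;
+----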
+
+
+
+== INSERT
+
+Inserts data into a table.
+
+
+[source,sql]
+----
+INSERT INTO tableName
+  {[( columnName [,...])]
+  {VALUES {({DEFAULT | expression} [,...])} [,...] | [DIRECT] [SORTED] select}}
+  | {SET {columnName = {DEFAULT | expression}} [,...]}
+----
+
+
+=== Parameters
+- `tableName` - name of the table to be updated.
+- `columnName` - name of a column to be initialized with a value from the VALUES clause.
+
+=== Description
+`INSERT` adds an entry or entries into a table.
+
+Since Ignite stores all the data in the form of key-value pairs, all `INSERT` statements are ultimately transformed into a set of key-value operations.
+
+If a single key-value pair is being added into a cache then, eventually, an `INSERT` statement will be converted into a `cache.putIfAbsent(...)` operation. In other cases, when multiple key-value pairs are inserted, the DML engine creates an `EntryProcessor` for each pair and uses `cache.invokeAll(...)` to propagate the data into a cache.
+
+////
+Refer to the *TODO* link:https://apacheignite-sql.readme.io/docs/how-ignite-sql-works#section-concurrent-modifications[concurrent modifications, window=_blank] section, which explains how the SQL engine solves concurrency issues.
+////
+
+[discrete]
+=== Examples
+Insert a new Person into the table:
+
+
+[source,sql]
+----
+INSERT INTO Person (id, name, city_id) VALUES (1, 'John Doe', 3);
+----
+
+
+
+Fill in the Person table with data retrieved from the Account table:
+
+
+[source,sql]
+----
+INSERT INTO Person(id, name, city_id)
+   (SELECT a.id + 1000, concat(a.firstName, a.secondName), a.city_id
+   FROM Account a WHERE a.id > 100 AND a.id < 1000);
+----
+
+
+== UPDATE
+
+Update data in a table.
+
+
+[source,sql]
+----
+UPDATE tableName [[AS] newTableAlias]
+  SET {{columnName = {DEFAULT | expression}} [,...]} |
+  {(columnName [,...]) = (select)}
+  [WHERE expression][LIMIT expression]
+----
+
+
+=== Parameters
+- `tableName` - the name of the table to be updated.
+- `columnName` - the name of a column to be updated with a value from a `SET` clause.
+
+=== Description
+`UPDATE` alters existing entries stored in a table.
+
+Since Ignite stores all the data in the form of key-value pairs, all `UPDATE` statements are ultimately transformed into a set of key-value operations.
+
+Initially, the SQL engine generates and executes a `SELECT` query based on the `UPDATE WHERE` clause and only after that does it modify the existing values that satisfy the clause result.
+
+The modification is performed via a `cache.invokeAll(...)` operation. This means that once the result of the `SELECT` query is ready, the SQL engine will prepare a number of `EntryProcessors` and will execute all of them using a `cache.invokeAll(...)` operation. While the data is being modified using `EntryProcessors`, additional checks are performed to make sure that nobody has interfered between the `SELECT` and the actual update.
+
+////
+Refer to the *TODO* link:https://apacheignite-sql.readme.io/docs/how-ignite-sql-works#section-concurrent-modifications[concurrent modifications, window=_blank] section, which explains how the SQL engine solves concurrency issues.
+////
+
+=== Primary Keys Updates
+Ignite does not allow updating a primary key because the key statically defines the partition that the key and its value belong to. While a partition with all of its data can change cluster owners, a key always belongs to a single partition. The partition is determined by applying a hash function to the key's value.
+
+Thus, if a key needs to be updated, the entry has to be removed and then inserted again with the new key, as sketched below.
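+
+For example, a sketch of moving an entry to a new primary key (assuming the `Person` table used in the examples below):
+
+[source,sql]
+----
+-- Remove the entry stored under the old key ...
+DELETE FROM Person WHERE id = 2;
+-- ... and insert it again under the new key.
+INSERT INTO Person (id, name, city_id) VALUES (102, 'John Black', 5);
+----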
+
+[discrete]
+=== Examples
+Update the `name` column of an entry:
+
+
+[source,sql]
+----
+UPDATE Person SET name = 'John Black' WHERE id = 2;
+----
+
+Update the `Person` table with the data taken from the `Account` table:
+
+[source,sql]
+----
+UPDATE Person p SET name = (SELECT a.first_name FROM Account a WHERE a.id = p.id)
+----
+
+
+== WITH
+
+Used to name a sub-query, can be referenced in other parts of the SQL statement.
+
+
+[source,sql]
+----
+WITH  { name [( columnName [,...] )] AS ( select ) [,...] }
+{ select | insert | update | merge | delete | createTable }
+----
+
+
+
+=== Parameters
+- `name` - the name of the sub-query to be created. The name assigned to the sub-query is treated as though it was an inline view or table.
+
+=== Description
+`WITH` creates a sub-query. One or more common table expressions can be referred to by name. Column name declarations are optional - the column names will be inferred from the named select queries. The final action in a WITH statement can be a `select`, `insert`, `update`, `merge`, `delete`, or `create table`.
+
+[discrete]
+=== Example
+
+
+[source,sql]
+----
+WITH cte1 AS (
+        SELECT 1 AS FIRST_COLUMN
+), cte2 AS (
+        SELECT FIRST_COLUMN+1 AS FIRST_COLUMN FROM cte1
+)
+SELECT sum(FIRST_COLUMN) FROM cte2;
+----
+
+
+
+== MERGE
+
+Merge data into a table.
+
+
+[source,sql]
+----
+MERGE INTO tableName [(columnName [,...])]
+  [KEY (columnName [,...])]
+  {VALUES {({ DEFAULT | expression } [,...])} [,...] | select}
+----
+
+
+
+=== Parameters
+- `tableName` - the name of the table to be updated.
+- `columnName` - the name of a column to be initialized with a value from a `VALUES` clause.
+
+=== Description
+`MERGE` updates existing entries and inserts new entries.
+
+Because Ignite stores all the data in the form of key-value pairs, all `MERGE` statements are transformed into a set of key-value operations.
+
+`MERGE` is one of the most straightforward operations because it is translated into `cache.put(...)` and `cache.putAll(...)` operations depending on the number of rows that need to be inserted or updated as part of the `MERGE` query.
+
+////
+Refer to the *TODO* link:https://apacheignite-sql.readme.io/docs/how-ignite-sql-works#section-concurrent-modifications[concurrent modifications, window=_blank] section, which explains how the SQL engine solves concurrency issues.
+////
+
+[discrete]
+=== Examples
+Merge some rows into the `Person` table:
+
+
+[source,sql]
+----
+MERGE INTO Person(id, name, city_id) VALUES
+	(1, 'John Smith', 5),
+        (2, 'Mary Jones', 5);
+----
+
+
+Fill in the `Person` table with the data retrieved from the `Account` table:
+
+
+[source,sql]
+----
+MERGE INTO Person(id, name, city_id)
+   (SELECT a.id + 1000, concat(a.firstName, a.secondName), a.city_id
+   FROM Account a WHERE a.id > 100 AND a.id < 1000);
+----
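+
+The `KEY` clause from the syntax above can be used to specify which columns identify the row to be updated. A minimal sketch:
+
+[source,sql]
+----
+MERGE INTO Person (id, name, city_id) KEY (id) VALUES (1, 'John Smith', 5);
+----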
+
+
+
+== DELETE
+
+Delete data from a table.
+
+
+[source,sql]
+----
+DELETE
+  [TOP term] FROM tableName
+  [WHERE expression]
+  [LIMIT term]
+----
+
+
+=== Parameters
+- `tableName` - the name of the table to delete data from.
+- `TOP, LIMIT` - specifies the number of entries to be deleted (no limit if null or smaller than zero).
+
+=== Description
+`DELETE` removes data from a table.
+
+Because Ignite stores all the data in the form of key-value pairs, all `DELETE` statements are transformed into a set of key-value operations.
+
+A `DELETE` statement's execution is split into two phases and is similar to the execution of `UPDATE` statements.
+
+First, using a `SELECT` query, the SQL engine gathers those keys that satisfy the `WHERE` clause in the `DELETE` statement. Next, after having all those keys in place, it creates a number of `EntryProcessors` and executes them with `cache.invokeAll(...)`. While the data is being deleted, additional checks are performed to make sure that nobody has interfered between the `SELECT` and the actual removal of the data.
+
+////
+Refer to the *TODO* link:https://apacheignite-sql.readme.io/docs/how-ignite-sql-works#section-concurrent-modifications[concurrent modifications, window=_blank] section, which explains how the SQL engine solves concurrency issues.
+////
+
+[discrete]
+=== Examples
+Delete all the `Persons` with a specific name:
+
+
+[source,sql]
+----
+DELETE FROM Person WHERE name = 'John Doe';
+----
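+
+Delete at most a fixed number of matching rows, using the `LIMIT` clause from the syntax above:
+
+[source,sql]
+----
+DELETE FROM Person WHERE city_id = 5 LIMIT 100;
+----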
+
diff --git a/docs/_docs/administrators-guide/security/index.adoc b/docs/_docs/sql-reference/index.adoc
similarity index 56%
rename from docs/_docs/administrators-guide/security/index.adoc
rename to docs/_docs/sql-reference/index.adoc
index 6602185..766bcf4 100644
--- a/docs/_docs/administrators-guide/security/index.adoc
+++ b/docs/_docs/sql-reference/index.adoc
@@ -2,4 +2,4 @@
 layout: toc
 ---
 
-= Security 
+= SQL Reference
diff --git a/docs/_docs/sql-reference/numeric-functions.adoc b/docs/_docs/sql-reference/numeric-functions.adoc
new file mode 100644
index 0000000..3a93cdf
--- /dev/null
+++ b/docs/_docs/sql-reference/numeric-functions.adoc
@@ -0,0 +1,967 @@
+= Numeric Functions
+
+== ABS
+
+[source,sql]
+----
+ABS (expression)
+----
+
+=== Parameters
+- `expression` - may be a column name, a result of another function, or a math operation.
+
+=== Description
+Returns the absolute value of an expression.
+
+[discrete]
+=== Example
+Calculate an absolute value:
+
+[source,sql]
+----
+SELECT transfer_id, ABS (price) from Transfers;
+----
+
+
+== ACOS
+
+[source,sql]
+----
+ACOS (expression)
+----
+
+
+=== Parameters
+- `expression` - may be a column name, a result of another function, or a math operation.
+
+=== Description
+Calculates the arc cosine. This method returns a `double`.
+
+[discrete]
+=== Example
+Get arc cos value:
+
+
+[source,sql]
+----
+SELECT acos(angle) FROM Triangles;
+----
+
+
+== ASIN
+
+[source,sql]
+----
+ASIN (expression)
+----
+
+
+=== Parameters
+- `expression` - may be a column name, a result of another function, or a math operation.
+
+=== Description
+Calculates the arc sine. This method returns a `double`.
+
+[discrete]
+=== Example
+Calculate an arc sine:
+
+
+[source,sql]
+----
+SELECT asin(angle) FROM Triangles;
+----
+
+
+== ATAN
+
+[source,sql]
+----
+ATAN (expression)
+----
+
+
+=== Parameters
+- `expression` - may be a column name, a result of another function, or a math operation.
+
+=== Description
+Calculates the arc tangent. This method returns a `double`.
+
+[discrete]
+=== Example
+Get an arc tangent:
+
+
+[source,sql]
+----
+SELECT atan(angle) FROM Triangles;
+----
+
+
+== COS
+
+[source,sql]
+----
+COS (expression)
+----
+
+
+=== Parameters
+- `expression` - may be a column name, a result of another function, or a math operation.
+
+=== Description
+Calculates the trigonometric cosine. This method returns a `double`.
+
+[discrete]
+=== Example
+Get a cosine:
+
+
+[source,sql]
+----
+SELECT COS(angle) FROM Triangles;
+----
+
+
+== COSH
+
+[source,sql]
+----
+COSH (expression)
+----
+
+
+=== Parameters
+- `expression` - may be a column name, a result of another function, or a math operation.
+
+=== Description
+Calculates the hyperbolic cosine. This method returns a `double`.
+
+[discrete]
+=== Example
+Get a hyperbolic cosine:
+
+
+[source,sql]
+----
+SELECT COSH(angle) FROM Triangles;
+----
+
+
+== COT
+
+[source,sql]
+----
+COT (expression)
+----
+
+
+=== Parameters
+- `expression` - may be a column name, a result of another function, or a math operation.
+
+=== Description
+Calculates the trigonometric cotangent (1/TAN(ANGLE)). This method returns a `double`.
+
+[discrete]
+=== Example
+Get a trigonometric cotangent:
+
+
+[source,sql]
+----
+SELECT COT(angle) FROM Triangles;
+----
+
+
+== SIN
+
+[source,sql]
+----
+SIN (expression)
+----
+
+
+=== Parameters
+- `expression` - may be a column name, a result of another function, or a math operation.
+
+=== Description
+Calculates the trigonometric sine. This method returns a `double`.
+
+[discrete]
+=== Example
+Get a trigonometric sine:
+
+
+[source,sql]
+----
+SELECT SIN(angle) FROM Triangles;
+----
+
+
+== SINH
+
+[source,sql]
+----
+SINH (expression)
+----
+
+
+=== Parameters
+- `expression` - may be a column name, a result of another function, or a math operation.
+
+=== Description
+Calculates the hyperbolic sine. This method returns a `double`.
+
+[discrete]
+=== Example
+Get a hyperbolic sine:
+
+
+[source,sql]
+----
+SELECT SINH(angle) FROM Triangles;
+----
+
+
+== TAN
+
+[source,sql]
+----
+TAN (expression)
+----
+
+
+=== Parameters
+- `expression` - may be a column name, a result of another function, or a math operation.
+
+=== Description
+Calculates the trigonometric tangent. This method returns a `double`.
+
+[discrete]
+=== Example
+Get a trigonometric tangent:
+
+
+[source,sql]
+----
+SELECT TAN(angle) FROM Triangles;
+----
+
+
+== TANH
+
+[source,sql]
+----
+TANH (expression)
+----
+
+
+=== Parameters
+- `expression` - may be a column name, a result of another function, or a math operation.
+
+=== Description
+Calculates the hyperbolic tangent. This method returns a `double`.
+
+[discrete]
+=== Example
+Get a hyperbolic tangent:
+
+
+[source,sql]
+----
+SELECT TANH(angle) FROM Triangles;
+----
+
+
+== ATAN2
+
+[source,sql]
+----
+ATAN2 (y, x)
+----
+
+
+=== Parameters
+- `x and y` - the arguments.
+
+=== Description
+Calculates the angle when converting the rectangular coordinates to polar coordinates. This method returns a `double`.
+
+[discrete]
+=== Example
+Calculate the angle when converting rectangular coordinates to polar coordinates:
+
+
+[source,sql]
+----
+SELECT ATAN2(X, Y) FROM Triangles;
+----
+
+
+== BITAND
+
+[source,sql]
+----
+BITAND (y, x)
+----
+
+
+=== Parameters
+- `x and y` - the arguments.
+
+=== Description
+The bitwise AND operation. This method returns a `long`.
+
+[discrete]
+=== Example
+
+[source,sql]
+----
+SELECT BITAND(X, Y) FROM Triangles;
+----
+
+
+== BITGET
+
+[source,sql]
+----
+BITGET (y, x)
+----
+
+
+=== Parameters
+- `x and y` - the arguments.
+
+=== Description
+Returns true if and only if the first parameter has a bit set in the position specified by the second parameter. This method returns a `boolean`. The second parameter is zero-indexed; the least significant bit has position 0.
+
+[discrete]
+=== Example
+Check whether the 3rd bit is 1:
+
+
+[source,sql]
+----
+SELECT BITGET(X, 3) from Triangles;
+----
+
+
+== BITOR
+
+[source,sql]
+----
+BITOR (y, x)
+----
+
+
+=== Parameters
+- `x and y` - the arguments.
+
+=== Description
+The bitwise OR operation. This method returns a `long`.
+
+[discrete]
+=== Example
+Calculate OR between two fields:
+
+
+[source,sql]
+----
+SELECT BITOR(X, Y) FROM Triangles;
+----
+
+
+== BITXOR
+
+[source,sql]
+----
+BITXOR (y, x)
+----
+
+
+=== Parameters
+- `x and y` - the arguments.
+
+=== Description
+The bitwise XOR operation. This method returns a `long`.
+
+[discrete]
+=== Example
+Calculate XOR between two fields:
+
+
+[source,sql]
+----
+SELECT BITXOR(X, Y) FROM Triangles;
+----
+
+
+== MOD
+
+[source,sql]
+----
+MOD (y, x)
+----
+
+
+=== Parameters
+- `x and y` - the arguments.
+
+=== Description
+The modulo operation. This method returns a `long`.
+
+[discrete]
+=== Example
+Calculate MOD between two fields:
+
+
+[source,sql]
+----
+SELECT MOD(X, Y) FROM Triangles;
+----
+
+
+== CEILING
+
+[source,sql]
+----
+CEIL (expression)
+CEILING (expression)
+----
+
+
+=== Parameters
+- `expression` - any valid numeric expression.
+
+=== Description
+See also `Java Math.ceil`. This method returns a `double`.
+
+[discrete]
+=== Example
+Calculate a ceiling price for items:
+
+
+[source,sql]
+----
+SELECT item_id, CEILING(price) FROM Items;
+----
+
+
+== DEGREES
+
+
+[source,sql]
+----
+DEGREES (expression)
+----
+
+
+=== Parameters
+- `expression` - any valid numeric expression.
+
+=== Description
+See also `Java Math.toDegrees`. This method returns a `double`.
+
+[discrete]
+=== Example
+Converts the argument value to degrees:
+
+
+[source,sql]
+----
+SELECT DEGREES(X) FROM Triangles;
+----
+
+
+== EXP
+
+[source,sql]
+----
+EXP (expression)
+----
+
+
+=== Parameters
+- `expression` - any valid numeric expression.
+
+=== Description
+See also `Java Math.exp`. This method returns a `double`.
+
+[discrete]
+=== Example
+Calculates exp:
+
+
+[source,sql]
+----
+SELECT EXP(X) FROM Triangles;
+----
+
+
+== FLOOR
+
+[source,sql]
+----
+FLOOR (expression)
+----
+
+
+=== Parameters
+- `expression` - any valid numeric expression.
+
+=== Description
+See also `Java Math.floor`. This method returns a `double`.
+
+[discrete]
+=== Example
+Calculates floor price:
+
+
+[source,sql]
+----
+SELECT FLOOR(X) FROM Items;
+----
+
+
+== LOG
+
+[source,sql]
+----
+LOG (expression)
+LN (expression)
+----
+
+
+=== Parameters
+- `expression` - any valid numeric expression.
+
+=== Description
+See also `Java Math.log`. This method returns a `double`.
+
+[discrete]
+=== Example
+Calculates LOG:
+
+
+[source,sql]
+----
+SELECT LOG(X) from Items;
+----
+
+
+== LOG10
+
+[source,sql]
+----
+LOG10 (expression)
+----
+
+
+=== Parameters
+- `expression` - any valid numeric expression.
+
+=== Description
+See also `Java Math.log10` (in Java 5). This method returns a `double`.
+
+[discrete]
+=== Example
+Calculate LOG10:
+
+
+[source,sql]
+----
+SELECT LOG10(X) FROM Items;
+----
+
+
+== RADIANS
+
+[source,sql]
+----
+RADIANS (expression)
+----
+
+
+=== Parameters
+- `expression` - any valid numeric expression.
+
+=== Description
+See also `Java Math.toRadians`. This method returns a `double`.
+
+[discrete]
+=== Example
+Calculates RADIANS:
+
+
+[source,sql]
+----
+SELECT RADIANS(X) FROM Items;
+----
+
+
+== SQRT
+
+[source,sql]
+----
+SQRT (expression)
+----
+
+
+=== Parameters
+- `expression` - any valid numeric expression.
+
+=== Description
+See also `Java Math.sqrt`. This method returns a `double`.
+
+[discrete]
+=== Example
+Calculates SQRT:
+
+
+[source,sql]
+----
+SELECT SQRT(X) FROM Items;
+----
+
+
+== PI
+
+
+[source,sql]
+----
+PI ()
+----
+
+
+=== Parameters
+The function takes no parameters.
+
+=== Description
+See also `Java Math.PI`. This method returns a `double`.
+
+[discrete]
+=== Example
+Get the value of pi:
+
+
+[source,sql]
+----
+SELECT PI() FROM Items;
+----
+
+
+== POWER
+
+
+[source,sql]
+----
+POWER (X, Y)
+----
+
+
+=== Parameters
+- `X` - the base.
+- `Y` - the exponent.
+
+=== Description
+See also `Java Math.pow`. This method returns a `double`.
+
+[discrete]
+=== Example
+Raise 2 to the power of `n`:
+
+
+[source,sql]
+----
+SELECT POWER(2, n) FROM Rows;
+----
+
+
+== RAND
+
+[source,sql]
+----
+{RAND | RANDOM} ([expression])
+----
+
+
+=== Parameters
+- `expression` - an optional seed value for the session's random number generator.
+
+=== Description
+Calling the function without a parameter returns the next pseudo-random number. Calling it with a parameter seeds the session's random number generator. This method returns a `double` between 0 (including) and 1 (excluding).
+
+[discrete]
+=== Example
+Get a random number for every play:
+
+
+[source,sql]
+----
+SELECT random() FROM Play;
+----
+
+
+== RANDOM_UUID
+
+[source,sql]
+----
+{RANDOM_UUID | UUID} ()
+----
+
+
+=== Description
+Returns a new UUID with 122 pseudo random bits.
+
+[discrete]
+=== Example
+Get a random UUID for every Player:
+
+
+[source,sql]
+----
+SELECT UUID(),name FROM Player;
+----
+
+
+== ROUND
+
+[source,sql]
+----
+ROUND ( expression [, precision] )
+----
+
+
+=== Parameters
+- `expression` - any valid numeric expression.
+- `precision` - the number of digits after the decimal point to round to. If not set, the value is rounded to the nearest long.
+
+=== Description
+Rounds to a number of digits, or to the nearest long if the number of digits is not set. This method returns a `numeric` (the same type as the input).
+
+[discrete]
+=== Example
+Convert every Player's age to an integer number:
+
+
+[source,sql]
+----
+SELECT name, ROUND(age) FROM Player;
+----
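+
+The optional second argument sets the precision. A short sketch (assuming an `Items` table with a `price` column, as in the CEILING example) that rounds prices to two decimal places:
+
+[source,sql]
+----
+SELECT item_id, ROUND(price, 2) FROM Items;
+----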
+
+
+== ROUNDMAGIC
+
+[source,sql]
+----
+ROUNDMAGIC (expression)
+----
+
+
+=== Parameters
+- `expression` - any valid numeric expression.
+
+=== Description
+This function is good for rounding numbers, but it can be slow. It has special handling for numbers around 0. Only numbers smaller than or equal to `+/-1000000000000` are supported. The value is converted to a String internally, and then the last 4 characters are checked. '000x' becomes '0000' and '999x' becomes '999999', which is rounded automatically. This method returns a `double`.
+
+[discrete]
+=== Example
+Round every Player's age:
+
+
+[source,sql]
+----
+SELECT name, ROUNDMAGIC(AGE/3*3) FROM Player;
+----
+
+
+== SECURE_RAND
+
+[source,sql]
+----
+SECURE_RAND (int)
+----
+
+
+=== Parameters
+- `int` - the number of bytes to generate.
+
+=== Description
+Generates the specified number of cryptographically secure random bytes. This method returns `bytes`.
+
+[discrete]
+=== Example
+Get a cryptographically secure random value for every Player:
+
+
+[source,sql]
+----
+SELECT name, SECURE_RAND(10) FROM Player;
+----
+
+
+== SIGN
+
+[source,sql]
+----
+SIGN (expression)
+----
+
+
+=== Parameters
+- `expression` - any valid numeric expression.
+
+=== Description
+Returns -1 if the value is smaller than 0, 0 if it is zero, and 1 otherwise.
+
+[discrete]
+=== Example
+Get a sign for every value:
+
+
+[source,sql]
+----
+SELECT name, SIGN(VALUE) FROM Player;
+----
+
+
+== ENCRYPT
+
+[source,sql]
+----
+ENCRYPT (algorithmString , keyBytes , dataBytes)
+----
+
+
+=== Parameters
+- `algorithmString` - the encryption algorithm; only AES is supported.
+- `keyBytes` - the encryption key.
+- `dataBytes` - the data to encrypt.
+
+=== Description
+Encrypt data using a key. The supported algorithm is AES. The block size is 16 bytes. This method returns `bytes`.
+
+[discrete]
+=== Example
+Encrypt the Players' names:
+
+
+[source,sql]
+----
+SELECT ENCRYPT('AES', '00', STRINGTOUTF8(Name)) FROM Player;
+----
+
+
+== DECRYPT
+
+[source,sql]
+----
+DECRYPT (algorithmString , keyBytes , dataBytes)
+----
+
+
+=== Parameters
+- `algorithmString` - the encryption algorithm; only AES is supported.
+- `keyBytes` - the encryption key.
+- `dataBytes` - the data to decrypt.
+
+=== Description
+Decrypts data using a key. The supported algorithm is AES. The block size is 16 bytes. This method returns `bytes`.
+
+[discrete]
+=== Example
+Decrypt Players' names:
+
+
+[source,sql]
+----
+SELECT DECRYPT('AES', '00', '3fabb4de8f1ee2e97d7793bab2db1116') FROM Player;
+----
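+
+To get the decrypted value back as readable text, the result can be passed through `UTF8TOSTRING`, and `TRIM(CHAR(0) FROM ...)` can strip the zero-byte padding added during encryption. A sketch that decrypts the value produced by the `ENCRYPT` example above (the key `'00'` is only an illustration):
+
+[source,sql]
+----
+SELECT TRIM(CHAR(0) FROM UTF8TOSTRING(DECRYPT('AES', '00', ENCRYPT('AES', '00', STRINGTOUTF8(Name))))) FROM Player;
+----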
+
+
+== TRUNCATE
+
+
+[source,sql]
+----
+{TRUNC | TRUNCATE} ({numeric, digitsInt} | timestamp | date | timestampString)
+----
+
+
+=== Description
+Truncates to a number of digits (to the next value closer to 0). This method returns a `double`. When used with a timestamp, truncates the timestamp to a date (day) value. When used with a date, truncates the date to a date (day) value without the time part. When used with a timestamp given as a string, truncates the timestamp to a date (day) value.
+
+[discrete]
+=== Example
+
+[source,sql]
+----
+TRUNCATE(VALUE, 2);
+----
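+
+When applied to a timestamp value, the function drops the time part. A minimal sketch, using a timestamp literal, that truncates the value to its day:
+
+[source,sql]
+----
+TRUNC(TIMESTAMP '2010-01-01 15:30:00');
+----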
+
+
+== COMPRESS
+
+[source,sql]
+----
+COMPRESS(dataBytes [, algorithmString])
+----
+
+
+=== Parameters
+- `dataBytes` - data to compress.
+- `algorithmString` - an algorithm to use for compression.
+
+=== Description
+Compress the data using the specified compression algorithm. Supported algorithms are: LZF (faster but lower compression; default), and DEFLATE (higher compression). Compression does not always reduce size. Very small objects and objects with little redundancy may get larger. This method returns `bytes`.
+
+[discrete]
+=== Example
+
+[source,sql]
+----
+COMPRESS(STRINGTOUTF8('Test'))
+----
+
+
+== EXPAND
+
+[source,sql]
+----
+EXPAND(dataBytes)
+----
+
+
+=== Parameters
+- `dataBytes` - data to expand.
+
+=== Description
+Expand data that was compressed using the COMPRESS function. This method returns `bytes`.
+
+[discrete]
+=== Example
+
+[source,sql]
+----
+UTF8TOSTRING(EXPAND(COMPRESS(STRINGTOUTF8('Test'))))
+----
+
+
+== ZERO
+
+[source,sql]
+----
+ZERO()
+----
+
+
+=== Description
+Return the value 0. This function can be used even if numeric literals are disabled.
+
+[discrete]
+=== Example
+
+[source,sql]
+----
+ZERO()
+----
+
diff --git a/docs/_docs/sql-reference/operational-commands.adoc b/docs/_docs/sql-reference/operational-commands.adoc
new file mode 100644
index 0000000..043ea2d
--- /dev/null
+++ b/docs/_docs/sql-reference/operational-commands.adoc
@@ -0,0 +1,115 @@
+= Operational Commands
+
+
+The following operational commands are supported by Ignite:
+
+== COPY
+
+Copy data from a CSV file into a SQL table.
+
+[source,sql]
+----
+COPY FROM '/path/to/local/file.csv'
+INTO tableName (columnName, columnName, ...) FORMAT CSV [CHARSET '<charset-name>']
+----
+
+
+=== Parameters
+- `'/path/to/local/file.csv'` - actual path to your CSV file.
+- `tableName` - name of the table to which the data will be copied.
+- `columnName` - name of a column corresponding with the columns in the CSV file.
+
+=== Description
+`COPY` allows you to copy the content of a file in the local file system to the server and apply its data to a SQL table. Internally, `COPY` reads the file content in a binary form into data packets, and sends those packets to the server. Then, the file content is parsed and executed in a streaming mode. Use this mode if you have data dumped to a file.
+
+NOTE: Currently, `COPY` is only supported via the JDBC driver and can only work with CSV format.
+
+=== Example
+`COPY` can be executed like so:
+
+[source,sql]
+----
+COPY FROM '/path/to/local/file.csv' INTO city (
+  ID, Name, CountryCode, District, Population) FORMAT CSV
+----
+
+In the above command, substitute `/path/to/local/file.csv` with the actual path to your CSV file. For instance, you can use `city.csv` which is shipped with the latest Ignite. 
+You can find it in your `{IGNITE_HOME}/examples/src/main/resources/sql/` directory.
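+
+If the file is not in the default encoding, the character set can be specified with the optional `CHARSET` clause; for example (`'UTF-8'` here is just an illustrative charset name):
+
+[source,sql]
+----
+COPY FROM '/path/to/local/file.csv' INTO city (
+  ID, Name, CountryCode, District, Population) FORMAT CSV CHARSET 'UTF-8'
+----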
+
+== SET STREAMING
+
+Stream data in bulk from a file into a SQL table.
+
+[source,sql]
+----
+SET STREAMING [OFF|ON];
+----
+
+
+=== Description
+Using the `SET` command, you can stream data in bulk into a SQL table in your cluster. When streaming is enabled, the JDBC/ODBC driver will pack your commands in batches and send them to the server (Ignite cluster). On the server side, the batch is converted into a stream of cache update commands which are distributed asynchronously between server nodes. Performing this asynchronously increases peak throughput because at any given time all cluster nodes are busy with data loading.
+
+=== Usage
+To stream data into your cluster, prepare a file with the `SET STREAMING ON` command followed by `INSERT` commands for data that needs to be loaded. For example:
+
+[source,sql]
+----
+SET STREAMING ON;
+
+INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES (1,'Kabul','AFG','Kabol',1780000);
+INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES (2,'Qandahar','AFG','Qandahar',237500);
+INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES (3,'Herat','AFG','Herat',186800);
+INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES (4,'Mazar-e-Sharif','AFG','Balkh',127800);
+INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES (5,'Amsterdam','NLD','Noord-Holland',731200);
+-- More INSERT commands --
+----
+
+Note that before executing the above statements, you should have the tables created in the cluster. Run `CREATE TABLE` commands, or provide the commands as part of the file that is used for inserting data, before the `SET STREAMING ON` command, like so:
+
+[source,sql]
+----
+CREATE TABLE City (
+  ID INT(11),
+  Name CHAR(35),
+  CountryCode CHAR(3),
+  District CHAR(20),
+  Population INT(11),
+  PRIMARY KEY (ID, CountryCode)
+) WITH "template=partitioned, backups=1, affinityKey=CountryCode, CACHE_NAME=City, KEY_TYPE=demo.model.CityKey, VALUE_TYPE=demo.model.City";
+
+SET STREAMING ON;
+
+INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES (1,'Kabul','AFG','Kabol',1780000);
+INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES (2,'Qandahar','AFG','Qandahar',237500);
+INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES (3,'Herat','AFG','Herat',186800);
+INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES (4,'Mazar-e-Sharif','AFG','Balkh',127800);
+INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES (5,'Amsterdam','NLD','Noord-Holland',731200);
+-- More INSERT commands --
+----
+
+[NOTE]
+====
+[discrete]
+=== Flush All Data to the Cluster
+When you have finished loading data, make sure to close the JDBC/ODBC connection so that all data is flushed to the cluster.
+====
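+
+Alternatively, switching streaming off on the same connection should also flush the streamed data, after which regular queries can be executed again. A minimal sketch (assuming the `City` table from the example above):
+
+[source,sql]
+----
+SET STREAMING ON;
+
+-- More INSERT commands --
+
+SET STREAMING OFF;
+
+SELECT count(*) FROM City;
+----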
+
+=== Known Limitations
+While streaming mode allows you to load data much faster than other data loading techniques mentioned in this guide, it has some limitations:
+
+1. Only `INSERT` commands are allowed; any attempt to execute `SELECT` or any other DML or DDL command will cause an exception.
+2. Due to streaming mode's asynchronous nature, you cannot know update counts for every statement executed; all JDBC/ODBC commands returning update counts will return 0.
+
+=== Example
+As an example, you can use the sample world.sql file that is shipped with the latest Ignite distribution. It can be found in the `{IGNITE_HOME}/examples/sql/` directory. You can use the `run` command from link:tools-analytics/sqlline[SQLLine, window=_blank], as shown below:
+
+[source,shell]
+----
+!run /apache_ignite_version/examples/sql/world.sql
+----
+
+After executing the above command and *closing the JDBC connection*, all data will be loaded into the cluster and ready to be queried.
+
+image::images/set-streaming.png[]
+
+
diff --git a/docs/_docs/sql-reference/sql-conformance.adoc b/docs/_docs/sql-reference/sql-conformance.adoc
new file mode 100644
index 0000000..cabc8ff
--- /dev/null
+++ b/docs/_docs/sql-reference/sql-conformance.adoc
@@ -0,0 +1,457 @@
+= SQL Conformance
+
+Apache Ignite supports most of the major features of ANSI-99 out-of-the-box. The following table shows Ignite compliance to link:https://en.wikipedia.org/wiki/SQL_compliance[SQL:1999 (Core), window=_blank].
+
+
+
+[width="100%", cols="20%,80%a"]
+|=======
+|Feature ID, Name
+|Support
+
+| `E011` Numeric data types
+| Ignite fully supports the following sub-features:
+
+`E011–01` INTEGER and SMALLINT data types (including all spellings)
+
+`E011–02` REAL, DOUBLE PRECISION, and FLOAT data types
+
+`E011–05` Numeric comparison
+
+`E011–06` Implicit casting among the numeric data types
+
+Ignite provides partial support for the following sub-features:
+
+`E011–03` DECIMAL and NUMERIC data types. Fixed <scale> is not supported for DEC and NUMERIC, so there are violations for:
+
+7) If a <scale> is omitted, then a <scale> of 0 (zero) is implicit (6.1 <data type>)
+
+22) NUMERIC specifies the data type exact numeric, with the decimal precision and scale specified by the <precision> and <scale>.
+
+23) DECIMAL specifies the data type exact numeric, with the decimal scale specified by the <scale> and the implementation-defined decimal precision equal to or greater than the value of the specified <precision>.
+
+`E011–04` Arithmetic operators. See the issue for feature E011–03.
+
+| `E021` Character string types
+| Ignite fully supports the following sub-features:
+
+`E021–03` Character literals
+
+`E021–04` CHARACTER_LENGTH function
+
+`E021–05` OCTET_LENGTH function
+
+`E021–06` SUBSTRING function
+
+`E021–07` Character concatenation
+
+`E021–08` UPPER and LOWER functions
+
+`E021–09` TRIM function
+
+`E021–10` Implicit casting among the fixed-length and variable-length character string types
+
+`E021–11` POSITION function
+
+`E021–12` Character comparison
+
+Ignite provides partial support for the following sub-features:
+
+`E021–01` CHARACTER data type (including all its spellings).
+
+----
+<character string type> ::=
+CHARACTER [ <left paren> <length> <right paren> ]
+\| CHAR [ <left paren> <length> <right paren> ]
+\| CHARACTER VARYING <left paren> <length> <right paren>
+\| CHAR VARYING <left paren> <length> <right paren>
+\| VARCHAR <left paren> <length> <right paren>
+----
+
+<length> is not supported for CHARACTER and CHARACTER VARYING data type.
+
+`E021–02` CHARACTER VARYING data type (including all its spellings). See issue for feature E021–01
+
+| `E031` Identifiers
+| Ignite fully supports the following sub-features:
+
+`E031–01` Delimited identifiers
+
+`E031–02` Lower case identifiers
+
+`E031–03` Trailing underscore
+
+| `E051` Basic query specification
+| Ignite fully supports the following sub-features:
+
+`E051–01` SELECT DISTINCT
+
+`E051–04` GROUP BY can contain columns not in <select-list>
+
+`E051–05` Select list items can be renamed
+
+`E051–06` HAVING clause
+
+`E051–07` Qualified * in select list
+
+`E051–08` Correlation names in the FROM clause
+
+Ignite does not support the following sub-features:
+
+`E051–02` GROUP BY clause; No support for ROLLUP, CUBE, GROUPING SETS.
+
+`E051–09` Rename columns in the FROM clause. Some information about support from other products is link:http://modern-sql.com/feature/table-column-aliases[here, window=_blank].
+
+| `E061` Basic predicates and search conditions
+| Ignite fully supports the following sub-features:
+
+`E061–01` Comparison predicate
+
+`E061–02` BETWEEN predicate
+
+`E061–03` IN predicate with list of values
+
+`E061–06` NULL predicate
+
+`E061–08` EXISTS predicate
+
+`E061–09` Subqueries in comparison predicate
+
+`E061–11` Subqueries in IN predicate
+
+`E061–13` Correlated subqueries
+
+`E061–14` Search condition
+
+Ignite provides partial support for the following sub-features:
+
+`E061–04` LIKE predicate; There is support for <character like predicate>, but <octet like predicate> could not be checked because of link:https://issues.apache.org/jira/browse/IGNITE-7480[this issue, window=_blank].
+
+`E061–05` LIKE predicate: ESCAPE clause; There is support for <character like predicate>, but <octet like predicate> could not be checked because of link:https://issues.apache.org/jira/browse/IGNITE-7480[this issue, window=_blank].
+
+`E061–07` Quantified comparison predicate; Except ALL (see link:https://issues.apache.org/jira/browse/IGNITE-5749[issue, window=_blank]).
+
+Ignite does not support the following sub-feature:
+
+`E061–12` Subqueries in quantified comparison predicate.
+
+| `E071` Basic query expressions
+| Ignite provides partial support for the following sub-features:
+
+`E071–01` UNION DISTINCT table operator
+
+`E071–02` UNION ALL table operator
+
+`E071–03` EXCEPT DISTINCT table operator
+
+`E071–05` Columns combined via table operators need not have exactly the same data type
+
+`E071–06` Table operators in subqueries
+
+Note that there is no support for non-recursive WITH clause in H2 and Ignite. According to link:http://www.h2database.com/html/grammar.html#with[the H2 docs, window=_blank] there is support for recursive WITH clause, but it fails in Ignite.
+
+| `E081` Basic Privileges
+| Ignite does not support the following sub-features:
+
+`E081–01` SELECT privilege at the table level
+
+`E081–02` DELETE privilege
+
+`E081–03` INSERT privilege at the table level
+
+`E081–04` UPDATE privilege at the table level
+
+`E081–05` UPDATE privilege at the column level
+
+`E081–06` REFERENCES privilege at the table
+
+`E081–07` REFERENCES privilege at the column
+
+`E081–08` WITH GRANT OPTION
+
+`E081–09` USAGE privilege
+
+`E081–10` EXECUTE privilege
+
+| `E091` Set functions
+| Ignite provides partial support for the following sub-features:
+
+`E091–01` AVG
+
+`E091–02` COUNT
+
+`E091–03` MAX
+
+`E091–04` MIN
+
+`E091–05` SUM
+
+`E091–06` ALL quantifier
+
+`E091–07` DISTINCT quantifier
+
+Note that there is no support for:
+
+- GROUPING and ANY (both in H2 and Ignite).
+
+- EVERY and SOME functions. There is support in H2, but fails in Ignite.
+
+| `E101` Basic data manipulation
+| Ignite fully supports the following sub-features:
+
+`E101–03` Searched UPDATE statement
+
+`E101–04` Searched DELETE statement
+
+Ignite provides partial support for the following sub-features:
+
+`E101–01` INSERT statement. No support for DEFAULT values in Ignite. Works in H2.
+
+| `E111` Single row SELECT statement
+| Ignite does not support this feature.
+
+| `E121` Basic cursor support
+| Ignite does not support the following sub-features:
+
+`E121–01` DECLARE CURSOR
+
+`E121–02` ORDER BY columns need not be in select list
+
+`E121–03` Value expressions in ORDER BY clause
+
+`E121–04` OPEN statement
+
+`E121–06` Positioned UPDATE statement
+
+`E121–07` Positioned DELETE statement
+
+`E121–08` CLOSE statement
+
+`E121–10` FETCH statement: implicit NEXT
+
+`E121–17` WITH HOLD cursors
+
+| `E131` Null value support (nulls in lieu of values)
+| Ignite fully supports this feature.
+
+| `E141` Basic integrity constraints
+| Ignite fully supports the following sub-feature:
+
+`E141–01` NOT NULL constraints
+
+Ignite provides partial support for the following sub-features:
+
+`E141–03` PRIMARY KEY constraints. See link:https://issues.apache.org/jira/browse/IGNITE-7479[IGNITE-7479, window=_blank]
+
+`E141–08` NOT NULL inferred on PRIMARY KEY. See link:https://issues.apache.org/jira/browse/IGNITE-7479[IGNITE-7479, window=_blank]
+
+Ignite does not support the following sub-features:
+
+`E141–02` UNIQUE constraints of NOT NULL columns
+
+`E141–04` Basic FOREIGN KEY constraint with the NO ACTION default for both referential delete action and referential update action
+
+`E141–06` CHECK constraints
+
+`E141–07` Column defaults
+
+`E141–10` Names in a foreign key can be specified in any order
+
+| `E151` Transaction support
+| Ignite does not support the following sub-features:
+
+`E151–01` COMMIT statement
+
+`E151–02` ROLLBACK statement
+
+| `E152` Basic SET TRANSACTION statement
+| Ignite does not support the following sub-features:
+
+`E152–01` SET TRANSACTION statement: ISOLATION LEVEL SERIALIZABLE clause
+
+`E152–02` SET TRANSACTION statement: READ ONLY and READ WRITE clauses
+
+| `E153` Updatable queries with subqueries
+| Ignite fully supports this feature.
+
+| `E161` SQL comments using leading double minus
+| Ignite fully supports this feature.
+
+| `E171` SQLSTATE support
+| Ignite provides partial support for this feature, implementing a subset of the standard error codes and introducing custom ones. A full list of the error codes supported by Ignite can be found here:
+
+link:SQL/JDBC/jdbc-driver#error-codes[JDBC Error Codes]
+
+link:SQL/ODBC/error-codes[ODBC Error Codes]
+
+| `E182` Host language Binding (previously "Module Language")
+| Ignite does not support this feature.
+
+| `F021` Basic information schema
+| Ignite does not support the following sub-features:
+
+`F021–01` COLUMNS view
+
+`F021–02` TABLES view
+
+`F021–03` VIEWS view
+
+`F021–04` TABLE_CONSTRAINTS
+
+`F021–05` REFERENTIAL_CONSTRAINTS view
+
+`F021–06` CHECK_CONSTRAINTS view
+
+| `F031` Basic schema manipulation
+| Ignite fully supports the following feature:
+
+`F031–04` ALTER TABLE statement: ADD COLUMN clause
+
+Ignite provides partial support for the following sub-feature:
+
+`F031–01` CREATE TABLE statement to create persistent base tables.
+
+Basic syntax is supported. 'AS' is supported in H2 but not in Ignite. No support for privileges (INSERT, SELECT, UPDATE, DELETE).
+
+Ignite does not support the following sub-features:
+
+`F031–02` CREATE VIEW statement
+
+`F031–03` GRANT statement
+
+`F031–13` DROP TABLE statement: RESTRICT clause
+
+`F031–16` DROP VIEW statement: RESTRICT clause
+
+`F031–19` REVOKE statement: RESTRICT clause
+
+link:sql-reference/ddl[DDL, window=_blank] support is being actively developed; more features will be supported in upcoming releases.
+
+| `F041` Basic joined table
+| Ignite fully supports the following sub-features:
+
+`F041–01` Inner join (but not necessarily the INNER keyword)
+
+`F041–02` INNER keyword
+
+`F041–03` LEFT OUTER JOIN
+
+`F041–04` RIGHT OUTER JOIN
+
+`F041–05` Outer joins can be nested
+
+`F041–07` The inner table in a left or right outer join can also be used in an inner join
+
+`F041–08` All comparison operators are supported (rather than just =)
+
+| `F051` Basic date and time
+| Ignite fully supports the following sub-features:
+
+`F051–04` Comparison predicate on DATE, TIME, and TIMESTAMP data types
+
+`F051–05` Explicit CAST between datetime types and character string types
+
+`F051–06` CURRENT_DATE
+
+`F051–07` LOCALTIME
+
+`F051–08` LOCALTIMESTAMP
+
+Ignite provides partial support for the following sub-features:
+
+`F051–01` DATE data type (including support of DATE literal). See link:https://issues.apache.org/jira/browse/IGNITE-7360[IGNITE-7360, window=_blank].
+
+`F051–02` TIME data type (including support of TIME literal) with fractional seconds precision of at least 0. <precision> is not supported correctly for TIME data type. Also see link:https://issues.apache.org/jira/browse/IGNITE-7360[IGNITE-7360, window=_blank].
+
+`F051–03` TIMESTAMP data type (including support of TIMESTAMP literal) with fractional seconds precision of at least 0 and 6. <precision> is not supported correctly for the TIMESTAMP data type. Also see link:https://issues.apache.org/jira/browse/IGNITE-7360[IGNITE-7360, window=_blank].
+
+| `F081` UNION and EXCEPT in views
+| Ignite does not support this feature.
+
+| `F131` Grouped operations
+| Ignite does not support the following sub-features:
+
+`F131–01` WHERE, GROUP BY, and HAVING clauses supported in queries with grouped views
+
+`F131–02` Multiple tables supported in queries with grouped views
+
+`F131–03` Set functions supported in queries with grouped views
+
+`F131–04` Subqueries with GROUP BY and HAVING clauses and grouped views
+
+`F131–05` Single row SELECT with GROUP BY and HAVING clauses and grouped views
+
+| `F181` Multiple module support
+| Ignite does not support this feature.
+
+| `F201` CAST function
+| Ignite fully supports this feature.
+
+| `F221` Explicit defaults
+| Ignite fully supports this feature.
+
+| `F261` CASE expression
+| Ignite fully supports the following sub-features:
+
+`F261–01` Simple CASE
+
+`F261–02` Searched CASE
+
+`F261–03` NULLIF
+
+`F261–04` COALESCE
+
+| `F311` Schema definition statement
+| Ignite does not support the following sub-features:
+
+`F311–01` CREATE SCHEMA
+
+`F311–02` CREATE TABLE for persistent base tables
+
+`F311–03` CREATE VIEW
+
+`F311–04` CREATE VIEW: WITH CHECK OPTION
+
+`F311–05` GRANT statement
+
+| `F471` Scalar subquery values
+| Ignite fully supports this feature.
+
+| `F481` Expanded NULL predicate
+| Ignite fully supports this feature.
+
+| `F501` Features and conformance views
+| Ignite does not support the following sub-features:
+
+`F501–01` SQL_FEATURES view
+
+`F501–02` SQL_SIZING view
+
+`F501–03` SQL_LANGUAGES view
+
+| `F812` Basic flagging
+| Ignite does not support this feature.
+
+| `S011` Distinct data types
+| Ignite does not support the following sub-feature:
+
+`S011–01` USER_DEFINED_TYPES view
+
+| `T321` Basic SQL-invoked routines
+| Ignite does not support the following sub-features:
+
+`T321–01` User-defined functions with no overloading
+
+`T321–02` User-defined stored procedures with no overloading
+
+`T321–03` Function invocation
+
+`T321–04` CALL statement
+
+`T321–05` RETURN statement
+
+`T321–06` ROUTINES view
+
+`T321–07` PARAMETERS view
+
+|=======
diff --git a/docs/_docs/sql-reference/string-functions.adoc b/docs/_docs/sql-reference/string-functions.adoc
new file mode 100644
index 0000000..9949633
--- /dev/null
+++ b/docs/_docs/sql-reference/string-functions.adoc
@@ -0,0 +1,928 @@
+= String Functions
+
+:toclevels:
+
+== ASCII
+
+Return the ASCII value of the first character in the string. This method returns an `int`.
+
+[source,sql]
+----
+ASCII(string)
+----
+
+
+Parameters:
+- `string` - an argument.
+
+
+Example:
+
+[source,sql]
+----
+select ASCII(name) FROM Players;
+----
+
+
+== BIT_LENGTH
+Returns the number of bits in a string. This method returns a `long`. For `BLOB`, `CLOB`, `BYTES`, and `JAVA_OBJECT`, the object's specified precision is used. Each character needs 16 bits.
+
+
+
+[source,sql]
+----
+BIT_LENGTH(string)
+----
+
+Parameters:
+- `string` - an argument.
+
+
+Example:
+
+[source,sql]
+----
+select BIT_LENGTH(name) FROM Players;
+----
+
+
+== LENGTH
+Returns the number of characters in a string. This method returns a `long`. For `BLOB`, `CLOB`, `BYTES`, and `JAVA_OBJECT`, the object's specified precision is used.
+
+
+
+[source,sql]
+----
+{LENGTH | CHAR_LENGTH | CHARACTER_LENGTH} (string)
+----
+
+
+Parameters:
+- `string` - an argument.
+
+
+Example:
+
+[source,sql]
+----
+SELECT LENGTH(name) FROM Players;
+----
+
+
+== OCTET_LENGTH
+Returns the number of bytes in a string. This method returns a `long`. For `BLOB`, `CLOB`, `BYTES` and `JAVA_OBJECT`, the object's specified precision is used. Each character needs 2 bytes.
+
+
+
+[source,sql]
+----
+OCTET_LENGTH(string)
+----
+
+
+Parameters:
+- `string` - an argument.
+
+
+Example:
+
+[source,sql]
+----
+SELECT OCTET_LENGTH(name) FROM Players;
+----
+
+
+== CHAR
+
+Returns the character that represents the ASCII value. This method returns a `string`.
+
+[source,sql]
+----
+{CHAR | CHR} (int)
+----
+
+
+Parameters:
+- `int` - an argument.
+
+
+
+Example:
+
+[source,sql]
+----
+SELECT CHAR(65)||name FROM Players;
+----
+
+
+== CONCAT
+Combines strings. Unlike with the `||` operator, NULL parameters are ignored and do not cause the result to become NULL. This method returns a `string`.
+
+
+[source,sql]
+----
+CONCAT(string, string [,...])
+----
+
+
+Parameters:
+- `string` - an argument.
+
+
+
+Example:
+
+[source,sql]
+----
+SELECT CONCAT(NAME, '!') FROM Players;
+----
+
+
+== CONCAT_WS
+Combines strings, dividing them with a separator. Unlike with the `||` operator, NULL parameters are ignored and do not cause the result to become NULL. This method returns a `string`.
+
+
+[source,sql]
+----
+CONCAT_WS(separatorString, string, string [,...])
+----
+
+
+Parameters:
+- `separatorString` - separator.
+- `string` - an argument.
+
+
+
+Example:
+
+[source,sql]
+----
+SELECT CONCAT_WS(',', NAME, '!') FROM Players;
+----
+
+
+== DIFFERENCE
+
+Returns the difference between the `SOUNDEX()` values of two strings. This method returns an `int`.
+
+[source,sql]
+----
+DIFFERENCE(X, Y)
+----
+
+
+Parameters:
+- `X`, `Y` - strings to compare.
+
+
+
+Example:
+Calculates the SOUNDEX() difference for two Players' names:
+
+
+[source,sql]
+----
+select DIFFERENCE(T1.NAME, T2.NAME) FROM players T1, players T2
+   WHERE T1.ID = 10 AND T2.ID = 11;
+----
+
+
+== HEXTORAW
+
+Converts a hex representation of a string to a string. 4 hex characters per string character are used.
+
+[source,sql]
+----
+HEXTORAW(string)
+----
+
+
+Parameters:
+- `string` - a hex string to use for the conversion.
+
+
+
+Example:
+Convert the hex representation in the DATA column to a string:
+
+
+[source,sql]
+----
+SELECT HEXTORAW(DATA) FROM Players;
+----
+
+
+== RAWTOHEX
+
+Converts a string to the hex representation. 4 hex characters per string character are used. This method returns a `string`.
+
+[source,sql]
+----
+RAWTOHEX(string)
+----
+
+Parameters:
+- `string` - a string to convert to the hex representation.
+
+
+
+Example:
+Convert the DATA column to its hex representation:
+
+
+[source,sql]
+----
+SELECT RAWTOHEX(DATA) FROM Players;
+----
+
+
+== INSTR
+
+Returns the location of a search string in a string. If a start position is used, the characters before it are ignored. If position is negative, the rightmost location is returned. 0 is returned if the search string is not found. Please note this function is case sensitive, even if the parameters are not.
+
+
+
+[source,sql]
+----
+INSTR(string, searchString [, startInt])
+----
+
+
+Parameters:
+- `string` - any string.
+- `searchString` - any string to search for.
+- `startInt` - start position for the lookup.
+
+
+Example:
+Check if a string includes the "@" symbol:
+
+
+[source,sql]
+----
+SELECT INSTR(EMAIL,'@') FROM Players;
+----
+
+
+== INSERT
+
+Inserts an additional string into the original string at the specified start position. The length specifies the number of characters that are removed at the start position in the original string. This method returns a `string`.
+
+[source,sql]
+----
+INSERT(originalString, startInt, lengthInt, addString)
+----
+
+Parameters:
+
+* `originalString` - an original string.
+* `startInt` - start position.
+* `lengthInt` - the length.
+* `addString` - an additional string.
+
+
+Example:
+
+[source,sql]
+----
+SELECT INSERT(NAME, 1, 1, ' ') FROM Players;
+----
+
+
+== LOWER
+
+Converts a string to lowercase.
+
+[source,sql]
+----
+{LOWER | LCASE} (string)
+----
+
+
+Parameters:
+- `string` - an argument.
+
+
+
+Example:
+
+[source,sql]
+----
+SELECT LOWER(NAME) FROM Players;
+----
+
+
+== UPPER
+
+Converts a string to uppercase.
+
+[source,sql]
+----
+{UPPER | UCASE} (string)
+----
+
+
+Parameters:
+- `string` - an argument.
+
+
+Example:
+The following example returns the last name in uppercase for each Player:
+
+
+[source,sql]
+----
+SELECT UPPER(last_name) "LastNameUpperCase" FROM Players;
+----
+
+
+== LEFT
+
+Returns the leftmost number of characters.
+
+[source,sql]
+----
+LEFT(string, int)
+----
+
+
+Parameters:
+- `string` - an argument.
+- `int` - a number of characters to extract.
+
+
+
+Example:
+Get 3 first letters of Players' names:
+
+
+[source,sql]
+----
+SELECT LEFT(NAME, 3) FROM Players;
+----
+
+
+== RIGHT
+
+Returns the rightmost number of characters.
+
+[source,sql]
+----
+RIGHT(string, int)
+----
+
+
+Parameters:
+- `string` - an argument.
+- `int` - a number of characters to extract.
+
+
+
+Example:
+Get the last 3 letters of Players' names:
+
+
+[source,sql]
+----
+SELECT RIGHT(NAME, 3) FROM Players;
+----
+
+
+== LOCATE
+
+Returns the location of a search string in a string. If a start position is used, the characters before it are ignored. If position is negative, the rightmost location is returned. 0 is returned if the search string is not found.
+
+[source,sql]
+----
+LOCATE(searchString, string [, startInt])
+----
+
+
+
+
+Example:
+
+[source,sql]
+----
+SELECT LOCATE('.', NAME) FROM Players;
+----
+
+
+== POSITION
+
+Returns the location of a search string in a string. See also <<LOCATE>>.
+
+[source,sql]
+----
+POSITION(searchString, string)
+----
+
+
+
+
+Example:
+
+[source,sql]
+----
+SELECT POSITION('.', NAME) FROM Players;
+----
+
+
+== LPAD
+
+Left pad the string to the specified length. If the length is shorter than the string, it will be truncated at the end. If the padding string is not set, spaces will be used.
+
+[source,sql]
+----
+LPAD(string, int[, paddingString])
+----
+
+
+
+
+Example:
+
+[source,sql]
+----
+SELECT LPAD(AMOUNT, 10, '*') FROM Players;
+----
+
+
+== RPAD
+
+Right pad the string to the specified length. If the length is shorter than the string, it will be truncated. If the padding string is not set, spaces will be used.
+
+[source,sql]
+----
+RPAD(string, int[, paddingString])
+----
+
+
+
+
+Example:
+
+[source,sql]
+----
+SELECT RPAD(TEXT, 10, '-') FROM Players;
+----
+
+
+== LTRIM
+
+Removes all leading spaces from a string.
+
+[source,sql]
+----
+LTRIM(string)
+----
+
+
+
+
+Example:
+
+[source,sql]
+----
+SELECT LTRIM(NAME) FROM Players;
+----
+
+
+== RTRIM
+
+Removes all trailing spaces from a string.
+
+[source,sql]
+----
+RTRIM(string)
+----
+
+
+
+
+Example:
+
+[source,sql]
+----
+SELECT RTRIM(NAME) FROM Players;
+----
+
+
+== TRIM
+
+Removes all leading spaces, trailing spaces, or spaces at both ends, from a string. Other characters can be removed as well.
+
+[source,sql]
+----
+TRIM ([{LEADING | TRAILING | BOTH} [string] FROM] string)
+----
+
+
+
+
+Example:
+
+[source,sql]
+----
+SELECT TRIM(BOTH '_' FROM NAME) FROM Players;
+----
+
+
+== REGEXP_REPLACE
+
+Replaces each substring that matches a regular expression. For details, see the Java `String.replaceAll()` method. If any parameter is null (except the optional flagsString parameter), the result is null.
+
+[source,sql]
+----
+REGEXP_REPLACE(inputString, regexString, replacementString [, flagsString])
+----
+
+
+Flag values are limited to 'i', 'c', 'n', 'm'. Other symbols cause an exception. Multiple symbols can be used in one `flagsString` parameter (for example: 'im'). Later flags override earlier ones; for example, 'ic' is equivalent to 'c' (case-sensitive matching).
+
+- 'i' enables case insensitive matching (Pattern.CASE_INSENSITIVE)
+
+- 'c' disables case insensitive matching (Pattern.CASE_INSENSITIVE)
+
+- 'n' allows the period to match the newline character (Pattern.DOTALL)
+
+- 'm' enables multiline mode (Pattern.MULTILINE)
+
+
+Example:
+
+[source,sql]
+----
+SELECT REGEXP_REPLACE(name, 'w+', 'W', 'i') FROM Players;
+----
+
+
+== REGEXP_LIKE
+
+Matches string to a regular expression. For details, see the Java `Matcher.find()` method. If any parameter is null (except the optional `flagsString` parameter), the result is null.
+
+[source,sql]
+----
+REGEXP_LIKE(inputString, regexString [, flagsString])
+----
+
+
+
+Flag values are limited to 'i', 'c', 'n', 'm'. Other symbols cause an exception. Multiple symbols can be used in one `flagsString` parameter (for example: 'im'). Later flags override earlier ones; for example, 'ic' is equivalent to 'c' (case-sensitive matching).
+
+- 'i' enables case insensitive matching (Pattern.CASE_INSENSITIVE)
+
+- 'c' disables case insensitive matching (Pattern.CASE_INSENSITIVE)
+
+- 'n' allows the period to match the newline character (Pattern.DOTALL)
+
+- 'm' enables multiline mode (Pattern.MULTILINE)
+
+
+Example:
+
+[source,sql]
+----
+SELECT REGEXP_LIKE(name, '[A-Z ]*', 'i') FROM Players;
+----
+
+
+== REPEAT
+
+Returns a string repeated some number of times.
+
+[source,sql]
+----
+REPEAT(string, int)
+----
+
+
+
+
+Example:
+
+[source,sql]
+----
+SELECT REPEAT(NAME || ' ', 10) FROM Players;
+----
+
+
+== REPLACE
+
+Replaces all occurrences of a search string in specified text with another string. If no replacement is specified, the search string is removed from the original string. If any parameter is null, the result is null.
+
+[source,sql]
+----
+REPLACE(string, searchString [, replacementString])
+----
+
+
+
+Example:
+
+[source,sql]
+----
+SELECT REPLACE(NAME, ' ') FROM Players;
+----
+
+
+== SOUNDEX
+
+Returns a four character code representing the SOUNDEX of a string. See also link:http://www.archives.gov/genealogy/census/soundex.html[http://www.archives.gov/genealogy/census/soundex.html]. This method returns a `string`.
+
+[source,sql]
+----
+SOUNDEX(string)
+----
+
+
+
+
+Example:
+
+[source,sql]
+----
+SELECT SOUNDEX(NAME) FROM Players;
+----
+
+
+== SPACE
+
+Returns a string consisting of the specified number of spaces.
+
+[source,sql]
+----
+SPACE(int)
+----
+
+
+
+
+Example:
+
+
+[source,sql]
+----
+SELECT name, SPACE(80) FROM Players;
+----
+
+
+== STRINGDECODE
+
+Converts an encoded string using the Java string literal encoding format. Special characters are `\b`, `\t`, `\n`, `\f`, `\r`, `\"`, `\`, `\<octal>`, `\u<unicode>`. This method returns a `string`.
+
+[source,sql]
+----
+STRINGDECODE(string)
+----
+
+Example:
+
+[source,sql]
+----
+STRINGENCODE(STRINGDECODE('Lines 1\nLine 2'));
+----
+
+
+== STRINGENCODE
+
+Encodes special characters in a string using the Java string literal encoding format. Special characters are `\b`, `\t`, `\n`, `\f`, `\r`, `\"`, `\`, `\<octal>`, `\u<unicode>`. This method returns a `string`.
+
+[source,sql]
+----
+STRINGENCODE(string)
+----
+
+
+
+
+Example:
+
+[source,sql]
+----
+STRINGENCODE(STRINGDECODE('Lines 1\nLine 2'))
+----
+
+
+== STRINGTOUTF8
+
+Encodes a string to a byte array using the UTF8 encoding format. This method returns `bytes`.
+
+[source,sql]
+----
+STRINGTOUTF8(string)
+----
+
+
+
+
+Example:
+
+[source,sql]
+----
+SELECT UTF8TOSTRING(STRINGTOUTF8(name)) FROM Players;
+----
+
+
+== SUBSTRING
+
+Returns a substring of a string starting at the specified position. If the start index is negative, then the start index is relative to the end of the string. The length is optional. Also supported is: `SUBSTRING(string [FROM start] [FOR length])`.
+
+[source,sql]
+----
+{SUBSTRING | SUBSTR} (string, startInt [, lengthInt])
+----
+
+
+
+
+Example:
+
+[source,sql]
+----
+SELECT SUBSTR(name, 2, 5) FROM Players;
+----
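+
+The alternative `FROM ... FOR ...` syntax mentioned above can be used the same way:
+
+[source,sql]
+----
+SELECT SUBSTRING(name FROM 2 FOR 5) FROM Players;
+----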
+
+
+== UTF8TOSTRING
+
+Decodes a byte array in UTF8 format to a string.
+
+[source,sql]
+----
+UTF8TOSTRING(bytes)
+----
+
+
+
+
+Example:
+
+[source,sql]
+----
+SELECT UTF8TOSTRING(STRINGTOUTF8(name)) FROM Players;
+----
+
+
+== XMLATTR
+
+Creates an XML attribute element of the form name=value. The value is encoded as XML text. This method returns a `string`.
+
+[source,sql]
+----
+XMLATTR(nameString, valueString)
+----
+
+
+
+
+Example:
+
+[source,sql]
+----
+XMLNODE('a', XMLATTR('href', 'http://h2database.com'))
+----
+
+
+== XMLNODE
+
+Create an XML node element. An empty or null attribute string means no attributes are set. An empty or null content string means the node is empty. The content is indented by default if it contains a newline. This method returns a `string`.
+
+[source,sql]
+----
+XMLNODE(elementString [, attributesString [, contentString [, indentBoolean]]])
+----
+
+
+
+
+Example:
+
+[source,sql]
+----
+XMLNODE('a', XMLATTR('href', 'http://h2database.com'), 'H2')
+----
+
+
+== XMLCOMMENT
+
+Creates an XML comment. Two dashes (`--`) are converted to `- -`. This method returns a `string`.
+
+[source,sql]
+----
+XMLCOMMENT(commentString)
+----
+
+
+
+
+Example:
+
+[source,sql]
+----
+XMLCOMMENT('Test')
+----
+
+
+== XMLCDATA
+
+Creates an XML CDATA element. If the value contains `]]>`, an XML text element is created instead. This method returns a `string`.
+
+[source,sql]
+----
+XMLCDATA(valueString)
+----
+
+Example:
+
+[source,sql]
+----
+XMLCDATA('data')
+----
+
+
+== XMLSTARTDOC
+
+Returns the XML declaration. The result is always `<?xml version="1.0"?>`.
+
+[source,sql]
+----
+XMLSTARTDOC()
+----
+
+
+
+
+
+Example:
+
+[source,sql]
+----
+XMLSTARTDOC()
+----
+
+
+== XMLTEXT
+
+
+Creates an XML text element. If enabled, newline and linefeed characters are converted to XML entities (`&#`). This method returns a `string`.
+
+[source,sql]
+----
+XMLTEXT(valueString [, escapeNewlineBoolean])
+----
+
+
+
+
+
+Example:
+
+[source,sql]
+----
+XMLTEXT('test')
+----
+
+
+== TO_CHAR
+
+Formats a timestamp, number, or text.
+
+[source,sql]
+----
+TO_CHAR(value [, formatString[, nlsParamString]])
+----
+
+
+
+
+
+Example:
+
+[source,sql]
+----
+TO_CHAR(TIMESTAMP '2010-01-01 00:00:00', 'DD MON, YYYY')
+----
+
+
+== TRANSLATE
+
+Replaces a sequence of characters in a string with another set of characters.
+
+[source,sql]
+----
+TRANSLATE(value, searchString, replacementString)
+----
+
+
+
+
+
+Example:
+
+[source,sql]
+----
+TRANSLATE('Hello world', 'eo', 'EO')
+----
+
diff --git a/docs/_docs/sql-reference/system-functions.adoc b/docs/_docs/sql-reference/system-functions.adoc
new file mode 100644
index 0000000..2dd4a16
--- /dev/null
+++ b/docs/_docs/sql-reference/system-functions.adoc
@@ -0,0 +1,211 @@
+= System Functions
+
+
+== COALESCE
+
+Returns the first value that is not null.
+
+[source,sql]
+----
+{COALESCE | NVL } (aValue, bValue [,...])
+----
+
+
+
+Examples:
+[source,sql]
+----
+COALESCE(A, B, C)
+----
+
+
+
+== DECODE
+
+Returns the first matching value. NULL is considered to match NULL. If no match was found, then NULL or the last parameter (if the parameter count is even) is returned.
+
+[source,sql]
+----
+DECODE(value, whenValue, thenValue [,...])
+----
+
+
+
+Examples:
+[source,sql]
+----
+DECODE(RAND()>0.5, 0, 'Red', 1, 'Black')
+----
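+
+When the parameter count is even, the last parameter acts as a default for unmatched values; a sketch assuming a hypothetical `status` column:
+
+[source,sql]
+----
+DECODE(status, 0, 'inactive', 1, 'active', 'unknown')
+----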
+
+
+== GREATEST
+
+Returns the largest value that is not NULL, or NULL if all values are NULL.
+[source,sql]
+----
+GREATEST(aValue, bValue [,...])
+----
+
+
+
+Examples:
+[source,sql]
+----
+GREATEST(1, 2, 3)
+----
+
+
+== IFNULL
+
+Returns the value of 'a' if it is not null, otherwise 'b'.
+
+[source,sql]
+----
+IFNULL(aValue, bValue)
+----
+
+
+
+Examples:
+[source,sql]
+----
+IFNULL(NULL, '')
+----
+
+
+== LEAST
+
+Returns the smallest value that is not NULL, or NULL if all values are NULL.
+
+[source,sql]
+----
+LEAST(aValue, bValue [,...])
+----
+
+
+
+Examples:
+[source,sql]
+----
+LEAST(1, 2, 3)
+----
+
+
+== NULLIF
+
+Returns NULL if 'a' is equal to 'b', otherwise 'a'.
+
+[source,sql]
+----
+NULLIF(aValue, bValue)
+----
+
+
+
+Examples:
+[source,sql]
+----
+NULLIF(A, B)
+----
+
+
+== NVL2
+
+If the test value is null, then 'b' is returned. Otherwise, 'a' is returned. The data type of the returned value is the data type of 'a' if this is a text type.
+
+[source,sql]
+----
+NVL2(testValue, aValue, bValue)
+----
+
+
+
+Examples:
+[source,sql]
+----
+NVL2(X, 'not null', 'null')
+----
+
+
+== CASEWHEN
+
+Returns 'aValue' if the boolean expression is true, otherwise 'bValue'.
+
+[source,sql]
+----
+CASEWHEN (boolean , aValue , bValue)
+----
+
+
+
+Examples:
+[source,sql]
+----
+CASEWHEN(ID=1, 'A', 'B')
+----
+
+
+== CAST
+
+Converts a value to another data type. The following conversion rules are used:
+
+- When converting a number to a boolean, 0 is considered as false and every other value is true.
+- When converting a boolean to a number, false is 0 and true is 1.
+- When converting a number to a number of another type, the value is checked for overflow.
+- When converting a number to binary, the number of bytes will match the precision.
+- When converting a string to binary, it is hex encoded.
+- A hex string can be converted into the binary form and then to a number. If a direct conversion is not possible, the value is first converted to a string.
+
+
+
+[source,sql]
+----
+CAST (value AS dataType)
+----
+
+
+Examples:
+[source,sql]
+----
+CAST(NAME AS INT);
+CAST(65535 AS BINARY);
+CAST(CAST('FFFF' AS BINARY) AS INT);
+----
+
+
+== CONVERT
+
+Converts a value to another data type.
+
+[source,sql]
+----
+CONVERT (value , dataType)
+----
+
+
+
+Examples:
+[source,sql]
+----
+CONVERT(NAME, INT)
+----
+
+
+== TABLE
+
+
+Returns the result set. TABLE_DISTINCT removes duplicate rows.
+
+[source,sql]
+----
+{TABLE | TABLE_DISTINCT} (name dataType = expression [,...])
+----
+
+
+
+Examples:
+[source,sql]
+----
+SELECT * FROM TABLE(ID INT=(1, 2), NAME VARCHAR=('Hello', 'World'))
+----
+
diff --git a/docs/_docs/sql-reference/transactions.adoc b/docs/_docs/sql-reference/transactions.adoc
new file mode 100644
index 0000000..c37c5c4
--- /dev/null
+++ b/docs/_docs/sql-reference/transactions.adoc
@@ -0,0 +1,52 @@
+= Transactions
+
+IMPORTANT: Support for link:transactions/mvcc[SQL transactions] is currently in the beta stage. For production use, consider key-value transactions.
+
+Ignite supports the following statements that allow users to start, commit, or rollback a transaction.
+
+[source,sql]
+----
+BEGIN [TRANSACTION]
+
+COMMIT [TRANSACTION]
+
+ROLLBACK [TRANSACTION]
+----
+
+- The `BEGIN` statement begins a new transaction.
+- `COMMIT` commits the current transaction.
+- `ROLLBACK` rolls back the current transaction.
+
+NOTE: DDL statements are not supported inside transactions.
+
+== Description
+
+The `BEGIN`, `COMMIT` and `ROLLBACK` commands allow you to manage SQL Transactions. A transaction is a sequence of SQL operations that starts with the `BEGIN` statement and ends with the `COMMIT` statement. Either all of the operations in a transaction succeed or they all fail.
+
+The `ROLLBACK [TRANSACTION]` statement undoes all updates made since the last time a `COMMIT` or `ROLLBACK` command was issued.
+
+== Example
+Add a person and update the city population by 1 in a single transaction.
+
+[source,sql]
+----
+BEGIN;
+
+INSERT INTO Person (id, name, city_id) VALUES (1, 'John Doe', 3);
+
+UPDATE City SET population = population + 1 WHERE id = 3;
+
+COMMIT;
+----
+
+Roll back the changes made by the previous commands.
+
+[source,sql]
+----
+BEGIN;
+
+INSERT INTO Person (id, name, city_id) VALUES (1, 'John Doe', 3);
+
+UPDATE City SET population = population + 1 WHERE id = 3;
+
+ROLLBACK;
+----
+
diff --git a/docs/_docs/developers-guide/starting-nodes.adoc b/docs/_docs/starting-nodes.adoc
similarity index 92%
rename from docs/_docs/developers-guide/starting-nodes.adoc
rename to docs/_docs/starting-nodes.adoc
index 14331cb..d0c85f9 100644
--- a/docs/_docs/developers-guide/starting-nodes.adoc
+++ b/docs/_docs/starting-nodes.adoc
@@ -112,8 +112,8 @@ Non-graceful shutdowns should be used as a last resort when the node is not resp
 
 A graceful shutdown allows the node to finish critical operations and correctly complete its lifecycle.
 The proper procedure to perform a graceful shutdown is as follows:
-//is to use one of the following ways to stop the node and remove it from the link:developers-guide/baseline-topology[baseline topology]:
-//to remove the node from the link:developers-guide/baseline-topology[baseline topology] and
+//is to use one of the following ways to stop the node and remove it from the link:baseline-topology[baseline topology]:
+//to remove the node from the link:baseline-topology[baseline topology] and
 
 . Stop the node using one of the following methods:
 //tag::stop-commands[]
@@ -122,7 +122,7 @@ The proper procedure to perform a graceful shutdown is as follows:
 * send a user interrupt signal. Ignite uses a JVM shutdown hook to execute custom logic before the JVM stops.
 If you start the node by running `ignite.sh` and don't detach it from the terminal, you can stop the node by hitting `Ctrl+C`.
 //end::stop-commands[]
-. Remove the node from the link:developers-guide/baseline-topology[baseline topology]. This step may not be necessary if link:developers-guide/baseline-topology#baseline-topology-autoadjustment[baseline auto-adjustment] is enabled.
+. Remove the node from the link:baseline-topology[baseline topology]. This step may not be necessary if link:baseline-topology#baseline-topology-autoadjustment[baseline auto-adjustment] is enabled.
 
 
 
@@ -165,11 +165,11 @@ To enable partition loss prevention, set the `IGNITE_WAIT_FOR_BACKUPS_ON_SHUTDOW
 
 CAUTION: If you have a cache without partition backups and you stop a node (even with this property set), you will lose the portion of the cache that was kept on this node.
 
-//The behavior of the node depend on whether the baseline topology is configured to be link:developers-guide/baseline-topology#baseline-topology-autoadjustment[adjusted automatically].
+//The behavior of the node depend on whether the baseline topology is configured to be link:baseline-topology#baseline-topology-autoadjustment[adjusted automatically].
 
 When this property is set, the last node in the cluster will not stop gracefully.
 You will have to terminate the process by sending the `kill -9` signal.
-If you want to shut down the entire cluster, link:administrators-guide/control-script#deactivating-cluster[deactivate] it and then stop all the nodes.
+If you want to shut down the entire cluster, link:control-script#deactivating-cluster[deactivate] it and then stop all the nodes.
 Alternatively, you can stop all the nodes non-gracefully (by sending `kill -9`).
 However, the latter option is not recommended for clusters with persistence.
 ////
diff --git a/docs/_docs/developers-guide/thin-client-comparison.csv b/docs/_docs/thin-client-comparison.csv
similarity index 100%
rename from docs/_docs/developers-guide/thin-client-comparison.csv
rename to docs/_docs/thin-client-comparison.csv
diff --git a/docs/_docs/developers-guide/thin-clients/cpp-thin-client.adoc b/docs/_docs/thin-clients/cpp-thin-client.adoc
similarity index 88%
rename from docs/_docs/developers-guide/thin-clients/cpp-thin-client.adoc
rename to docs/_docs/thin-clients/cpp-thin-client.adoc
index 5fb5499..1792602 100644
--- a/docs/_docs/developers-guide/thin-clients/cpp-thin-client.adoc
+++ b/docs/_docs/thin-clients/cpp-thin-client.adoc
@@ -103,7 +103,7 @@ include::code-snippets/cpp/src/thin_client_cache.cpp[tag=basic-cache-operations,
 
 === SSL/TLS
 
-To use encrypted communication between the thin client and the cluster, you have to enable SSL/TLS both in the cluster configuration and the client configuration. Refer to the link:developers-guide/thin-clients/getting-started-with-thin-clients#enabling-ssltls-for-thin-clients[Enabling SSL/TLS for Thin Clients] section for instructions on the cluster configuration.
+To use encrypted communication between the thin client and the cluster, you have to enable SSL/TLS both in the cluster configuration and the client configuration. Refer to the link:thin-clients/getting-started-with-thin-clients#enabling-ssltls-for-thin-clients[Enabling SSL/TLS for Thin Clients] section for instructions on the cluster configuration.
 
 [source, cpp]
 ----
@@ -112,7 +112,7 @@ include::code-snippets/cpp/src/thin_client_ssl.cpp[tag=thin-client-ssl,indent=0]
 
 === Authentication
 
-Configure link:administrators-guide/security/authentication[authentication on the cluster side] and provide a valid user name and password in the client configuration.
+Configure link:security/authentication[authentication on the cluster side] and provide a valid user name and password in the client configuration.
 
 [source, cpp]
 ----
diff --git a/docs/_docs/developers-guide/thin-clients/dotnet-thin-client.adoc b/docs/_docs/thin-clients/dotnet-thin-client.adoc
similarity index 88%
rename from docs/_docs/developers-guide/thin-clients/dotnet-thin-client.adoc
rename to docs/_docs/thin-clients/dotnet-thin-client.adoc
index 4a7b939..07cba4f 100644
--- a/docs/_docs/developers-guide/thin-clients/dotnet-thin-client.adoc
+++ b/docs/_docs/thin-clients/dotnet-thin-client.adoc
@@ -109,7 +109,7 @@ include::code-snippets/dotnet/ThinClient.cs[tag=basicOperations,indent=0]
 
 
 === Working With Binary Objects
-The .NET thin client supports the Binary Object API described in the link:developers-guide/key-value-api/binary-objects[Working with Binary Objects] section. Use `ICacheClient.WithKeepBinary()` to switch the cache to binary mode and start working directly with binary objects avoiding serialization/deserialization. Use `IIgniteClient.GetBinary()` to get an instance of `IBinary` and build an object from scratch.
+The .NET thin client supports the Binary Object API described in the link:key-value-api/binary-objects[Working with Binary Objects] section. Use `ICacheClient.WithKeepBinary()` to switch the cache to binary mode and start working directly with binary objects avoiding serialization/deserialization. Use `IIgniteClient.GetBinary()` to get an instance of `IBinary` and build an object from scratch.
 
 [source, csharp]
 ----
@@ -150,7 +150,7 @@ include::code-snippets/dotnet/ThinClient.cs[tag=executingSql,indent=0]
 
 === SSL/TLS
 
-To use encrypted communication between the thin client and the cluster, you have to enable SSL/TLS in both the cluster configuration and the client configuration. Refer to the link:developers-guide/thin-clients/getting-started-with-thin-clients#enabling-ssltls-for-thin-clients[Enabling SSL/TLS for Thin Clients] section for the instruction on the cluster configuration.
+To use encrypted communication between the thin client and the cluster, you have to enable SSL/TLS in both the cluster configuration and the client configuration. Refer to the link:thin-clients/getting-started-with-thin-clients#enabling-ssltls-for-thin-clients[Enabling SSL/TLS for Thin Clients] section for the instruction on the cluster configuration.
 
 The following code example demonstrates how to configure SSL parameters in the thin client.
 [source, csharp]
@@ -162,7 +162,7 @@ include::code-snippets/dotnet/ThinClient.cs[tag=ssl,indent=0]
 === Authentication
 
 
-Configure link:administrators-guide/security/authentication[authentication on the cluster side] and provide a valid user name and password in the client configuration.
+Configure link:security/authentication[authentication on the cluster side] and provide a valid user name and password in the client configuration.
 
 [source, csharp]
 ----
diff --git a/docs/_docs/developers-guide/thin-clients/getting-started-with-thin-clients.adoc b/docs/_docs/thin-clients/getting-started-with-thin-clients.adoc
similarity index 79%
rename from docs/_docs/developers-guide/thin-clients/getting-started-with-thin-clients.adoc
rename to docs/_docs/thin-clients/getting-started-with-thin-clients.adoc
index 67928e3..e9d31ef 100644
--- a/docs/_docs/developers-guide/thin-clients/getting-started-with-thin-clients.adoc
+++ b/docs/_docs/thin-clients/getting-started-with-thin-clients.adoc
@@ -1,20 +1,20 @@
 = Thin Clients Overview
 
 == Overview
-A thin client is a lightweight Ignite client that connects to the cluster via a standard socket connection. 
-It does not become a part of the cluster topology, never holds any data, and is not used as a destination for compute grid calculations. 
+A thin client is a lightweight Ignite client that connects to the cluster via a standard socket connection.
+It does not become a part of the cluster topology, never holds any data, and is not used as a destination for compute grid calculations.
 What it does is simply establish a socket connection to a standard Ignite node and perform all operations through that node.
 
 Thin clients are based on the link:https://apacheignite.readme.io/docs/binary-client-protocol[binary client protocol], which makes it possible to support Ignite connectivity from any programming language.
 
 Ignite provides the following thin clients:
 
-* link:developers-guide/thin-clients/java-thin-client[Java Thin Client]
-* link:developers-guide/thin-clients/dotnet-thin-client[.NET/C# Thin Client]
-* link:developers-guide/thin-clients/cpp-thin-client[C++ Thin Client]
-* link:developers-guide/thin-clients/python-thin-client[Python Thin Client]
-* link:developers-guide/thin-clients/nodejs-thin-client[Node.js Thin Client]
-* link:developers-guide/thin-clients/php-thin-client[PHP Thin Client]
+* link:thin-clients/java-thin-client[Java Thin Client]
+* link:thin-clients/dotnet-thin-client[.NET/C# Thin Client]
+* link:thin-clients/cpp-thin-client[C++ Thin Client]
+* link:thin-clients/python-thin-client[Python Thin Client]
+* link:thin-clients/nodejs-thin-client[Node.js Thin Client]
+* link:thin-clients/php-thin-client[PHP Thin Client]
 
 ////
 *TODO: add a diagram of a thin client connecting to the cluster (multiple nodes) and how a request is rerouted to the node that hosts the data*
@@ -27,7 +27,7 @@ The following table outlines features supported by each client.
 
 [%header,format=csv,cols="2,1,1,1,1,1,1"]
 |===
-include::developers-guide/thin-client-comparison.csv[]
+include::thin-client-comparison.csv[]
 |===
 
 === Client Connection Failover
@@ -39,7 +39,7 @@ Refer to the specific client documentation for more details.
 [#partition-awareness]
 === Partition Awareness
 
-As explained in the link:developers-guide/data-modeling/data-partitioning[Data Partitioning] section, data in the cluster is distributed between the nodes in a balanced manner for scalability and performance reasons.
+As explained in the link:data-modeling/data-partitioning[Data Partitioning] section, data in the cluster is distributed between the nodes in a balanced manner for scalability and performance reasons.
 Each cluster node maintains a subset of the data and the partition distribution map, which is used to determine the node that keeps the primary/backup copy of requested entries.
 
 include::includes/partition-awareness.adoc[]
@@ -49,7 +49,7 @@ Refer to the documentation of the specific client for more information.
 
 === Authentication
 
-All thin clients support authentication in the cluster side. Authentication is link:administrators-guide/security/authentication[configured in the cluster] configuration, and the client simply provide user credentials.
+All thin clients support authentication on the cluster side. Authentication is link:security/authentication[configured in the cluster configuration], and the client simply provides user credentials.
 Refer to the documentation of the specific client for more information.
 
 == Cluster Configuration
@@ -108,5 +108,5 @@ See the complete list of parameters in the link:{javadoc_base_url}/org/apache/ig
 
 === Enabling SSL/TLS for Thin Clients
 
-Refer to the  link:administrators-guide/security/ssl-tls#ssl-for-clients[SSL for Thin Clients and JDBC/ODBC] section.
+Refer to the link:security/ssl-tls#ssl-for-clients[SSL for Thin Clients and JDBC/ODBC] section.
 
diff --git a/docs/_docs/developers-guide/thin-clients/java-thin-client.adoc b/docs/_docs/thin-clients/java-thin-client.adoc
similarity index 89%
rename from docs/_docs/developers-guide/thin-clients/java-thin-client.adoc
rename to docs/_docs/thin-clients/java-thin-client.adoc
index 00db957..21ac06c 100644
--- a/docs/_docs/developers-guide/thin-clients/java-thin-client.adoc
+++ b/docs/_docs/thin-clients/java-thin-client.adoc
@@ -3,7 +3,7 @@
 :sourceCodeFile: {javaCodeDir}/JavaThinClient.java
 == Overview
 
-The Java thin client is a lightweight client that connects to the cluster via a standard socket connection. It does not become a part of the cluster topology, never holds any data, and is not used as a destination for compute grid calculations. The thin client simply establishes a socket connection to a standard node​ and performs all operations through that node.
+The Java thin client is a lightweight client that connects to the cluster via a standard socket connection. It does not become a part of the cluster topology, never holds any data, and is not used as a destination for compute calculations. The thin client simply establishes a socket connection to a standard node and performs all operations through that node.
 
 To start a single node cluster that you can use to run examples, refer to the link:getting-started/quick-start/java[Java Quick Start Guide].
 
@@ -93,7 +93,7 @@ include::{sourceCodeFile}[tag=key-value-operations,indent=0]
 -------------------------------------------------------------------------------
 
 === Executing Scan Queries
-Use the `ScanQuery<K, V>` class to get a set of entries that satisfy a given condition. The thin client sends the query to the cluster node where it is executed as a normal link:developers-guide/key-value-api/using-scan-queries[scan query].
+Use the `ScanQuery<K, V>` class to get a set of entries that satisfy a given condition. The thin client sends the query to the cluster node where it is executed as a normal link:key-value-api/using-scan-queries[scan query].
 
 The query condition is specified by an `IgniteBiPredicate<K, V>` object that is passed to the query constructor as an argument. The predicate is applied on the server side. If you don't provide any predicate, the query returns all cache entries.
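+
+A minimal sketch, assuming `client` is a connected `IgniteClient` and the cluster has a cache named `persons` that stores `Person` objects (both names are placeholders):
+
+[source, java]
+-------------------------------------------------------------------------------
+ClientCache<Integer, Person> cache = client.cache("persons");
+
+// The filter is applied on the server side; its class must be available on the server nodes.
+ScanQuery<Integer, Person> qry = new ScanQuery<>((Integer key, Person p) -> p.getAge() > 30);
+
+try (QueryCursor<Cache.Entry<Integer, Person>> cursor = cache.query(qry)) {
+    cursor.forEach(entry -> System.out.println(entry.getValue()));
+}
+-------------------------------------------------------------------------------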
 
@@ -126,7 +126,7 @@ include::{sourceCodeFile}[tags=tx,indent=0]
 
 ==== Transaction Configuration
 
-Client transactions can have different link:developers-guide/key-value-api/transactions#concurrency-modes-and-isolation-levels[concurrency modes, isolation levels], and execution timeout, which can be set for all transactions or on a per transaction basis.
+Client transactions can have different link:key-value-api/transactions#concurrency-modes-and-isolation-levels[concurrency modes and isolation levels], as well as an execution timeout, all of which can be set for all transactions or on a per-transaction basis.
 
 The `ClientConfiguration` object supports setting the default concurrency mode, isolation level, and timeout for all transactions started with this client interface.
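+
+A minimal sketch of setting the defaults and overriding them for a single transaction (the address, cache name, and timeout are placeholders; the cache is assumed to be `TRANSACTIONAL`):
+
+[source, java]
+-------------------------------------------------------------------------------
+ClientConfiguration cfg = new ClientConfiguration()
+    .setAddresses("127.0.0.1:10800")
+    .setTransactionConfiguration(new ClientTransactionConfiguration()
+        .setDefaultTxConcurrency(TransactionConcurrency.OPTIMISTIC)
+        .setDefaultTxIsolation(TransactionIsolation.SERIALIZABLE)
+        .setDefaultTxTimeout(10_000));
+
+try (IgniteClient client = Ignition.startClient(cfg)) {
+    ClientCache<Integer, String> cache = client.cache("myCache");
+
+    // Per-transaction settings override the configured defaults.
+    try (ClientTransaction tx = client.transactions()
+            .txStart(TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ)) {
+        cache.put(1, "value");
+        tx.commit();
+    }
+}
+-------------------------------------------------------------------------------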
 
@@ -147,7 +147,7 @@ include::{sourceCodeFile}[tags=tx-custom-properties,indent=0]
 
 
 === Working with Binary Objects
-The thin client fully supports Binary Object API described in the link:developers-guide/key-value-api/binary-objects[Working with Binary Objects] section.
+The thin client fully supports the Binary Object API described in the link:key-value-api/binary-objects[Working with Binary Objects] section.
 Use `ClientCache.withKeepBinary()` to switch the cache to binary mode and start working directly with binary objects to avoid serialization/deserialization.
 Use `IgniteClient.binary()` to get an instance of `IgniteBinary` and build an object from scratch.
 
@@ -156,7 +156,7 @@ Use `IgniteClient.binary()` to get an instance of `IgniteBinary` and build an ob
 include::{sourceCodeFile}[tag=binary-example,indent=0]
 -------------------------------------------------------------------------------
 
-Refer to the link:developers-guide/key-value-api/binary-objects[Working with Binary Objects] page for detailed information.
+Refer to the link:key-value-api/binary-objects[Working with Binary Objects] page for detailed information.
 
 == Executing SQL Statements
 
@@ -170,7 +170,7 @@ The `query(SqlFieldsQuery)` method returns an instance of `FieldsQueryCursor`, w
 
 NOTE: The `getAll()` method retrieves the results from the cursor and closes it.
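+
+A minimal sketch, assuming `client` is a connected `IgniteClient` and the cluster contains a `Person` table with `name` and `age` columns:
+
+[source, java]
+-------------------------------------------------------------------------------
+SqlFieldsQuery qry = new SqlFieldsQuery("SELECT name FROM Person WHERE age > ?").setArgs(30);
+
+// Closing the cursor releases server-side resources; getAll() would close it automatically.
+try (FieldsQueryCursor<List<?>> cursor = client.query(qry)) {
+    for (List<?> row : cursor)
+        System.out.println(row.get(0));
+}
+-------------------------------------------------------------------------------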
 
-Read more about using `SqlFieldsQuery` and SQL API in the link:developers-guide/SQL/sql-api[Using SQL API] section.
+Read more about using `SqlFieldsQuery` and SQL API in the link:SQL/sql-api[Using SQL API] section.
 
 == Handling Exceptions
 
@@ -189,7 +189,7 @@ include::{sourceCodeFile}[tag=results-to-map,indent=0]
 
 === SSL/TLS
 
-To use encrypted communication between the thin client and the cluster, you have to enable SSL/TLS in both the cluster configuration and the client configuration. Refer to the link:developers-guide/thin-clients/getting-started-with-thin-clients#enabling-ssltls-for-thin-clients[Enabling SSL/TLS for Thin Clients] section for the instruction on the cluster configuration.
+To use encrypted communication between the thin client and the cluster, you have to enable SSL/TLS in both the cluster configuration and the client configuration. Refer to the link:thin-clients/getting-started-with-thin-clients#enabling-ssltls-for-thin-clients[Enabling SSL/TLS for Thin Clients] section for instructions on the cluster configuration.
 
 To enable encrypted communication in the thin client, provide a keystore that contains the encryption key and a truststore with the trusted certificates in the thin client configuration.
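+
+A minimal sketch of the client-side settings (the address, keystore paths, and passwords are placeholders):
+
+[source, java]
+-------------------------------------------------------------------------------
+ClientConfiguration cfg = new ClientConfiguration()
+    .setAddresses("127.0.0.1:10800")
+    .setSslMode(SslMode.REQUIRED)
+    .setSslClientCertificateKeyStorePath("client.jks")
+    .setSslClientCertificateKeyStorePassword("password")
+    .setSslTrustCertificateKeyStorePath("trust.jks")
+    .setSslTrustCertificateKeyStorePassword("password");
+
+IgniteClient client = Ignition.startClient(cfg);
+-------------------------------------------------------------------------------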
 
@@ -217,7 +217,7 @@ The following table explains encryption parameters of the client configuration:
 
 === Authentication
 
-Configure link:administrators-guide/security/authentication[authentication on the cluster side] and provide the user name and password in the client configuration.
+Configure link:security/authentication[authentication on the cluster side] and provide the user name and password in the client configuration.
 
 [source, java]
 -------------------------------------------------------------------------------
diff --git a/docs/_docs/developers-guide/thin-clients/nodejs-thin-client.adoc b/docs/_docs/thin-clients/nodejs-thin-client.adoc
similarity index 90%
rename from docs/_docs/developers-guide/thin-clients/nodejs-thin-client.adoc
rename to docs/_docs/thin-clients/nodejs-thin-client.adoc
index 27a08f3..2a540a0 100644
--- a/docs/_docs/developers-guide/thin-clients/nodejs-thin-client.adoc
+++ b/docs/_docs/thin-clients/nodejs-thin-client.adoc
@@ -51,7 +51,7 @@ You can create as many `IgniteClient` instances as needed. All of them will work
 
 == Connecting to Cluster
 To connect the client to a cluster, use the `IgniteClient.connect()` method.
-It accepts an object of the `IgniteClientConfiguration` class that represents connection parameters. The connection parameters must contain a list of nodes (in the `host:port` format) that will be used for link:developers-guide/thin-clients/getting-started-with-thin-clients#client-connection-failover[failover purposes].
+It accepts an object of the `IgniteClientConfiguration` class that represents connection parameters. The connection parameters must contain a list of nodes (in the `host:port` format) that will be used for link:thin-clients/getting-started-with-thin-clients#client-connection-failover[failover purposes].
 
 [source, js]
 ----
@@ -62,7 +62,7 @@ The client has three connection states: `CONNECTING`, `CONNECTED`, `DISCONNECTED
 You can specify a callback function in the client configuration object, which will be called every time the connection state changes.
 
 Interactions with the cluster are only possible in the `CONNECTED` state.
-If the client loses the connection, it automatically switches to the `CONNECTING` state and tries to re-connect using the link:developers-guide/thin-clients/getting-started-with-thin-clients#client-connection-failover[failover mechanism]. If it fails to reconnect to all the endpoints from the provided list, the client switches to the `DISCONNECTED` state.
+If the client loses the connection, it automatically switches to the `CONNECTING` state and tries to re-connect using the link:thin-clients/getting-started-with-thin-clients#client-connection-failover[failover mechanism]. If it fails to reconnect to all the endpoints from the provided list, the client switches to the `DISCONNECTED` state.
 
 You can call the `disconnect()` method to close the connection. This will switch the client to the `DISCONNECTED` state.
 
@@ -207,7 +207,7 @@ include::{source_code_dir}/sql-fields-query.js[indent=0]
 == Security
 
 === SSL/TLS
-To use encrypted communication between the thin client and the cluster, you have to enable SSL/TLS both in the cluster configuration and the client configuration. Refer to the link:developers-guide/thin-clients/getting-started-with-thin-clients#enabling-ssltls-for-thin-clients[Enabling SSL/TLS for Thin Clients] section for instructions on the cluster configuration.
+To use encrypted communication between the thin client and the cluster, you have to enable SSL/TLS both in the cluster configuration and the client configuration. Refer to the link:thin-clients/getting-started-with-thin-clients#enabling-ssltls-for-thin-clients[Enabling SSL/TLS for Thin Clients] section for instructions on the cluster configuration.
 
 Here is an example configuration for enabling SSL in the thin client:
 
@@ -219,7 +219,7 @@ include::{source_code_dir}/tls.js[indent=0]
 
 
 === Authentication
-Configure link:administrators-guide/security/authentication[authentication on the cluster side] and provide a valid user name and password in the client configuration.
+Configure link:security/authentication[authentication on the cluster side] and provide a valid user name and password in the client configuration.
 
 [source, js]
 ----
diff --git a/docs/_docs/developers-guide/thin-clients/php-thin-client.adoc b/docs/_docs/thin-clients/php-thin-client.adoc
similarity index 89%
rename from docs/_docs/developers-guide/thin-clients/php-thin-client.adoc
rename to docs/_docs/thin-clients/php-thin-client.adoc
index 5b3b436..c7b04d2 100644
--- a/docs/_docs/developers-guide/thin-clients/php-thin-client.adoc
+++ b/docs/_docs/thin-clients/php-thin-client.adoc
@@ -46,7 +46,7 @@ To connect to a cluster, define a `ClientConfiguration` object with the desired
 include::code-snippets/php/ConnectingToCluster.php[tag=connecting,indent=0]
 ----
 
-The `ClientConfiguration` constructor accepts a list of node endpoints. At least one endpoint must be specified. If you specify more than one, the thin client will use them for link:developers-guide/thin-clients/getting-started-with-thin-clients#client-connection-failover[failover purposes].
+The `ClientConfiguration` constructor accepts a list of node endpoints. At least one endpoint must be specified. If you specify more than one, the thin client will use them for link:thin-clients/getting-started-with-thin-clients#client-connection-failover[failover purposes].
 
 If the client cannot connect to the cluster, a `NoConnectionException` is thrown when attempting to perform any remote operation.
 
@@ -115,7 +115,7 @@ include::code-snippets/php/UsingKeyValueApi.php[tag=executingSql,indent=0]
 == Security
 
 === SSL/TLS
-To use encrypted communication between the thin client and the cluster, you have to enable it both in the cluster configuration and the client configuration. Refer to the link:developers-guide/thin-clients/getting-started-with-thin-clients#enabling-ssltls-for-thin-clients[Enabling SSL/TLS for Thin Clients] section for the instruction on the cluster configuration.
+To use encrypted communication between the thin client and the cluster, you have to enable it both in the cluster configuration and the client configuration. Refer to the link:thin-clients/getting-started-with-thin-clients#enabling-ssltls-for-thin-clients[Enabling SSL/TLS for Thin Clients] section for instructions on the cluster configuration.
 
 Here is an example configuration for enabling SSL in the thin client:
 [source, php]
@@ -124,7 +124,7 @@ include::code-snippets/php/Security.php[tag=tls,indent=0]
 ----
 
 === Authentication
-Configure link:administrators-guide/security/authentication[authentication on the cluster side] and provide a valid user name and password in the client configuration.
+Configure link:security/authentication[authentication on the cluster side] and provide a valid user name and password in the client configuration.
 
 [source, php]
 ----
diff --git a/docs/_docs/developers-guide/thin-clients/python-thin-client.adoc b/docs/_docs/thin-clients/python-thin-client.adoc
similarity index 91%
rename from docs/_docs/developers-guide/thin-clients/python-thin-client.adoc
rename to docs/_docs/thin-clients/python-thin-client.adoc
index 19f0599..6fb9212 100644
--- a/docs/_docs/developers-guide/thin-clients/python-thin-client.adoc
+++ b/docs/_docs/thin-clients/python-thin-client.adoc
@@ -146,21 +146,21 @@ The list of property keys that you can specify are provided in the `prop_codes`
 
 |PROP_CACHE_MODE
 |int
-a| link:developers-guide/data-modeling/data-partitioning#partitionedreplicated-mode[Cache mode]:
+a| link:data-modeling/data-partitioning#partitionedreplicated-mode[Cache mode]:
 
 * REPLICATED=1,
 * PARTITIONED=2
 
 |PROP_CACHE_ATOMICITY_MODE
 |int
-a|link:developers-guide/configuring-caches/atomicity-modes[Cache atomicity mode]:
+a|link:configuring-caches/atomicity-modes[Cache atomicity mode]:
 
 * TRANSACTIONAL=0,
 * ATOMIC=1
 
 |PROP_BACKUPS_NUMBER
 |int
-|link:developers-guide/data-modeling/data-partitioning#backup-partitions[Number of backup partitions].
+|link:data-modeling/data-partitioning#backup-partitions[Number of backup partitions].
 
 |PROP_WRITE_SYNCHRONIZATION_MODE
 |int
@@ -180,11 +180,11 @@ a|Write synchronization mode:
 
 |PROP_DATA_REGION_NAME
 |str
-| link:developers-guide/memory-configuration/data-regions[Data region] name.
+| link:memory-configuration/data-regions[Data region] name.
 
 |PROP_IS_ONHEAP_CACHE_ENABLED
 |bool
-|Enable link:developers-guide/configuring-caches/on-heap-caching[on-heap caching] for the cache.
+|Enable link:configuring-caches/on-heap-caching[on-heap caching] for the cache.
 
 |PROP_QUERY_ENTITIES
 |list
@@ -266,7 +266,7 @@ a|Rebalancing mode:
 
 |PROP_PARTITION_LOSS_POLICY
 |int
-a|link:developers-guide/partition-loss-policy[Partition loss policy]:
+a|link:partition-loss-policy[Partition loss policy]:
 
 - READ_ONLY_SAFE=0,
 - READ_ONLY_ALL=1,
@@ -276,7 +276,7 @@ a|link:developers-guide/partition-loss-policy[Partition loss policy]:
 
 |PROP_EAGER_TTL
 |bool
-|link:developers-guide/configuring-caches/expiry-policies#eager-ttl[Eager TTL]
+|link:configuring-caches/expiry-policies#eager-ttl[Eager TTL]
 
 |PROP_STATISTICS_ENABLED
 |bool
@@ -284,7 +284,7 @@ a|link:developers-guide/partition-loss-policy[Partition loss policy]:
 |===
 
 ==== Query Entities
-Query entities are objects that describe link:developers-guide/SQL/sql-api#configuring-queryable-fields[queryable fields], i.e. the fields of the cache objects that can be queried using SQL queries.
+Query entities are objects that describe link:SQL/sql-api#configuring-queryable-fields[queryable fields], i.e., the fields of the cache objects that can be queried with SQL.
 
 - `table_name`: SQL table name.
 - `key_field_name`: name of the key field.
@@ -427,7 +427,7 @@ include::{sourceFileDir}/sql.py[tag=field-names,indent=0]
 
 === SSL/TLS
 To use encrypted communication between the thin client and the cluster, you have to enable SSL/TLS both in the cluster configuration and the client configuration.
-Refer to the link:developers-guide/thin-clients/getting-started-with-thin-clients#enabling-ssltls-for-thin-clients[Enabling SSL/TLS for Thin Clients] section for the instruction on the cluster configuration.
+Refer to the link:thin-clients/getting-started-with-thin-clients#enabling-ssltls-for-thin-clients[Enabling SSL/TLS for Thin Clients] section for instructions on the cluster configuration.
 
 Here is an example configuration for enabling SSL in the thin client:
 [source, python]
@@ -455,7 +455,7 @@ if provided,
 |===
 
 === Authentication
-Configure link:administrators-guide/security/authentication[authentication on the cluster side] and provide a valid user name and password in the client configuration.
+Configure link:security/authentication[authentication on the cluster side] and provide a valid user name and password in the client configuration.
 
 [source, python]
 ----
diff --git a/docs/_docs/developers-guide/transactions/mvcc.adoc b/docs/_docs/transactions/mvcc.adoc
similarity index 90%
rename from docs/_docs/developers-guide/transactions/mvcc.adoc
rename to docs/_docs/transactions/mvcc.adoc
index b6354a8..55fcd28 100644
--- a/docs/_docs/developers-guide/transactions/mvcc.adoc
+++ b/docs/_docs/transactions/mvcc.adoc
@@ -4,7 +4,7 @@ IMPORTANT: MVCC is currently in beta.
 
 == Overview
 
-Caches with the `TRANSACTIONAL_SNAPSHOT` atomicity mode support SQL transactions as well as link:developers-guide/key-value-api/transactions[key-value transactions] and enable multiversion concurrency control (MVCC) for both types of transactions.
+Caches with the `TRANSACTIONAL_SNAPSHOT` atomicity mode support SQL transactions as well as link:key-value-api/transactions[key-value transactions] and enable multiversion concurrency control (MVCC) for both types of transactions.
 
 
 == Multiversion Concurrency Control
@@ -43,7 +43,7 @@ CREATE TABLE Person WITH "ATOMICITY=TRANSACTIONAL_SNAPSHOT"
 ----
 --
 
-NOTE: The `TRANSACTIONAL_SNAPSHOT` mode only supports the default concurrency mode (`PESSIMISTIC`) and default isolation level (`REPEATABLE_READ`). See link:developers-guide/key-value-api/transactions#concurrency-modes-and-isolation-levels[Concurrency modes and isolation levels] for details.
+NOTE: The `TRANSACTIONAL_SNAPSHOT` mode only supports the default concurrency mode (`PESSIMISTIC`) and default isolation level (`REPEATABLE_READ`). See link:key-value-api/transactions#concurrency-modes-and-isolation-levels[Concurrency modes and isolation levels] for details.
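+
+A minimal sketch of an explicit key-value transaction over an MVCC-enabled cache (the cache name is a placeholder; the cache is assumed to use `TRANSACTIONAL_SNAPSHOT`):
+
+[source, java]
+----
+IgniteCache<Integer, Integer> cache = ignite.cache("mvccCache");
+
+// Only the default mode and isolation level are supported for MVCC caches.
+try (Transaction tx = ignite.transactions().txStart(
+        TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ)) {
+    cache.put(1, 100);
+    tx.commit();
+}
+----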
 
 
 == Concurrent Updates
@@ -161,7 +161,7 @@ When a nested transaction occurs within another transaction, the `nestedTransact
 
 
 === Continuous Queries
-If you use link:developers-guide/key-value-api/continuous-queries[Continuous Queries] with an MVCC-enabled cache, there are several limitations that you should be aware of:
+If you use link:key-value-api/continuous-queries[Continuous Queries] with an MVCC-enabled cache, there are several limitations that you should be aware of:
 
 * When an update event is received, subsequent reads of the updated key may return the old value for a period of time before the MVCC-coordinator learns of the update. This is because the update event is sent from the node where the key is updated, as soon as it is updated. In such a case, the MVCC-coordinator may not be immediately aware of that update, and therefore, subsequent reads may return outdated information during that period of time.
 * There is a limit on the number of keys per node a single transaction can update when continuous queries are used. The updated values are kept in memory, and if there are too many updates, the node might not have enough RAM to keep all the objects. To avoid OutOfMemory errors, each transaction is allowed to update at most 20,000 keys (the default value) on a single node. If this value is exceeded, the transaction will throw an exception and will be rolled back. This number can be change [...]
@@ -169,11 +169,11 @@ If you use link:developers-guide/key-value-api/continuous-queries[Continuous Que
 === Other Limitations
 The following features are not supported for the MVCC-enabled caches. These limitations may be addressed in future releases.
 
-* link:developers-guide/near-cache[Near Caches]
-* link:developers-guide/configuring-caches/expiry-policies[Expiry Policies]
-* link:developers-guide/events/listening-to-events[Events]
+* link:near-cache[Near Caches]
+* link:configuring-caches/expiry-policies[Expiry Policies]
+* link:events/listening-to-events[Events]
 * link:{javadoc_base_url}/org/apache/ignite/cache/CacheInterceptor.html[Cache Interceptors]
-* link:developers-guide/persistence/external-storage[External Storage]
-* link:developers-guide/configuring-caches/on-heap-caching[On-Heap Caching]
+* link:persistence/external-storage[External Storage]
+* link:configuring-caches/on-heap-caching[On-Heap Caching]
 * link:{javadoc_base_url}/org/apache/ignite/IgniteCache.html#lock-K-[Explicit Locks]
 * The link:{javadoc_base_url}/org/apache/ignite/IgniteCache.html#localEvict-java.util.Collection-[localEvict()] and link:{javadoc_base_url}/org/apache/ignite/IgniteCache.html#localPeek-K-org.apache.ignite.cache.CachePeekMode...-[localPeek()] methods
diff --git a/docs/_docs/developers-guide/understanding-configuration.adoc b/docs/_docs/understanding-configuration.adoc
similarity index 92%
rename from docs/_docs/developers-guide/understanding-configuration.adoc
rename to docs/_docs/understanding-configuration.adoc
index 2626fe7..4e3d1d2 100644
--- a/docs/_docs/developers-guide/understanding-configuration.adoc
+++ b/docs/_docs/understanding-configuration.adoc
@@ -6,8 +6,7 @@ This chapter explains different ways of configuring an Ignite cluster.
 == Overview
 
 When you start a node, you need to provide configuration parameters to the node.
-Basically, all configuration parameters are defined in an instance of the link:{javadoc_base_url}/org/apache/ignite/configuration/IgniteConfiguration.html[IgniteConfiguration]
-class.
+All configuration parameters are defined in an instance of the javadoc:org.apache.ignite.configuration.IgniteConfiguration[] class.
 You can set the parameters either programmatically or via an XML configuration file.
 These two ways are fully interchangeable.
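+
+For illustration, a minimal programmatic sketch (the work directory path and cache name are placeholders):
+
+[source, java]
+----
+IgniteConfiguration cfg = new IgniteConfiguration();
+cfg.setWorkDirectory("/path/to/work/directory");
+cfg.setCacheConfiguration(new CacheConfiguration<>("myCache"));
+
+// Start a node with this configuration.
+Ignite ignite = Ignition.start(cfg);
+----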
 
@@ -28,7 +27,7 @@ To create a configuration in a Spring XML format, you need to define the
 https://docs.spring.io/spring/docs/4.2.x/spring-framework-reference/html/xsd-configuration.html[official
 Spring documentation].
 
-In the example below, we create an `IgniteConfiguration` bean, set the `workDirectory` property, and configure a link:developers-guide/data-modeling/data-partitioning#partitioned[partitioned cache].
+In the example below, we create an `IgniteConfiguration` bean, set the `workDirectory` property, and configure a link:data-modeling/data-partitioning#partitioned[partitioned cache].
 
 [source,xml]
 ----
diff --git a/docs/_plugins/asciidoctor-extensions.rb b/docs/_plugins/asciidoctor-extensions.rb
index d605e93..c52c3bc 100644
--- a/docs/_plugins/asciidoctor-extensions.rb
+++ b/docs/_plugins/asciidoctor-extensions.rb
@@ -68,6 +68,32 @@ Asciidoctor::Extensions.register do
 end
 
 
+# Inline macro that turns javadoc:org.apache.ignite.Foo[optional text] into a link
+# to the corresponding class page under the javadoc_base_url document attribute.
+class JavadocUrlMacro < Extensions::InlineMacroProcessor
+  use_dsl
+
+  named :javadoc
+  name_positional_attributes 'text'
+
+  def process parent, target, attrs
+    # The target is a fully qualified class name, e.g. org.apache.ignite.Ignite.
+    parts = target.split('.')
+
+    # Use the simple class name as the link text unless explicit text is provided.
+    if attrs['text'].nil?
+      text = parts.last
+    else
+      text = attrs['text']
+    end
+
+    # Build the absolute javadoc URL and open the link in a new tab.
+    target = parent.document.attributes['javadoc_base_url'] + '/' + parts.join('/') + '.html'
+    attrs.store('window', '_blank')
+
+    (create_anchor parent, text, type: :link, target: target, attributes: attrs).render
+  end
+end
+
+Asciidoctor::Extensions.register do
+  inline_macro JavadocUrlMacro  
+end
 Extensions.register do 
  inline_macro do
    named :link