Posted to commits@ignite.apache.org by dm...@apache.org on 2020/12/22 16:32:38 UTC

[ignite] branch ignite-2.9.1 updated: IGNITE-13884 Merged docs into 2.9.1 from 2.9 branch with updates (#8598)

This is an automated email from the ASF dual-hosted git repository.

dmagda pushed a commit to branch ignite-2.9.1
in repository https://gitbox.apache.org/repos/asf/ignite.git


The following commit(s) were added to refs/heads/ignite-2.9.1 by this push:
     new 07ddb79  IGNITE-13884 Merged docs into 2.9.1 from 2.9 branch with updates (#8598)
07ddb79 is described below

commit 07ddb79d7e4870e3a8c618b5d16503235b4bb4dc
Author: ymolochkov <mo...@gmail.com>
AuthorDate: Tue Dec 22 19:32:06 2020 +0300

    IGNITE-13884 Merged docs into 2.9.1 from 2.9 branch with updates (#8598)
    
    * IGNITE-7595: new Ignite docs (returning the original changes after fixing licensing issues)
    
    (cherry picked from commit 073488ac97517bbaad9f6b94b781fc404646f191)
    
    * IGNITE-13574: add license headers for some imported files of the Ignite docs (#8361)
    
    * Added a proper license header to some files used by the docs.
    
    * Enabled the defaultLicenseMatcher for the license checker.
    
    (cherry picked from commit d928fb8576b22dffbfce90a5541e67dc6cbfe410)
    
    * ignite docs: updated a couple of contribution instructions
    
    (cherry picked from commit 9e8da702068b1232789f8f9f93680f2c6d69ed16)
    
    * IGNITE-13527: replace some references to the readme.io docs with references to the new pages. The job will be finished as part of IGNITE-13586
    
    (cherry picked from commit 7399ae64972cc097c48769cb5e2d9622ce7f7234)
    
    * ignite docs: fixed broken links to the SQLLine page
    
    (cherry picked from commit faf4f467e964d478b3d99b94d43d32430a7e88f0)
    
    * IGNITE-13615 Update .NET thin client feature set documentation
    
    * IGNITE-13652 Wrong GitHub link for Apache Ignite With Spring Data/Example (#8420)
    
    * ignite docs: updated the TcpDiscovery.soLinger documentation
    
    * IGNITE-13663: Describe in the documentation the effect of several node addresses on failure detection v2. (#8424)
    
    * ignite docs: set the latest spring-data artifact id after receiving user feedback
    
    * IGNITE-12951 Update documents for migrated extensions - Fixes #8488.
    
    Signed-off-by: samaitra <sa...@gmail.com>
    (cherry picked from commit 15a5da500c08948ee081533af97a9f1c2c8330f8)
    
    * ignite docs: fixing a broken documentation link
    
    * ignite docs: updated the index page with quick links to the APIs and examples
    
    * ignite docs: fixed broken links and updated the C++ API header
    
    * ignite docs: fixed case of GitHub
    
    * IGNITE-13876 Updated documentation for 2.9.1 release (#8592)
    
    (cherry picked from commit e74cf6ba8711338ed48dd01d1efe12505977f63f)
    
    Co-authored-by: Denis Magda <dm...@gridgain.com>
    Co-authored-by: Pavel Tupitsyn <pt...@apache.org>
    Co-authored-by: Denis Garus <ga...@gmail.com>
    Co-authored-by: Vladsz83 <vl...@gmail.com>
    Co-authored-by: samaitra <sa...@gmail.com>
    Co-authored-by: Nikita Safonov <73...@users.noreply.github.com>
    Co-authored-by: ymolochkov <yn...@sberbank.ru>
---
 CONTRIBUTING.md                                    |    2 +-
 README.txt                                         |   14 +-
 config/visor-cmd/node_startup_by_ssh.sample.ini    |    2 +-
 docs/.gitignore                                    |    5 +
 docs/Gemfile                                       |   14 +
 docs/README.adoc                                   |  212 ++
 docs/_config.yml                                   |   46 +
 docs/_data/toc.yaml                                |  559 ++++
 docs/_docs/SQL/JDBC/error-codes.adoc               |   81 +
 docs/_docs/SQL/JDBC/jdbc-client-driver.adoc        |  297 ++
 docs/_docs/SQL/JDBC/jdbc-driver.adoc               |  649 +++++
 docs/_docs/SQL/ODBC/connection-string-dsn.adoc     |  255 ++
 docs/_docs/SQL/ODBC/data-types.adoc                |   38 +
 docs/_docs/SQL/ODBC/error-codes.adoc               |  155 +
 docs/_docs/SQL/ODBC/odbc-driver.adoc               |  343 +++
 docs/_docs/SQL/ODBC/querying-modifying-data.adoc   |  491 ++++
 docs/_docs/SQL/ODBC/specification.adoc             | 1090 ++++++++
 docs/_docs/SQL/custom-sql-func.adoc                |   49 +
 docs/_docs/SQL/distributed-joins.adoc              |  110 +
 docs/_docs/SQL/indexes.adoc                        |  357 +++
 docs/_docs/SQL/schemas.adoc                        |   94 +
 docs/_docs/SQL/sql-api.adoc                        |  352 +++
 docs/_docs/SQL/sql-introduction.adoc               |   53 +
 docs/_docs/SQL/sql-transactions.adoc               |   87 +
 docs/_docs/SQL/sql-tuning.adoc                     |  471 ++++
 .../binary-client-protocol.adoc                    |  286 ++
 .../binary-type-metadata.adoc                      |  421 +++
 .../cache-configuration.adoc                       |  714 +++++
 docs/_docs/binary-client-protocol/data-format.adoc | 1072 +++++++
 .../binary-client-protocol/key-value-queries.adoc  | 1416 ++++++++++
 .../sql-and-scan-queries.adoc                      |  634 +++++
 docs/_docs/clustering/baseline-topology.adoc       |  159 ++
 docs/_docs/clustering/clustering.adoc              |   51 +
 docs/_docs/clustering/connect-client-nodes.adoc    |  106 +
 docs/_docs/clustering/discovery-in-the-cloud.adoc  |  270 ++
 docs/_docs/clustering/network-configuration.adoc   |  198 ++
 .../running-client-nodes-behind-nat.adoc           |   47 +
 docs/_docs/clustering/tcp-ip-discovery.adoc        |  426 +++
 docs/_docs/clustering/zookeeper-discovery.adoc     |  193 ++
 .../_docs/code-deployment/deploying-user-code.adoc |   96 +
 docs/_docs/code-deployment/peer-class-loading.adoc |  166 ++
 docs/_docs/code-snippets/cpp/src/affinity_run.cpp  |  148 +
 .../cpp/src/cache_asynchronous_execution.cpp       |  128 +
 .../cpp/src/cache_atomic_operations.cpp            |   54 +
 .../cpp/src/cache_creating_dynamically.cpp         |   37 +
 docs/_docs/code-snippets/cpp/src/cache_get_put.cpp |   58 +
 .../cpp/src/cache_getting_instance.cpp             |   38 +
 docs/_docs/code-snippets/cpp/src/city.h            |   69 +
 docs/_docs/code-snippets/cpp/src/city_key.h        |   76 +
 .../cpp/src/compute_acessing_data.cpp              |  134 +
 .../code-snippets/cpp/src/compute_broadcast.cpp    |  136 +
 docs/_docs/code-snippets/cpp/src/compute_call.cpp  |  151 +
 .../code-snippets/cpp/src/compute_call_async.cpp   |  165 ++
 docs/_docs/code-snippets/cpp/src/compute_get.cpp   |   38 +
 docs/_docs/code-snippets/cpp/src/compute_run.cpp   |  147 +
 .../code-snippets/cpp/src/concurrent_updates.cpp   |   60 +
 .../code-snippets/cpp/src/continuous_query.cpp     |   87 +
 .../cpp/src/continuous_query_filter.cpp            |  167 ++
 .../cpp/src/continuous_query_listener.cpp          |   76 +
 docs/_docs/code-snippets/cpp/src/country.h         |   74 +
 docs/_docs/code-snippets/cpp/src/invoke.cpp        |  156 ++
 .../cpp/src/key_value_execute_sql.cpp              |   55 +
 .../code-snippets/cpp/src/key_value_object_key.cpp |   52 +
 docs/_docs/code-snippets/cpp/src/person.h          |   94 +
 docs/_docs/code-snippets/cpp/src/scan_query.cpp    |   53 +
 .../cpp/src/setting_work_directory.cpp             |   32 +
 docs/_docs/code-snippets/cpp/src/sql.cpp           |   56 +
 docs/_docs/code-snippets/cpp/src/sql_create.cpp    |   40 +
 .../_docs/code-snippets/cpp/src/sql_join_order.cpp |   33 +
 .../code-snippets/cpp/src/start_stop_nodes.cpp     |   45 +
 .../code-snippets/cpp/src/thin_authentication.cpp  |   44 +
 .../code-snippets/cpp/src/thin_client_cache.cpp    |   46 +
 .../code-snippets/cpp/src/thin_client_ssl.cpp      |   39 +
 .../cpp/src/thin_creating_client_instance.cpp      |   42 +
 .../cpp/src/thin_partition_awareness.cpp           |   46 +
 docs/_docs/code-snippets/cpp/src/transactions.cpp  |   78 +
 .../cpp/src/transactions_pessimistic.cpp           |   52 +
 .../code-snippets/dotnet/AffinityCollocation.cs    |  141 +
 .../_docs/code-snippets/dotnet/BaselineTopology.cs |   49 +
 .../code-snippets/dotnet/BasicCacheOperations.cs   |   93 +
 docs/_docs/code-snippets/dotnet/ClusterGroups.cs   |   89 +
 .../code-snippets/dotnet/ClusteringOverview.cs     |   58 +
 .../dotnet/ClusteringTcpIpDiscovery.cs             |  132 +
 .../dotnet/CollocationgComputationsWithData.cs     |  161 ++
 .../code-snippets/dotnet/ConfiguringMetrics.cs     |   86 +
 .../code-snippets/dotnet/ContiniuosQueries.cs      |  116 +
 .../dotnet/DataModellingConfiguringCaches.cs       |  103 +
 .../dotnet/DataModellingDataPartitioning.cs        |   53 +
 docs/_docs/code-snippets/dotnet/DataRebalancing.cs |   67 +
 docs/_docs/code-snippets/dotnet/DataStreaming.cs   |  224 ++
 docs/_docs/code-snippets/dotnet/DefiningIndexes.cs |  187 ++
 .../dotnet/DistributedComputingApi.cs              |  284 ++
 .../_docs/code-snippets/dotnet/EvictionPolicies.cs |  114 +
 docs/_docs/code-snippets/dotnet/ExpiryPolicies.cs  |   60 +
 docs/_docs/code-snippets/dotnet/IgniteLifecycle.cs |   53 +
 docs/_docs/code-snippets/dotnet/MapReduceApi.cs    |  158 ++
 .../code-snippets/dotnet/MemoryArchitecture.cs     |   79 +
 docs/_docs/code-snippets/dotnet/NearCaches.cs      |  118 +
 docs/_docs/code-snippets/dotnet/OnHeapCaching.cs   |   35 +
 .../_docs/code-snippets/dotnet/PeerClassLoading.cs |   52 +
 .../code-snippets/dotnet/PerformingTransactions.cs |  152 +
 .../dotnet/PersistenceIgnitePersistence.cs         |   82 +
 .../code-snippets/dotnet/PersistenceTuning.cs      |   95 +
 docs/_docs/code-snippets/dotnet/PlatformCache.cs   |  120 +
 docs/_docs/code-snippets/dotnet/SqlJoinOrder.cs    |   38 +
 docs/_docs/code-snippets/dotnet/SqlTransactions.cs |  102 +
 docs/_docs/code-snippets/dotnet/ThinClient.cs      |  351 +++
 .../dotnet/UnderstandingConfiguration.cs           |   51 +
 .../code-snippets/dotnet/UnderstandingSchemas.cs   |   38 +
 .../_docs/code-snippets/dotnet/UsingScanQueries.cs |   82 +
 docs/_docs/code-snippets/dotnet/UsingSqlApi.cs     |  211 ++
 .../dotnet/WorkingWithBinaryObjects.cs             |  142 +
 .../code-snippets/dotnet/WorkingWithEvents.cs      |  183 ++
 docs/_docs/code-snippets/dotnet/dotnet.csproj      |   11 +
 docs/_docs/code-snippets/java/pom.xml              |  146 +
 .../snippets/AffinityCollocationExample.java       |  150 +
 .../org/apache/ignite/snippets/BackupFilter.java   |   39 +
 .../ignite/snippets/BasicCacheOperations.java      |  139 +
 .../ignite/snippets/CacheJdbcPersonStore.java      |  121 +
 .../org/apache/ignite/snippets/ClientNodes.java    |   81 +
 .../org/apache/ignite/snippets/ClusterAPI.java     |  118 +
 .../apache/ignite/snippets/ClusteringOverview.java |   80 +
 .../ignite/snippets/CollocatedComputations.java    |  184 ++
 .../apache/ignite/snippets/ComputeTaskExample.java |   81 +
 .../apache/ignite/snippets/ConfiguringCaches.java  |  104 +
 .../apache/ignite/snippets/ConfiguringMetrics.java |  169 ++
 .../apache/ignite/snippets/CustomThreadPool.java   |   69 +
 .../apache/ignite/snippets/DataPartitioning.java   |   67 +
 .../snippets/DataRegionConfigurationExample.java   |   71 +
 .../org/apache/ignite/snippets/DataStreaming.java  |  179 ++
 .../org/apache/ignite/snippets/DataStructures.java |  222 ++
 .../java/org/apache/ignite/snippets/Discovery.java |   42 +
 .../ignite/snippets/DiscoveryInTheCloud.java       |  151 +
 .../apache/ignite/snippets/DiskCompression.java    |   57 +
 .../ignite/snippets/DistributedComputing.java      |  197 ++
 .../java/org/apache/ignite/snippets/Events.java    |  188 ++
 .../apache/ignite/snippets/EvictionPolicies.java   |  164 ++
 .../org/apache/ignite/snippets/ExpiryPolicies.java |   68 +
 .../apache/ignite/snippets/ExternalStorage.java    |  169 ++
 .../org/apache/ignite/snippets/FailureHandler.java |   55 +
 .../org/apache/ignite/snippets/FaultTolerance.java |   65 +
 .../ignite/snippets/IgniteExecutorService.java     |   56 +
 .../apache/ignite/snippets/IgniteLifecycle.java    |   76 +
 .../apache/ignite/snippets/IgnitePersistence.java  |  113 +
 .../java/org/apache/ignite/snippets/Indexes.java   |  159 ++
 .../org/apache/ignite/snippets/Indexes_groups.java |   37 +
 .../apache/ignite/snippets/JDBCClientDriver.java   |   80 +
 .../org/apache/ignite/snippets/JDBCThinDriver.java |  237 ++
 .../org/apache/ignite/snippets/JavaThinClient.java |  427 +++
 .../org/apache/ignite/snippets/JobScheduling.java  |  122 +
 .../org/apache/ignite/snippets/LoadBalancing.java  |  119 +
 .../java/org/apache/ignite/snippets/Logging.java   |   94 +
 .../java/org/apache/ignite/snippets/MapReduce.java |  170 ++
 .../apache/ignite/snippets/MyLifecycleBean.java    |   39 +
 .../org/apache/ignite/snippets/MyNodeFilter.java   |   40 +
 .../java/org/apache/ignite/snippets/NearCache.java |   69 +
 .../ignite/snippets/NetworkConfiguration.java      |   52 +
 .../org/apache/ignite/snippets/NodeFilter.java     |   75 +
 .../main/java/org/apache/ignite/snippets/ODBC.java |   38 +
 .../org/apache/ignite/snippets/OnHeapCaching.java  |   31 +
 .../snippets/PartitionLossPolicyExample.java       |  113 +
 .../apache/ignite/snippets/PeerClassLoading.java   |   42 +
 .../ignite/snippets/PerformingTransactions.java    |  178 ++
 .../apache/ignite/snippets/PersistenceTuning.java  |  109 +
 .../java/org/apache/ignite/snippets/Person.java    |   75 +
 .../QueryEntitiesExampleWithAnnotation.java        |   58 +
 .../apache/ignite/snippets/QueryEntityExample.java |   58 +
 .../apache/ignite/snippets/RESTConfiguration.java  |   31 +
 .../ignite/snippets/RebalancingConfiguration.java  |   62 +
 .../java/org/apache/ignite/snippets/Schemas.java   |   37 +
 .../java/org/apache/ignite/snippets/Security.java  |   94 +
 .../java/org/apache/ignite/snippets/Snapshots.java |   54 +
 .../java/org/apache/ignite/snippets/SqlAPI.java    |  195 ++
 .../apache/ignite/snippets/SqlTransactions.java    |   33 +
 .../main/java/org/apache/ignite/snippets/Swap.java |   55 +
 .../main/java/org/apache/ignite/snippets/TDE.java  |   63 +
 .../org/apache/ignite/snippets/TcpIpDiscovery.java |  335 +++
 .../java/org/apache/ignite/snippets/Tracing.java   |  110 +
 .../snippets/UnderstandingConfiguration.java       |   42 +
 .../apache/ignite/snippets/UserCodeDeployment.java |   66 +
 .../ignite/snippets/UsingContinuousQueries.java    |  158 ++
 .../apache/ignite/snippets/UsingScanQueries.java   |   87 +
 .../main/java/org/apache/ignite/snippets/WAL.java  |   46 +
 .../ignite/snippets/WorkingWithBinaryObjects.java  |  183 ++
 .../apache/ignite/snippets/ZookeeperDiscovery.java |   46 +
 .../java/org/apache/ignite/snippets/k8s/K8s.java   |   40 +
 .../apache/ignite/snippets/plugin/MyPlugin.java    |   84 +
 .../ignite/snippets/plugin/MyPluginProvider.java   |  142 +
 .../ignite/snippets/plugin/PluginExample.java      |   66 +
 .../ignite/snippets/services/MyCounterService.java |   32 +
 .../snippets/services/MyCounterServiceImpl.java    |   99 +
 .../ignite/snippets/services/ServiceExample.java   |  177 ++
 .../java/src/main/resources/config/ignite-jdbc.xml |   39 +
 .../java/src/main/resources/keystore/node.jks      |  Bin 0 -> 3230 bytes
 .../java/src/main/resources/keystore/trust.jks     |  Bin 0 -> 2432 bytes
 docs/_docs/code-snippets/k8s/cluster-role.yaml     |   45 +
 docs/_docs/code-snippets/k8s/service-account.yaml  |   22 +
 docs/_docs/code-snippets/k8s/service.yaml          |   43 +
 docs/_docs/code-snippets/k8s/setup.sh              |   96 +
 .../k8s/stateful/node-configuration.xml            |   55 +
 .../k8s/stateful/statefulset-template.yaml         |   96 +
 .../k8s/stateless/deployment-template.yaml         |   60 +
 .../k8s/stateless/node-configuration.xml           |   39 +
 docs/_docs/code-snippets/nodejs/authentication.js  |   53 +
 docs/_docs/code-snippets/nodejs/binary-types.js    |   80 +
 docs/_docs/code-snippets/nodejs/conf1.js           |   36 +
 docs/_docs/code-snippets/nodejs/conf2.js           |   39 +
 .../code-snippets/nodejs/configuring-cache-1.js    |   43 +
 .../code-snippets/nodejs/configuring-cache-2.js    |   40 +
 docs/_docs/code-snippets/nodejs/connecting.js      |   50 +
 docs/_docs/code-snippets/nodejs/enabling-debug.js  |   22 +
 .../code-snippets/nodejs/get-existing-cache.js     |   37 +
 docs/_docs/code-snippets/nodejs/initialize.js      |   33 +
 docs/_docs/code-snippets/nodejs/key-value.js       |   51 +
 docs/_docs/code-snippets/nodejs/scan-query.js      |   55 +
 docs/_docs/code-snippets/nodejs/scanquery.js       |   62 +
 .../_docs/code-snippets/nodejs/sql-fields-query.js |   60 +
 docs/_docs/code-snippets/nodejs/sql.js             |   75 +
 docs/_docs/code-snippets/nodejs/tls.js             |  128 +
 .../nodejs/types-mapping-configuration.js          |   45 +
 .../code-snippets/php/ConnectingToCluster.php      |   39 +
 docs/_docs/code-snippets/php/Security.php          |   45 +
 docs/_docs/code-snippets/php/UsingKeyValueApi.php  |  134 +
 docs/_docs/code-snippets/python/auth.py            |   33 +
 .../_docs/code-snippets/python/basic_operations.py |   42 +
 .../_docs/code-snippets/python/client_reconnect.py |   50 +
 docs/_docs/code-snippets/python/client_ssl.py      |   29 +
 docs/_docs/code-snippets/python/connect.py         |   22 +
 docs/_docs/code-snippets/python/create_cache.py    |   25 +
 .../python/create_cache_with_properties.py         |   52 +
 docs/_docs/code-snippets/python/scan.py            |   59 +
 docs/_docs/code-snippets/python/sql.py             |   66 +
 docs/_docs/code-snippets/python/type_hints.py      |   48 +
 .../code-snippets/xml/affinity-backup-filter.xml   |   65 +
 .../code-snippets/xml/attribute-node-filter.xml    |   58 +
 docs/_docs/code-snippets/xml/binary-objects.xml    |   54 +
 .../code-snippets/xml/cache-configuration.xml      |   49 +
 docs/_docs/code-snippets/xml/cache-groups.xml      |   56 +
 .../code-snippets/xml/cache-jdbc-pojo-store.xml    |  114 +
 docs/_docs/code-snippets/xml/cache-template.xml    |   49 +
 docs/_docs/code-snippets/xml/client-behind-nat.xml |   44 +
 docs/_docs/code-snippets/xml/client-node.xml       |   50 +
 docs/_docs/code-snippets/xml/configure-backups.xml |   54 +
 .../code-snippets/xml/configuring-metrics.xml      |   89 +
 docs/_docs/code-snippets/xml/custom-keys.xml       |   70 +
 .../xml/data-regions-configuration.xml             |   90 +
 docs/_docs/code-snippets/xml/deployment.xml        |   55 +
 .../code-snippets/xml/discovery-multicast.xml      |   36 +
 .../xml/discovery-static-and-multicast.xml         |   45 +
 docs/_docs/code-snippets/xml/discovery-static.xml  |   48 +
 docs/_docs/code-snippets/xml/disk-compression.xml  |   59 +
 docs/_docs/code-snippets/xml/events.xml            |   54 +
 docs/_docs/code-snippets/xml/eviction.xml          |   58 +
 docs/_docs/code-snippets/xml/expiry.xml            |   56 +
 docs/_docs/code-snippets/xml/failover-always.xml   |   45 +
 docs/_docs/code-snippets/xml/failover-never.xml    |   43 +
 .../_docs/code-snippets/xml/http-configuration.xml |   50 +
 .../code-snippets/xml/ignite-authentication.xml    |   58 +
 docs/_docs/code-snippets/xml/jcl.xml               |   57 +
 docs/_docs/code-snippets/xml/jetty.xml             |   69 +
 .../code-snippets/xml/job-scheduling-fifo.xml      |   46 +
 .../code-snippets/xml/job-scheduling-priority.xml  |   47 +
 docs/_docs/code-snippets/xml/job-stealing.xml      |   66 +
 docs/_docs/code-snippets/xml/lifecycle.xml         |   43 +
 docs/_docs/code-snippets/xml/log4j-config.xml      |  107 +
 docs/_docs/code-snippets/xml/log4j.xml             |   59 +
 docs/_docs/code-snippets/xml/log4j2-config.xml     |   79 +
 docs/_docs/code-snippets/xml/log4j2.xml            |   59 +
 docs/_docs/code-snippets/xml/metrics.xml           |   56 +
 docs/_docs/code-snippets/xml/mvcc.xml              |   46 +
 docs/_docs/code-snippets/xml/near-cache-config.xml |   52 +
 .../code-snippets/xml/network-configuration.xml    |   46 +
 docs/_docs/code-snippets/xml/odbc-cache-config.xml |   95 +
 docs/_docs/code-snippets/xml/odbc.xml              |   52 +
 docs/_docs/code-snippets/xml/on-heap-cache.xml     |   44 +
 .../code-snippets/xml/partition-loss-policy.xml    |   49 +
 .../_docs/code-snippets/xml/peer-class-loading.xml |   44 +
 .../code-snippets/xml/persistence-metrics.xml      |   64 +
 .../_docs/code-snippets/xml/persistence-tuning.xml |   81 +
 docs/_docs/code-snippets/xml/persistence.xml       |   50 +
 docs/_docs/code-snippets/xml/plugins.xml           |   47 +
 docs/_docs/code-snippets/xml/query-entities.xml    |   71 +
 .../_docs/code-snippets/xml/rebalancing-config.xml |   65 +
 .../xml/round-robin-load-balancing.xml             |   69 +
 docs/_docs/code-snippets/xml/schemas.xml           |   48 +
 docs/_docs/code-snippets/xml/services.xml          |   52 +
 docs/_docs/code-snippets/xml/slf4j.xml             |   57 +
 docs/_docs/code-snippets/xml/snapshots.xml         |   52 +
 docs/_docs/code-snippets/xml/sql-on-heap-cache.xml |   44 +
 .../code-snippets/xml/ssl-without-validation.xml   |   58 +
 docs/_docs/code-snippets/xml/ssl.xml               |   58 +
 docs/_docs/code-snippets/xml/swap.xml              |   47 +
 docs/_docs/code-snippets/xml/tcp-ip-discovery.xml  |   45 +
 docs/_docs/code-snippets/xml/tde.xml               |   61 +
 .../xml/thin-client-cluster-config.xml             |   65 +
 docs/_docs/code-snippets/xml/thread-pool.xml       |   48 +
 docs/_docs/code-snippets/xml/tracing.xml           |   45 +
 docs/_docs/code-snippets/xml/transactions.xml      |   57 +
 docs/_docs/code-snippets/xml/wal.xml               |   57 +
 .../code-snippets/xml/weighted-load-balancing.xml  |   59 +
 docs/_docs/configuring-caches/atomicity-modes.adoc |  113 +
 docs/_docs/configuring-caches/cache-groups.adoc    |   80 +
 .../configuring-caches/configuration-overview.adoc |  153 +
 .../configuring-caches/configuring-backups.adoc    |   92 +
 docs/_docs/configuring-caches/expiry-policies.adoc |   90 +
 docs/_docs/configuring-caches/near-cache.adoc      |  102 +
 docs/_docs/configuring-caches/on-heap-caching.adoc |  182 ++
 .../configuring-caches/partition-loss-policy.adoc  |  196 ++
 docs/_docs/cpp-specific/cpp-objects-lifetime.adoc  |   92 +
 .../cpp-platform-interoperability.adoc             |  250 ++
 docs/_docs/cpp-specific/cpp-serialization.adoc     |  266 ++
 docs/_docs/cpp-specific/index.adoc                 |   22 +
 docs/_docs/data-modeling/affinity-collocation.adoc |  123 +
 docs/_docs/data-modeling/binary-marshaller.adoc    |  299 ++
 docs/_docs/data-modeling/data-modeling.adoc        |   74 +
 docs/_docs/data-modeling/data-partitioning.adoc    |  140 +
 docs/_docs/data-rebalancing.adoc                   |  151 +
 docs/_docs/data-streaming.adoc                     |  190 ++
 docs/_docs/data-structures/atomic-sequence.adoc    |   38 +
 docs/_docs/data-structures/atomic-types.adoc       |   63 +
 docs/_docs/data-structures/countdownlatch.adoc     |   39 +
 docs/_docs/data-structures/id-generator.adoc       |   76 +
 docs/_docs/data-structures/queue-and-set.adoc      |   81 +
 docs/_docs/data-structures/semaphore.adoc          |   33 +
 .../distributed-computing/cluster-groups.adoc      |   62 +
 .../collocated-computations.adoc                   |  179 ++
 .../distributed-computing.adoc                     |  388 +++
 .../distributed-computing/executor-service.adoc    |   39 +
 .../distributed-computing/fault-tolerance.adoc     |   65 +
 .../distributed-computing/job-scheduling.adoc      |   78 +
 .../distributed-computing/load-balancing.adoc      |  127 +
 docs/_docs/distributed-computing/map-reduce.adoc   |  140 +
 docs/_docs/distributed-locks.adoc                  |   59 +
 docs/_docs/events/events.adoc                      |  342 +++
 docs/_docs/events/listening-to-events.adoc         |  268 ++
 .../cassandra/configuration.adoc                   |  588 ++++
 .../cassandra/ddl-generator.adoc                   |   99 +
 .../cassandra/overview.adoc                        |   54 +
 .../cassandra/usage-examples.adoc                  |  691 +++++
 .../hibernate-l2-cache.adoc                        |  308 ++
 .../ignite-for-spark/ignite-dataframe.adoc         |  380 +++
 .../ignite-for-spark/ignitecontext-and-rdd.adoc    |  106 +
 .../ignite-for-spark/installation.adoc             |  171 ++
 .../ignite-for-spark/overview.adoc                 |   49 +
 .../ignite-for-spark/spark-shell.adoc              |  202 ++
 .../ignite-for-spark/troubleshooting.adoc          |   23 +
 .../mybatis-l2-cache.adoc                          |   55 +
 .../_docs/extensions-and-integrations/php-pdo.adoc |  247 ++
 .../spring/spring-boot.adoc                        |  210 ++
 .../spring/spring-caching.adoc                     |  232 ++
 .../spring/spring-data.adoc                        |  234 ++
 .../streaming/camel-streamer.adoc                  |  153 +
 .../streaming/flink-streamer.adoc                  |   78 +
 .../streaming/flume-sink.adoc                      |   79 +
 .../streaming/jms-streamer.adoc                    |  123 +
 .../streaming/kafka-streamer.adoc                  |  221 ++
 .../streaming/mqtt-streamer.adoc                   |   76 +
 .../streaming/rocketmq-streamer.adoc               |   85 +
 .../streaming/storm-streamer.adoc                  |   62 +
 .../streaming/twitter-streamer.adoc                |   65 +
 .../streaming/zeromq-streamer.adoc                 |   67 +
 docs/_docs/images/111.gif                          |  Bin 0 -> 419 bytes
 docs/_docs/images/222.gif                          |  Bin 0 -> 1163 bytes
 docs/_docs/images/333.gif                          |  Bin 0 -> 719 bytes
 docs/_docs/images/555.gif                          |  Bin 0 -> 1197 bytes
 docs/_docs/images/666.gif                          |  Bin 0 -> 1309 bytes
 docs/_docs/images/bagging.png                      |  Bin 0 -> 4675 bytes
 docs/_docs/images/cache_table.png                  |  Bin 0 -> 166752 bytes
 docs/_docs/images/checkpointing-chainsaw.png       |  Bin 0 -> 70186 bytes
 docs/_docs/images/checkpointing-persistence.png    |  Bin 0 -> 58508 bytes
 docs/_docs/images/client-to-aws.png                |  Bin 0 -> 71068 bytes
 docs/_docs/images/collocated_joins.png             |  Bin 0 -> 174755 bytes
 docs/_docs/images/data_streaming.png               |  Bin 0 -> 159011 bytes
 docs/_docs/images/defragmented.png                 |  Bin 0 -> 45437 bytes
 docs/_docs/images/durable-memory-diagram.png       |  Bin 0 -> 311833 bytes
 docs/_docs/images/durable-memory-overview.png      |  Bin 0 -> 213676 bytes
 docs/_docs/images/external_storage.png             |  Bin 0 -> 125073 bytes
 docs/_docs/images/fragmented.png                   |  Bin 0 -> 26245 bytes
 docs/_docs/images/ignite_clustering.png            |  Bin 0 -> 117282 bytes
 docs/_docs/images/ijfull.png                       |  Bin 0 -> 548711 bytes
 docs/_docs/images/ijimport.png                     |  Bin 0 -> 43919 bytes
 docs/_docs/images/ijrun.png                        |  Bin 0 -> 50135 bytes
 docs/_docs/images/integrations/camel-streamer.png  |  Bin 0 -> 120217 bytes
 .../images/integrations/hibernate-l2-cache.png     |  Bin 0 -> 135173 bytes
 docs/_docs/images/jconsole.png                     |  Bin 0 -> 97939 bytes
 docs/_docs/images/k8s/aks-node-number.png          |  Bin 0 -> 43709 bytes
 docs/_docs/images/k8s/create-aks-cluster.png       |  Bin 0 -> 60411 bytes
 docs/_docs/images/logistic-regression.png          |  Bin 0 -> 9666 bytes
 docs/_docs/images/logistic-regression2.png         |  Bin 0 -> 8764 bytes
 docs/_docs/images/machine_learning.png             |  Bin 0 -> 68453 bytes
 docs/_docs/images/memory-segment.png               |  Bin 0 -> 28735 bytes
 docs/_docs/images/naive-bayes.png                  |  Bin 0 -> 18067 bytes
 docs/_docs/images/naive-bayes2.png                 |  Bin 0 -> 27103 bytes
 docs/_docs/images/naive-bayes3.png                 |  Bin 0 -> 13713 bytes
 docs/_docs/images/naive-bayes3png                  |  Bin 0 -> 13713 bytes
 docs/_docs/images/net-view-details.png             |  Bin 0 -> 56828 bytes
 docs/_docs/images/network_segmentation.png         |  Bin 0 -> 37812 bytes
 docs/_docs/images/non_collocated_joins.png         |  Bin 0 -> 190860 bytes
 docs/_docs/images/odbc_dsn_configuration.png       |  Bin 0 -> 13372 bytes
 docs/_docs/images/off_heap_memory_eviction.png     |  Bin 0 -> 168793 bytes
 docs/_docs/images/partitionawareness01.png         |  Bin 0 -> 35538 bytes
 docs/_docs/images/partitionawareness02.png         |  Bin 0 -> 31181 bytes
 docs/_docs/images/partitioned_cache.png            |  Bin 0 -> 183181 bytes
 docs/_docs/images/partitioning.png                 |  Bin 0 -> 160390 bytes
 docs/_docs/images/persistent_store_structure.png   |  Bin 0 -> 96783 bytes
 docs/_docs/images/preprocessing.png                |  Bin 0 -> 6588 bytes
 docs/_docs/images/preprocessing2.png               |  Bin 0 -> 4548 bytes
 docs/_docs/images/replicated_cache.png             |  Bin 0 -> 181143 bytes
 docs/_docs/images/segmentation_resolved.png        |  Bin 0 -> 41915 bytes
 docs/_docs/images/set-streaming.png                |  Bin 0 -> 56005 bytes
 docs/_docs/images/span.png                         |  Bin 0 -> 34434 bytes
 docs/_docs/images/spark_integration.png            |  Bin 0 -> 115826 bytes
 docs/_docs/images/split_brain.png                  |  Bin 0 -> 15844 bytes
 docs/_docs/images/split_brain_resolved.png         |  Bin 0 -> 15887 bytes
 docs/_docs/images/tools/gg-control-center.png      |  Bin 0 -> 251342 bytes
 .../images/tools/informatica-import-tables.png     |  Bin 0 -> 54326 bytes
 .../images/tools/informatica-rel-connection.png    |  Bin 0 -> 40510 bytes
 .../images/tools/pentaho-ignite-connection.png     |  Bin 0 -> 77439 bytes
 .../images/tools/pentaho-new-transformation.png    |  Bin 0 -> 81849 bytes
 .../tools/pentaho-running-and-inspecting-data.png  |  Bin 0 -> 56310 bytes
 docs/_docs/images/tools/tableau-choose_dsn_01.png  |  Bin 0 -> 12515 bytes
 docs/_docs/images/tools/tableau-choose_dsn_02.png  |  Bin 0 -> 12860 bytes
 .../images/tools/tableau-choosing_driver_01.png    |  Bin 0 -> 100372 bytes
 .../images/tools/tableau-creating_dataset.png      |  Bin 0 -> 59092 bytes
 .../_docs/images/tools/tableau-edit_connection.png |  Bin 0 -> 7123 bytes
 .../images/tools/tableau-visualizing_data.png      |  Bin 0 -> 86105 bytes
 docs/_docs/images/tools/visor-cmd.png              |  Bin 0 -> 208235 bytes
 docs/_docs/images/trace_in_zipkin.png              |  Bin 0 -> 118677 bytes
 docs/_docs/images/zookeeper.png                    |  Bin 0 -> 139311 bytes
 docs/_docs/images/zookeeper_split.png              |  Bin 0 -> 56004 bytes
 .../includes/cpp-linux-build-prerequisites.adoc    |   45 +
 docs/_docs/includes/cpp-prerequisites.adoc         |   23 +
 docs/_docs/includes/dotnet-prerequisites.adoc      |   20 +
 docs/_docs/includes/exampleprojects.adoc           |   37 +
 docs/_docs/includes/install-ignite.adoc            |   26 +
 docs/_docs/includes/install-nodejs-npm.adoc        |   19 +
 docs/_docs/includes/install-php-composer.adoc      |   25 +
 docs/_docs/includes/install-python-pip.adoc        |   29 +
 docs/_docs/includes/intro-languages.adoc           |   47 +
 docs/_docs/includes/java9.adoc                     |   42 +
 docs/_docs/includes/nodes-and-clustering.adoc      |   33 +
 docs/_docs/includes/note-on-deactivation.adoc      |   19 +
 docs/_docs/includes/partition-awareness.adoc       |   40 +
 docs/_docs/includes/prereqs.adoc                   |   23 +
 docs/_docs/includes/starting-node.adoc             |   93 +
 docs/_docs/includes/thick-and-thin-clients.adoc    |   42 +
 docs/_docs/index.adoc                              |   53 +
 docs/_docs/installation/deb-rpm.adoc               |   95 +
 docs/_docs/installation/index.adoc                 |   21 +
 .../installation/installing-using-docker.adoc      |  212 ++
 docs/_docs/installation/installing-using-zip.adoc  |   27 +
 .../kubernetes/amazon-eks-deployment.adoc          |   68 +
 .../installation/kubernetes/azure-deployment.adoc  |   84 +
 .../kubernetes/generic-configuration.adoc          |  402 +++
 .../installation/kubernetes/gke-deployment.adoc    |   78 +
 docs/_docs/installation/vmware-installation.adoc   |   59 +
 .../key-value-api/basic-cache-operations.adoc      |  421 +++
 docs/_docs/key-value-api/binary-objects.adoc       |  236 ++
 docs/_docs/key-value-api/continuous-queries.adoc   |  177 ++
 docs/_docs/key-value-api/transactions.adoc         |  330 +++
 docs/_docs/key-value-api/using-scan-queries.adoc   |  124 +
 docs/_docs/key-value-api/with-expiry-policy.adoc   |   40 +
 docs/_docs/logging.adoc                            |  184 ++
 .../binary-classification/ann.adoc                 |   87 +
 .../binary-classification/decision-trees.adoc      |   77 +
 .../binary-classification/introduction.adoc        |   36 +
 .../binary-classification/knn-classification.adoc  |   63 +
 .../binary-classification/linear-svm.adoc          |   52 +
 .../binary-classification/logistic-regression.adoc |   85 +
 .../multilayer-perceptron.adoc                     |   78 +
 .../binary-classification/naive-bayes.adoc         |  109 +
 .../clustering/gaussian-mixture.adoc               |   71 +
 .../machine-learning/clustering/introduction.adoc  |   22 +
 .../clustering/k-means-clustering.adoc             |   80 +
 .../machine-learning/ensemble-methods/bagging.adoc |   56 +
 .../ensemble-methods/gradient-boosting.adoc        |   99 +
 .../ensemble-methods/introduction.adoc             |   25 +
 .../ensemble-methods/random-forest.adoc            |   85 +
 .../ensemble-methods/stacking.adoc                 |   49 +
 .../importing-model/introduction.adoc              |   26 +
 .../model-import-from-apache-spark.adoc            |   84 +
 .../importing-model/model-import-from-gxboost.adoc |   35 +
 docs/_docs/machine-learning/machine-learning.adoc  |  139 +
 .../model-selection/cross-validation.adoc          |   90 +
 .../model-selection/evaluator.adoc                 |  107 +
 .../model-selection/hyper-parameter-tuning.adoc    |   65 +
 .../model-selection/introduction.adoc              |   32 +
 .../model-selection/pipeline-api.adoc              |  125 +
 ...lit-the-dataset-on-test-and-train-datasets.adoc |   66 +
 .../multiclass-classification.adoc                 |   55 +
 .../machine-learning/partition-based-dataset.adoc  |  100 +
 docs/_docs/machine-learning/preprocessing.adoc     |  253 ++
 .../machine-learning/recommendation-systems.adoc   |   71 +
 .../regression/decision-trees-regression.adoc      |   75 +
 .../machine-learning/regression/introduction.adoc  |   23 +
 .../regression/knn-regression.adoc                 |   63 +
 .../regression/linear-regression.adoc              |   99 +
 .../machine-learning/updating-trained-models.adoc  |   77 +
 docs/_docs/memory-architecture.adoc                |   93 +
 docs/_docs/memory-configuration/data-regions.adoc  |   84 +
 .../memory-configuration/eviction-policies.adoc    |  177 ++
 docs/_docs/memory-configuration/index.adoc         |   21 +
 docs/_docs/messaging.adoc                          |  106 +
 docs/_docs/monitoring-metrics/cluster-id.adoc      |   62 +
 docs/_docs/monitoring-metrics/cluster-states.adoc  |   97 +
 .../monitoring-metrics/configuring-metrics.adoc    |  149 +
 docs/_docs/monitoring-metrics/intro.adoc           |   58 +
 docs/_docs/monitoring-metrics/metrics.adoc         |  507 ++++
 .../monitoring-metrics/new-metrics-system.adoc     |  220 ++
 docs/_docs/monitoring-metrics/new-metrics.adoc     |  342 +++
 docs/_docs/monitoring-metrics/system-views.adoc    |  705 +++++
 docs/_docs/monitoring-metrics/tracing.adoc         |  183 ++
 .../_docs/net-specific/asp-net-output-caching.adoc |   93 +
 .../asp-net-session-state-caching.adoc             |   81 +
 docs/_docs/net-specific/index.adoc                 |   23 +
 .../net-specific/net-configuration-options.adoc    |  190 ++
 .../net-specific/net-cross-platform-support.adoc   |   65 +
 .../_docs/net-specific/net-deployment-options.adoc |  152 +
 .../net-specific/net-entity-framework-cache.adoc   |  198 ++
 .../net-specific/net-java-services-execution.adoc  |  116 +
 docs/_docs/net-specific/net-linq.adoc              |  256 ++
 docs/_docs/net-specific/net-logging.adoc           |  133 +
 docs/_docs/net-specific/net-platform-cache.adoc    |  125 +
 .../net-platform-interoperability.adoc             |  195 ++
 docs/_docs/net-specific/net-plugins.adoc           |  169 ++
 .../net-specific/net-remote-assembly-loading.adoc  |  154 +
 docs/_docs/net-specific/net-serialization.adoc     |  314 +++
 docs/_docs/net-specific/net-standalone-nodes.adoc  |  130 +
 .../general-perf-tips.adoc                         |   49 +
 .../handling-exceptions.adoc                       |  248 ++
 docs/_docs/perf-and-troubleshooting/index.adoc     |   18 +
 .../perf-and-troubleshooting/memory-tuning.adoc    |  185 ++
 .../persistence-tuning.adoc                        |  269 ++
 .../_docs/perf-and-troubleshooting/sql-tuning.adoc |  525 ++++
 .../thread-pools-tuning.adoc                       |  117 +
 .../perf-and-troubleshooting/troubleshooting.adoc  |  164 ++
 .../yardstick-benchmarking.adoc                    |  176 ++
 docs/_docs/persistence/custom-cache-store.adoc     |  103 +
 docs/_docs/persistence/disk-compression.adoc       |   62 +
 docs/_docs/persistence/external-storage.adoc       |  224 ++
 docs/_docs/persistence/native-persistence.adoc     |  362 +++
 docs/_docs/persistence/persistence-tuning.adoc     |  258 ++
 docs/_docs/persistence/snapshots.adoc              |  208 ++
 docs/_docs/persistence/swap.adoc                   |   66 +
 docs/_docs/plugins.adoc                            |  129 +
 docs/_docs/quick-start/cpp.adoc                    |  131 +
 docs/_docs/quick-start/dotnet.adoc                 |   95 +
 docs/_docs/quick-start/index.adoc                  |   18 +
 docs/_docs/quick-start/java.adoc                   |  171 ++
 docs/_docs/quick-start/nodejs.adoc                 |  104 +
 docs/_docs/quick-start/php.adoc                    |  125 +
 docs/_docs/quick-start/python.adoc                 |   88 +
 docs/_docs/quick-start/restapi.adoc                |   96 +
 docs/_docs/quick-start/sql.adoc                    |  129 +
 docs/_docs/read-repair.adoc                        |   56 +
 docs/_docs/resources-injection.adoc                |   88 +
 docs/_docs/restapi.adoc                            | 2953 ++++++++++++++++++++
 docs/_docs/security/authentication.adoc            |   65 +
 docs/_docs/security/index.adoc                     |   18 +
 docs/_docs/security/master-key-rotation.adoc       |  131 +
 docs/_docs/security/sandbox.adoc                   |   94 +
 docs/_docs/security/ssl-tls.adoc                   |  225 ++
 docs/_docs/security/tde.adoc                       |  142 +
 docs/_docs/services/services.adoc                  |  267 ++
 docs/_docs/setup.adoc                              |  303 ++
 docs/_docs/sql-reference/aggregate-functions.adoc  |  397 +++
 docs/_docs/sql-reference/data-types.adoc           |  182 ++
 docs/_docs/sql-reference/date-time-functions.adoc  |  399 +++
 docs/_docs/sql-reference/ddl.adoc                  |  520 ++++
 docs/_docs/sql-reference/dml.adoc                  |  363 +++
 docs/_docs/sql-reference/index.adoc                |   18 +
 docs/_docs/sql-reference/numeric-functions.adoc    |  981 +++++++
 docs/_docs/sql-reference/operational-commands.adoc |  372 +++
 docs/_docs/sql-reference/sql-conformance.adoc      |  471 ++++
 docs/_docs/sql-reference/string-functions.adoc     |  942 +++++++
 docs/_docs/sql-reference/system-functions.adoc     |  225 ++
 docs/_docs/sql-reference/transactions.adoc         |   66 +
 docs/_docs/starting-nodes.adoc                     |  262 ++
 docs/_docs/thin-client-comparison.csv              |   15 +
 docs/_docs/thin-clients/cpp-thin-client.adoc       |  117 +
 docs/_docs/thin-clients/dotnet-thin-client.adoc    |  260 ++
 .../getting-started-with-thin-clients.adoc         |  126 +
 docs/_docs/thin-clients/java-thin-client.adoc      |  329 +++
 docs/_docs/thin-clients/nodejs-thin-client.adoc    |  240 ++
 docs/_docs/thin-clients/php-thin-client.adoc       |  149 +
 docs/_docs/thin-clients/python-thin-client.adoc    |  488 ++++
 docs/_docs/tools/control-script.adoc               |  649 +++++
 docs/_docs/tools/gg-control-center.adoc            |   34 +
 docs/_docs/tools/informatica.adoc                  |  304 ++
 docs/_docs/tools/pentaho.adoc                      |   65 +
 docs/_docs/tools/sqlline.adoc                      |  225 ++
 docs/_docs/tools/tableau.adoc                      |   66 +
 docs/_docs/tools/visor-cmd.adoc                    |   68 +
 docs/_docs/transactions/mvcc.adoc                  |  193 ++
 docs/_docs/understanding-configuration.adoc        |  111 +
 docs/_includes/copyright.html                      |   22 +
 docs/_includes/footer.html                         |   20 +
 docs/_includes/header.html                         |   36 +
 docs/_includes/left-nav.html                       |   88 +
 docs/_includes/right-nav.html                      |   21 +
 docs/_includes/section-toc.html                    |   31 +
 docs/_includes/toc.html                            |   63 +
 docs/_layouts/default.html                         |   72 +
 docs/_layouts/doc.html                             |   33 +
 docs/_layouts/toc.html                             |   32 +
 docs/_plugins/asciidoctor-extensions.rb            |  180 ++
 docs/_sass/callouts.scss                           |   75 +
 docs/_sass/code.scss                               |  115 +
 docs/_sass/docs.scss                               |  238 ++
 docs/_sass/footer.scss                             |   48 +
 docs/_sass/github.scss                             |  223 ++
 docs/_sass/header.scss                             |  374 +++
 docs/_sass/layout.scss                             |   45 +
 docs/_sass/left-nav.scss                           |  109 +
 docs/_sass/right-nav.scss                          |   73 +
 docs/_sass/rouge-base16-solarized.scss             |   99 +
 docs/_sass/text.scss                               |   62 +
 docs/_sass/variables.scss                          |   33 +
 docs/assets/css/asciidoc-pygments.css              |   59 +
 docs/assets/css/docs.scss                          |   21 +
 docs/assets/css/styles.scss                        |   30 +
 docs/assets/images/apple-blob.svg                  |   16 +
 docs/assets/images/arrow-down-white.svg            |    3 +
 docs/assets/images/arrow-down.svg                  |    3 +
 docs/assets/images/background-lines.svg            |   54 +
 docs/assets/images/cancel.svg                      |   11 +
 docs/assets/images/checkmark-green.svg             |    3 +
 docs/assets/images/copy-icon.svg                   |    6 +
 docs/assets/images/cpp.svg                         |    9 +
 docs/assets/images/dev-internal-bg.jpg             |  Bin 0 -> 23014 bytes
 docs/assets/images/dotnet.svg                      |    9 +
 docs/assets/images/edition-ce.svg                  |   16 +
 docs/assets/images/edition-ee.svg                  |   25 +
 docs/assets/images/edition-ue.svg                  |   28 +
 docs/assets/images/events-nav-arrow.svg            |    3 +
 docs/assets/images/feature-easy-installation.svg   |   28 +
 docs/assets/images/feature-fast.svg                |   16 +
 docs/assets/images/feature-reliable.svg            |   25 +
 docs/assets/images/github-gray.svg                 |    3 +
 docs/assets/images/github-white.svg                |    3 +
 docs/assets/images/glowing-box.svg                 |  170 ++
 docs/assets/images/integrations/hibernate.svg      |    6 +
 docs/assets/images/integrations/kafka.svg          |    3 +
 docs/assets/images/integrations/more.svg           |   18 +
 docs/assets/images/integrations/oracle.svg         |    3 +
 docs/assets/images/integrations/osgi.svg           |   17 +
 docs/assets/images/integrations/spark.svg          |    7 +
 docs/assets/images/integrations/spring.svg         |    3 +
 docs/assets/images/java.svg                        |    9 +
 docs/assets/images/left-nav-arrow.svg              |    3 +
 docs/assets/images/lines-bg-1.svg                  |   54 +
 docs/assets/images/lines-bg-2.svg                  |   54 +
 docs/assets/images/lines-bg-3.svg                  |   54 +
 docs/assets/images/lines-bg-4.svg                  |   54 +
 docs/assets/images/menu-icon.svg                   |    3 +
 docs/assets/images/mousepad-blob.svg               |    9 +
 ...piece-of-paper-with-folded-top-right-corner.svg |  117 +
 docs/assets/images/scala.svg                       |   31 +
 docs/assets/images/search.svg                      |   15 +
 docs/assets/images/violent-blob.svg                |   28 +
 docs/assets/images/watermelon-blob.svg             |    9 +
 docs/assets/js/anchor.min.js                       |    9 +
 docs/assets/js/code-copy-to-clipboard.js           |   70 +
 docs/assets/js/code-tabs.js                        |  155 +
 docs/assets/js/docs-menu.js                        |   64 +
 docs/assets/js/index.js                            |   51 +
 docs/assets/js/page-nav.js                         |   37 +
 docs/assets/js/top-navigation.js                   |   92 +
 docs/favicon.ico                                   |  Bin 0 -> 9780 bytes
 docs/run.sh                                        |   23 +
 examples/README.md                                 |    2 +-
 examples/config/servlet/README.txt                 |    3 -
 examples/redis/redis-example.php                   |    2 -
 examples/redis/redis-example.py                    |    2 -
 modules/platforms/cpp/core/namespaces.dox          |    4 +-
 parent/pom.xml                                     |   10 +-
 676 files changed, 74260 insertions(+), 24 deletions(-)

diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index a22b7c641..5347636 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -36,7 +36,7 @@ Apache Ignite prefer to use [consensus to make decisions](http://community.apach
 
 ## Contributing Documentation
 Documentation can be contributed to
- - End-User documentation https://apacheignite.readme.io/ . Use Suggest Edits. See also [How To Document](https://cwiki.apache.org/confluence/display/IGNITE/How+to+Document).
+ - End-User documentation https://ignite.apache.org/docs/latest/ . Use Suggest Edits. See also [How To Document](https://cwiki.apache.org/confluence/display/IGNITE/How+to+Document).
  - Developer documentation, design documents, IEPs [Apache Wiki](https://cwiki.apache.org/confluence/display/IGNITE). Ask at [Dev List](https://lists.apache.org/list.html?dev@ignite.apache.org) to be added as editor.
  - Markdown files, visible at GitHub, e.g. README.md; drawings explaining Apache Ignite & product internals.
  - Javadocs for packages (package-info.java), classes, methods, etc.
diff --git a/README.txt b/README.txt
index 4d02f4c..c7a2cdf 100644
--- a/README.txt
+++ b/README.txt
@@ -18,13 +18,7 @@ The main feature set of Ignite includes:
 
 For information on how to get started with Apache Ignite please visit:
 
-    http://apacheignite.readme.io/docs/getting-started
-
-
-You can find Apache Ignite documentation here:
-
-    http://apacheignite.readme.io/docs
-
+    https://ignite.apache.org/docs/latest/
 
 Crypto Notice
 =============
@@ -49,12 +43,12 @@ and source code.
 The following provides more details on the included cryptographic software:
 
 * JDK SSL/TLS libraries used to enable secured connectivity between cluster
-nodes (https://apacheignite.readme.io/docs/ssltls).
+nodes (https://ignite.apache.org/docs/latest/security/ssl-tls).
 Oracle/OpenJDK (https://www.oracle.com/technetwork/java/javase/downloads/index.html)
 
 * JDK Java Cryptography Extensions build in encryption from the Java libraries is used
 for Transparent Data Encryption of data on disk
-(https://apacheignite.readme.io/docs/transparent-data-encryption)
+(https://ignite.apache.org/docs/latest/security/tde)
 and for AWS S3 Client Side Encryprion.
 (https://java.sun.com/javase/technologies/security/)
 
@@ -74,4 +68,4 @@ Eclipse Jetty (http://eclipse.org/jetty)
 * Apache Ignite.NET uses .NET Framework crypto APIs from standard class library
 for all security and cryptographic related code.
  .NET Classic, Windows-only (https://dotnet.microsoft.com/download)
- .NET Core  (https://dotnetfoundation.org/projects)
\ No newline at end of file
+ .NET Core  (https://dotnetfoundation.org/projects)
diff --git a/config/visor-cmd/node_startup_by_ssh.sample.ini b/config/visor-cmd/node_startup_by_ssh.sample.ini
index f1d8e01..649e0c7 100644
--- a/config/visor-cmd/node_startup_by_ssh.sample.ini
+++ b/config/visor-cmd/node_startup_by_ssh.sample.ini
@@ -15,7 +15,7 @@
 
 # ==================================================================
 # This is a sample file for Visor CMD to use with "start" command.
-# More info: https://apacheignite-tools.readme.io/docs/start-command
+# More info: https://ignite.apache.org/docs/latest/tools/visor-cmd
 # ==================================================================
 
 # Section with settings for host1:
diff --git a/docs/.gitignore b/docs/.gitignore
new file mode 100644
index 0000000..a01b89a
--- /dev/null
+++ b/docs/.gitignore
@@ -0,0 +1,5 @@
+.jekyll-cache/
+_site/
+Gemfile.lock
+.jekyll-metadata
+
diff --git a/docs/Gemfile b/docs/Gemfile
new file mode 100644
index 0000000..f471d02
--- /dev/null
+++ b/docs/Gemfile
@@ -0,0 +1,14 @@
+source "https://rubygems.org"
+
+# git_source(:github) {|repo_name| "https://github.com/#{repo_name}" }
+
+gem 'asciidoctor'
+gem 'jekyll', group: :jekyll_plugins
+gem 'wdm', '~> 0.1.1' if Gem.win_platform?
+group :jekyll_plugins do
+  gem 'jekyll-asciidoc'
+end
+#gem 'pygments.rb', '~> 1.2.1'
+gem 'thread_safe', '~> 0.3.6'
+gem 'slim', '~> 4.0.1'
+gem 'tilt', '~> 2.0.9'
diff --git a/docs/README.adoc b/docs/README.adoc
new file mode 100644
index 0000000..856b993
--- /dev/null
+++ b/docs/README.adoc
@@ -0,0 +1,212 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Apache Ignite Documentation
+:toc:
+:toc-title:
+
+== Overview
+The Apache Ignite documentation is maintained in the same repository as the code base, in the "/docs" subdirectory. The directory contains the source files, HTML templates and CSS styles.
+
+
+The Apache Ignite documentation is written in link:https://asciidoctor.org/docs/what-is-asciidoc/[asciidoc].
+The Asciidoc files are compiled into HTML pages and published to https://ignite.apache.org/docs.
+
+
+.Content of the “docs” directory
+[cols="1,4",opts="stretch"]
+|===
+| pass:[_]docs  | The directory with .adoc files and code-snippets.
+| pass:[_]config.yml | Jekyll configuration file.
+|===
+
+
+== Building the Docs Locally
+
+To build the docs locally, you can install `jekyll` and other dependencies on your machine, or you can use the Jekyll Docker image.
+
+=== Install Jekyll and Asciidoctor
+
+. Install Jekyll by following these instructions: https://jekyllrb.com/docs/installation/[window=_blank]
+. In the “/docs” directory, run the following command:
++
+[source, shell]
+----
+$ bundle
+----
++
+This should install all dependencies, including `asciidoctor`.
+. Start jekyll:
++
+[source, shell]
+----
+$ bundle exec jekyll s
+----
+The command compiles the Asciidoc files into HTML pages and starts a local webserver.
+
+Open `http://localhost:4000/docs[window=_blank]` in your browser.
+
+=== Run with Docker
+
+The following command starts jekyll in a container and downloads all dependencies. Run the command in the “/docs” directory.
+
+[source, shell]
+----
+$ docker run -v "$PWD:/srv/jekyll" -p 4000:4000 jekyll/jekyll:latest jekyll s
+----
+
+Open `http://localhost:4000/docs[window=_blank]` in your browser.
+
+== How to Contribute
+
+If you want to contribute to the documentation, add or modify the relevant page in the `docs/_docs` directory.
+This directory contains all .adoc files (which are then rendered into HTML pages and published on the website).
+
+Because we use asciidoc for documentation, consider the following points:
+
+* Get familiar with the asciidoc format: https://asciidoctor.org/docs/user-manual/. You don’t have to read the entire manual. Search through it when you want to learn how to create a numbered list, or insert an image, or use italics.
+* Please read the link:https://asciidoctor.org/docs/asciidoc-recommended-practices/[AsciiDoc Recommended Practices] and try to adhere to those when editing the .adoc source files.
+
+
+The following sections explain specific asciidoc syntax that we use.
+
+=== Table of Contents
+
+The table of contents is defined in the `_data/toc.yaml` file.
+If you want to add a new page, make sure to update the TOC.
+
+=== Changing the URL of an existing page
+
+If you rename an already published page or change the page's path in the `/_data/toc.yaml` file,
+you must configure a proper redirect from the old to the new URL in the following file of the Ignite website:
+https://github.com/apache/ignite-website/blob/master/.htaccess
+
+Reach out to documentation maintainers if you need any help with this.
+
+=== Links to other sections in the docs
+All .adoc files are located in the "docs/_docs" directory.
+Any link to the files within the directory must be relative to that directory.
+Remove the file extension (.adoc).
+
+For example:
+[source, adoc]
+----
+link:persistence/native-persistence[Native Persistence]
+----
+
+This is a link to the Native Persistence page.
+
+=== Links to external resources
+
+When referencing an external resource, make the link open in a new window by adding the `window=_blank` attribute:
+
+[source, adoc]
+----
+link:https://docs.oracle.com/javase/8/docs/technotes/guides/security/SunProviders.html#SunJSSE_Protocols[Supported protocols,window=_blank]
+----
+
+
+=== Tabs
+
+We use custom syntax to insert tabs. Tabs are used to provide code samples for different programming languages.
+
+Tabs are defined by the `tabs` block:
+```
+[tabs]
+--
+individual tabs are defined here
+--
+```
+
+Each tab is defined by the 'tab' directive:
+
+```
+tab:tab_name[]
+```
+
+where `tab_name` is the title of the tab.
+
+The content of the tab is everything between the tab title and the next tab (or the end of the block).
+
+```asciidoc
+[tabs]
+--
+tab:XML[]
+
+The content of the XML tab goes here
+
+tab:Java[]
+
+The content of the Java tab is here
+
+tab:C#/.NET[]
+
+tab:C++[unsupported]
+
+--
+```
+
+=== Callouts
+
+Use the syntax below if you need to bring the reader's attention to some details:
+
+[NOTE]
+====
+[discrete]
+=== Callout Title
+Callout Text
+====
+
+Change the callout type to `CAUTION` if you want to issue a warning:
+
+[CAUTION]
+====
+[discrete]
+=== Callout Title
+Callout Text
+====
+
+=== Code Snippets
+
+Code snippets must be taken from a compilable source code file (e.g. java, cs, js, etc).
+We use the `include` feature of asciidoc.
+Source code files are located in the `docs/_docs/code-snippets/{language}` folders.
+
+
+To add a code snippet to a page, follow these steps:
+
+* Create a file in the code snippets directory, e.g. _docs/code-snippets/java/org/apache/ignite/snippets/JavaThinClient.java
+
+* Enclose the piece of code you want to include within named tags (see https://asciidoctor.org/docs/user-manual/#by-tagged-regions). Give the tag a self-evident name.
+For example:
++
+```
+[source, java]
+----
+// tag::clientConnection[]
+ClientConfiguration cfg = new ClientConfiguration().setAddresses("127.0.0.1:10800");
+try (IgniteClient client = Ignition.startClient(cfg)) {
+    ClientCache<Integer, String> cache = client.cache("myCache");
+    // get data from the cache
+}
+// end::clientConnection[]
+----
+```
+
+* Include the tag in the adoc file:
++
+[source, adoc,subs="macros"]
+----
+\include::{javaCodeDir}/JavaThinClient.java[tag=clientConnection,indent=0]
+----
diff --git a/docs/_config.yml b/docs/_config.yml
new file mode 100644
index 0000000..0562d1a
--- /dev/null
+++ b/docs/_config.yml
@@ -0,0 +1,46 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+exclude: [guidelines.md,  "Gemfile", "Gemfile.lock", README.adoc, "_docs/code-snippets", "_docs/includes", '*.sh']
+attrs: &asciidoc_attributes
+  version: 2.9.1
+  base_url: /docs
+  stylesdir: /docs/assets/css
+  imagesdir: /docs
+  source-highlighter: rouge
+  table-stripes: even
+  javadoc_base_url: https://ignite.apache.org/releases/{version}/javadoc
+  javaCodeDir: code-snippets/java/src/main/java/org/apache/ignite/snippets
+  csharpCodeDir: code-snippets/dotnet
+  githubUrl: https://github.com/apache/ignite/tree/master
+  docSourceUrl: https://github.com/apache/ignite/tree/IGNITE-7595/docs
+collections:
+  docs:
+    permalink: /docs/:path:output_ext
+    output: true
+defaults:
+  -
+    scope:
+      path: ''
+    values:
+      layout: 'doc'
+  -
+    scope:
+      path: '_docs'
+    values:
+      toc: ignite 
+asciidoctor:
+  base_dir: _docs/ 
+  attributes: *asciidoc_attributes
+   
diff --git a/docs/_data/toc.yaml b/docs/_data/toc.yaml
new file mode 100644
index 0000000..750c1d5
--- /dev/null
+++ b/docs/_data/toc.yaml
@@ -0,0 +1,559 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+- title: Documentation Overview
+  url: index
+- title: Quick Start Guides
+  items: 
+    - title: Java
+      url: quick-start/java
+    - title: .NET/C#
+      url: quick-start/dotnet
+    - title: C++
+      url: quick-start/cpp
+    - title: Python
+      url: quick-start/python
+    - title: Node.js
+      url: quick-start/nodejs
+    - title: SQL
+      url: quick-start/sql
+    - title: PHP
+      url: quick-start/php
+    - title: REST API
+      url: quick-start/restapi
+- title: Installation
+  url: installation
+  items:
+  - title: Installing Using ZIP Archive
+    url: installation/installing-using-zip
+  - title: Installing Using Docker
+    url: installation/installing-using-docker
+  - title: Installing DEB or RPM package
+    url: installation/deb-rpm
+  - title: Kubernetes
+    items: 
+      - title: Amazon EKS 
+        url: installation/kubernetes/amazon-eks-deployment
+      - title: Azure Kubernetes Service 
+        url: installation/kubernetes/azure-deployment
+      - title: Google Kubernetes Engine
+        url: installation/kubernetes/gke-deployment
+  - title: VMWare
+    url: installation/vmware-installation
+- title: Setting Up
+  items:
+    - title: Understanding Configuration
+      url: understanding-configuration
+    - title: Setting Up
+      url: setup
+    - title: Configuring Logging
+      url: logging
+    - title: Resources Injection
+      url: resources-injection
+- title: Starting and Stopping Nodes
+  url: starting-nodes
+- title: Clustering
+  items:
+    - title: Overview
+      url: clustering/clustering
+    - title: TCP/IP Discovery
+      url: clustering/tcp-ip-discovery
+    - title: ZooKeeper Discovery
+      url: clustering/zookeeper-discovery
+    - title: Discovery in the Cloud
+      url: clustering/discovery-in-the-cloud
+    - title: Network Configuration
+      url: clustering/network-configuration
+    - title: Connecting Client Nodes 
+      url: clustering/connect-client-nodes
+    - title: Baseline Topology
+      url: clustering/baseline-topology
+    - title: Running Client Nodes Behind NAT
+      url: clustering/running-client-nodes-behind-nat
+- title: Thin Clients
+  items:
+    - title: Thin Clients Overview
+      url: thin-clients/getting-started-with-thin-clients
+    - title: Java Thin Client
+      url: thin-clients/java-thin-client
+    - title: .NET Thin Client
+      url: thin-clients/dotnet-thin-client
+    - title: C++ Thin Client
+      url: thin-clients/cpp-thin-client
+    - title: Python Thin Client
+      url: thin-clients/python-thin-client
+    - title: PHP Thin Client
+      url: thin-clients/php-thin-client
+    - title: Node.js Thin Client
+      url: thin-clients/nodejs-thin-client
+    - title: Binary Client Protocol
+      items:
+        - title: Binary Client Protocol
+          url: binary-client-protocol/binary-client-protocol
+        - title: Data Format
+          url: binary-client-protocol/data-format
+        - title: Key-Value Queries
+          url: binary-client-protocol/key-value-queries
+        - title: SQL and Scan Queries
+          url: binary-client-protocol/sql-and-scan-queries
+        - title: Binary Types Metadata
+          url: binary-client-protocol/binary-type-metadata
+        - title: Cache Configuration
+          url: binary-client-protocol/cache-configuration
+- title: Data Modeling
+  items: 
+    - title: Introduction
+      url: data-modeling/data-modeling
+    - title: Data Partitioning
+      url: data-modeling/data-partitioning
+    - title: Affinity Colocation 
+      url: data-modeling/affinity-collocation
+    - title: Binary Marshaller
+      url: data-modeling/binary-marshaller
+- title: Configuring Memory 
+  items:
+    - title: Memory Architecture
+      url: memory-architecture
+    - title: Configuring Data Regions
+      url: memory-configuration/data-regions
+    - title: Eviction Policies
+      url: memory-configuration/eviction-policies        
+- title: Configuring Persistence
+  items:
+    - title: Ignite Persistence
+      url: persistence/native-persistence
+    - title: External Storage
+      url: persistence/external-storage
+    - title: Swapping
+      url: persistence/swap
+    - title: Implementing Custom Cache Store
+      url: persistence/custom-cache-store
+    - title: Cluster Snapshots
+      url: persistence/snapshots
+    - title: Disk Compression
+      url: persistence/disk-compression
+    - title: Tuning Persistence
+      url: persistence/persistence-tuning
+- title: Configuring Caches
+  items:
+    - title: Cache Configuration 
+      url: configuring-caches/configuration-overview 
+    - title: Configuring Partition Backups
+      url: configuring-caches/configuring-backups
+    - title: Partition Loss Policy
+      url: configuring-caches/partition-loss-policy
+    - title: Atomicity Modes
+      url: configuring-caches/atomicity-modes
+    - title: Expiry Policy
+      url: configuring-caches/expiry-policies
+    - title: On-Heap Caching
+      url: configuring-caches/on-heap-caching
+    - title: Cache Groups 
+      url: configuring-caches/cache-groups
+    - title: Near Caches
+      url: configuring-caches/near-cache
+- title: Data Rebalancing
+  url: data-rebalancing 
+- title: Data Streaming
+  url: data-streaming
+- title: Using Key-Value API
+  items:
+    - title: Basic Cache Operations 
+      url: key-value-api/basic-cache-operations
+    - title: Working with Binary Objects
+      url: key-value-api/binary-objects
+    - title: Using Scan Queries
+      url: key-value-api/using-scan-queries
+    - title: Read Repair
+      url: read-repair
+- title: Performing Transactions
+  url: key-value-api/transactions
+- title: Working with SQL
+  items:
+    - title: Introduction
+      url: SQL/sql-introduction
+    - title: Understanding Schemas
+      url: SQL/schemas
+    - title: Defining Indexes
+      url: SQL/indexes
+    - title: Using SQL API
+      url: SQL/sql-api
+    - title: Distributed Joins
+      url: SQL/distributed-joins
+    - title: SQL Transactions
+      url: SQL/sql-transactions
+    - title: Custom SQL Functions
+      url: SQL/custom-sql-func
+    - title: JDBC Driver
+      url: SQL/JDBC/jdbc-driver
+    - title: JDBC Client Driver
+      url: SQL/JDBC/jdbc-client-driver
+    - title: ODBC Driver
+      items:
+        - title: ODBC Driver
+          url: SQL/ODBC/odbc-driver
+        - title: Connection String and DSN
+          url:  /SQL/ODBC/connection-string-dsn
+        - title: Querying and Modifying Data
+          url: SQL/ODBC/querying-modifying-data
+        - title: Specification
+          url: SQL/ODBC/specification
+        - title: Data Types
+          url: SQL/ODBC/data-types
+        - title: Error Codes
+          url: SQL/ODBC/error-codes
+    - title: Multiversion Concurrency Control
+      url: transactions/mvcc
+- title: SQL Reference
+  url: sql-reference/sql-reference-overview
+  items:
+    - title: SQL Conformance
+      url: sql-reference/sql-conformance
+    - title: Data Definition Language (DDL)
+      url: sql-reference/ddl
+    - title: Data Manipulation Language (DML)
+      url: sql-reference/dml
+    - title: Transactions
+      url: sql-reference/transactions
+    - title: Operational Commands
+      url: sql-reference/operational-commands
+    - title: Aggregate functions
+      url: sql-reference/aggregate-functions
+    - title: Numeric Functions
+      url: sql-reference/numeric-functions
+    - title: String Functions
+      url: sql-reference/string-functions
+    - title: Date and Time Functions
+      url: sql-reference/date-time-functions
+    - title: System Functions
+      url: sql-reference/system-functions
+    - title: Data Types
+      url: sql-reference/data-types
+- title: Distributed Computing
+  items:
+    - title: Distributed Computing API
+      url: distributed-computing/distributed-computing
+    - title: Cluster Groups
+      url: distributed-computing/cluster-groups
+    - title: Executor Service
+      url: distributed-computing/executor-service
+    - title: MapReduce API
+      url: distributed-computing/map-reduce
+    - title: Load Balancing
+      url: distributed-computing/load-balancing
+    - title: Fault Tolerance
+      url: distributed-computing/fault-tolerance
+    - title: Job Scheduling
+      url: distributed-computing/job-scheduling
+    - title: Colocating Computations with Data
+      url: distributed-computing/collocated-computations
+- title: Code Deployment
+  items:
+    - title: Deploying User Code
+      url: code-deployment/deploying-user-code
+    - title: Peer Class Loading
+      url: code-deployment/peer-class-loading
+- title: Machine Learning
+  items:
+    - title: Machine Learning
+      url: machine-learning/machine-learning
+    - title: Partition Based Dataset
+      url: machine-learning/partition-based-dataset
+    - title: Updating Trained Models
+      url: machine-learning/updating-trained-models
+    - title: Binary Classification
+      items:
+        - title: Introduction
+          url: machine-learning/binary-classification/introduction
+        - title: Linear SVM (Support Vector Machine)
+          url: machine-learning/binary-classification/linear-svm
+        - title: Decision Trees
+          url: machine-learning/binary-classification/decision-trees
+        - title: Multilayer Perceptron
+          url: machine-learning/binary-classification/multilayer-perceptron
+        - title: Logistic Regression
+          url: machine-learning/binary-classification/logistic-regression
+        - title: k-NN Classification
+          url: machine-learning/binary-classification/knn-classification
+        - title: ANN (Approximate Nearest Neighbor)
+          url: machine-learning/binary-classification/ann
+        - title: Naive Bayes
+          url: machine-learning/binary-classification/naive-bayes
+    - title: Regression
+      items:
+        - title: Introduction
+          url: machine-learning/regression/introduction
+        - title: Linear Regression
+          url: machine-learning/regression/linear-regression
+        - title: Decision Trees Regression
+          url: machine-learning/regression/decision-trees-regression
+        - title: k-NN Regression
+          url: machine-learning/regression/knn-regression
+    - title: Clustering
+      items:
+        - title: Introduction
+          url: machine-learning/clustering/introduction
+        - title: K-Means Clustering
+          url: machine-learning/clustering/k-means-clustering
+        - title: Gaussian mixture (GMM)
+          url: machine-learning/clustering/gaussian-mixture
+    - title: Preprocessing
+      url: machine-learning/preprocessing
+    - title: Model Selection
+      items:
+        - title: Introduction
+          url: machine-learning/model-selection/introduction
+        - title: Evaluator
+          url: machine-learning/model-selection/evaluator
+        - title: Split the dataset on test and train datasets
+          url: machine-learning/model-selection/split-the-dataset-on-test-and-train-datasets
+        - title: Hyper-parameter tuning
+          url: machine-learning/model-selection/hyper-parameter-tuning
+        - title: Pipeline API
+          url: machine-learning/model-selection/pipeline-api
+    - title: Multiclass Classification
+      url: machine-learning/multiclass-classification
+    - title: Ensemble Methods
+      items:
+        - title: Introduction
+          url: machine-learning/ensemble-methods/introduction
+        - title: Stacking
+          url: machine-learning/ensemble-methods/stacking
+        - title: Bagging
+          url: machine-learning/ensemble-methods/baggin
+        - title: Random Forest
+          url: machine-learning/ensemble-methods/random-forest
+        - title: Gradient Boosting
+          url: machine-learning/ensemble-methods/gradient-boosting
+    - title: Recommendation Systems
+      url: machine-learning/recommendation-systems
+    - title: Importing Model
+      items:
+        - title: Introduction
+          url: machine-learning/importing-model/introduction
+        - title: Import Model from XGBoost
+          url: machine-learning/importing-model/model-import-from-gxboost
+        - title: Import Model from Apache Spark
+          url: machine-learning/importing-model/model-import-from-apache-spark
+- title: Using Continuous Queries
+  url: key-value-api/continuous-queries
+- title: Using Ignite Services
+  url: services/services
+- title: Using Ignite Messaging
+  url: messaging
+- title: Distributed Data Structures
+  items:
+    - title: Queue and Set
+      url: data-structures/queue-and-set
+    - title: Atomic Types 
+      url: data-structures/atomic-types
+    - title: CountDownLatch 
+      url: data-structures/countdownlatch
+    - title: Atomic Sequence 
+      url: data-structures/atomic-sequence
+    - title:  Semaphore 
+      url: data-structures/semaphore
+    - title: ID Generator
+      url: data-structures/id-generator
+- title: Distributed Locks
+  url: distributed-locks
+- title: REST API
+  url: restapi
+- title: .NET Specific
+  items:
+    - title: Configuration Options
+      url: net-specific/net-configuration-options
+    - title: Deployment Options
+      url: net-specific/net-deployment-options
+    - title: Standalone Nodes
+      url: net-specific/net-standalone-nodes
+    - title: Logging
+      url: net-specific/net-logging
+    - title: LINQ
+      url: net-specific/net-linq
+    - title: Java Services Execution
+      url: net-specific/net-java-services-execution
+    - title: .NET Platform Cache
+      url: net-specific/net-platform-cache
+    - title: Plugins
+      url: net-specific/net-plugins
+    - title: Serialization
+      url: net-specific/net-serialization
+    - title: Cross-Platform Support
+      url: net-specific/net-cross-platform-support
+    - title: Platform Interoperability
+      url: net-specific/net-platform-interoperability
+    - title: Remote Assembly Loading
+      url: net-specific/net-remote-assembly-loading
+    - title: Troubleshooting
+      url: net-specific/net-troubleshooting
+    - title: Integrations
+      items:
+        - title: ASP.NET Output Caching
+          url: net-specific/asp-net-output-caching
+        - title: ASP.NET Session State Caching
+          url: net-specific/asp-net-session-state-caching
+        - title: Entity Framework 2nd Level Cache
+          url: net-specific/net-entity-framework-cache
+- title: C++ Specific
+  items:
+    - title: Serialization
+      url: cpp-specific/cpp-serialization
+    - title: Platform Interoperability
+      url: cpp-specific/cpp-platform-interoperability
+    - title: Objects Lifetime
+      url: cpp-specific/cpp-objects-lifetime
+- title: Monitoring
+  items:
+    - title: Introduction
+      url: monitoring-metrics/intro
+    - title: Cluster ID and Tag
+      url: monitoring-metrics/cluster-id
+    - title: Cluster States
+      url: monitoring-metrics/cluster-states
+    - title: Metrics
+      items: 
+        - title: Configuring Metrics
+          url: monitoring-metrics/configuring-metrics
+        - title: JMX Metrics
+          url: monitoring-metrics/metrics
+    - title: New Metrics System 
+      items:
+        - title: Introduction 
+          url: monitoring-metrics/new-metrics-system
+        - title: Metrics
+          url: monitoring-metrics/new-metrics
+    - title: System Views
+      url: monitoring-metrics/system-views
+    - title: Tracing
+      url: monitoring-metrics/tracing
+- title: Working with Events
+  items:
+    - title: Enabling and Listening to Events
+      url: events/listening-to-events
+    - title: Events
+      url: events/events
+- title: Tools
+  items:
+    - title: Control Script
+      url: tools/control-script
+    - title: Visor CMD
+      url: tools/visor-cmd
+    - title: GridGain Control Center
+      url: tools/gg-control-center
+    - title: SQLLine
+      url: tools/sqlline
+    - title: Tableau
+      url: tools/tableau
+    - title: Informatica
+      url: tools/informatica
+    - title: Pentaho
+      url: tools/pentaho
+- title: Security
+  url: security
+  items: 
+    - title: Authentication
+      url: security/authentication
+    - title: SSL/TLS 
+      url: security/ssl-tls
+    - title: Transparent Data Encryption
+      items:
+        - title: Introduction
+          url: security/tde
+        - title: Master key rotation
+          url: security/master-key-rotation
+    - title: Sandbox
+      url: security/sandbox
+- title: Extensions and Integrations
+  items:
+    - title: Spring
+      items:
+        - title: Spring Boot
+          url: extensions-and-integrations/spring/spring-boot
+        - title: Spring Data
+          url: extensions-and-integrations/spring/spring-data
+        - title: Spring Caching
+          url: extensions-and-integrations/spring/spring-caching
+    - title: Ignite for Spark
+      items:
+        - title: Overview
+          url: extensions-and-integrations/ignite-for-spark/overview
+        - title: IgniteContext and IgniteRDD
+          url:  extensions-and-integrations/ignite-for-spark/ignitecontext-and-rdd
+        - title: Ignite DataFrame
+          url: extensions-and-integrations/ignite-for-spark/ignite-dataframe
+        - title: Installation
+          url: extensions-and-integrations/ignite-for-spark/installation
+        - title: Test Ignite with Spark-shell
+          url: extensions-and-integrations/ignite-for-spark/spark-shell
+        - title: Troubleshooting
+          url: extensions-and-integrations/ignite-for-spark/troubleshooting
+    - title: Hibernate L2 Cache
+      url: extensions-and-integrations/hibernate-l2-cache
+    - title: MyBatis L2 Cache
+      url: extensions-and-integrations/mybatis-l2-cache
+    - title: Streaming
+      items:
+        - title: Kafka Streamer
+          url: extensions-and-integrations/streaming/kafka-streamer
+        - title: Camel Streamer
+          url: extensions-and-integrations/streaming/camel-streamer
+        - title: Flink Streamer
+          url: extensions-and-integrations/streaming/flink-streamer
+        - title: Flume Sink
+          url: extensions-and-integrations/streaming/flume-sink
+        - title: JMS Streamer
+          url: extensions-and-integrations/streaming/jms-streamer
+        - title: MQTT Streamer
+          url: extensions-and-integrations/streaming/mqtt-streamer
+        - title: RocketMQ Streamer
+          url: extensions-and-integrations/streaming/rocketmq-streamer
+        - title: Storm Streamer
+          url: extensions-and-integrations/streaming/storm-streamer
+        - title: ZeroMQ Streamer
+          url: extensions-and-integrations/streaming/zeromq-streamer
+        - title: Twitter Streamer
+          url: extensions-and-integrations/streaming/twitter-streamer
+    - title: Cassandra Integration
+      items:
+        - title: Overview
+          url: extensions-and-integrations/cassandra/overview
+        - title: Configuration
+          url: extensions-and-integrations/cassandra/configuration
+        - title: Usage Examples
+          url: extensions-and-integrations/cassandra/usage-examples
+        - title: DDL Generator
+          url: extensions-and-integrations/cassandra/ddl-generator
+    - title: PHP PDO
+      url: extensions-and-integrations/php-pdo
+- title: Plugins
+  url: plugins
+- title: Performance and Troubleshooting
+  items:
+    - title: General Performance Tips
+      url: /perf-and-troubleshooting/general-perf-tips
+    - title: Memory and JVM Tuning
+      url: /perf-and-troubleshooting/memory-tuning
+    - title: Persistence Tuning
+      url: /perf-and-troubleshooting/persistence-tuning
+    - title: SQL Tuning
+      url: /perf-and-troubleshooting/sql-tuning
+    - title: Thread Pools Tuning
+      url: /perf-and-troubleshooting/thread-pools-tuning
+    - title: Troubleshooting and Debugging
+      url: /perf-and-troubleshooting/troubleshooting
+    - title: Handling Exceptions
+      url: /perf-and-troubleshooting/handling-exceptions
+    - title: Benchmarking With Yardstick
+      url: /perf-and-troubleshooting/yardstick-benchmarking
diff --git a/docs/_docs/SQL/JDBC/error-codes.adoc b/docs/_docs/SQL/JDBC/error-codes.adoc
new file mode 100644
index 0000000..f2e1a33
--- /dev/null
+++ b/docs/_docs/SQL/JDBC/error-codes.adoc
@@ -0,0 +1,81 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Error Codes
+
+The Ignite JDBC drivers return error codes via the `java.sql.SQLException` class to facilitate exception handling on the application side. To get an error code, use the `java.sql.SQLException.getSQLState()` method. It returns a string containing the ANSI SQLSTATE error code:
+
+[source,java]
+----
+include::{javaCodeDir}/JDBCThinDriver.java[tags=error-codes, indent=0]
+----
+
+
+The table below lists all the link:https://en.wikipedia.org/wiki/SQLSTATE[ANSI SQLSTATE] error codes currently supported by Ignite. Note that the list may be extended in the future.
+
+[width="100%",cols="20%,80%"]
+|=======================================================================
+|Code |Description
+
+|0700B|Conversion failure (for example, a string expression cannot be parsed as a number or a date).
+
+|0700E|Invalid transaction isolation level.
+
+|08001|The driver failed to open a connection to the cluster.
+
+|08003|The connection is in the closed state (this happened unexpectedly).
+
+|08004|The connection was rejected by the cluster.
+
+|08006|I/O error during communication.
+
+|22004|Null value not allowed.
+
+|22023|Unsupported parameter type.
+
+|23000|Data integrity constraint violation.
+
+|24000|Invalid result set state.
+
+|0A000|Requested operation is not supported.
+
+|40001|Concurrent update conflict. See link:transactions/mvcc#concurrent-updates[Concurrent Updates].
+
+|42000|Query parsing exception.
+
+|50000|Ignite internal error.
+The code is not defined by ANSI and refers to an Ignite specific error. Refer to the `java.sql.SQLException` error message for more information.
+|=======================================================================
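+
+For illustration, a handler that reacts to some of the codes above might look like the sketch below (the connection URL and the `Person` table are assumptions, and the standard `java.sql` imports are omitted):
+
+[source,java]
+----
+try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1");
+     Statement stmt = conn.createStatement()) {
+    stmt.executeQuery("SELECT name FROM Person");
+} catch (SQLException e) {
+    switch (e.getSQLState()) {
+        case "08001":
+            System.err.println("Failed to connect to the cluster: " + e.getMessage());
+            break;
+        case "42000":
+            System.err.println("The query could not be parsed: " + e.getMessage());
+            break;
+        default:
+            System.err.println("SQLSTATE " + e.getSQLState() + ": " + e.getMessage());
+    }
+}
+----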
+
diff --git a/docs/_docs/SQL/JDBC/jdbc-client-driver.adoc b/docs/_docs/SQL/JDBC/jdbc-client-driver.adoc
new file mode 100644
index 0000000..ee2ffeb
--- /dev/null
+++ b/docs/_docs/SQL/JDBC/jdbc-client-driver.adoc
@@ -0,0 +1,297 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= JDBC Client Driver
+:javaFile: {javaCodeDir}/JDBCClientDriver.java
+
+JDBC Client Driver interacts with the cluster by means of a client node.
+
+== JDBC Client Driver
+
+The JDBC Client Driver connects to the cluster by using a client node connection. You must provide a complete Spring XML configuration as part of the JDBC connection string, and copy all the JAR files mentioned below to the classpath of your application or SQL tool:
+
+- All the JARs under `{IGNITE_HOME}\libs` directory.
+- All the JARs under `{IGNITE_HOME}\ignite-indexing` and `{IGNITE_HOME}\ignite-spring` directories.
+
+The driver itself is more robust, and might not support the latest SQL features of Ignite. However, because it uses the client node connection underneath, it can execute and distribute queries, and aggregate their results directly from the application side.
+
+The JDBC connection URL has the following pattern:
+
+[source,shell]
+----
+jdbc:ignite:cfg://[<params>@]<config_url>
+----
+
+Where:
+
+- `<config_url>` is required and must represent a valid URL that points to the configuration file for the client node. The JDBC Client Driver starts this node internally when it establishes a connection with the cluster.
+- `<params>` is optional and has the following format:
+
+[source,text]
+----
+param1=value1:param2=value2:...:paramN=valueN
+----
+
+
+The name of the driver's class is `org.apache.ignite.IgniteJdbcDriver`. For example, here's how to open a JDBC connection to the Ignite cluster:
+
+[source,java]
+----
+include::{javaFile}[tags=register, indent=0]
+----
+
+[NOTE]
+====
+[discrete]
+=== Securing Connection
+
+For information on how to secure the JDBC client driver connection, you can refer to the link:security/ssl-tls[Security documentation].
+====
+
+=== Supported Parameters
+
+[width="100%",cols="20%,60%,20%"]
+|=======================================================================
+|Parameter |Description |Default Value
+
+|`cache`
+
+|Cache name. If it is not defined, then the default cache will be used. Note that the cache name is case sensitive.
+| None.
+
+|`nodeId`
+
+|ID of the node where the query will be executed. Useful for querying through local caches.
+| None.
+
+|`local`
+
+|The query will be executed only on the local node. Use this parameter together with the `nodeId` parameter to limit the data set to the specified node.
+
+|`false`
+
+|`collocated`
+
+|Flag that is used for optimization purposes. Whenever Ignite executes a distributed query, it sends sub-queries to individual cluster members. If you know in advance that the elements of your query selection are colocated together on the same node, Ignite can make significant performance and network optimizations.
+
+|`false`
+
+|`distributedJoins`
+
+|Allows use of distributed joins for non-colocated data.
+
+|`false`
+
+|`streaming`
+
+|Turns on bulk data load mode via INSERT statements for this connection. Refer to the <<Streaming Mode>> section for more details.
+
+|`false`
+
+|`streamingAllowOverwrite`
+
+|Tells Ignite to overwrite values for existing keys on duplication instead of skipping them. Refer to the <<Streaming Mode>> section for more details.
+
+|`false`
+
+|`streamingFlushFrequency`
+
+|Timeout, in milliseconds, that data streamer should use to flush data. By default, the data is flushed on connection close. Refer to the <<Streaming Mode>> section for more details.
+
+|`0`
+
+|`streamingPerNodeBufferSize`
+
+|Data streamer's per node buffer size. Refer to the <<Streaming Mode>> section for more details.
+
+|`1024`
+
+|`streamingPerNodeParallelOperations`
+
+|Number of parallel operations per node for the data streamer. Refer to the <<Streaming Mode>> section for more details.
+
+|`16`
+
+|`transactionsAllowed`
+
+|Presently, ACID transactions are supported only at the key-value API level. At the SQL level, Ignite supports atomic, but not transactional, consistency.
+
+This means that the JDBC driver might throw a `Transactions are not supported` exception if you try to use this functionality.
+
+However, if you need transactional syntax to be accepted (even without transactional semantics), for example because some BI tools force transactional behavior, set this parameter to `true` to prevent exceptions from being thrown.
+
+|`false`
+
+|`multipleStatementsAllowed`
+
+|The JDBC driver will be able to process multiple SQL statements at a time, returning multiple `ResultSet` objects. If the parameter is disabled, a query with multiple statements fails.
+
+|`false`
+
+|`lazy`
+
+|Lazy query execution.
+
+By default, Ignite attempts to fetch the whole query result set to memory and send it to the client. For small and medium result sets, this provides optimal performance and minimizes the duration of internal database locks, thus increasing concurrency.
+
+However, if the result set is too big to fit in the available memory, it can lead to excessive GC pauses and even `OutOfMemoryError` errors. Use this flag to tell Ignite to fetch the result set lazily, thus minimizing memory consumption at the cost of a moderate performance hit.
+
+|`false`
+
+|`skipReducerOnUpdate`
+
+|Enables server side update feature.
+
+When Ignite executes a DML operation, it first fetches all the affected intermediate rows to the query initiator (also known as the reducer) for analysis, and then prepares batches of updated values to be sent to remote nodes.
+
+This approach might impact performance and saturate the network if a DML operation has to move many entries over it.
+
+Use this flag as a hint for Ignite to perform all intermediate row analysis and updates "in-place" on the corresponding remote data nodes.
+
+Defaults to `false`, meaning that intermediate results will be fetched to the query initiator first.
+|`false`
+
+
+|=======================================================================
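+
+For example, a connection string that targets a specific cache and enables distributed joins might look like the following sketch (the cache name and the configuration file path are assumptions):
+
+[source,java]
+----
+// Register the JDBC Client Driver.
+Class.forName("org.apache.ignite.IgniteJdbcDriver");
+
+// Parameters are separated by colons and precede the configuration URL.
+Connection conn = DriverManager.getConnection(
+    "jdbc:ignite:cfg://cache=myCache:distributedJoins=true@file:///etc/config/ignite-jdbc.xml");
+----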
+
+[NOTE]
+====
+[discrete]
+=== Cross-Cache Queries
+
+The cache to which the driver is connected is treated as the default schema. To query across multiple caches, you can use Cross-Cache queries.
+====
+
+=== Streaming Mode
+
+You can add data to the cluster in streaming (bulk) mode using the JDBC driver. In this mode, the driver instantiates `IgniteDataStreamer` internally and feeds data to it. To activate this mode, add the `streaming` parameter set to `true` to the JDBC connection string:
+
+[source,java]
+----
+// Register JDBC driver.
+Class.forName("org.apache.ignite.IgniteJdbcDriver");
+
+// Opening connection in the streaming mode.
+Connection conn = DriverManager.getConnection("jdbc:ignite:cfg://streaming=true@file:///etc/config/ignite-jdbc.xml");
+----
+
+Presently, streaming mode is supported only for INSERT operations. This is useful when you want to preload data into a cache quickly. The JDBC driver defines multiple connection parameters that affect the behavior of the streaming mode. These parameters are listed in the parameters table above.
+
+[WARNING]
+====
+[discrete]
+=== Cache Name
+
+Make sure you specify a target cache for streaming as an argument to the `cache=` parameter in the JDBC connection string. If a cache is not specified or does not match the table used in streaming DML statements, updates will be ignored.
+====
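+
+Putting it together, a bulk-load session might look like the following sketch (the cache name, the `Person` table, and the configuration file path are assumptions):
+
+[source,java]
+----
+// Register the JDBC Client Driver.
+Class.forName("org.apache.ignite.IgniteJdbcDriver");
+
+// Open a connection in streaming mode and point it at the cache that backs the Person table.
+try (Connection conn = DriverManager.getConnection(
+        "jdbc:ignite:cfg://streaming=true:cache=myCache@file:///etc/config/ignite-jdbc.xml")) {
+
+    PreparedStatement stmt = conn.prepareStatement(
+        "INSERT INTO Person(_key, name, age) VALUES(CAST(? as BIGINT), ?, ?)");
+
+    for (int i = 1; i <= 100_000; i++) {
+        stmt.setInt(1, i);
+        stmt.setString(2, "Person " + i);
+        stmt.setInt(3, 20 + i % 50);
+        stmt.executeUpdate();
+    }
+}
+// The buffered data is flushed to the cluster when the connection is closed
+// (or periodically, if streamingFlushFrequency is set).
+----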
+
+The parameters cover almost all of the settings of a general `IgniteDataStreamer` and allow you to tune the streamer according to your needs. Please refer to the link:data-streaming[Data Streaming] section for more information on how to configure the streamer.
+
+[NOTE]
+====
+[discrete]
+=== Time Based Flushing
+
+By default, the data is flushed when either a connection is closed or `streamingPerNodeBufferSize` is met. If you need to flush the data more frequently, adjust the `streamingFlushFrequency` parameter.
+====
+
+[source,java]
+----
+include::{javaFile}[tags=time-based-flushing, indent=0]
+----
+
+== Example
+
+To start processing the data located in the cluster, you need to create a JDBC `Connection` object using one of the methods below:
+
+[source,java]
+----
+// Register JDBC driver.
+Class.forName("org.apache.ignite.IgniteJdbcDriver");
+
+// Open JDBC connection (cache name is not specified, which means that we use default cache).
+Connection conn = DriverManager.getConnection("jdbc:ignite:cfg://file:///etc/config/ignite-jdbc.xml");
+----
+
+Right after that you can execute your SQL `SELECT` queries:
+
+[source,java]
+----
+// Query names of all people.
+ResultSet rs = conn.createStatement().executeQuery("select name from Person");
+
+while (rs.next()) {
+    String name = rs.getString(1);
+}
+
+----
+
+[source,java]
+----
+// Query people with specific age using prepared statement.
+PreparedStatement stmt = conn.prepareStatement("select name, age from Person where age = ?");
+
+stmt.setInt(1, 30);
+
+ResultSet rs = stmt.executeQuery();
+
+while (rs.next()) {
+    String name = rs.getString("name");
+    int age = rs.getInt("age");
+}
+----
+
+You can use DML statements to modify the data.
+
+=== INSERT
+[source,java]
+----
+// Insert a Person with a Long key.
+PreparedStatement stmt = conn.prepareStatement("INSERT INTO Person(_key, name, age) VALUES(CAST(? as BIGINT), ?, ?)");
+
+stmt.setInt(1, 1);
+stmt.setString(2, "John Smith");
+stmt.setInt(3, 25);
+
+stmt.execute();
+----
+
+=== MERGE
+[source,java]
+----
+// Merge a Person with a Long key.
+PreparedStatement stmt = conn.prepareStatement("MERGE INTO Person(_key, name, age) VALUES(CAST(? as BIGINT), ?, ?)");
+
+stmt.setInt(1, 1);
+stmt.setString(2, "John Smith");
+stmt.setInt(3, 25);
+
+stmt.executeUpdate();
+----
+
+=== UPDATE
+
+[source,java]
+----
+// Update a Person.
+conn.createStatement().
+  executeUpdate("UPDATE Person SET age = age + 1 WHERE age = 25");
+----
+
+=== DELETE
+
+[source,java]
+----
+conn.createStatement().execute("DELETE FROM Person WHERE age = 25");
+----
diff --git a/docs/_docs/SQL/JDBC/jdbc-driver.adoc b/docs/_docs/SQL/JDBC/jdbc-driver.adoc
new file mode 100644
index 0000000..09438c1
--- /dev/null
+++ b/docs/_docs/SQL/JDBC/jdbc-driver.adoc
@@ -0,0 +1,649 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= JDBC Driver
+:javaFile: {javaCodeDir}/JDBCThinDriver.java
+
+Ignite is shipped with JDBC drivers that allow processing of distributed data using standard SQL statements like `SELECT`, `INSERT`, `UPDATE` or `DELETE` directly from the JDBC side.
+
+Presently, Ignite supports two drivers: the lightweight and easy-to-use JDBC Thin Driver described in this document, and the link:SQL/JDBC/jdbc-client-driver[JDBC Client Driver], which interacts with the cluster by means of a client node.
+
+== JDBC Thin Driver
+
+The JDBC Thin driver is the default, lightweight driver provided by Ignite. To start using the driver, just add `ignite-core-{version}.jar` to your application's classpath.
+
+The driver connects to one of the cluster nodes and forwards all the queries to it for final execution. The node handles the query distribution and result aggregation, and then sends the result back to the client application.
+
+The JDBC connection string may be formatted with one of two patterns: `URL query` or `semicolon`:
+
+
+
+.Connection String Syntax
+[source,text]
+----
+// URL query pattern
+jdbc:ignite:thin://<hostAndPortRange0>[,<hostAndPortRange1>]...[,<hostAndPortRangeN>][/schema][?<params>]
+
+hostAndPortRange := host[:port_from[..port_to]]
+
+params := param1=value1[&param2=value2]...[&paramN=valueN]
+
+// Semicolon pattern
+jdbc:ignite:thin://<hostAndPortRange0>[,<hostAndPortRange1>]...[,<hostAndPortRangeN>][;schema=<schema_name>][;param1=value1]...[;paramN=valueN]
+----
+
+
+- `host` is required and defines the host of the cluster node to connect to.
+- `port_from` is the beginning of the port range to use to open the connection. 10800 is used by default if this parameter is omitted.
+- `port_to` is optional. It is set to the `port_from` value by default if this parameter is omitted.
+- `schema` is the schema name to access. PUBLIC is used by default. This name should correspond to the SQL ANSI-99 standard. Non-quoted identifiers are not case sensitive. Quoted identifiers are case sensitive. When the semicolon format is used, the schema may be defined as a parameter named `schema`.
+- `<params>` are optional.
+
+The name of the driver's class is `org.apache.ignite.IgniteJdbcThinDriver`. For instance, this is how you can open a JDBC connection to the cluster node listening on IP address 192.168.0.50:
+
+[source,java]
+----
+include::{javaFile}[tags=get-connection, indent=0]
+----
+
+
+[NOTE]
+====
+[discrete]
+=== Put the JDBC URL in quotes when connecting from bash
+
+Make sure to put the connection URL in double quotes (" ") when connecting from a bash environment, for example: `"jdbc:ignite:thin://[address]:[port];user=[username];password=[password]"`
+====
+
+=== Parameters
+The following table lists all the parameters that are supported by the JDBC connection string:
+
+[width="100%",cols="30%,40%,30%"]
+|=======================================================================
+|Parameter |Description |Default Value
+
+|`user`
+|Username for the SQL Connection. This parameter is required if authentication is enabled on the server.
+See the link:security/authentication[Authentication] and link:sql-reference/ddl#create-user[CREATE user] documentation for more details.
+|`ignite`
+
+|`password`
+|Password for SQL Connection. Required if authentication is enabled on the server.
+See the link:security/authentication[Authentication] and link:sql-reference/ddl#create-user[CREATE user] documentation for more details.
+|`ignite`
+
+|`distributedJoins`
+|Whether to execute distributed joins in link:SQL/distributed-joins#non-colocated-joins[non-colocated mode].
+|false
+
+|`enforceJoinOrder`
+
+|Whether to enforce join order of tables in the query. If set to `true`, the query optimizer does not reorder tables in the join.
+
+|`false`
+
+|`collocated`
+
+| Set this parameter to `true` if your SQL statement includes a GROUP BY clause that groups the results by either primary
+  or affinity key. Whenever Ignite executes a distributed query, it sends sub-queries to individual cluster members. If
+  you know in advance that the elements of your query selection are colocated together on the same node and you group by
+  a primary or affinity key, then Ignite makes significant performance and network optimizations by grouping data locally
+   on each node participating in the query.
+|`false`
+
+|`replicatedOnly`
+
+|Whether the query contains only replicated tables. This is a hint for potentially more effective execution.
+
+|`false`
+
+|`autoCloseServerCursor`
+|Whether to close server-side cursors automatically when the last piece of a result set is retrieved. When this property is enabled, calling `ResultSet.close()` does not require a network call, which could improve performance. However, if the server-side cursor is already closed, you may get an exception when trying to call `ResultSet.getMetaData()`. This is why it defaults to `false`.
+|`false`
+
+| `partitionAwareness`
+| Enables xref:partition-awareness[] mode. In this mode, the driver tries to determine the nodes where the data that is being queried is located and send the query to these nodes.
+| `false`
+
+|`partitionAwarenessSQLCacheSize` [[partitionAwarenessSQLCacheSize]]
+| The number of distinct SQL queries that the driver keeps locally for optimization. When a query is executed for the first time, the driver receives the partition distribution for the table that is being queried and saves it for future use locally. When you query this table next time, the driver uses the partition distribution to determine where the data being queried is located to send the query to the right nodes. This local storage with SQL queries invalidates when the cluster topology changes. The optimal value for this parameter should equal the number of distinct SQL queries you are going to perform.
+| 1000
+
+|`partitionAwarenessPartitionDistributionsCacheSize` [[partitionAwarenessPartitionDistributionsCacheSize]]
+| The number of distinct objects that represent partition distribution that the driver keeps locally for optimization. See the description of the previous parameter for details. This local storage with partition distribution objects invalidates when the cluster topology changes. The optimal value for this parameter should equal the number of distinct tables (link:configuring-caches/cache-groups[cache groups]) you are going to use in your queries.
+| 1000
+
+|`socketSendBuffer`
+|Socket send buffer size. When set to 0, the OS default is used.
+|0
+
+|`socketReceiveBuffer`
+|Socket receive buffer size. When set to 0, the OS default is used.
+|0
+
+|`tcpNoDelay`
+| Whether to use `TCP_NODELAY` option.
+|`true`
+
+|`lazy`
+|Lazy query execution.
+By default, Ignite attempts to get and load the whole query result set into memory and then send it to the client. For small and medium result sets, this provides optimal performance and minimizes the duration of internal database locks, thus increasing concurrency.
+However, if the result set is too big to fit in the available memory, it can lead to excessive GC pauses and even `OutOfMemoryError` errors. Use this flag to tell Ignite to fetch the result set lazily, thus minimizing memory consumption at the cost of a moderate performance hit.
+|`false`
+
+|`skipReducerOnUpdate`
+|Enables server side updates.
+When Ignite executes a DML operation, it fetches all the affected intermediate rows and sends them to the query initiator (also known as reducer) for analysis. Then it prepares batches of updated values to be sent to remote nodes.
+This approach might impact performance and it can saturate the network if a DML operation has to move many entries over it.
+Use this flag to tell Ignite to perform all intermediate row analysis and updates "in-place" on corresponding remote data nodes.
+Defaults to `false`, meaning that the intermediate results are fetched to the query initiator first.
+|`false`
+
+
+|=======================================================================
+
+For the list of security parameters, refer to the <<Using SSL>> section.
+
+=== Connection String Examples
+
+- `jdbc:ignite:thin://myHost` - connect to myHost on the port 10800 with all defaults.
+- `jdbc:ignite:thin://myHost:11900` - connect to myHost on custom port 11900 with all defaults.
+- `jdbc:ignite:thin://myHost:11900;user=ignite;password=ignite` - connect to myHost on custom port 11900 with user credentials for authentication.
+- `jdbc:ignite:thin://myHost:11900;distributedJoins=true&autoCloseServerCursor=true` - connect to myHost on custom port 11900 with enabled distributed joins and autoCloseServerCursor optimization.
+- `jdbc:ignite:thin://myHost:11900/myschema;` - connect to myHost on custom port 11900 and access to MYSCHEMA.
+- `jdbc:ignite:thin://myHost:11900/"MySchema";lazy=false` - connect to myHost on custom port 11900 with disabled lazy query execution and access to MySchema (schema name is case sensitive).
+
+=== Multiple Endpoints
+
+You can enable automatic failover if a current connection is broken by setting multiple connection endpoints in the connection string.
+The JDBC Driver randomly picks an address from the list to connect to. If the connection fails, the JDBC Driver selects another address from the list until the connection is restored.
+The Driver stops reconnecting and throws an exception if all the endpoints are unreachable.
+
+The example below shows how to pass three addresses via the connection string:
+
+[source,java]
+----
+include::{javaFile}[tags=multiple-endpoints, indent=0]
+----
+
+
+=== Partition Awareness [[partition-awareness]]
+
+[WARNING]
+====
+[discrete]
+Partition awareness is an experimental feature whose API or design architecture might be changed
+before a GA version is released.
+====
+
+Partition awareness is a feature that makes the JDBC driver "aware" of the partition distribution in the cluster.
+It allows the driver to pick the nodes that own the data that is being queried and send the query directly to those nodes
+(if the addresses of the nodes are provided in the driver's configuration). Partition awareness can increase average
+performance of queries that use the affinity key.
+
+Without partition awareness, the JDBC driver connects to a single node, and all queries are executed through that node.
+If the data is hosted on a different node, the query has to be rerouted within the cluster, which adds an additional network hop.
+Partition awareness eliminates that hop by sending the query to the right node.
+
+To make use of the partition awareness feature, provide the addresses of all the server nodes in the connection properties.
+The driver will route requests to the nodes that store the data requested by the query.
+
+[WARNING]
+====
+[discrete]
+Note that presently you need to provide the addresses of all server nodes in the connection properties because the driver does not load them automatically after a connection is opened.
+It also means that if a new server node joins the cluster, you are advised to reconnect the driver and add the node's address to the connection properties.
+Otherwise, the driver will not be able to send direct requests to this node.
+====
+
+To enable partition awareness, add the `partitionAwareness=true` parameter to the connection string and provide the
+endpoints of multiple server nodes:
+
+[source, java]
+----
+include::{javaFile}[tags=partition-awareness, indent=0]
+----
+
+NOTE: Partition Awareness can be used only with the default affinity function.
+
+Also see the description of the two related parameters: xref:partitionAwarenessSQLCacheSize[partitionAwarenessSQLCacheSize] and xref:partitionAwarenessPartitionDistributionsCacheSize[partitionAwarenessPartitionDistributionsCacheSize].
+
+
+=== Cluster Configuration
+
+To accept and process requests from the JDBC Thin Driver, a cluster node binds to a local network interface on port 10800 and listens for incoming requests.
+
+Use an instance of `ClientConnectorConfiguration` to change the connection parameters:
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
+  <property name="clientConnectorConfiguration">
+    <bean class="org.apache.ignite.configuration.ClientConnectorConfiguration" />
+  </property>
+</bean>
+----
+
+tab:Java[]
+[source,java]
+----
+IgniteConfiguration cfg = new IgniteConfiguration()
+    .setClientConnectorConfiguration(new ClientConnectorConfiguration());
+----
+
+tab:C#/.NET[]
+tab:C++[]
+--
+
+The following parameters are supported:
+
+[width="100%",cols="30%,55%,15%"]
+|=======================================================================
+|Parameter |Description |Default Value
+
+|`host`
+
+|Host name or IP address to bind to. When set to `null`, binding is made to `localhost`.
+
+|`null`
+
+|`port`
+
+|TCP port to bind to. If the specified port is already in use, Ignite tries to find another available port using the `portRange` property.
+
+|`10800`
+
+|`portRange`
+
+| Defines the number of ports to try to bind to. E.g. if the port is set to `10800` and `portRange` is `100`, then the server tries to bind consecutively to any port in the `[10800, 10900]` range until it finds a free port.
+
+|`100`
+
+|`maxOpenCursorsPerConnection`
+
+|Maximum number of cursors that can be opened simultaneously for a single connection.
+
+|`128`
+
+|`threadPoolSize`
+
+|Number of request-handling threads in the thread pool.
+
+|`MAX(8, CPU cores)`
+
+|`socketSendBufferSize`
+
+|Size of the TCP socket send buffer. When set to 0, the system default value is used.
+
+|`0`
+
+|`socketReceiveBufferSize`
+
+|Size of the TCP socket receive buffer. When set to 0, the system default value is used.
+
+|`0`
+
+|`tcpNoDelay`
+
+|Whether to use `TCP_NODELAY` option.
+
+|`true`
+
+|`idleTimeout`
+
+|Idle timeout for client connections.
+Clients are disconnected automatically from the server after remaining idle for the configured timeout.
+When this parameter is set to zero or a negative value, the idle timeout is disabled.
+
+|`0`
+
+|`isJdbcEnabled`
+
+|Whether access through JDBC is enabled.
+
+|`true`
+
+|`isThinClientEnabled`
+
+|Whether access through thin client is enabled.
+
+|`true`
+
+
+|`sslEnabled`
+
+|If SSL is enabled, only SSL client connections are allowed. The node allows only one mode of connection: `SSL` or `plain`. A node cannot accept both types of client connections, but this option can be set differently for different nodes in the cluster.
+
+|`false`
+
+|`useIgniteSslContextFactory`
+
+|Whether to use SSL context factory from the node's configuration (see `IgniteConfiguration.sslContextFactory`).
+
+|`true`
+
+|`sslClientAuth`
+
+|Whether client authentication is required.
+
+|`false`
+
+|`sslContextFactory`
+
+|The class name that implements `Factory<SSLContext>` to provide node-side SSL. See link:security/ssl-tls[this] for more information.
+
+|`null`
+|=======================================================================
+
+[WARNING]
+====
+[discrete]
+=== JDBC Thin Driver is not thread safe
+
+The JDBC objects `Connection`, `Statement`, and `ResultSet` are not thread safe.
+Do not use statements and result sets from a single JDBC `Connection` in multiple threads.
+
+JDBC Thin Driver guards against concurrency. If concurrent access is detected, an exception
+(`SQLException`) is produced with the following message:
+
+....
+"Concurrent access to JDBC connection is not allowed
+[ownThread=<guard_owner_thread_name>, curThread=<current_thread_name>]",
+SQLSTATE="08006"
+....
+====
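+
+For example, a worker-per-connection pattern like the sketch below avoids the issue (the address and the `Person` table are assumptions, and the standard `java.sql` and `java.util.concurrent` imports are omitted):
+
+[source,java]
+----
+// Each task opens its own connection instead of sharing one across threads.
+ExecutorService pool = Executors.newFixedThreadPool(4);
+
+for (int i = 0; i < 4; i++) {
+    pool.submit(() -> {
+        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1");
+             Statement stmt = conn.createStatement();
+             ResultSet rs = stmt.executeQuery("SELECT name FROM Person")) {
+            while (rs.next())
+                rs.getString(1);
+        } catch (SQLException e) {
+            e.printStackTrace();
+        }
+    });
+}
+
+pool.shutdown();
+----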
+
+
+=== Using SSL
+
+You can configure the JDBC Thin Driver to use SSL to secure communication with the cluster.
+SSL must be configured both on the cluster side and in the JDBC Driver.
+Refer to the link:security/ssl-tls#ssl-for-clients[SSL for Thin Clients and JDBC/ODBC] section for information about cluster configuration.
+
+To enable SSL in the JDBC Driver, pass the `sslMode=require` parameter in the connection string and provide the key store and trust store parameters:
+
+[source, java]
+----
+include::{javaFile}[tags=ssl,indent=0]
+----
+
+The following table lists all parameters that affect SSL/TLS connection:
+
+[width="100%",cols="30%,40%,30%"]
+|====
+|Parameter |Description |Default Value
+|`sslMode`
+a|Enables SSL connection. Available modes:
+
+* `require`: SSL protocol is enabled on the client. Only SSL connection is available.
+* `disable`: SSL protocol is disabled on the client. Only plain connection is supported.
+
+|`disable`
+
+|`sslProtocol`
+|Protocol name for secure transport. Protocol implementations supplied by JSSE: `SSLv3 (SSL)`, `TLSv1 (TLS)`, `TLSv1.1`, `TLSv1.2`
+|`TLS`
+
+|`sslKeyAlgorithm`
+
+|The key manager algorithm to be used to create a key manager. Note that in most cases the default value is sufficient.
+Algorithm implementations supplied by JSSE: `PKIX (X509 or SunPKIX)`, `SunX509`.
+
+| `None`
+
+|`sslClientCertificateKeyStoreUrl`
+
+|URL of the client key store file.
+This is a mandatory parameter since SSL context cannot be initialized without a key manager.
+If `sslMode` is `require` and the key store URL isn't specified in the Ignite properties, the value of the JSSE property `javax.net.ssl.keyStore` is used.
+
+|The value of the
+`javax.net.ssl.keyStore`
+system property.
+
+|`sslClientCertificateKeyStorePassword`
+
+|Client key store password.
+
+If `sslMode` is `require` and the key store password isn't specified in the Ignite properties, the JSSE property `javax.net.ssl.keyStorePassword` is used.
+
+|The value of the `javax.net.ssl.
+keyStorePassword` system property.
+
+|`sslClientCertificateKeyStoreType`
+
+|Client key store type used in context initialization.
+
+If `sslMode` is `require` and the key store type isn't specified in the Ignite properties, the JSSE property `javax.net.ssl.keyStoreType` is used.
+
+|The value of the
+`javax.net.ssl.keyStoreType`
+system property.
+If the system property is not defined, the default value is `JKS`.
+
+|`sslTrustCertificateKeyStoreUrl`
+
+|URL of the trust store file. This is an optional parameter; however, one of these properties must be set: `sslTrustCertificateKeyStoreUrl` or `sslTrustAll`
+
+If `sslMode` is `require` and the trust store URL isn't specified in the Ignite properties, the JSSE property `javax.net.ssl.trustStore` is used.
+
+|The value of the
+`javax.net.ssl.trustStore` system property.
+
+|`sslTrustCertificateKeyStorePassword`
+
+|Trust store password.
+
+If `sslMode` is `require` and the trust store password isn't specified in the Ignite properties, the JSSE property `javax.net.ssl.trustStorePassword` is used.
+
+|The value of the
+`javax.net.ssl.trustStorePassword` system property
+
+|`sslTrustCertificateKeyStoreType`
+
+|Trust store type.
+
+If `sslMode` is `require` and the trust store type isn't specified in the Ignite properties, the JSSE property `javax.net.ssl.trustStoreType` is used.
+
+|The value of the
+`javax.net.ssl.trustStoreType`
+system property. If the system property is not defined, the default value is `JKS`.
+
+|`sslTrustAll`
+
+a|Disables validation of the server's certificate. Set to `true` to trust any server certificate (revoked, expired, or self-signed SSL certificates).
+
+CAUTION: Do not enable this option in production on a network you do not entirely trust, especially anything using the public internet.
+
+|`false`
+
+|`sslFactory`
+
+|Class name of the custom implementation of the
+`Factory<SSLSocketFactory>`.
+
+If `sslMode` is `require` and a factory is specified, the custom factory is used instead of the JSSE socket factory. In this case, other SSL properties are ignored.
+
+|`null`
+|====
+
+
+//See the `ssl*` parameters of the JDBC driver, and `ssl*` parameters and `useIgniteSslContextFactory` of the `ClientConnectorConfiguration` for more detailed information.
+
+The default implementation is based on JSSE, and works through two Java keystore files:
+
+- `sslClientCertificateKeyStoreUrl` - the client certificate keystore holds the keys and certificate for the client.
+- `sslTrustCertificateKeyStoreUrl` - the trusted certificate keystore contains the certificate information to validate the server's certificate.
+
+The trust store is an optional parameter; however, either `sslTrustCertificateKeyStoreUrl` or `sslTrustAll` must be configured.
+
+[WARNING]
+====
+[discrete]
+=== Using the "sslTrustAll" option
+
+Do not enable this option in production on a network you do not entirely trust, especially one reachable from the public internet.
+====
+
+If you want to use your own implementation or method to configure the `SSLSocketFactory`, you can use the JDBC driver's `sslFactory` parameter. It is a string that must contain the name of a class that implements the `Factory<SSLSocketFactory>` interface. The class must be available to the JDBC driver's class loader.
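+
+Below is a minimal sketch of such a factory. The class name is illustrative, and the sketch assumes that `Factory` refers to `javax.cache.configuration.Factory`, as used elsewhere in Ignite; a real implementation would typically build an `SSLContext` from custom key and trust material instead of returning the JVM default:
+
+[source, java]
+----
+import javax.cache.configuration.Factory;
+import javax.net.ssl.SSLSocketFactory;
+
+public class CustomSslSocketFactory implements Factory<SSLSocketFactory> {
+    // Hypothetical example: return the JVM-default SSL socket factory.
+    @Override public SSLSocketFactory create() {
+        return (SSLSocketFactory)SSLSocketFactory.getDefault();
+    }
+}
+----
+
+The class name is then passed via the `sslFactory` connection string parameter, for example: `jdbc:ignite:thin://host?sslMode=require&sslFactory=org.example.CustomSslSocketFactory`.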
+
+== Ignite DataSource
+
+A DataSource object is a deployed object that can be located by logical name via the JNDI naming service. The JDBC driver's `org.apache.ignite.IgniteJdbcThinDataSource` class implements the JDBC `DataSource` interface, allowing you to use a DataSource instead of a connection string.
+
+In addition to generic DataSource properties, `IgniteJdbcThinDataSource` supports all the Ignite-specific properties that can be passed into a JDBC connection string. For instance, the `distributedJoins` property can be (re)set via the `IgniteJdbcThinDataSource#setDistributedJoins()` method.
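+
+For example, a DataSource can be configured programmatically as shown below. This is a minimal sketch: the connection URL is illustrative, and the `setUrl` setter is assumed to be available alongside the documented `setDistributedJoins` method.
+
+[source, java]
+----
+IgniteJdbcThinDataSource ds = new IgniteJdbcThinDataSource();
+
+// Point the DataSource at a cluster node and enable an Ignite-specific property.
+ds.setUrl("jdbc:ignite:thin://127.0.0.1");
+ds.setDistributedJoins(true);
+
+try (Connection conn = ds.getConnection()) {
+    // Execute SQL statements as with any other JDBC connection.
+}
+----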
+
+Refer to the link:{javadoc_base_url}/org/apache/ignite/IgniteJdbcThinDataSource.html[JavaDocs] for more details.
+
+== Examples
+
+To start processing the data located in the cluster, you need to create a JDBC Connection object via one of the methods below:
+
+[source, java]
+----
+// Open the JDBC connection via DriverManager.
+Connection conn = DriverManager.getConnection("jdbc:ignite:thin://192.168.0.50");
+----
+
+or
+
+[source,java]
+----
+include::{javaFile}[tags=connection-from-data-source,indent=0]
+----
+
+Then you can execute SQL SELECT queries as follows:
+
+[source,java]
+----
+include::{javaFile}[tags=select,indent=0]
+----
+
+You can also modify the data via DML statements.
+
+=== INSERT
+
+[source,java]
+----
+include::{javaFile}[tags=insert,indent=0]
+----
+
+
+=== MERGE
+
+
+[source,java]
+----
+include::{javaFile}[tags=merge,indent=0]
+
+----
+
+
+=== UPDATE
+
+
+[source,java]
+----
+// Update a Person.
+conn.createStatement().
+  executeUpdate("UPDATE Person SET age = age + 1 WHERE age = 25");
+----
+
+
+=== DELETE
+
+
+[source,java]
+----
+conn.createStatement().execute("DELETE FROM Person WHERE age = 25");
+----
+
+
+== Streaming
+
+The JDBC driver allows streaming data in bulk using the `SET` command. See the `SET` command link:sql-reference/operational-commands#set-streaming[documentation] for more information.
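+
+As a quick illustration (a minimal sketch: the table, its columns, and the connection URL are placeholders), streaming can be toggled through a regular JDBC statement:
+
+[source, java]
+----
+try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1")) {
+    // Switch the connection into streaming mode.
+    conn.createStatement().execute("SET STREAMING ON");
+
+    // Load data in bulk; the driver buffers and batches the updates.
+    try (PreparedStatement stmt = conn.prepareStatement("INSERT INTO Person (id, name) VALUES (?, ?)")) {
+        for (int i = 0; i < 100_000; i++) {
+            stmt.setInt(1, i);
+            stmt.setString(2, "name-" + i);
+            stmt.executeUpdate();
+        }
+    }
+
+    // Return to regular mode; buffered data is flushed to the cluster.
+    conn.createStatement().execute("SET STREAMING OFF");
+}
+----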
+
+
+
+
+
+
+== Error Codes
+
+The JDBC driver passes error codes via the `java.sql.SQLException` class to facilitate exception handling on the application side. To get an error code, use the `java.sql.SQLException.getSQLState()` method. It returns a string containing the ANSI SQLSTATE error code:
+
+
+[source,java]
+----
+include::{javaFile}[tags=handle-exception,indent=0]
+----
+
+
+
+The table below lists all the link:https://en.wikipedia.org/wiki/SQLSTATE[ANSI SQLSTATE] error codes currently supported by Ignite. Note that the list may be extended in the future.
+
+[width="100%",cols="20%,80%"]
+|=======================================================================
+|Code |Description
+
+|0700B|Conversion failure (for example, a string expression cannot be parsed as a number or a date).
+
+|0700E|Invalid transaction isolation level.
+
+|08001|The driver failed to open a connection to the cluster.
+
+|08003|The connection is in the closed state; this is an unexpected condition.
+
+|08004|The connection was rejected by the cluster.
+
+|08006|I/O error during communication.
+
+|22004|Null value not allowed.
+
+|22023|Unsupported parameter type.
+
+|23000|Data integrity constraint violation.
+
+|24000|Invalid result set state.
+
+|0A000|Requested operation is not supported.
+
+|40001|Concurrent update conflict. See link:transactions/mvcc#concurrent-updates[Concurrent Updates].
+
+|42000|Query parsing exception.
+
+|50000| Internal error.
+The code is not defined by ANSI and refers to an Ignite-specific error. Refer to the `java.sql.SQLException` error message for more information.
+|=======================================================================
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/docs/_docs/SQL/ODBC/connection-string-dsn.adoc b/docs/_docs/SQL/ODBC/connection-string-dsn.adoc
new file mode 100644
index 0000000..6c5e1c4
--- /dev/null
+++ b/docs/_docs/SQL/ODBC/connection-string-dsn.adoc
@@ -0,0 +1,255 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Connection String and DSN
+
+== Connection String Format
+
+The ODBC driver supports the standard connection string format. Here is the formal syntax:
+
+[source,text]
+----
+connection-string ::= empty-string[;] | attribute[;] | attribute; connection-string
+empty-string ::=
+attribute ::= attribute-keyword=attribute-value | DRIVER=[{]attribute-value[}]
+attribute-keyword ::= identifier
+attribute-value ::= character-string
+----
+
+
+In simple terms, an ODBC connection string is a list of parameters of your choice separated by semicolons.
+
+== Supported Arguments
+
+The ODBC driver supports several connection string/DSN arguments. All parameter names are case-insensitive: `ADDRESS`, `Address`, and `address` are all valid names for the same parameter. If an argument is not specified, the default value is used. The exception to this rule is the `ADDRESS` attribute: if it is not specified, the `SERVER` and `PORT` attributes are used instead.
+
+[width="100%",cols="20%,60%,20%"]
+|=======================================================================
+|Attribute keyword |Description |Default Value
+
+|`ADDRESS`
+|Address of the remote node to connect to. The format is: `<host>[:<port>]`. For example: `localhost`, `example.com:12345`, `127.0.0.1`, `192.168.3.80:5893`.
+If this attribute is specified, then `SERVER` and `PORT` arguments are ignored.
+|None.
+
+|`SERVER`
+|Address of the node to connect to.
+This argument is ignored if the `ADDRESS` argument is specified.
+|None.
+
+|`PORT`
+|Port on which `OdbcProcessor` of the node is listening.
+This argument is ignored if the `ADDRESS` argument is specified.
+|`10800`
+
+|`USER`
+|Username for the SQL connection. This parameter is required if authentication is enabled on the server.
+See the link:security/authentication[Authentication] and link:sql-reference/ddl#create-user[CREATE USER] documentation for details on how to enable authentication and create a user, respectively.
+|Empty string
+
+|`PASSWORD`
+|Password for the SQL connection. This parameter is required if authentication is enabled on the server.
+See the link:security/authentication[Authentication] and link:sql-reference/ddl#create-user[CREATE USER] documentation for details on how to enable authentication and create a user, respectively.
+|Empty string
+
+|`SCHEMA`
+|Schema name.
+|`PUBLIC`
+
+|`DSN`
+|DSN name to connect to.
+| None.
+
+|`PAGE_SIZE`
+|Number of rows returned in response to a fetch request to the data source. The default value should be fine in most cases. Setting a low value can result in slow data fetching, while setting a high value can result in additional memory usage by the driver and an additional delay when the next page is retrieved.
+|`1024`
+
+|`DISTRIBUTED_JOINS`
+|Enables the link:SQL/distributed-joins#non-colocated-joins[non-colocated distributed joins] feature for all queries that are executed over the ODBC connection.
+|`false`
+
+|`ENFORCE_JOIN_ORDER`
+|Enforces a join order of tables in SQL queries. If set to `true`, the query optimizer does not reorder tables in the join.
+|`false`
+
+|`PROTOCOL_VERSION`
+|Specifies the ODBC protocol version to use. The following versions are currently available: `2.1.0`, `2.1.5`, `2.3.0`, `2.3.2`, `2.5.0`. You can use earlier versions of the protocol for backward compatibility.
+|`2.3.0`
+
+|`REPLICATED_ONLY`
+|Set this property to `true` if the query is executed over fully replicated tables only. This enables additional execution optimizations.
+|`false`
+
+|`COLLOCATED`
+| Set this parameter to `true` if your SQL statement includes a GROUP BY clause that groups the results by either primary
+or affinity key. When Ignite executes a distributed query, it sends sub-queries to individual cluster members. If
+you know in advance that the elements of your query selection are colocated together on the same node and you group by
+a primary or affinity key, then Ignite makes significant performance and network optimizations by grouping data locally
+ on each node participating in the query.
+|`false`
+
+|`LAZY`
+|Lazy query execution.
+By default, Ignite attempts to fetch the whole query result set into memory and send it to the client. For small and medium result sets, this provides optimal performance and minimizes the duration of internal database locks, thus increasing concurrency.
+However, if the result set is too big to fit in the available memory, it can lead to excessive GC pauses and even `OutOfMemoryError`. Use this flag to tell Ignite to fetch the result set lazily, thus minimizing memory consumption at the cost of a moderate performance hit.
+|`false`
+
+|`SKIP_REDUCER_ON_UPDATE`
+|Enables the server-side update feature.
+When Ignite executes a DML operation, it first fetches all the affected intermediate rows to the query initiator (also known as the reducer) for analysis, and only then prepares batches of updated values to be sent to remote nodes.
+This approach might affect performance and saturate the network if a DML operation has to move many entries over it.
+Use this flag to tell Ignite to perform all intermediate row analysis and updates "in place" on the corresponding remote data nodes.
+Defaults to `false`, meaning that intermediate results are fetched to the query initiator first.
+|`false`
+
+|`SSL_MODE`
+|Determines whether the SSL connection should be negotiated with the server. Use `require` or `disable` mode as needed.
+| None.
+
+|`SSL_KEY_FILE`
+|Specifies the name of the file containing the SSL server private key.
+| None.
+
+|`SSL_CERT_FILE`
+|Specifies the name of the file containing the SSL server certificate.
+| None.
+
+|`SSL_CA_FILE`
+|Specifies the name of the file containing the SSL server certificate authority (CA).
+| None.
+|=======================================================================
+
+== Connection String Samples
+You can find connection string samples below. These strings can be used with the `SQLDriverConnect` ODBC call to establish a connection with a node.
+
+
+[tabs]
+--
+tab:Authentication[]
+[source,text]
+----
+DRIVER={Apache Ignite};
+ADDRESS=localhost:10800;
+SCHEMA=somecachename;
+USER=yourusername;
+PASSWORD=yourpassword;
+SSL_MODE=[require|disable];
+SSL_KEY_FILE=<path_to_private_key>;
+SSL_CERT_FILE=<path_to_client_certificate>;
+SSL_CA_FILE=<path_to_trusted_certificates>
+----
+
+tab:Specific Cache[]
+[source,text]
+----
+DRIVER={Apache Ignite};ADDRESS=localhost:10800;CACHE=yourCacheName
+----
+
+tab:Default cache[]
+[source,text]
+----
+DRIVER={Apache Ignite};ADDRESS=localhost:10800
+----
+
+tab:DSN[]
+[source,text]
+----
+DSN=MyIgniteDSN
+----
+
+tab:Custom page size[]
+[source,text]
+----
+DRIVER={Apache Ignite};ADDRESS=example.com:12901;CACHE=MyCache;PAGE_SIZE=4096
+----
+--
+
+
+
+== Configuring DSN
+The same arguments apply if you prefer to use link:https://en.wikipedia.org/wiki/Data_source_name[DSN] (Data Source Name) for connection purposes.
+
+To configure a DSN on Windows, use the ODBC Data Source Administrator system tool: `odbcad32` (for 32-bit [x86] systems) or `odbc64` (for 64-bit systems).
+
+When installing the DSN tool, _if you use the pre-built msi file_, make sure you've installed Microsoft Visual C++ 2010 (https://www.microsoft.com/en-ie/download/details.aspx?id=5555[32-bit/x86] or https://www.microsoft.com/en-us/download/details.aspx?id=14632[64-bit/x64]).
+
+Launch this tool via `Control Panel->Administrative Tools->Data Sources (ODBC)`. Once the ODBC Data Source Administrator is launched, select `Add...->Apache Ignite` and configure your DSN.
+
+
+image::images/odbc_dsn_configuration.png[Configuring DSN]
+
+
+To do the same on Linux, you have to locate the `odbc.ini` file. The file location varies among Linux distributions and depends on the specific Driver Manager used by the distribution. For example, if you are using unixODBC, you can run the following command, which prints system-wide ODBC details:
+
+
+[source,text]
+----
+odbcinst -j
+----
+
+
+Use the `SYSTEM DATA SOURCES` and `USER DATA SOURCES` properties to locate the `odbc.ini` file.
+
+Once you locate the `odbc.ini` file, open it with the editor of your choice and add the DSN section to it, as shown below:
+
+[source,text]
+----
+[DSN Name]
+description=<Insert your description here>
+driver=Apache Ignite
+<Other arguments here...>
+----
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/docs/_docs/SQL/ODBC/data-types.adoc b/docs/_docs/SQL/ODBC/data-types.adoc
new file mode 100644
index 0000000..ab2d8e1
--- /dev/null
+++ b/docs/_docs/SQL/ODBC/data-types.adoc
@@ -0,0 +1,38 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Data Types
+
+This page lists the data types supported by the ODBC driver.
+
+The following SQL data types, listed in the link:https://docs.microsoft.com/en-us/sql/odbc/reference/appendixes/sql-data-types[ODBC specification], are supported:
+
+- `SQL_CHAR`
+- `SQL_VARCHAR`
+- `SQL_LONGVARCHAR`
+- `SQL_SMALLINT`
+- `SQL_INTEGER`
+- `SQL_FLOAT`
+- `SQL_DOUBLE`
+- `SQL_BIT`
+- `SQL_TINYINT`
+- `SQL_BIGINT`
+- `SQL_BINARY`
+- `SQL_VARBINARY`
+- `SQL_LONGVARBINARY`
+- `SQL_GUID`
+- `SQL_DECIMAL`
+- `SQL_TYPE_DATE`
+- `SQL_TYPE_TIMESTAMP`
+- `SQL_TYPE_TIME`
diff --git a/docs/_docs/SQL/ODBC/error-codes.adoc b/docs/_docs/SQL/ODBC/error-codes.adoc
new file mode 100644
index 0000000..a1d29ce
--- /dev/null
+++ b/docs/_docs/SQL/ODBC/error-codes.adoc
@@ -0,0 +1,155 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Error Codes
+
+To get an error code, use the `SQLGetDiagRec()` function. It returns a string holding the ANSI SQLSTATE error code. For example:
+
+[source,c++]
+----
+SQLHENV env;
+SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
+
+SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, reinterpret_cast<void*>(SQL_OV_ODBC3), 0);
+
+SQLHDBC dbc;
+SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);
+
+SQLCHAR connectStr[] = "DRIVER={Apache Ignite};SERVER=localhost;PORT=10800;SCHEMA=Person;";
+SQLDriverConnect(dbc, NULL, connectStr, SQL_NTS, 0, 0, 0, SQL_DRIVER_COMPLETE);
+
+SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
+
+SQLCHAR query[] = "SELECT firstName, lastName, resume, salary FROM Person";
+SQLRETURN ret = SQLExecDirect(stmt, query, SQL_NTS);
+
+if (ret != SQL_SUCCESS)
+{
+	SQLCHAR sqlstate[7] = "";
+	SQLINTEGER nativeCode;
+
+	SQLCHAR message[1024];
+	SQLSMALLINT reallen = 0;
+
+	int i = 1;
+	ret = SQLGetDiagRec(SQL_HANDLE_STMT, stmt, i, sqlstate,
+                      &nativeCode, message, sizeof(message), &reallen);
+
+	while (ret != SQL_NO_DATA)
+	{
+		std::cout << sqlstate << ": " << message;
+
+		++i;
+		ret = SQLGetDiagRec(SQL_HANDLE_STMT, stmt, i, sqlstate,
+                        &nativeCode, message, sizeof(message), &reallen);
+	}
+}
+----
+
+The table below lists all the error codes currently supported by Ignite. The list may be extended in the future.
+
+[width="100%",cols="20%,80%"]
+|=======================================================================
+|Code |Description
+
+|01S00
+|Invalid connection string attribute.
+
+|01S02
+|The driver did not support the specified value and substituted a similar value.
+
+|08001
+|The driver failed to open a connection to the cluster.
+
+|08002
+|The connection is already established.
+
+|08003
+|The connection is in the closed state; this is an unexpected condition.
+
+|08004
+|The connection is rejected by the cluster.
+
+|08S01
+|Connection failure.
+
+|22026
+|String length mismatch in data-at-execution dialog.
+
+|23000
+|Integrity constraint violation (e.g. duplicate key, null key and so on).
+
+|24000
+|Invalid cursor state.
+
+|42000
+|Syntax error in request.
+
+|42S01
+|Table already exists.
+
+|42S02
+|Table not found.
+
+|42S11
+|Index already exists.
+
+|42S12
+|Index not found.
+
+|42S21
+|Column already exists.
+
+|42S22
+|Column not found.
+
+|HY000
+|General error. See error message for details.
+
+|HY001
+|Memory allocation error.
+
+|HY003
+|Invalid application buffer type.
+
+|HY004
+|Invalid SQL data type.
+
+|HY009
+|Invalid use of null-pointer.
+
+|HY010
+|Function call sequence error.
+
+|HY090
+|Invalid string or buffer length (e.g. negative or zero length).
+
+|HY092
+|Option type out of range.
+
+|HY097
+|Column type out of range.
+
+|HY105
+|Invalid parameter type.
+
+|HY106
+|Fetch type out of range.
+
+|HYC00
+|Feature is not implemented.
+
+|IM001
+|Function is not supported.
+|=======================================================================
diff --git a/docs/_docs/SQL/ODBC/odbc-driver.adoc b/docs/_docs/SQL/ODBC/odbc-driver.adoc
new file mode 100644
index 0000000..9f4e9b8
--- /dev/null
+++ b/docs/_docs/SQL/ODBC/odbc-driver.adoc
@@ -0,0 +1,343 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= ODBC Driver
+
+== Overview
+Ignite includes an ODBC driver that allows you both to select and to modify data stored in a distributed cache using standard SQL queries and the native ODBC API.
+
+For detailed information on ODBC, please refer to the link:https://msdn.microsoft.com/en-us/library/ms714177.aspx[ODBC Programmer's Reference].
+
+The ODBC driver implements version 3.0 of the ODBC API.
+
+== Cluster Configuration
+
+The ODBC driver is treated as a dynamic library on Windows and a shared object on Linux. An application does not load it directly. Instead, it uses the Driver Manager API that loads and unloads ODBC drivers whenever required.
+
+Internally, the ODBC driver uses TCP to connect to a cluster. The cluster-side connection parameters can be configured via the `IgniteConfiguration.clientConnectorConfiguration` property.
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
+    <property name="clientConnectorConfiguration">
+        <bean class="org.apache.ignite.configuration.ClientConnectorConfiguration"/>
+    </property>
+</bean>
+----
+
+tab:Java[]
+[source,java]
+----
+IgniteConfiguration cfg = new IgniteConfiguration();
+
+ClientConnectorConfiguration clientConnectorCfg = new ClientConnectorConfiguration();
+cfg.setClientConnectorConfiguration(clientConnectorCfg);
+
+----
+--
+
+Client connector configuration supports the following properties:
+
+[width="100%",cols="20%,60%,20%"]
+|=======================================================================
+|Parameter |Description |Default Value
+
+|`host`
+|Host name or IP address to bind to. When set to null, binding is made to `localhost`.
+|`null`
+
+|`port`
+|TCP port to bind to. If the specified port is already in use, Ignite will try to find another available port using the `portRange` property.
+|`10800`
+
+|`portRange`
+|Defines the number of ports to try to bind to. For example, if the port is set to `10800` and `portRange` is `100`, the server sequentially tries to bind to ports in the range `[10800, 10900]` until it finds a free one.
+|`100`
+
+|`maxOpenCursorsPerConnection`
+|Maximum number of cursors that can be opened simultaneously for a single connection.
+|`128`
+
+|`threadPoolSize`
+|Number of request-handling threads in the thread pool.
+|`MAX(8, CPU cores)`
+
+|`socketSendBufferSize`
+|Size of the TCP socket send buffer. When set to 0, the system default value is used.
+|`0`
+
+|`socketReceiveBufferSize`
+|Size of the TCP socket receive buffer. When set to 0, the system default value is used.
+|`0`
+
+|`tcpNoDelay`
+|Whether to use the `TCP_NODELAY` option.
+|`true`
+
+|`idleTimeout`
+|Idle timeout for client connections.
+Clients will automatically be disconnected from the server after being idle for the configured timeout.
+When this parameter is set to zero or a negative value, idle timeout will be disabled.
+|`0`
+
+|`isOdbcEnabled`
+|Whether access through ODBC is enabled.
+|`true`
+
+|`isThinClientEnabled`
+|Whether access through thin client is enabled.
+|`true`
+|=======================================================================
+
+
+You can change these parameters as shown in the example below:
+
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::code-snippets/xml/odbc.xml[tags=ignite-config;!discovery,indent=0]
+----
+
+tab:Java[]
+[source,java]
+----
+IgniteConfiguration cfg = new IgniteConfiguration();
+...
+ClientConnectorConfiguration clientConnectorCfg = new ClientConnectorConfiguration();
+
+clientConnectorCfg.setHost("127.0.0.1");
+clientConnectorCfg.setPort(12345);
+clientConnectorCfg.setPortRange(2);
+clientConnectorCfg.setMaxOpenCursorsPerConnection(512);
+clientConnectorCfg.setSocketSendBufferSize(65536);
+clientConnectorCfg.setSocketReceiveBufferSize(131072);
+clientConnectorCfg.setThreadPoolSize(4);
+
+cfg.setClientConnectorConfiguration(clientConnectorCfg);
+...
+----
+--
+
+A connection that is established from the ODBC driver side to the cluster via `ClientListenerProcessor` is also configurable. Find more details on how to alter connection settings from the driver side link:SQL/ODBC/connection-string-dsn[here].
+
+== Thread-Safety
+
+The current implementation of the Ignite ODBC driver provides thread safety only at the connection level. This means that you should not access the same connection from multiple threads without additional synchronization, though you can create a separate connection for every thread and use them simultaneously.
+
+== Prerequisites
+
+The Apache Ignite ODBC Driver was officially tested on:
+
+[cols="1,3a"]
+|===
+|OS
+|- Windows (XP and up, both 32-bit and 64-bit versions)
+- Windows Server (2008 and up, both 32-bit and 64-bit versions)
+- Ubuntu (18.04 64-bit)
+
+|C++ compiler
+
+|MS Visual C++ (10.0 and up), g++ (4.4.0 and up)
+
+|Visual Studio
+
+|2010 and above
+|===
+
+== Building ODBC Driver
+
+Ignite is shipped with pre-built installers for both the 32-bit and 64-bit versions of the driver for Windows. So if you just want to install the ODBC driver on Windows, you can go straight to the <<Installing ODBC Driver>> section for installation instructions.
+
+If you use Linux, you need to build the ODBC driver before you can install it. So if you are using Linux, or if you want to build the driver yourself for Windows, keep reading.
+
+The Ignite ODBC Driver source code is shipped as part of the Ignite package and must be built before use.
+
+Since the ODBC Driver is written in {cpp}, it is shipped as part of Ignite {cpp} and depends on some of the {cpp} libraries. More specifically, it depends on the `utils` and `binary` Ignite libraries. This means that you will need to build them prior to building the ODBC driver itself.
+
+We assume here that you are using the binary Ignite release. If you are using the source release, instead of `%IGNITE_HOME%\platforms\cpp` path you should use `%IGNITE_HOME%\modules\platforms\cpp` throughout.
+
+=== Building on Windows
+
+You will need MS Visual Studio 2010 or later to build the ODBC driver on Windows. Once you have it, open the Ignite solution `%IGNITE_HOME%\platforms\cpp\project\vs\ignite.sln` (or `ignite_86.sln` if you are running a 32-bit platform), select the `odbc` project in the "Solution Explorer" and choose "Build". Visual Studio will automatically detect and build all the necessary dependencies.
+
+The path to the .sln file may vary depending on whether you're building from source files or binaries. If you don't see your .sln file in `%IGNITE_HOME%\platforms\cpp\project\vs\`, try looking in `%IGNITE_HOME%\modules\platforms\cpp\project\vs\`.
+
+NOTE: If you are using VS 2015 or later (MSVC 14.0 or later), you need to add `legacy_stdio_definitions.lib` as an additional library to odbc project linker's settings in order to be able to build the project. To add this library to the linker input in the IDE, open the context menu for the project node, choose `Properties`, then in the `Project Properties` dialog box, choose `Linker`, and edit the `Linker Input` to add `legacy_stdio_definitions.lib` to the semi-colon-separated list.
+
+Once the build process is complete, you can find `ignite.odbc.dll` in `%IGNITE_HOME%\platforms\cpp\project\vs\x64\Release` for the 64-bit version and in `%IGNITE_HOME%\platforms\cpp\project\vs\Win32\Release` for the 32-bit version.
+
+NOTE: Be sure to use the corresponding driver (32-bit or 64-bit) for your system.
+
+=== Building installers on Windows
+
+Once you have built driver binaries you may want to build installers for easier installation. Ignite uses link:http://wixtoolset.org[WiX Toolset] to generate ODBC installers, so to build them you'll need to download and install WiX. Make sure you have added the `bin` directory of the WiX Toolset to your PATH variable.
+
+Once everything is ready, open a terminal and navigate to the directory `%IGNITE_HOME%\platforms\cpp\odbc\install`. Execute the following commands one by one to build installers:
+
+
+[tabs]
+--
+tab:64-bit driver[]
+[source,shell]
+----
+candle.exe ignite-odbc-amd64.wxs
+light.exe -ext WixUIExtension ignite-odbc-amd64.wixobj
+----
+
+tab:32-bit driver[]
+[source,shell]
+----
+candle.exe ignite-odbc-x86.wxs
+light.exe -ext WixUIExtension ignite-odbc-x86.wixobj
+----
+--
+
+As a result, `ignite-odbc-amd64.msi` and `ignite-odbc-x86.msi` files should appear in the directory. You can use them to install your freshly built drivers.
+
+=== Building on Linux
+
+On a Linux-based operating system, you will need to install an ODBC Driver Manager of your choice to be able to build and use the Ignite ODBC Driver. The ODBC Driver has been tested with link:http://www.unixodbc.org[UnixODBC].
+
+==== Prerequisites
+include::includes/cpp-linux-build-prerequisites.adoc[]
+
+NOTE: The JDK is used only during the build process and not by the ODBC driver itself.
+
+==== Building ODBC driver
+- Create a build directory for cmake. We'll refer to it as `${CPP_BUILD_DIR}`
+- (Optional) Choose installation directory prefix (by default `/usr/local`). We'll refer to it as `${CPP_INSTALL_DIR}`
+- Build and install the driver by executing the following commands:
+
+[tabs]
+--
+tab:Ubuntu[]
+[source,bash,subs="attributes,specialchars"]
+----
+cd ${CPP_BUILD_DIR}
+cmake -DCMAKE_BUILD_TYPE=Release -DWITH_ODBC=ON ${IGNITE_HOME}/platforms/cpp -DCMAKE_INSTALL_PREFIX=${CPP_INSTALL_DIR}
+make
+sudo make install
+----
+
+tab:CentOS/RHEL[]
+[source,shell,subs="attributes,specialchars"]
+----
+cd ${CPP_BUILD_DIR}
+cmake3 -DCMAKE_BUILD_TYPE=Release -DWITH_ODBC=ON  ${IGNITE_HOME}/platforms/cpp -DCMAKE_INSTALL_PREFIX=${CPP_INSTALL_DIR}
+make 
+sudo make install
+----
+
+--
+
+After the build process is over, you can find out where your ODBC driver has been placed by running the following command:
+
+[source,shell]
+----
+whereis libignite-odbc
+----
+
+The path should look something like: `/usr/local/lib/libignite-odbc.so`
+
+== Installing ODBC Driver
+
+In order to use ODBC driver, you need to register it in your system so that your ODBC Driver Manager will be able to locate it.
+
+=== Installing on Windows
+
+For 32-bit Windows, you should use the 32-bit version of the driver. For
+64-bit Windows, you can use either the 64-bit or the 32-bit driver. You may want to install both the 32-bit and 64-bit drivers on 64-bit Windows to be able to use the driver from both 32-bit and 64-bit applications.
+
+==== Installing using installers
+
+NOTE: Microsoft Visual C++ 2010 Redistributable Package for 32-bit or 64-bit should be installed first.
+
+This is the easiest way and should be used by default. Just launch the installer for the version of the driver that you need and follow the instructions:
+
+* 32-bit installer: `%IGNITE_HOME%\platforms\cpp\bin\odbc\ignite-odbc-x86.msi`
+* 64-bit installer: `%IGNITE_HOME%\platforms\cpp\bin\odbc\ignite-odbc-amd64.msi`
+
+==== Installing manually
+
+To install ODBC driver on Windows manually, you should first choose a directory on your
+file system where your driver or drivers will be located. Once you have
+chosen the location, you have to put your driver there and ensure that all driver
+dependencies can be resolved as well, i.e., they can be found either in the `%PATH%` or
+in the same directory where the driver DLL resides.
+
+After that, you have to use one of the install scripts from the following directory:
+`%IGNITE_HOME%/platforms/cpp/odbc/install`. Note that you may need OS administrator privileges to execute these scripts.
+
+[tabs]
+--
+tab:x86[]
+[source,shell]
+----
+install_x86 <absolute_path_to_32_bit_driver>
+----
+
+tab:AMD64[]
+[source,shell]
+----
+install_amd64 <absolute_path_to_64_bit_driver> [<absolute_path_to_32_bit_driver>]
+----
+
+--
+
+
+=== Installing on Linux
+
+To be able to build and install ODBC driver on Linux, you need to first install
+ODBC Driver Manager. The ODBC driver has been tested with link:http://www.unixodbc.org[UnixODBC].
+
+Once you have built the driver and performed the `make install` command, the ODBC driver, i.e. `libignite-odbc.so`, will be placed in the `/usr/local/lib` folder. To register it as an ODBC driver in your Driver Manager and be able to use it, perform the following steps:
+
+- Ensure that the linker is able to locate all dependencies of the ODBC driver. You can check this by using the `ldd` command, assuming the ODBC driver is located under `/usr/local/lib`:
++
+`ldd /usr/local/lib/libignite-odbc.so`
++
+If there are unresolved links to other libraries, you may want to add directories with these libraries to the `LD_LIBRARY_PATH`.
+
+- Edit the `${IGNITE_HOME}/platforms/cpp/odbc/install/ignite-odbc-install.ini` file and ensure that the `Driver` parameter of the `Apache Ignite` section points to the location of `libignite-odbc.so`.
+
+- To install the ODBC driver, use the following command:
+
+[source,shell]
+----
+odbcinst -i -d -f ${IGNITE_HOME}/platforms/cpp/odbc/install/ignite-odbc-install.ini
+----
+To perform this command, you may need root privileges.
+
+Now the Apache Ignite ODBC driver is installed and ready for use. You can connect to it and use it just like any other ODBC driver.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/docs/_docs/SQL/ODBC/querying-modifying-data.adoc b/docs/_docs/SQL/ODBC/querying-modifying-data.adoc
new file mode 100644
index 0000000..bfe7834
--- /dev/null
+++ b/docs/_docs/SQL/ODBC/querying-modifying-data.adoc
@@ -0,0 +1,491 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Querying and Modifying Data
+
+== Overview
+This page elaborates on how to connect to a cluster and execute a variety of SQL queries using the ODBC driver.
+
+
+At the implementation layer, the ODBC driver uses SQL Fields queries to retrieve data from the cluster.
+This means that from ODBC side you can access only those fields that are link:SQL/sql-api#configuring-queryable-fields[defined in the cluster configuration].
+
+Moreover, the ODBC driver supports DML (Data Manipulation Language), which means that you can modify your data using an ODBC connection.
+
+NOTE: Refer to the link:{githubUrl}/modules/platforms/cpp/examples/odbc-example[ODBC example] that incorporates complete logic and exemplary queries described below.
+
+== Configuring the Cluster
+As the first step, you need to set up a configuration that will be used by the cluster nodes.
+The configuration should also include cache configurations with properly defined `QueryEntities` properties.
+`QueryEntities` are essential when your application (or the ODBC driver in our scenario) is going to query and modify the data using SQL statements.
+Alternatively, you can create tables using DDL.
+
+[tabs]
+--
+tab:DDL[]
+[source,cpp]
+----
+SQLHENV env;
+
+// Allocate an environment handle
+SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
+
+// Use ODBC ver 3
+SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, reinterpret_cast<void*>(SQL_OV_ODBC3), 0);
+
+SQLHDBC dbc;
+
+// Allocate a connection handle
+SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);
+
+// Prepare the connection string
+SQLCHAR connectStr[] = "DSN=My Ignite DSN";
+
+// Connecting to the Cluster.
+SQLDriverConnect(dbc, NULL, connectStr, SQL_NTS, NULL, 0, NULL, SQL_DRIVER_COMPLETE);
+
+SQLHSTMT stmt;
+
+// Allocate a statement handle
+SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
+
+SQLCHAR query1[] = "CREATE TABLE Person ( "
+    "id LONG PRIMARY KEY, "
+    "firstName VARCHAR, "
+    "lastName VARCHAR, "
+    "salary FLOAT) "
+    "WITH \"template=partitioned\"";
+
+SQLExecDirect(stmt, query1, SQL_NTS);
+
+SQLCHAR query2[] = "CREATE TABLE Organization ( "
+    "id LONG PRIMARY KEY, "
+    "name VARCHAR) "
+    "WITH \"template=partitioned\"";
+
+SQLExecDirect(stmt, query2, SQL_NTS);
+
+SQLCHAR query3[] = "CREATE INDEX idx_organization_name ON Organization (name)";
+
+SQLExecDirect(stmt, query3, SQL_NTS);
+----
+
+tab:Spring XML[]
+[source,xml]
+----
+include::code-snippets/xml/odbc-cache-config.xml[tags=ignite-config;!discovery, indent=0]
+----
+--
+
+As you can see, we defined two caches that will contain the data of `Person` and `Organization` types.
+For both types, we listed specific fields and indexes that will be read or updated using SQL.
+
+
+== Connecting to the Cluster
+
+After the cluster is configured and started, we can connect to it from the ODBC driver side. To do this, you need to prepare a valid connection string and pass it as a parameter to the ODBC driver at the connection time. Refer to the link:SQL/ODBC/connection-string-dsn[Connection String] page for more details.
+
+Alternatively, you can also use a link:SQL/ODBC/connection-string-dsn#configuring-dsn[pre-configured DSN] for connection purposes as shown in the example below.
+
+
+[source,c++]
+----
+SQLHENV env;
+
+// Allocate an environment handle
+SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
+
+// Use ODBC ver 3
+SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, reinterpret_cast<void*>(SQL_OV_ODBC3), 0);
+
+SQLHDBC dbc;
+
+// Allocate a connection handle
+SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);
+
+// Prepare the connection string
+SQLCHAR connectStr[] = "DSN=My Ignite DSN";
+
+// Connecting to Ignite Cluster.
+SQLRETURN ret = SQLDriverConnect(dbc, NULL, connectStr, SQL_NTS, NULL, 0, NULL, SQL_DRIVER_COMPLETE);
+
+if (!SQL_SUCCEEDED(ret))
+{
+  SQLCHAR sqlstate[7] = { 0 };
+  SQLINTEGER nativeCode;
+
+  SQLCHAR errMsg[BUFFER_SIZE] = { 0 };
+  SQLSMALLINT errMsgLen = static_cast<SQLSMALLINT>(sizeof(errMsg));
+
+  SQLGetDiagRec(SQL_HANDLE_DBC, dbc, 1, sqlstate, &nativeCode, errMsg, errMsgLen, &errMsgLen);
+
+  std::cerr << "Failed to connect to Ignite: "
+            << reinterpret_cast<char*>(sqlstate) << ": "
+            << reinterpret_cast<char*>(errMsg) << ", "
+            << "Native error code: " << nativeCode
+            << std::endl;
+
+  // Releasing allocated handles.
+  SQLFreeHandle(SQL_HANDLE_DBC, dbc);
+  SQLFreeHandle(SQL_HANDLE_ENV, env);
+
+  return;
+}
+----
+
+
+== Querying Data
+
+After everything is up and running, we're ready to execute SQL `SELECT` queries using the ODBC API.
+
+[source,c++]
+----
+SQLHSTMT stmt;
+
+// Allocate a statement handle
+SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
+
+SQLCHAR query[] = "SELECT firstName, lastName, salary, Organization.name FROM Person "
+  "INNER JOIN \"Organization\".Organization ON Person.orgId = Organization.id";
+SQLSMALLINT queryLen = static_cast<SQLSMALLINT>(sizeof(query));
+
+SQLRETURN ret = SQLExecDirect(stmt, query, queryLen);
+
+if (!SQL_SUCCEEDED(ret))
+{
+  SQLCHAR sqlstate[7] = { 0 };
+  SQLINTEGER nativeCode;
+
+  SQLCHAR errMsg[BUFFER_SIZE] = { 0 };
+  SQLSMALLINT errMsgLen = static_cast<SQLSMALLINT>(sizeof(errMsg));
+
+  SQLGetDiagRec(SQL_HANDLE_DBC, dbc, 1, sqlstate, &nativeCode, errMsg, errMsgLen, &errMsgLen);
+
+  std::cerr << "Failed to perform SQL query: "
+            << reinterpret_cast<char*>(sqlstate) << ": "
+            << reinterpret_cast<char*>(errMsg) << ", "
+            << "Native error code: " << nativeCode
+            << std::endl;
+}
+else
+{
+  // Printing the result set.
+  struct OdbcStringBuffer
+  {
+    SQLCHAR buffer[BUFFER_SIZE];
+    SQLLEN resLen;
+  };
+
+  // Getting a number of columns in the result set.
+  SQLSMALLINT columnsCnt = 0;
+  SQLNumResultCols(stmt, &columnsCnt);
+
+  // Allocating buffers for columns.
+  std::vector<OdbcStringBuffer> columns(columnsCnt);
+
+  // Binding columns. For simplicity, we are going to use only
+  // string buffers here.
+  for (SQLSMALLINT i = 0; i < columnsCnt; ++i)
+    SQLBindCol(stmt, i + 1, SQL_C_CHAR, columns[i].buffer, BUFFER_SIZE, &columns[i].resLen);
+
+  // Fetching and printing data in a loop.
+  ret = SQLFetch(stmt);
+  while (SQL_SUCCEEDED(ret))
+  {
+    for (size_t i = 0; i < columns.size(); ++i)
+      std::cout << std::setw(16) << std::left << columns[i].buffer << " ";
+
+    std::cout << std::endl;
+
+    ret = SQLFetch(stmt);
+  }
+}
+
+// Releasing statement handle.
+SQLFreeHandle(SQL_HANDLE_STMT, stmt);
+----
+
+
+[NOTE]
+====
+[discrete]
+=== Columns binding
+
+In the example above, we bind all columns as `SQL_C_CHAR`, which means that all values are converted to strings upon fetching. This is done for the sake of simplicity. Value conversion upon fetching can be pretty slow, so your default decision should be to fetch values in the same format as they are stored.
+====
+
+== Inserting Data
+
+To insert new data into the cluster, `SQL INSERT` statements can be used from the ODBC side.
+
+
+[source,c++]
+----
+SQLHSTMT stmt;
+
+// Allocate a statement handle
+SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
+
+SQLCHAR query[] =
+    "INSERT INTO Person (id, orgId, firstName, salary) "
+    "VALUES (?, ?, ?, ?)";
+
+SQLPrepare(stmt, query, static_cast<SQLSMALLINT>(sizeof(query)));
+
+// Binding columns.
+int64_t key = 0;
+int64_t orgId = 0;
+char name[1024] = { 0 };
+SQLLEN nameLen = SQL_NTS;
+double salary = 0.0;
+
+SQLBindParameter(stmt, 1, SQL_PARAM_INPUT, SQL_C_SLONG, SQL_BIGINT, 0, 0, &key, 0, 0);
+SQLBindParameter(stmt, 2, SQL_PARAM_INPUT, SQL_C_SLONG, SQL_BIGINT, 0, 0, &orgId, 0, 0);
+SQLBindParameter(stmt, 3, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_VARCHAR,	sizeof(name), sizeof(name), name, 0, &nameLen);
+SQLBindParameter(stmt, 4, SQL_PARAM_INPUT, SQL_C_DOUBLE, SQL_DOUBLE, 0, 0, &salary, 0, 0);
+
+// Filling cache.
+key = 1;
+orgId = 1;
+strncpy(name, "John", sizeof(name));
+salary = 2200.0;
+
+SQLExecute(stmt);
+SQLMoreResults(stmt);
+
+++key;
+orgId = 1;
+strncpy(name, "Jane", sizeof(name));
+salary = 1300.0;
+
+SQLExecute(stmt);
+SQLMoreResults(stmt);
+
+++key;
+orgId = 2;
+strncpy(name, "Richard", sizeof(name));
+salary = 900.0;
+
+SQLExecute(stmt);
+SQLMoreResults(stmt);
+
+++key;
+orgId = 2;
+strncpy(name, "Mary", sizeof(name));
+salary = 2400.0;
+
+SQLExecute(stmt);
+
+// Releasing statement handle.
+SQLFreeHandle(SQL_HANDLE_STMT, stmt);
+----
+
+
+Next, we are going to insert additional organizations without using prepared statements.
+
+
+[source,c++]
+----
+SQLHSTMT stmt;
+
+// Allocate a statement handle
+SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
+
+SQLCHAR query1[] = "INSERT INTO \"Organization\".Organization (id, name) VALUES (1L, 'Some company')";
+
+SQLExecDirect(stmt, query1, static_cast<SQLSMALLINT>(sizeof(query1)));
+
+SQLFreeStmt(stmt, SQL_CLOSE);
+
+SQLCHAR query2[] = "INSERT INTO \"Organization\".Organization (id, name) VALUES (2L, 'Some other company')";
+
+SQLExecDirect(stmt, query2, static_cast<SQLSMALLINT>(sizeof(query2)));
+
+// Releasing statement handle.
+SQLFreeHandle(SQL_HANDLE_STMT, stmt);
+----
+
+
+[WARNING]
+====
+[discrete]
+=== Error Checking
+
+For simplicity, the example code above does not check the return codes for errors. You will want to add error checking in production.
+====
+
+== Updating Data
+
+Let's now update the salary for some of the persons stored in the cluster using the SQL `UPDATE` statement.
+
+
+[source,c++]
+----
+void AdjustSalary(SQLHDBC dbc, int64_t key, double salary)
+{
+  SQLHSTMT stmt;
+
+  // Allocate a statement handle
+  SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
+
+  SQLCHAR query[] = "UPDATE Person SET salary=? WHERE id=?";
+
+  SQLBindParameter(stmt, 1, SQL_PARAM_INPUT,
+      SQL_C_DOUBLE, SQL_DOUBLE, 0, 0, &salary, 0, 0);
+
+  SQLBindParameter(stmt, 2, SQL_PARAM_INPUT, SQL_C_SLONG,
+      SQL_BIGINT, 0, 0, &key, 0, 0);
+
+  SQLExecDirect(stmt, query, static_cast<SQLSMALLINT>(sizeof(query)));
+
+  // Releasing statement handle.
+  SQLFreeHandle(SQL_HANDLE_STMT, stmt);
+}
+
+...
+AdjustSalary(dbc, 3, 1200.0);
+AdjustSalary(dbc, 1, 2500.0);
+----
+
+== Deleting Data
+
+Finally, let's remove a few records with the help of the SQL `DELETE` statement.
+
+[source,c++]
+----
+void DeletePerson(SQLHDBC dbc, int64_t key)
+{
+  SQLHSTMT stmt;
+
+  // Allocate a statement handle
+  SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
+
+  SQLCHAR query[] = "DELETE FROM Person WHERE id=?";
+
+  SQLBindParameter(stmt, 1, SQL_PARAM_INPUT, SQL_C_SLONG, SQL_BIGINT,
+      0, 0, &key, 0, 0);
+
+  SQLExecDirect(stmt, query, static_cast<SQLSMALLINT>(sizeof(query)));
+
+  // Releasing statement handle.
+  SQLFreeHandle(SQL_HANDLE_STMT, stmt);
+}
+
+...
+DeletePerson(dbc, 1);
+DeletePerson(dbc, 4);
+----
+
+== Batching With Arrays of Parameters
+
+The ODBC driver supports batching with link:https://docs.microsoft.com/en-us/sql/odbc/reference/develop-app/using-arrays-of-parameters[arrays of parameters] for DML statements.
+
+Let's try to insert the same records we did in the example above but now with a single `SQLExecute` call:
+
+[source,c++]
+----
+SQLHSTMT stmt;
+
+// Allocating a statement handle.
+SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
+
+SQLCHAR query[] =
+    "INSERT INTO Person (id, orgId, firstName, salary) "
+    "VALUES (?, ?, ?, ?)";
+
+SQLPrepare(stmt, query, static_cast<SQLSMALLINT>(sizeof(query)));
+
+// Binding columns.
+int64_t key[4] = {0};
+int64_t orgId[4] = {0};
+char name[1024 * 4] = {0};
+SQLLEN nameLen[4] = {0};
+double salary[4] = {0};
+
+SQLBindParameter(stmt, 1, SQL_PARAM_INPUT, SQL_C_SLONG, SQL_BIGINT, 0, 0, key, 0, 0);
+SQLBindParameter(stmt, 2, SQL_PARAM_INPUT, SQL_C_SLONG, SQL_BIGINT, 0, 0, orgId, 0, 0);
+SQLBindParameter(stmt, 3, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_VARCHAR, 1024, 1024, name, 0, nameLen);
+SQLBindParameter(stmt, 4, SQL_PARAM_INPUT, SQL_C_DOUBLE, SQL_DOUBLE, 0, 0, salary, 0, 0);
+
+// Filling cache.
+key[0] = 1;
+orgId[0] = 1;
+strncpy(name, "John", 1023);
+salary[0] = 2200.0;
+nameLen[0] = SQL_NTS;
+
+key[1] = 2;
+orgId[1] = 1;
+strncpy(name + 1024, "Jane", 1023);
+salary[1] = 1300.0;
+nameLen[1] = SQL_NTS;
+
+key[2] = 3;
+orgId[2] = 2;
+strncpy(name + 1024 * 2, "Richard", 1023);
+salary[2] = 900.0;
+nameLen[2] = SQL_NTS;
+
+key[3] = 4;
+orgId[3] = 2;
+strncpy(name + 1024 * 3, "Mary", 1023);
+salary[3] = 2400.0;
+nameLen[3] = SQL_NTS;
+
+// Asking the driver to store the total number of processed argument sets
+// in the following variable.
+SQLULEN setsProcessed = 0;
+SQLSetStmtAttr(stmt, SQL_ATTR_PARAMS_PROCESSED_PTR, &setsProcessed, SQL_IS_POINTER);
+
+// Setting the size of the arguments array. This is 4 in our case.
+SQLSetStmtAttr(stmt, SQL_ATTR_PARAMSET_SIZE, reinterpret_cast<SQLPOINTER>(4), 0);
+
+// Executing the statement.
+SQLExecute(stmt);
+
+// Releasing the statement handle.
+SQLFreeHandle(SQL_HANDLE_STMT, stmt);
+----
+
+NOTE: This type of batching is currently supported for `INSERT`, `UPDATE`, `DELETE`, and `MERGE` statements and does not work for `SELECT` statements. The data-at-execution capability is not supported with arrays-of-parameters batching either.
+
+== Streaming
+
+The ODBC driver allows streaming data in bulk using the `SET` command. See the `SET` link:sql-reference/operational-commands#set-streaming[command documentation] for more information.
+
+NOTE: In streaming mode, the array of parameters and data-at-execution parameters are not supported.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/docs/_docs/SQL/ODBC/specification.adoc b/docs/_docs/SQL/ODBC/specification.adoc
new file mode 100644
index 0000000..68e671b
--- /dev/null
+++ b/docs/_docs/SQL/ODBC/specification.adoc
@@ -0,0 +1,1090 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Specification
+
+== Overview
+
+ODBC defines several interface conformance levels. In this section, you can find out which features are supported by the Apache Ignite ODBC driver.
+
+== Core Interface Conformance
+
+[width="100%",cols="60%,10%,30%"]
+|=======================================================================
+|Feature |Supported|Comments
+
+|Allocate and free all types of handles, by calling `SQLAllocHandle` and `SQLFreeHandle`.
+|YES
+|
+
+|Use all forms of the `SQLFreeStmt` function.
+|YES
+|
+
+|Bind result set columns, by calling `SQLBindCol`.
+|YES
+|
+
+|Handle dynamic parameters, including arrays of parameters, in the input direction only, by calling `SQLBindParameter` and `SQLNumParams`.
+|YES
+|
+
+|Specify a bind offset.
+|YES
+|
+
+|Use the data-at-execution dialog, involving calls to `SQLParamData` and `SQLPutData`
+|YES
+|
+
+|Manage cursors and cursor names, by calling `SQLCloseCursor`, `SQLGetCursorName`, and `SQLSetCursorName`.
+|PARTIALLY
+|`SQLCloseCursor` is implemented. Named cursors are not supported by Ignite SQL.
+
+|Gain access to the description (metadata) of result sets, by calling `SQLColAttribute`, `SQLDescribeCol`, `SQLNumResultCols`, and `SQLRowCount`.
+|YES
+|
+
+|Query the data dictionary, by calling the catalog functions `SQLColumns`, `SQLGetTypeInfo`, `SQLStatistics`, and `SQLTables`.
+|PARTIALLY
+|`SQLStatistics` is not supported.
+
+|Manage data sources and connections, by calling `SQLConnect`, `SQLDataSources`, `SQLDisconnect`, and `SQLDriverConnect`. Obtain information on drivers, no matter which ODBC level they support, by calling `SQLDrivers`.
+|YES
+|
+
+|Prepare and execute SQL statements, by calling `SQLExecDirect`, `SQLExecute`, and `SQLPrepare`.
+|YES
+|
+
+|Fetch one row of a result set or multiple rows, in the forward direction only, by calling `SQLFetch` or by calling `SQLFetchScroll` with the `FetchOrientation` argument set to `SQL_FETCH_NEXT`
+|YES
+|
+
+|Obtain an unbound column in parts, by calling `SQLGetData`.
+|YES
+|
+
+|Obtain current values of all attributes, by calling `SQLGetConnectAttr`, `SQLGetEnvAttr`, and `SQLGetStmtAttr`, and set all attributes to their default values and set certain attributes to non-default values by calling `SQLSetConnectAttr`, `SQLSetEnvAttr`, and `SQLSetStmtAttr`.
+|PARTIALLY
+|Not all attributes are supported yet. See the table below for details.
+
+|Manipulate certain fields of descriptors, by calling `SQLCopyDesc`, `SQLGetDescField`, `SQLGetDescRec`, `SQLSetDescField`, and `SQLSetDescRec`.
+|NO
+|
+
+|Obtain diagnostic information, by calling `SQLGetDiagField` and `SQLGetDiagRec`.
+|YES
+|
+
+|Detect driver capabilities, by calling `SQLGetFunctions` and `SQLGetInfo`. Also, detect the result of any text substitutions made to an SQL statement before it is sent to the data source, by calling `SQLNativeSql`.
+|YES
+|
+
+|Use the syntax of `SQLEndTran` to commit a transaction. A Core-level driver need not support true transactions; therefore, the application cannot specify `SQL_ROLLBACK` nor `SQL_AUTOCOMMIT_OFF` for the `SQL_ATTR_AUTOCOMMIT` connection attribute.
+|YES
+|
+
+|Call `SQLCancel` to cancel the data-at-execution dialog and, in multi-thread environments, to cancel an ODBC function executing in another thread. Core-level interface conformance does not mandate support for asynchronous execution of functions, nor the use of `SQLCancel` to cancel an ODBC function executing asynchronously. Neither the platform nor the ODBC driver need be multi-thread for the driver to conduct independent activities at the same time. However, in multi-thread environment [...]
+|NO
+|The current implementation does not support asynchronous execution. Cancellation of the data-at-execution dialog is not supported either.
+
+|Obtain the `SQL_BEST_ROWID` row-identifying column of tables, by calling `SQLSpecialColumns`.
+|PARTIALLY
+|The current implementation always returns an empty row set.
+
+|=======================================================================
+
+
+== Level 1 Interface Conformance
+[width="100%",cols="60%,10%,30%"]
+|=======================================================================
+|Feature |Supported|Comments
+
+|Specify the schema of database tables and views (using two-part naming).
+|YES
+|
+
+|Invoke true asynchronous execution of ODBC functions, where applicable ODBC functions are all synchronous or all asynchronous on a given connection.
+|NO
+|
+
+|Use scrollable cursors, and thereby achieve access to a result set in methods other than forward-only, by calling `SQLFetchScroll` with the `FetchOrientation` argument other than `SQL_FETCH_NEXT`.
+|NO
+|
+
+|Obtain primary keys of tables, by calling `SQLPrimaryKeys`.
+|PARTIALLY
+|Currently returns an empty result set.
+
+|Use stored procedures, through the ODBC escape sequence for procedure calls, and query the data dictionary regarding stored procedures, by calling `SQLProcedureColumns` and `SQLProcedures`.
+|NO
+|
+
+|Connect to a data source by interactively browsing the available servers, by calling `SQLBrowseConnect`.
+|NO
+|
+
+|Use ODBC functions instead of SQL statements to perform certain database operations: `SQLSetPos` with `SQL_POSITION` and `SQL_REFRESH`.
+|NO
+|
+
+|Gain access to the contents of multiple result sets generated by batches and stored procedures, by calling `SQLMoreResults`.
+|YES
+|
+
+|Delimit transactions spanning several ODBC functions, with true atomicity and the ability to specify `SQL_ROLLBACK` in `SQLEndTran`.
+|NO
+|Ignite SQL does not support transactions.
+|=======================================================================
+
+== Level 2 Interface Conformance
+[width="100%",cols="60%,10%,30%"]
+|=======================================================================
+|Feature|Supported|Comments
+
+|Use three-part names of database tables and views.
+|NO
+|Ignite SQL does not support catalogs.
+
+|Describe dynamic parameters, by calling `SQLDescribeParam`.
+|YES
+|
+
+|Use not only input parameters but also output and input/output parameters, and result values of stored procedures.
+|NO
+|Ignite SQL does not support output parameters.
+
+|Use bookmarks, including retrieving bookmarks, by calling `SQLDescribeCol` and `SQLColAttribute` on column number 0; fetching based on a bookmark, by calling `SQLFetchScroll` with the `FetchOrientation` argument set to `SQL_FETCH_BOOKMARK`; and update, delete, and fetch by bookmark operations, by calling `SQLBulkOperations` with the Operation argument set to `SQL_UPDATE_BY_BOOKMARK`, `SQL_DELETE_BY_BOOKMARK`, or `SQL_FETCH_BY_BOOKMARK`.
+|NO
+|Ignite SQL does not support bookmarks.
+
+|Retrieve advanced information about the data dictionary, by calling `SQLColumnPrivileges`, `SQLForeignKeys`, and `SQLTablePrivileges`.
+|PARTIALLY
+|`SQLForeignKeys` is implemented but returns an empty result set.
+
+|Use ODBC functions instead of SQL statements to perform additional database operations, by calling `SQLBulkOperations` with `SQL_ADD`, or `SQLSetPos` with `SQL_DELETE` or `SQL_UPDATE`.
+|NO
+|
+
+|Enable asynchronous execution of ODBC functions for specified individual statements.
+|NO
+|
+
+|Obtain the `SQL_ROWVER` row-identifying column of tables, by calling `SQLSpecialColumns`.
+|PARTIALLY
+|Implemented, but returns an empty row set.
+
+|Set the `SQL_ATTR_CONCURRENCY` statement attribute to at least one value other than `SQL_CONCUR_READ_ONLY`.
+|NO
+|
+
+|The ability to time out login requests and SQL queries (`SQL_ATTR_LOGIN_TIMEOUT` and `SQL_ATTR_QUERY_TIMEOUT`).
+|PARTIALLY
+|`SQL_ATTR_QUERY_TIMEOUT` is supported.
+`SQL_ATTR_LOGIN_TIMEOUT` is not implemented yet.
+
+|The ability to change the default isolation level; the ability to execute transactions with the "serializable" level of isolation.
+|NO
+|Ignite does not support SQL transactions.
+|=======================================================================
+
+== Function Support
+[width="100%",cols="70%,15%,15%"]
+|=======================================================================
+|Function|Supported|Conformance level
+
+|`SQLAllocHandle`
+|YES
+|Core
+
+|`SQLBindCol`
+|YES
+|Core
+
+|`SQLBindParameter`
+|YES
+|Core
+
+|`SQLBrowseConnect`
+|NO
+|Level 1
+
+|`SQLBulkOperations`
+|NO
+|Level 1
+
+|`SQLCancel`
+|NO
+|Core
+
+|`SQLCloseCursor`
+|YES
+|Core
+
+|`SQLColAttribute`
+|YES
+|Core
+
+|`SQLColumnPrivileges`
+|NO
+|Level 2
+
+|`SQLColumns`
+|YES
+|Core
+
+|`SQLConnect`
+|YES
+|Core
+
+|`SQLCopyDesc`
+|NO
+|Core
+
+|`SQLDataSources`
+|N/A
+|Core
+
+|`SQLDescribeCol`
+|YES
+|Core
+
+|`SQLDescribeParam`
+|YES
+|Level 2
+
+|`SQLDisconnect`
+|YES
+|Core
+
+|`SQLDriverConnect`
+|YES
+|Core
+
+|`SQLDrivers`
+|N/A
+|Core
+
+|`SQLEndTran`
+|PARTIALLY
+|Core
+
+|`SQLExecDirect`
+|YES
+|Core
+
+|`SQLExecute`
+|YES
+|Core
+
+|`SQLFetch`
+|YES
+|Core
+
+|`SQLFetchScroll`
+|YES
+|Core
+
+|`SQLForeignKeys`
+|PARTIALLY
+|Level 2
+
+|`SQLFreeHandle`
+|YES
+|Core
+
+|`SQLFreeStmt`
+|YES
+|Core
+
+|`SQLGetConnectAttr`
+|PARTIALLY
+|Core
+
+|`SQLGetCursorName`
+|NO
+|Core
+
+|`SQLGetData`
+|YES
+|Core
+
+|`SQLGetDescField`
+|NO
+|Core
+
+|`SQLGetDescRec`
+|NO
+|Core
+
+|`SQLGetDiagField`
+|YES
+|Core
+
+|`SQLGetDiagRec`
+|YES
+|Core
+
+|`SQLGetEnvAttr`
+|PARTIALLY
+|Core
+
+|`SQLGetFunctions`
+|NO
+|Core
+
+|`SQLGetInfo`
+|YES
+|Core
+
+|`SQLGetStmtAttr`
+|PARTIALLY
+|Core
+
+|`SQLGetTypeInfo`
+|YES
+|Core
+
+|`SQLMoreResults`
+|YES
+|Level 1
+
+|`SQLNativeSql`
+|YES
+|Core
+
+|`SQLNumParams`
+|YES
+|Core
+
+|`SQLNumResultCols`
+|YES
+|Core
+
+|`SQLParamData`
+|YES
+|Core
+
+|`SQLPrepare`
+|YES
+|Core
+
+|`SQLPrimaryKeys`
+|PARTIALLY
+|Level 1
+
+|`SQLProcedureColumns`
+|NO
+|Level 1
+
+|`SQLProcedures`
+|NO
+|Level 1
+
+|`SQLPutData`
+|YES
+|Core
+
+|`SQLRowCount`
+|YES
+|Core
+
+|`SQLSetConnectAttr`
+|PARTIALLY
+|Core
+
+|`SQLSetCursorName`
+|NO
+|Core
+
+|`SQLSetDescField`
+|NO
+|Core
+
+|`SQLSetDescRec`
+|NO
+|Core
+
+|`SQLSetEnvAttr`
+|PARTIALLY
+|Core
+
+|`SQLSetPos`
+|NO
+|Level 1
+
+|`SQLSetStmtAttr`
+|PARTIALLY
+|Core
+
+|`SQLSpecialColumns`
+|PARTIALLY
+|Core
+
+|`SQLStatistics`
+|NO
+|Core
+
+|`SQLTablePrivileges`
+|NO
+|Level 2
+
+|`SQLTables`
+|YES
+|Core
+|=======================================================================
+
+== Environment Attribute Conformance
+[width="100%",cols="70%,15%,15%"]
+|=======================================================================
+|Feature|Supported|Conformance Level
+
+|`SQL_ATTR_CONNECTION_POOLING`
+|NO
+|Optional
+
+|`SQL_ATTR_CP_MATCH`
+|NO
+|Optional
+
+|`SQL_ATTR_ODBC_VER`
+|YES
+|Core
+
+|`SQL_ATTR_OUTPUT_NTS`
+|YES
+|Optional
+|=======================================================================
+
+== Connection Attribute Conformance
+[width="100%",cols="70%,15%,15%"]
+|=======================================================================
+|Feature|Supported|Conformance Level
+
+|`SQL_ATTR_ACCESS_MODE`
+|NO
+|Core
+
+|`SQL_ATTR_ASYNC_ENABLE`
+|NO
+|Level 1 / Level 2
+
+|`SQL_ATTR_AUTO_IPD`
+|NO
+|Level 2
+
+|`SQL_ATTR_AUTOCOMMIT`
+|NO
+|Level 1
+
+|`SQL_ATTR_CONNECTION_DEAD`
+|YES
+|Level 1
+
+|`SQL_ATTR_CONNECTION_TIMEOUT`
+|YES
+|Level 2
+
+|`SQL_ATTR_CURRENT_CATALOG`
+|NO
+|Level 2
+
+|`SQL_ATTR_LOGIN_TIMEOUT`
+|NO
+|Level 2
+
+|`SQL_ATTR_ODBC_CURSORS`
+|NO
+|Core
+
+|`SQL_ATTR_PACKET_SIZE`
+|NO
+|Level 2
+
+|`SQL_ATTR_QUIET_MODE`
+|NO
+|Core
+
+|`SQL_ATTR_TRACE`
+|NO
+|Core
+
+|`SQL_ATTR_TRACEFILE`
+|NO
+|Core
+
+|`SQL_ATTR_TRANSLATE_LIB`
+|NO
+|Core
+
+|`SQL_ATTR_TRANSLATE_OPTION`
+|NO
+|Core
+
+|`SQL_ATTR_TXN_ISOLATION`
+|NO
+|Level 1 / Level 2
+|=======================================================================
+
+== Statement Attribute Conformance
+[width="100%",cols="70%,15%,15%"]
+|=======================================================================
+|Feature|Supported|Conformance Level
+
+|`SQL_ATTR_APP_PARAM_DESC`
+|PARTIALLY
+|Core
+
+|`SQL_ATTR_APP_ROW_DESC`
+|PARTIALLY
+|Core
+
+|`SQL_ATTR_ASYNC_ENABLE`
+|NO
+|Level 1 / Level 2
+
+|`SQL_ATTR_CONCURRENCY`
+|NO
+|Level 1 / Level 2
+
+|`SQL_ATTR_CURSOR_SCROLLABLE`
+|NO
+|Level 1
+
+|`SQL_ATTR_CURSOR_SENSITIVITY`
+|NO
+|Level 2
+
+|`SQL_ATTR_CURSOR_TYPE`
+|NO
+|Level 1 / Level 2
+
+|`SQL_ATTR_ENABLE_AUTO_IPD`
+|NO
+|Level 2
+
+|`SQL_ATTR_FETCH_BOOKMARK_PTR`
+|NO
+|Level 2
+
+|`SQL_ATTR_IMP_PARAM_DESC`
+|PARTIALLY
+|Core
+
+|`SQL_ATTR_IMP_ROW_DESC`
+|PARTIALLY
+|Core
+
+|`SQL_ATTR_KEYSET_SIZE`
+|NO
+|Level 2
+
+|`SQL_ATTR_MAX_LENGTH`
+|NO
+|Level 1
+
+|`SQL_ATTR_MAX_ROWS`
+|NO
+|Level 1
+
+|`SQL_ATTR_METADATA_ID`
+|NO
+|Core
+
+|`SQL_ATTR_NOSCAN`
+|NO
+|Core
+
+|`SQL_ATTR_PARAM_BIND_OFFSET_PTR`
+|YES
+|Core
+
+|`SQL_ATTR_PARAM_BIND_TYPE`
+|NO
+|Core
+
+|`SQL_ATTR_PARAM_OPERATION_PTR`
+|NO
+|Core
+
+|`SQL_ATTR_PARAM_STATUS_PTR`
+|YES
+|Core
+
+|`SQL_ATTR_PARAMS_PROCESSED_PTR`
+|YES
+|Core
+
+|`SQL_ATTR_PARAMSET_SIZE`
+|YES
+|Core
+
+|`SQL_ATTR_QUERY_TIMEOUT`
+|YES
+|Level 2
+
+|`SQL_ATTR_RETRIEVE_DATA`
+|NO
+|Level 1
+
+|`SQL_ATTR_ROW_ARRAY_SIZE`
+|YES
+|Core
+
+|`SQL_ATTR_ROW_BIND_OFFSET_PTR`
+|YES
+|Core
+
+|`SQL_ATTR_ROW_BIND_TYPE`
+|YES
+|Core
+
+|`SQL_ATTR_ROW_NUMBER`
+|NO
+|Level 1
+
+|`SQL_ATTR_ROW_OPERATION_PTR`
+|NO
+|Level 1
+
+|`SQL_ATTR_ROW_STATUS_PTR`
+|YES
+|Core
+
+|`SQL_ATTR_ROWS_FETCHED_PTR`
+|YES
+|Core
+
+|`SQL_ATTR_SIMULATE_CURSOR`
+|NO
+|Level 2
+
+|`SQL_ATTR_USE_BOOKMARKS`
+|NO
+|Level 2
+|=======================================================================
+
+== Descriptor Header Fields Conformance
+[width="100%",cols="70%,15%,15%"]
+|=======================================================================
+|Feature|Supported|Conformance Level
+
+|`SQL_DESC_ALLOC_TYPE`
+|NO
+|Core
+
+|`SQL_DESC_ARRAY_SIZE`
+|NO
+|Core
+
+|`SQL_DESC_ARRAY_STATUS_PTR`
+|NO
+|Core / Level 1
+
+|`SQL_DESC_BIND_OFFSET_PTR`
+|NO
+|Core
+
+|`SQL_DESC_BIND_TYPE`
+|NO
+|Core
+
+|`SQL_DESC_COUNT`
+|NO
+|Core
+
+|`SQL_DESC_ROWS_PROCESSED_PTR`
+|NO
+|Core
+|=======================================================================
+
+== Descriptor Record Fields Conformance
+[width="100%",cols="70%,15%,15%"]
+|=======================================================================
+|Feature|Supported|Conformance Level
+
+|`SQL_DESC_AUTO_UNIQUE_VALUE`
+|NO
+|Level 2
+
+|`SQL_DESC_BASE_COLUMN_NAME`
+|NO
+|Core
+
+|`SQL_DESC_BASE_TABLE_NAME`
+|NO
+|Level 1
+
+|`SQL_DESC_CASE_SENSITIVE`
+|NO
+|Core
+
+|`SQL_DESC_CATALOG_NAME`
+|NO
+|Level 2
+
+|`SQL_DESC_CONCISE_TYPE`
+|NO
+|Core
+
+|`SQL_DESC_DATA_PTR`
+|NO
+|Core
+
+|`SQL_DESC_DATETIME_INTERVAL_CODE`
+|NO
+|Core
+
+|`SQL_DESC_DATETIME_INTERVAL_PRECISION`
+|NO
+|Core
+
+|`SQL_DESC_DISPLAY_SIZE`
+|NO
+|Core
+
+|`SQL_DESC_FIXED_PREC_SCALE`
+|NO
+|Core
+
+|`SQL_DESC_INDICATOR_PTR`
+|NO
+|Core
+
+|`SQL_DESC_LABEL`
+|NO
+|Level 2
+
+|`SQL_DESC_LENGTH`
+|NO
+|Core
+
+|`SQL_DESC_LITERAL_PREFIX`
+|NO
+|Core
+
+|`SQL_DESC_LITERAL_SUFFIX`
+|NO
+|Core
+
+|`SQL_DESC_LOCAL_TYPE_NAME`
+|NO
+|Core
+
+|`SQL_DESC_NAME`
+|NO
+|Core
+
+|`SQL_DESC_NULLABLE`
+|NO
+|Core
+
+|`SQL_DESC_OCTET_LENGTH`
+|NO
+|Core
+
+|`SQL_DESC_OCTET_LENGTH_PTR`
+|NO
+|Core
+
+|`SQL_DESC_PARAMETER_TYPE`
+|NO
+|Core / Level 2
+
+|`SQL_DESC_PRECISION`
+|NO
+|Core
+
+|`SQL_DESC_ROWVER`
+|NO
+|Level 1
+
+|`SQL_DESC_SCALE`
+|NO
+|Core
+
+|`SQL_DESC_SCHEMA_NAME`
+|NO
+|Level 1
+
+|`SQL_DESC_SEARCHABLE`
+|NO
+|Core
+
+|`SQL_DESC_TABLE_NAME`
+|NO
+|Level 1
+
+|`SQL_DESC_TYPE`
+|NO
+|Core
+
+|`SQL_DESC_TYPE_NAME`
+|NO
+|Core
+
+|`SQL_DESC_UNNAMED`
+|NO
+|Core
+
+|`SQL_DESC_UNSIGNED`
+|NO
+|Core
+
+|`SQL_DESC_UPDATABLE`
+|NO
+|Core
+
+|=======================================================================
+
+== SQL Data Types
+
+The following SQL data types listed in the link:https://docs.microsoft.com/en-us/sql/odbc/reference/appendixes/sql-data-types[specification] are supported:
+
+[width="100%",cols="80%,20%"]
+|=======================================================================
+|Data Type |Supported
+
+|`SQL_CHAR`
+|YES
+
+|`SQL_VARCHAR`
+|YES
+
+|`SQL_LONGVARCHAR`
+|YES
+
+|`SQL_WCHAR`
+|NO
+
+|`SQL_WVARCHAR`
+|NO
+
+|`SQL_WLONGVARCHAR`
+|NO
+
+|`SQL_DECIMAL`
+|YES
+
+|`SQL_NUMERIC`
+|NO
+
+|`SQL_SMALLINT`
+|YES
+
+|`SQL_INTEGER`
+|YES
+
+|`SQL_REAL`
+|NO
+
+|`SQL_FLOAT`
+|YES
+
+|`SQL_DOUBLE`
+|YES
+
+|`SQL_BIT`
+|YES
+
+|`SQL_TINYINT`
+|YES
+
+|`SQL_BIGINT`
+|YES
+
+|`SQL_BINARY`
+|YES
+
+|`SQL_VARBINARY`
+|YES
+
+|`SQL_LONGVARBINARY`
+|YES
+
+|`SQL_TYPE_DATE`
+|YES
+
+|`SQL_TYPE_TIME`
+|YES
+
+|`SQL_TYPE_TIMESTAMP`
+|YES
+
+|`SQL_TYPE_UTCDATETIME`
+|NO
+
+|`SQL_TYPE_UTCTIME`
+|NO
+
+|`SQL_INTERVAL_MONTH`
+|NO
+
+|`SQL_INTERVAL_YEAR`
+|NO
+
+|`SQL_INTERVAL_YEAR_TO_MONTH`
+|NO
+
+|`SQL_INTERVAL_DAY`
+|NO
+
+|`SQL_INTERVAL_HOUR`
+|NO
+
+|`SQL_INTERVAL_MINUTE`
+|NO
+
+|`SQL_INTERVAL_SECOND`
+|NO
+
+|`SQL_INTERVAL_DAY_TO_HOUR`
+|NO
+
+|`SQL_INTERVAL_DAY_TO_MINUTE`
+|NO
+
+|`SQL_INTERVAL_DAY_TO_SECOND`
+|NO
+
+|`SQL_INTERVAL_HOUR_TO_MINUTE`
+|NO
+
+|`SQL_INTERVAL_HOUR_TO_SECOND`
+|NO
+
+|`SQL_INTERVAL_MINUTE_TO_SECOND`
+|NO
+
+|`SQL_GUID`
+|YES
+|=======================================================================
+
+
+== C Data Types
+
+The following C data types listed in the link:https://docs.microsoft.com/en-us/sql/odbc/reference/appendixes/c-data-types[specification] are supported:
+
+[width="100%",cols="80%,20%"]
+|=======================================================================
+|Data Type |Supported
+
+|`SQL_C_CHAR`
+|YES
+
+|`SQL_C_WCHAR`
+|YES
+
+|`SQL_C_SHORT`
+|YES
+
+|`SQL_C_SSHORT`
+|YES
+
+|`SQL_C_USHORT`
+|YES
+
+|`SQL_C_LONG`
+|YES
+
+|`SQL_C_SLONG`
+|YES
+
+|`SQL_C_ULONG`
+|YES
+
+|`SQL_C_FLOAT`
+|YES
+
+|`SQL_C_DOUBLE`
+|YES
+
+|`SQL_C_BIT`
+|YES
+
+|`SQL_C_TINYINT`
+|YES
+
+|`SQL_C_STINYINT`
+|YES
+
+|`SQL_C_UTINYINT`
+|YES
+
+|`SQL_C_BIGINT`
+|YES
+
+|`SQL_C_SBIGINT`
+|YES
+
+|`SQL_C_UBIGINT`
+|YES
+
+|`SQL_C_BINARY`
+|YES
+
+|`SQL_C_BOOKMARK`
+|NO
+
+|`SQL_C_VARBOOKMARK`
+|NO
+
+|`SQL_C_INTERVAL`* (all interval types)
+|NO
+
+|`SQL_C_TYPE_DATE`
+|YES
+
+|`SQL_C_TYPE_TIME`
+|YES
+
+|`SQL_C_TYPE_TIMESTAMP`
+|YES
+
+|`SQL_C_NUMERIC`
+|YES
+
+|`SQL_C_GUID`
+|YES
+|=======================================================================
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/docs/_docs/SQL/custom-sql-func.adoc b/docs/_docs/SQL/custom-sql-func.adoc
new file mode 100644
index 0000000..c531fc6
--- /dev/null
+++ b/docs/_docs/SQL/custom-sql-func.adoc
@@ -0,0 +1,49 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Custom SQL Functions
+
+:javaFile: {javaCodeDir}/SqlAPI.java
+
+The SQL engine allows you to extend the set of SQL functions defined by the ANSI-99 specification by adding custom SQL functions written in Java.
+
+A custom SQL function is just a public static method marked by the `@QuerySqlFunction` annotation.
+
+////
+TODO looks like it's unsupported in C#
+////
+
+
+[source,java]
+----
+include::{javaFile}[tags=sql-function-example, indent=0]
+----
+
+
+The class that owns the custom SQL function has to be registered in the `CacheConfiguration`.
+To do that, use the `setSqlFunctionClasses(...)` method.
+
+[source,java]
+----
+include::{javaFile}[tags=sql-function-config, indent=0]
+----
+
+Once you have deployed a cache with the above configuration, you can call the custom function from within SQL queries:
+
+[source,java]
+----
+include::{javaFile}[tags=sql-function-query, indent=0]
+----
+
+NOTE: Classes registered with `CacheConfiguration.setSqlFunctionClasses(...)` must be added to the classpath of all the nodes where the defined custom functions might be executed. Otherwise, you will get a `ClassNotFoundException` error when trying to execute the custom function.
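+
+For illustration, here is a minimal self-contained sketch of the pattern described above; the class, cache, and function names are hypothetical:
+
+[source,java]
+----
+public class MySqlFunctions {
+    /** Custom SQL function: squares its argument. */
+    @QuerySqlFunction
+    public static int sqr(int x) {
+        return x * x;
+    }
+}
+
+// Register the class that defines the function in the cache configuration.
+CacheConfiguration<Long, Integer> cfg = new CacheConfiguration<Long, Integer>("myCache")
+    .setSqlFunctionClasses(MySqlFunctions.class);
+
+IgniteCache<Long, Integer> cache = ignite.getOrCreateCache(cfg);
+
+// Call the function from a SQL query.
+cache.query(new SqlFieldsQuery("SELECT sqr(4)")).getAll();
+----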
diff --git a/docs/_docs/SQL/distributed-joins.adoc b/docs/_docs/SQL/distributed-joins.adoc
new file mode 100644
index 0000000..5394c3a
--- /dev/null
+++ b/docs/_docs/SQL/distributed-joins.adoc
@@ -0,0 +1,110 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Distributed Joins
+
+A distributed join is a SQL statement with a join clause that combines two or more partitioned tables.
+If the tables are joined on the partitioning column (affinity key), the join is called a _colocated join_. Otherwise, it is called a _non-colocated join_.
+
+Colocated joins are more efficient because they can be effectively distributed between the cluster nodes.
+
+By default, Ignite treats each join query as if it is a colocated join and executes it accordingly (see the corresponding section below).
+
+WARNING: If your query is non-colocated, you have to enable the non-colocated mode of query execution by setting `SqlFieldsQuery.setDistributedJoins(true)`; otherwise, the results of the query execution may be incorrect.
+
+[CAUTION]
+====
+If you often join tables, we recommend that you partition your tables on the same column (on which you join the tables).
+
+Non-colocated joins should be reserved for cases when it's impossible to use colocated joins.
+====
+
+== Colocated Joins
+
+The following image illustrates the procedure of executing a colocated join. A colocated join (`Q`) is sent to all the nodes that store the data matching the query condition. Then the query is executed over the local data set on each node (`E(Q)`). The results (`R`) are aggregated on the node that initiated the query (the client node).
+
+image::images/collocated_joins.png[]
+
+
+== Non-colocated Joins
+
+If you execute a query in a non-colocated mode, the SQL Engine executes the query locally on all the nodes that store the data matching the query condition. But because the data is not colocated, each node requests the missing data (that is not present locally) from other nodes by sending either broadcast or unicast requests. This process is depicted in the image below.
+
+image::images/non_collocated_joins.png[]
+
+If the join is done on the primary or affinity key, the nodes send unicast requests because in this case the nodes know the location of the missing data. Otherwise, nodes send broadcast requests. For performance reasons, both broadcast and unicast requests are aggregated into batches.
+
+Enable the non-colocated mode of query execution by setting a JDBC/ODBC parameter or, if you use SQL API, by calling `SqlFieldsQuery.setDistributedJoins(true)`.
+
+WARNING: If you use a non-colocated join on a column from a link:data-modeling/data-partitioning#replicated[replicated table], the column must have an index.
+Otherwise, you will get an exception.
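+
+As an illustration, a minimal hedged sketch of enabling the non-colocated mode through the SQL API (the cache and the joined tables are hypothetical):
+
+[source,java]
+----
+SqlFieldsQuery qry = new SqlFieldsQuery(
+    "SELECT p.name, c.name FROM Person p JOIN Company c ON p.companyId = c.id");
+
+// Allow the engine to fetch non-colocated data from remote nodes during the join.
+qry.setDistributedJoins(true);
+
+cache.query(qry).getAll();
+----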
+
+
+
+== Hash Joins
+
+//tag::hash-join[]
+To boost performance of join queries, Ignite supports the https://en.wikipedia.org/wiki/Hash_join[hash join
+algorithm].
+Hash joins can be more efficient than nested loop joins for many scenarios, except when the probe side of the join is very small.
+However, hash joins can only be used with equi-joins, i.e., joins that use an equality comparison in the join predicate.
+
+//end::hash-join[]
+
+To enforce the use of hash joins:
+
+. Use the `enforceJoinOrder` option:
++
+[tabs]
+--
+tab:Java API[]
+[source,java]
+----
+include::{javaCodeDir}/SqlAPI.java[tags=enforceJoinOrder,indent=0]
+----
+
+tab:JDBC[]
+[source,java]
+----
+Class.forName("org.apache.ignite.IgniteJdbcThinDriver");
+
+// Open the JDBC connection.
+Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1?enforceJoinOrder=true");
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/SqlJoinOrder.cs[tag=sqlJoinOrder,indent=0]
+----
+
+tab:C++[]
+[source,c++]
+----
+include::code-snippets/cpp/src/sql_join_order.cpp[tag=sql-join-order,indent=0]
+----
+--
+
+. Specify `USE INDEX(HASH_JOIN_IDX)` on the table for which you want to create the hash-join index:
++
+--
+
+[source, sql]
+----
+SELECT * FROM TABLE_A, TABLE_B USE INDEX(HASH_JOIN_IDX) WHERE TABLE_A.column1 = TABLE_B.column2
+----
+--
+
+
+
+
diff --git a/docs/_docs/SQL/indexes.adoc b/docs/_docs/SQL/indexes.adoc
new file mode 100644
index 0000000..4f6a36f
--- /dev/null
+++ b/docs/_docs/SQL/indexes.adoc
@@ -0,0 +1,357 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Defining Indexes
+
+:javaFile: {javaCodeDir}/Indexes.java
+:csharpFile: {csharpCodeDir}/DefiningIndexes.cs
+
+In addition to common DDL commands, such as CREATE/DROP INDEX, developers can use Ignite's link:SQL/sql-api[SQL APIs] to define indexes.
+
+[NOTE]
+====
+Indexing capabilities are provided by the 'ignite-indexing' module. If you start Ignite from Java code, link:setup#enabling-modules[add this module to your classpath].
+====
+
+Ignite automatically creates indexes for each primary key and affinity key field.
+When you define an index on a field in the value object, Ignite creates a composite index consisting of the indexed field and the cache's primary key.
+In SQL terms, it means that the index will be composed of two columns: the column you want to index and the primary key column.
+
+== Creating Indexes With SQL
+
+Refer to the link:sql-reference/ddl#create-index[CREATE INDEX] section.
+
+== Configuring Indexes Using Annotations
+
+Indexes, as well as queryable fields, can be configured from code via the `@QuerySqlField` annotation. In the example below, the Ignite SQL engine will create indexes for the `id` and `salary` fields.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=configuring-with-annotation,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::{csharpFile}[tag=idxAnnotationCfg,indent=0]
+----
+tab:C++[unsupported]
+--
+
+The type name is used as the table name in SQL queries. In this case, our table name will be `Person` (schema name usage and definition is explained in the link:SQL/schemas[Schemas] section).
+
+Both `id` and `salary` are indexed fields. `id` will be sorted in ascending order (default) and `salary` in descending order.
+
+If you do not need an index on a field but still want to use it in SQL queries, annotate the field with `@QuerySqlField` without the `index = true` parameter.
+Such a field is called a _queryable field_.
+In the example above, `name` is defined as a link:SQL/sql-api#configuring-queryable-fields[queryable field].
+
+The `age` field is neither queryable nor is it an indexed field, and thus it will not be accessible from SQL queries.
+
+When you define the indexed fields, you need to <<Registering Indexed Types,register indexed types>>.
+
+////
+Now you can execute the SQL query as follows:
+
+[source,java]
+----
+SqlFieldsQuery qry = new SqlFieldsQuery("SELECT id, name FROM Person" +
+		"WHERE id > 1500 LIMIT 10");
+----
+////
+
+
+[NOTE]
+====
+[discrete]
+=== Updating Indexes and Queryable Fields at Runtime
+
+Use the link:sql-reference/ddl#create-index[CREATE/DROP INDEX] commands if you need to manage indexes or make an object's new fields visible to the SQL engine at runtime.
+====
+
+=== Indexing Nested Objects
+Fields of nested objects can also be indexed and queried using annotations. For example, consider a `Person` object that has an `Address` object as a field:
+
+[source,java]
+----
+public class Person {
+    /** Indexed field. Will be visible for SQL engine. */
+    @QuerySqlField(index = true)
+    private long id;
+
+    /** Queryable field. Will be visible for SQL engine. */
+    @QuerySqlField
+    private String name;
+
+    /** Will NOT be visible for SQL engine. */
+    private int age;
+
+    /** Indexed field. Will be visible for SQL engine. */
+    @QuerySqlField(index = true)
+    private Address address;
+}
+----
+
+Where the structure of the `Address` class might look like:
+
+[source,java]
+----
+public class Address {
+    /** Indexed field. Will be visible for SQL engine. */
+    @QuerySqlField (index = true)
+    private String street;
+
+    /** Indexed field. Will be visible for SQL engine. */
+    @QuerySqlField(index = true)
+    private int zip;
+}
+----
+
+In the above example, the `@QuerySqlField(index = true)` annotation is specified on all the fields of the `Address` class, as well as the `Address` object in the `Person` class.
+
+This makes it possible to execute SQL queries like the following:
+
+[source,java]
+----
+QueryCursor<List<?>> cursor = personCache.query(new SqlFieldsQuery( "select * from Person where street = 'street1'"));
+----
+
+Note that you do not need to specify `address.street` in the WHERE clause of the SQL query. This is because the fields of the `Address` class are flattened within the `Person` table which simply allows us to access the `Address` fields in the queries directly.
+
+WARNING: If you create indexes for nested objects, you won't be able to run UPDATE or INSERT statements on the table.
+
+=== Registering Indexed Types
+After indexed and queryable fields are defined, they have to be registered in the SQL engine along with the object types they belong to.
+
+To specify which types should be indexed, pass the corresponding key-value pairs in the `CacheConfiguration.setIndexedTypes()` method as shown in the example below.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=register-indexed-types,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::{csharpFile}[tag=register-indexed-types,indent=0]
+----
+tab:C++[unsupported]
+--
+
+This method accepts only pairs of types: one for key class and another for value class. Primitives are passed as boxed types.
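+
+For illustration, a minimal hedged sketch (the `Person` value class is assumed to carry `@QuerySqlField` annotations):
+
+[source,java]
+----
+CacheConfiguration<Long, Person> cfg = new CacheConfiguration<>("personCache");
+
+// Register the key and value classes so that the annotated fields become visible to the SQL engine.
+cfg.setIndexedTypes(Long.class, Person.class);
+
+IgniteCache<Long, Person> cache = ignite.getOrCreateCache(cfg);
+----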
+
+[NOTE]
+====
+[discrete]
+=== Predefined Fields
+In addition to all the fields marked with a `@QuerySqlField` annotation, each table will have two special predefined fields: `pass:[_]key` and `pass:[_]val`, which represent links to whole key and value objects. This is useful, for instance, when one of them is of a primitive type and you want to filter by its value. To do this, run a query like: `SELECT * FROM Person WHERE pass:[_]key = 100`.
+====
+
+NOTE: Since Ignite supports link:key-value-api/binary-objects[Binary Objects], there is no need to add classes of indexed types to the classpath of cluster nodes. The SQL query engine can detect values of indexed and queryable fields, avoiding object deserialization.
+
+=== Group Indexes
+
+To set up a multi-field index that can accelerate queries with complex conditions, you can use a `@QuerySqlField.Group` annotation. You can add multiple `@QuerySqlField.Group` annotations in `orderedGroups` if you want a field to be a part of more than one group.
+
+For instance, in the `Person` class below we have the field `age` which belongs to an indexed group named `age_salary_idx` with a group order of "0" and descending sort order. Also, in the same group, we have the field `salary` with a group order of "3" and ascending sort order. Furthermore, the field `salary` itself is a single column index (the `index = true` parameter is specified in addition to the `orderedGroups` declaration). Group `order` does not have to be a particular number. I [...]
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaCodeDir}/Indexes_groups.java[tag=group-indexes,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/DefiningIndexes.cs[tag=groupIdx,indent=0]
+----
+tab:C++[unsupported]
+--
+
+NOTE: Annotating a field with `@QuerySqlField.Group` outside of `@QuerySqlField(orderedGroups={...})` will have no effect.
+
+== Configuring Indexes Using Query Entities
+
+Indexes and queryable fields can also be configured via the `org.apache.ignite.cache.QueryEntity` class which is convenient for Spring XML based configuration.
+
+All concepts that are discussed as part of the annotation based configuration above are also valid for the `QueryEntity` based approach. Furthermore, the types whose fields are configured with the `@QuerySqlField` annotation and are registered with the `CacheConfiguration.setIndexedTypes()` method are internally converted into query entities.
+
+The example below shows how to define a single field index, group indexes, and queryable fields.
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::code-snippets/xml/query-entities.xml[tags=ignite-config,indent=0]
+----
+
+tab:Java[]
+
+[source, java]
+----
+include::{javaFile}[tag=index-using-queryentity,indent=0]
+----
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/DefiningIndexes.cs[tag=queryEntity,indent=0]
+----
+
+tab:C++[unsupported]
+--
+
+A short name of the `valueType` is used as a table name in SQL queries. In this case, our table name will be `Person` (schema name usage and definition is explained on the link:SQL/schemas[Schemas] page).
+
+Once the `QueryEntity` is defined, you can execute the SQL query as follows:
+
+[source,java]
+----
+include::{javaFile}[tag=query,indent=0]
+----
+
+[NOTE]
+====
+[discrete]
+=== Updating Indexes and Queryable Fields at Runtime
+
+Use the link:sql-reference/ddl#create-index[CREATE/DROP INDEX] command if you need to manage indexes or make new fields of the object visible to the SQL engine at runtime.
+====
+
+== Configuring Index Inline Size
+
+Proper index inline size can help speed up queries on indexed fields.
+//For primitive types and BinaryObjects, Ignite uses a predefined inline index size
+Refer to the dedicated section in the link:SQL/sql-tuning#increasing-index-inline-size[SQL Tuning guide] for the information on how to choose a proper inline size.
+
+In most cases, you will only need to set the inline size for indexes on variable-length fields, such as strings or arrays.
+The default value is 10.
+
+You can change the default value by setting either
+
+* inline size for each index individually, or
+* `CacheConfiguration.sqlIndexMaxInlineSize` property for all indexes within a given cache, or
+* `IGNITE_MAX_INDEX_PAYLOAD_SIZE` system property for all indexes in the cluster
+
+The settings are applied in the order listed above.
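+
+For example, a hedged sketch of setting the cache-wide default through `CacheConfiguration` (the cache name is hypothetical):
+
+[source,java]
+----
+CacheConfiguration<Long, Person> cfg = new CacheConfiguration<>("personCache");
+
+// Default inline size, in bytes, for all indexes of this cache.
+cfg.setSqlIndexMaxInlineSize(64);
+----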
+
+//Ignite automatically creates indexes on the primary key and on the affinity key.
+//The inline size for these indexes can be configured via the `CacheConfiguration.sqlIndexMaxInlineSize` property.
+
+You can also configure the inline size for each index individually, which overrides the default value.
+To set the index inline size for a user-defined index, use one of the following methods. In all cases, the value is set in bytes.
+
+* When using annotations:
++
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=annotation-with-inline-size,indent=0]
+----
+tab:C#/.NET[]
+[source,java]
+----
+include::{csharpFile}[tag=annotation-with-inline-size,indent=0]
+----
+tab:C++[unsupported]
+--
+
+* When using `QueryEntity`:
++
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=query-entity-with-inline-size,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::{csharpFile}[tag=query-entity-with-inline-size,indent=0]
+----
+
+tab:C++[unsupported]
+--
+
+* If you create indexes using the `CREATE INDEX` command, you can use the `INLINE_SIZE` option to set the inline size. See examples in the link:sql-reference/ddl[corresponding section].
++
+[source, sql]
+----
+create index country_idx on Person (country) INLINE_SIZE 13;
+----
+
+
+== Custom Keys
+If you use only predefined SQL data types for primary keys, you do not need any additional manipulation of the SQL schema configuration. These data types are defined by the `GridQueryProcessor.SQL_TYPES` constant and are listed below.
+
+Predefined SQL data types include:
+
+- all the primitives and their wrappers except `char` and `Character`
+- `String`
+- `BigDecimal`
+- `byte[]`
+- `java.util.Date`, `java.sql.Date`, `java.sql.Timestamp`
+- `java.util.UUID`
+
+However, once you decide to introduce a custom complex key and refer to its fields from DML statements, you need to:
+
+- Define those fields in the `QueryEntity` the same way as you set fields for the value object.
+- Use the new configuration parameter `QueryEntity.setKeyFields(..)` to distinguish key fields from value fields.
+
+The example below shows how to do this.
+
+[tabs]
+--
+
+tab:XML[]
+[source,xml]
+----
+include::code-snippets/xml/custom-keys.xml[tags=ignite-config;!discovery, indent=0]
+
+----
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=custom-key,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::{csharpFile}[tag=custom-key,indent=0]
+----
+tab:C++[unsupported]
+--
+
+
+[NOTE]
+====
+[discrete]
+=== Automatic Hash Code Calculation and Equals Implementation
+
+If a custom key can be serialized into a binary form, then Ignite calculates its hash code and implements the `equals()` method automatically.
+
+However, if the key's type is `Externalizable` and cannot be serialized into the binary form, you have to implement the `hashCode` and `equals` methods manually. See the link:key-value-api/binary-objects[Binary Objects] page for more details.
+====
+
+
diff --git a/docs/_docs/SQL/schemas.adoc b/docs/_docs/SQL/schemas.adoc
new file mode 100644
index 0000000..613fc46
--- /dev/null
+++ b/docs/_docs/SQL/schemas.adoc
@@ -0,0 +1,94 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Understanding Schemas
+
+== Overview
+
+Ignite has a number of default schemas and supports creating custom schemas.
+
+There are two schemas that are available by default:
+
+- The SYS schema, which contains a number of system views with information about cluster nodes. You can't create tables in this schema. Refer to the link:monitoring-metrics/system-views[System Views] page for further information.
+- The <<PUBLIC Schema,PUBLIC schema>>, which is used by default whenever a schema is not specified.
+
+Custom schemas are created in the following cases:
+
+- You can specify custom schemas in the cluster configuration. See <<Custom Schemas>>.
+- Ignite creates a schema for each cache created via one of the programming interfaces or XML configuration. See <<Cache and Schema Names>>.
+
+
+== PUBLIC Schema
+
+The PUBLIC schema is used by default whenever a schema is required and is not specified. For example, when you connect to the cluster via JDBC without setting the schema explicitly, you will connect to the PUBLIC schema.
+
+
+== Custom Schemas
+Custom schemas can be set via the `sqlSchemas` property of `IgniteConfiguration`. You can specify a list of schemas in the configuration before starting your cluster and then create objects in these schemas at runtime.
+
+Below is a configuration example with two custom schemas.
+
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::code-snippets/xml/schemas.xml[tags=ignite-config;!discovery, indent=0]
+----
+tab:Java[]
+[source,java]
+----
+include::{javaCodeDir}/Schemas.java[tags=custom-schemas, indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/UnderstandingSchemas.cs[tag=schemas,indent=0]
+----
+
+tab:C++[unsupported]
+--
+
+To connect to a specific schema via, for example, a JDBC driver, provide the schema name in the connection string:
+
+[source,text]
+----
+jdbc:ignite:thin://127.0.0.1/MY_SCHEMA
+----
+
+== Cache and Schema Names
+When you create a cache with link:SQL/sql-api#configuring-queryable-fields[queryable fields], you can manipulate the cached data using the link:SQL/sql-api[SQL API]. In SQL terms, each such cache corresponds to a separate schema whose name equals the name of the cache.
+
+Similarly, when you create a table via a DDL statement, you can access it as a key-value cache via Ignite's supported programming interfaces. The name of the corresponding cache can be specified by providing the `CACHE_NAME` parameter in the `WITH` part of the `CREATE TABLE` statement.
+
+[source,sql]
+----
+CREATE TABLE City (
+  ID INT(11),
+  Name CHAR(35),
+  CountryCode CHAR(3),
+  District CHAR(20),
+  Population INT(11),
+  PRIMARY KEY (ID, CountryCode)
+) WITH "backups=1, CACHE_NAME=City";
+----
+
+See the link:sql-reference/ddl#create-table[CREATE TABLE] page for more details.
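+
+Because the statement above sets `CACHE_NAME=City`, the same data can also be reached through the key-value API; a minimal hedged sketch, assuming a started `Ignite` instance:
+
+[source,java]
+----
+// The cache name matches the CACHE_NAME parameter of the CREATE TABLE statement.
+IgniteCache<?, ?> cityCache = ignite.cache("City");
+----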
+
+If you do not use this parameter, the cache name is defined in the following format (in capital letters):
+
+....
+SQL_<SCHEMA_NAME>_<TABLE_NAME>
+....
diff --git a/docs/_docs/SQL/sql-api.adoc b/docs/_docs/SQL/sql-api.adoc
new file mode 100644
index 0000000..c372c5a
--- /dev/null
+++ b/docs/_docs/SQL/sql-api.adoc
@@ -0,0 +1,352 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= SQL API
+:javaSourceFile: {javaCodeDir}/SqlAPI.java
+
+In addition to using the JDBC driver, Java developers can use Ignite's SQL APIs to query and modify data stored in Ignite.
+
+The `SqlFieldsQuery` class is an interface for executing SQL statements and navigating through the results. `SqlFieldsQuery` is executed through the `IgniteCache.query(SqlFieldsQuery)` method, which returns a query cursor.
+
+== Configuring Queryable Fields
+
+If you want to query a cache using SQL statements, you need to define which fields of the value objects are queryable. Queryable fields are the fields of your data model that the SQL engine can "see" and query.
+
+NOTE: If you create tables using JDBC or SQL tools, you do not need to define queryable fields.
+
+[NOTE]
+====
+Indexing capabilities are provided by the 'ignite-indexing' module. If you start Ignite from Java code, link:setup#enabling-modules[add this module to the classpath of your application].
+====
+
+In Java, queryable fields can be configured in two ways:
+
+* using annotations
+* by defining query entities
+
+
+=== @QuerySqlField Annotation
+
+To make specific fields queryable, annotate the fields in the value class definition with the `@QuerySqlField` annotation and call `CacheConfiguration.setIndexedTypes(...)`.
+////
+TODO : CacheConfiguration.setIndexedTypes is presented only in java, C# got different API, rewrite sentence above
+////
+
+
+[tabs]
+--
+tab:Java[]
+
+[source,java]
+----
+include::{javaCodeDir}/QueryEntitiesExampleWithAnnotation.java[tags=query-entity-annotation, indent=0]
+----
+
+Make sure to call `CacheConfiguration.setIndexedTypes(...)` to let the SQL engine know about the annotated fields.
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/UsingSqlApi.cs[tag=sqlQueryFields,indent=0]
+----
+tab:C++[unsupported]
+--
+
+=== Query Entities
+
+You can define queryable fields using the `QueryEntity` class. Query entities can be configured via XML configuration.
+
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::code-snippets/xml/query-entities.xml[tags=ignite-config,indent=0]
+----
+tab:Java[]
+[source,java]
+----
+include::{javaCodeDir}/QueryEntityExample.java[tags=query-entity,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/UsingSqlApi.cs[tag=queryEntities,indent=0]
+----
+tab:C++[unsupported]
+--
+
+== Querying
+
+To execute a SELECT query on a cache, create an instance of `SqlFieldsQuery`, pass the query string to the constructor, and run `cache.query(...)`.
+Note that in the following example, the Person cache must be configured to be <<Configuring Queryable Fields,visible to the SQL engine>>.
+
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaSourceFile}[tag=simple-query,indent=0]
+----
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/UsingSqlApi.cs[tag=querying,indent=0]
+----
+tab:C++[]
+[source,cpp]
+----
+include::code-snippets/cpp/src/sql.cpp[tag=sql-fields-query,indent=0]
+----
+--
+
+`SqlFieldsQuery` returns a cursor that iterates through the results that match the SQL query.
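+
+For example, a minimal hedged sketch of iterating over the cursor (assuming a cache named `personCache` with queryable `Person` fields):
+
+[source,java]
+----
+SqlFieldsQuery sql = new SqlFieldsQuery(
+    "SELECT name, salary FROM Person WHERE salary > ?").setArgs(50_000);
+
+try (QueryCursor<List<?>> cursor = personCache.query(sql)) {
+    for (List<?> row : cursor)
+        System.out.println("name=" + row.get(0) + ", salary=" + row.get(1));
+}
+----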
+
+=== Local Execution
+
+To force local execution of a query, use `SqlFieldsQuery.setLocal(true)`. In this case, the query is executed against the data stored on the node where the query is run. It means that the results of the query are almost always incomplete. Use the local mode only if you are confident you understand this limitation.
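+
+A hedged sketch of forcing local execution (the cache and table are hypothetical):
+
+[source,java]
+----
+SqlFieldsQuery qry = new SqlFieldsQuery("SELECT name FROM Person");
+
+// Execute only against the data stored on this node; the result may be incomplete.
+qry.setLocal(true);
+
+cache.query(qry).getAll();
+----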
+
+=== Subqueries in WHERE Clause
+
+`SELECT` queries used in `INSERT` and `MERGE` statements as well as `SELECT` queries generated by `UPDATE` and `DELETE` operations are distributed and executed in either link:SQL/distributed-joins[colocated or non-colocated distributed modes].
+
+However, if there is a subquery that is executed as part of a `WHERE` clause, then it can be executed in the colocated mode only.
+
+For instance, let's consider the following query:
+
+[source,sql]
+----
+DELETE FROM Person WHERE id IN
+    (SELECT personId FROM Salary s WHERE s.amount > 2000);
+----
+The SQL engine generates the `SELECT` query in order to get a list of entries to be deleted. The query is distributed and executed across the cluster and looks like the one below:
+[source,sql]
+----
+SELECT _key, _val FROM Person WHERE id IN
+    (SELECT personId FROM Salary s WHERE s.amount > 2000);
+----
+However, the subquery from the `IN` clause (`SELECT personId FROM Salary ...`) is not distributed further and is executed over the local data set available on the node.
+
+== Inserting, Updating, Deleting, and Merging
+
+With `SqlFieldsQuery` you can execute the other DML commands in order to modify the data:
+
+
+[tabs]
+--
+tab:INSERT[]
+[source,java]
+----
+include::{javaSourceFile}[tag=insert,indent=0]
+----
+
+tab:UPDATE[]
+[source,java]
+----
+include::{javaSourceFile}[tag=update,indent=0]
+----
+
+tab:DELETE[]
+[source,java]
+----
+include::{javaSourceFile}[tag=delete,indent=0]
+----
+
+tab:MERGE[]
+[source,java]
+----
+include::{javaSourceFile}[tag=merge,indent=0]
+----
+--
+
+When using `SqlFieldsQuery` to execute DDL statements, you must call `getAll()` on the cursor returned from the `query(...)` method.
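+
+For instance, a hedged sketch of executing a DDL statement and draining the cursor (the index name is hypothetical):
+
+[source,java]
+----
+// DDL statements return no rows, but getAll() still has to be called on the returned cursor.
+cache.query(new SqlFieldsQuery(
+    "CREATE INDEX IF NOT EXISTS idx_person_name ON Person (name)")).getAll();
+----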
+
+== Specifying the Schema
+
+By default, any SELECT statement executed via `SqlFieldsQuery` is resolved against the PUBLIC schema. However, if the table you want to query is in a different schema, you can specify the schema by calling `SqlFieldsQuery.setSchema(...)`. In this case, the statement is executed in the given schema.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaSourceFile}[tag=set-schema,indent=0]
+----
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/UsingSqlApi.cs[tag=schema,indent=0]
+----
+
+tab:C++[]
+[source,cpp]
+----
+include::code-snippets/cpp/src/sql.cpp[tag=sql-fields-query-scheme,indent=0]
+----
+--
+
+Alternatively, you can define the schema in the statement:
+
+[source,java]
+----
+SqlFieldsQuery sql = new SqlFieldsQuery("select name from Person.City");
+----
+
+== Creating Tables
+
+You can pass any supported DDL statement to `SqlFieldsQuery` and execute it on a cache as shown below.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaSourceFile}[tag=create-table,indent=0]
+----
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/UsingSqlApi.cs[tag=creatingTables,indent=0]
+----
+
+tab:C++[]
+[source,cpp]
+----
+include::code-snippets/cpp/src/sql_create.cpp[tag=sql-create,indent=0]
+----
+--
+
+
+In terms of SQL schema, the following tables are created as a result of executing the code:
+
+* Table "Person" in the "Person" schema (if it hasn't been created before).
+* Table "City" in the "Person" schema.
+
+To query the "City" table, use statements like `select * from Person.City` or `new SqlFieldsQuery("select * from City").setSchema("PERSON")` (note the uppercase).
+
+
+////////////////////////////////////////////////////////////////////////////////
+== Joining Tables
+
+
+== Cross-Table Queries
+
+
+`SqlQuery.setSchema("PUBLIC")`
+
+++++
+<code-tabs>
+<code-tab data-tab="Java">
+++++
+[source,java]
+----
+IgniteCache cache = ignite.getOrCreateCache(
+    new CacheConfiguration<>()
+        .setName("Person")
+        .setIndexedTypes(Long.class, Person.class));
+
+// Creating City table.
+cache.query(new SqlFieldsQuery("CREATE TABLE City " +
+    "(id int primary key, name varchar, region varchar)").setSchema("PUBLIC")).getAll();
+
+// Creating Organization table.
+cache.query(new SqlFieldsQuery("CREATE TABLE Organization " +
+    "(id int primary key, name varchar, cityName varchar)").setSchema("PUBLIC")).getAll();
+
+// Joining data between City, Organizaion and Person tables. The latter
+// was created with either annotations or QueryEntity approach.
+SqlFieldsQuery qry = new SqlFieldsQuery("SELECT o.name from Organization o " +
+    "inner join \"Person\".Person p on o.id = p.orgId " +
+    "inner join City c on c.name = o.cityName " +
+    "where p.age > 25 and c.region <> 'Texas'");
+
+// Set the query's default schema to PUBLIC.
+// Table names from the query without the schema set will be
+// resolved against PUBLIC schema.
+// Person table belongs to "Person" schema (person cache) and this is why
+// that schema name is set explicitly.
+qry.setSchema("PUBLIC");
+
+// Executing the query.
+cache.query(qry).getAll();
+----
+++++
+</code-tab>
+<code-tab data-tab="C#/.NET">
+++++
+[source,csharp]
+----
+
+----
+++++
+</code-tab>
+<code-tab data-tab="C++">
+++++
+[source,cpp]
+----
+TODO
+----
+++++
+</code-tab>
+</code-tabs>
+++++
+
+
+////////////////////////////////////////////////////////////////////////////////
+
+== Cancelling Queries
+There are two ways to cancel long-running queries.
+
+The first approach is to prevent runaway queries by setting a query execution timeout.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaSourceFile}[tag=set-timeout,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/UsingSqlApi.cs[tag=qryTimeout,indent=0]
+----
+tab:C++[unsupported]
+--
+
+The second approach is to halt the query by using `QueryCursor.close()`.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaSourceFile}[tag=cancel-by-closing,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/UsingSqlApi.cs[tag=cursorDispose,indent=0]
+----
+tab:C++[unsupported]
+--
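+A hedged sketch of the second approach, using try-with-resources so that `close()` is invoked even on early exit (the cache and query are hypothetical):
+
+[source,java]
+----
+SqlFieldsQuery qry = new SqlFieldsQuery("SELECT * FROM Person");
+
+try (QueryCursor<List<?>> cursor = cache.query(qry)) {
+    for (List<?> row : cursor) {
+        // Process the row. Leaving this block closes the cursor and cancels
+        // the query on the server side if it is still running.
+    }
+}
+----
+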
+
+== Example
+
+The Ignite Community Edition distribution package includes a ready-to-run `SqlDmlExample` as a part of its link:{githubUrl}/examples/src/main/java/org/apache/ignite/examples/sql/SqlDmlExample.java[source code]. This example demonstrates the usage of all the above-mentioned DML operations.
diff --git a/docs/_docs/SQL/sql-introduction.adoc b/docs/_docs/SQL/sql-introduction.adoc
new file mode 100644
index 0000000..bfe6d11
--- /dev/null
+++ b/docs/_docs/SQL/sql-introduction.adoc
@@ -0,0 +1,53 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Working with SQL
+
+Ignite comes with an ANSI-99 compliant, horizontally scalable, and fault-tolerant distributed SQL database. The data is distributed either by partitioning it across cluster nodes or by full replication, depending on the use case.
+
+As a SQL database, Ignite supports all DML commands including SELECT, UPDATE, INSERT, and DELETE queries and also implements a subset of DDL commands relevant for distributed systems.
+
+You can interact with Ignite as you would with any other SQL-enabled storage by connecting with link:SQL/JDBC/jdbc-driver/[JDBC] or link:SQL/ODBC/odbc-driver[ODBC] drivers from both external tools and applications. Java, .NET, and C++ developers can leverage native link:SQL/sql-api[SQL APIs].
+
+Internally, SQL tables have the same data structure as link:data-modeling/data-modeling#key-value-cache-vs-sql-table[key-value caches]. This means that you can change the partition distribution of your data and leverage link:data-modeling/affinity-collocation[affinity colocation techniques] for better performance.
+
+Ignite's SQL engine uses H2 Database to parse and optimize queries and generate execution plans.
+
+== Distributed Queries
+
+Queries against link:data-modeling/data-partitioning#partitioned[partitioned] tables are executed in a distributed manner:
+
+- The query is parsed and split into multiple “map” queries and a single “reduce” query.
+- All the map queries are executed on all the nodes where required data resides.
+- All the nodes provide result sets of local execution to the query initiator, which, in turn, merges the provided result sets into the final result.
+
+You can force a query to be processed locally, i.e. on the subset of data that is stored on the node where the query is executed.
+
+== Local Queries
+
+If a query is executed over a link:data-modeling/data-partitioning#replicated[replicated] table, it will be run against the local data.
+
+Queries over partitioned tables are executed in a distributed manner.
+However, you can force local execution of a query over a partitioned table.
+See link:SQL/sql-api#local-execution[Local Execution] for details.
+
+
+////
+== Known Limitations
+TODO
+
+https://apacheignite-sql.readme.io/docs/how-ignite-sql-works#section-known-limitations
+
+https://issues.apache.org/jira/browse/IGNITE-7822 - describe this if not fixed
+////
diff --git a/docs/_docs/SQL/sql-transactions.adoc b/docs/_docs/SQL/sql-transactions.adoc
new file mode 100644
index 0000000..6824746
--- /dev/null
+++ b/docs/_docs/SQL/sql-transactions.adoc
@@ -0,0 +1,87 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= SQL Transactions
+:javaSourceFile: {javaCodeDir}/SqlTransactions.java
+
+IMPORTANT: Support for SQL transactions is currently in the beta stage. For production use, consider key-value transactions.
+
+== Overview
+SQL Transactions are supported for caches that use the `TRANSACTIONAL_SNAPSHOT` atomicity mode. The `TRANSACTIONAL_SNAPSHOT` mode is the implementation of multiversion concurrency control (MVCC) for Ignite caches. For more information about MVCC and current limitations, visit the link:transactions/mvcc[Multiversion Concurrency Control] page.
+
+See the link:sql-reference/transactions[Transactions] page for the transaction syntax supported by Ignite.
+
+== Enabling MVCC
+To enable MVCC for a cache, use the `TRANSACTIONAL_SNAPSHOT` atomicity mode in the cache configuration. If you create a table with the `CREATE TABLE` command, specify the atomicity mode as a parameter in the `WITH` part of the command:
+
+[tabs]
+--
+tab:SQL[]
+[source,sql]
+----
+CREATE TABLE Person WITH "ATOMICITY=TRANSACTIONAL_SNAPSHOT"
+----
+tab:XML[]
+[source,xml]
+----
+<bean class="org.apache.ignite.configuration.IgniteConfiguration">
+    <property name="cacheConfiguration">
+        <bean class="org.apache.ignite.configuration.CacheConfiguration">
+
+            <property name="name" value="myCache"/>
+
+            <property name="atomicityMode" value="TRANSACTIONAL_SNAPSHOT"/>
+
+        </bean>
+    </property>
+</bean>
+----
+
+tab:Java[]
+[source,java]
+----
+include::{javaSourceFile}[tag=enable,indent=0]
+----
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/SqlTransactions.cs[tag=mvcc,indent=0]
+----
+tab:C++[unsupported]
+--
+
+
+
+== Limitations
+
+=== Cross-Cache Transactions
+
+The `TRANSACTIONAL_SNAPSHOT` mode is enabled per cache and does not permit caches with different atomicity modes within one transaction. Thus, if you want to cover multiple tables in one SQL transaction, all tables must be created with the `TRANSACTIONAL_SNAPSHOT` mode.
+
+=== Nested Transactions
+
+Ignite supports three modes of handling nested SQL transactions that can be enabled via a JDBC/ODBC connection parameter.
+
+[source,sql]
+----
+jdbc:ignite:thin://127.0.0.1/?nestedTransactionsMode=COMMIT
+----
+
+
+When a nested transaction occurs within another transaction, the system behavior depends on the `nestedTransactionsMode` parameter:
+
+- `ERROR` — When the nested transaction is encountered, an error is thrown and the enclosing transaction is rolled back. This is the default behavior.
+- `COMMIT` — The enclosing transaction is committed; the nested transaction starts and is committed when its COMMIT statement is encountered. The rest of the statements in the enclosing transaction are executed as implicit transactions.
+- `IGNORE` — DO NOT USE THIS MODE. The beginning of the nested transaction is ignored, statements within the nested transaction will be executed as part of the enclosing transaction, and all changes will be committed with the commit of the nested transaction. The subsequent statements of the enclosing transaction will be executed as implicit transactions.
diff --git a/docs/_docs/SQL/sql-tuning.adoc b/docs/_docs/SQL/sql-tuning.adoc
new file mode 100644
index 0000000..35872e8
--- /dev/null
+++ b/docs/_docs/SQL/sql-tuning.adoc
@@ -0,0 +1,471 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= SQL Performance Tuning
+
+This article outlines basic and advanced optimization techniques for Ignite SQL queries. Some of the sections are also useful for debugging and troubleshooting.
+
+
+== Using the EXPLAIN Statement
+
+Ignite supports the `EXPLAIN` statement, which can be used to read the execution plan of a query.
+Use this command to analyze your queries for possible optimizations.
+Note that the plan contains multiple rows: the last one holds the query for the reducer side (usually your application), while the others are for the map nodes (usually server nodes).
+Read the link:SQL/sql-introduction#distributed-queries[Distributed Queries] section to learn how queries are executed in Ignite.
+
+[source,sql]
+----
+EXPLAIN SELECT name FROM Person WHERE age = 26;
+----
+
+The execution plan is generated by H2 as described link:http://www.h2database.com/html/performance.html#explain_plan[here, window=_blank].
+
+== OR Operator and Selectivity
+
+//*TODO*: is this still valid?
+
+If a query contains an `OR` operator, then indexes may not be used as expected depending on the complexity of the query.
+For example, for the query `select name from Person where gender='M' and (age = 20 or age = 30)`, an index on the `gender` field will be used instead of an index on the `age` field, although the latter is a more selective index.
+As a workaround for this issue, you can rewrite the query with `UNION ALL` (notice that `UNION` without `ALL` will return `DISTINCT` rows, which will change the query semantics and will further penalize your query performance):
+
+[source,sql]
+----
+SELECT name FROM Person WHERE gender='M' and age = 20
+UNION ALL
+SELECT name FROM Person WHERE gender='M' and age = 30
+----
+
+== Avoid Having Too Many Columns
+
+Avoid having too many columns in the result set of a `SELECT` query. Due to limitations of the H2 query parser, queries with 100+ columns may perform worse than expected.
+
+== Lazy Loading
+
+By default, Ignite attempts to load the whole result set to memory and send it back to the query initiator (which is usually your application).
+This approach provides optimal performance for queries of small or medium result sets.
+However, if the result set is too big to fit in the available memory, it can lead to prolonged GC pauses and even `OutOfMemoryError` exceptions.
+
+To minimize memory consumption, at the cost of a moderate performance hit, you can load and process the result sets lazily by passing the `lazy` parameter to the JDBC and ODBC connection strings or by using a similar method available in the Java, .NET, and C++ APIs:
+
+[tabs]
+--
+
+tab:Java[]
+[source,java]
+----
+SqlFieldsQuery query = new SqlFieldsQuery("SELECT * FROM Person WHERE id > 10");
+
+// Result set will be loaded lazily.
+query.setLazy(true);
+----
+tab:JDBC[]
+[source,sql]
+----
+jdbc:ignite:thin://192.168.0.15?lazy=true
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+var query = new SqlFieldsQuery("SELECT * FROM Person WHERE id > 10")
+{
+    // Result set will be loaded lazily.
+    Lazy = true
+};
+----
+tab:C++[]
+--
+
+////
+*TODO* Add tabs for ODBC and other programming languages - C# and C++
+////
+
+== Querying Colocated Data
+
+When Ignite executes a distributed query, it sends sub-queries to individual cluster nodes to fetch the data and groups the results on the reducer node (usually your application).
+If you know in advance that the data you are querying is link:data-modeling/affinity-collocation[colocated] by the `GROUP BY` condition, you can use `SqlFieldsQuery.collocated = true` to tell the SQL engine to do the grouping on the remote nodes.
+This will reduce network traffic between the nodes and query execution time.
+When this flag is set to `true`, the query is executed on individual nodes first and the results are sent to the reducer node for final calculation.
+
+Consider the following example, in which we assume that the data is colocated by `department_id` (in other words, the `department_id` field is configured as the affinity key).
+
+[source,sql]
+----
+SELECT SUM(salary) FROM Employee GROUP BY department_id
+----
+
+Because of the nature of the SUM operation, Ignite sums up the salaries across the elements stored on individual nodes, and then sends these sums to the reducer node where the final result is calculated.
+This operation is already distributed, and enabling the `collocated` flag only slightly improves performance.
+
+Let's take a slightly different example:
+
+[source,sql]
+----
+SELECT AVG(salary) FROM Employee GROUP BY department_id
+----
+
+In this example, Ignite has to fetch all (`salary`, `department_id`) pairs to the reducer node and calculate the results there.
+However, if employees are colocated by the `department_id` field, i.e. employee data for the same department is stored on the same node, setting `SqlFieldsQuery.collocated = true` reduces query execution time because Ignite calculates the averages for each department on the individual nodes and sends the results to the reducer node for final calculation.
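+
+If you use the Java API, the flag can be set on the query object. The following is a minimal sketch for the query above; the `cache` instance is an assumption made for the example:
+
+[source, java]
+----
+SqlFieldsQuery qry = new SqlFieldsQuery(
+    "SELECT AVG(salary) FROM Employee GROUP BY department_id");
+
+// Tell the SQL engine that the data is colocated by department_id,
+// so that grouping can be performed on the remote nodes.
+qry.setCollocated(true);
+
+cache.query(qry).getAll();
+----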
+
+
+== Enforcing Join Order
+
+When the enforce join order hint is set, the query optimizer will not reorder tables in joins.
+In other words, the order in which joins are applied during query execution will be the same as specified in the query.
+Without this flag, the query optimizer can reorder joins to improve performance.
+However, sometimes it might make an incorrect decision.
+This flag helps to control and explicitly specify the order of joins instead of relying on the optimizer.
+
+Consider the following example:
+
+[source, sql]
+----
+SELECT * FROM Person p
+JOIN Company c ON p.company = c.name where p.name = 'John Doe'
+AND p.age > 20
+AND p.id > 5000
+AND p.id < 100000
+AND c.name NOT LIKE 'O%';
+----
+
+This query contains a join between two tables: `Person` and `Company`.
+To get the best performance, we should understand which join will return the smallest result set.
+The table with the smaller result set size should be given first in the join pair.
+To get the size of each result set, let's test each part.
+
+.Q1:
+[source, sql]
+----
+SELECT count(*)
+FROM Person p
+where
+p.name = 'John Doe'
+AND p.age > 20
+AND p.id > 5000
+AND p.id < 100000;
+----
+
+.Q2:
+[source, sql]
+----
+SELECT count(*)
+FROM Company c
+where
+c.name NOT LIKE 'O%';
+----
+
+After running Q1 and Q2, we can get two different outcomes:
+
+Case 1:
+[cols="1,1",opts="stretch,autowidth",stripes=none]
+|===
+|Q1 | 30000
+|Q2 |100000
+|===
+
+Q2 returns more entries than Q1.
+In this case, we don't need to modify the original query, because the smaller subset is already on the left side of the join.
+
+Case 2:
+[cols="1,1",opts="stretch,autowidth",stripes=none]
+|===
+|Q1 | 50000
+|Q2 |10000
+|===
+
+Q1 returns more entries than Q2. So we need to change the initial query as follows:
+
+[source, sql]
+----
+SELECT *
+FROM Company c
+JOIN Person p
+ON p.company = c.name
+where
+p.name = 'John Doe'
+AND p.age > 20
+AND p.id > 5000
+AND p.id < 100000
+AND c.name NOT LIKE 'O%';
+----
+
+The force join order hint can be specified as follows:
+
+* link:SQL/JDBC/jdbc-driver#parameters[JDBC driver connection parameter]
+* link:SQL/ODBC/connection-string-dsn#supported-arguments[ODBC driver connection attribute]
+* If you use link:SQL/sql-api[SqlFieldsQuery] to execute SQL queries, you can set the enforce join order hint by calling the `SqlFieldsQuery.setEnforceJoinOrder(true)` method.
+
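+Below is a minimal Java sketch of the last option, assuming the rewritten query from the example above:
+
+[source, java]
+----
+SqlFieldsQuery qry = new SqlFieldsQuery(
+    "SELECT * FROM Company c JOIN Person p ON p.company = c.name " +
+    "WHERE p.name = 'John Doe' AND c.name NOT LIKE 'O%'");
+
+// Keep the join order exactly as written in the query text.
+qry.setEnforceJoinOrder(true);
+----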
+
+== Increasing Index Inline Size
+
+Every entry in the index has a constant size which is calculated during index creation. This size is called _index inline size_.
+Ideally, this size should be enough to store the full indexed entry in serialized form.
+When values are not fully included in the index, Ignite may need to perform additional data page reads during index lookup, which can impair performance if persistence is enabled.
+
+
+Here is how values are stored in the index:
+
+// the source code block below uses css-styles from the pygments library. If you change the highlighting library, you should change the syles as well.
+[source,java,subs="quotes"]
+----
+[tok-kt]#int#
+0     1       5
+| tag | value |
+[tok-k]#Total: 5 bytes#
+
+[tok-kt]#long#
+0     1       9
+| tag | value |
+[tok-k]#Total: 9 bytes#
+
+[tok-kt]#String#
+0     1      3             N
+| tag | size | UTF-8 value |
+[tok-k]#Total: 3 + string length#
+
+[tok-kt]#POJO (BinaryObject)#
+0     1         5
+| tag | BO hash |
+[tok-k]#Total: 5#
+----
+
+For primitive data types (bool, byte, short, int, etc.), Ignite automatically calculates the index inline size so that the values are included in full.
+For example, for `int` fields, the inline size is 5 (1 byte for the tag and 4 bytes for the value itself). For `long` fields, the inline size is 9 (1 byte for the tag + 8 bytes for the value).
+
+For binary objects, the index includes the hash of each object, which is enough to avoid collisions. The inline size is 5.
+
+For variable-length data, indexes include only the first several bytes of the value.
+Therefore, when indexing fields with variable-length data, we recommend that you estimate the length of your field values and set the inline size to a value that includes most (about 95%) or all values.
+For example, if you have a `String` field with 95% of the values containing 10 characters or fewer, you can set the inline size for the index on that field to 13.
+
+
+The inline sizes explained above apply to single field indexes.
+However, when you define an index on a field in the value object or on a non-primary key column, Ignite creates a _composite index_ by appending the primary key to the indexed value.
+Therefore, when calculating the inline size for a composite index, add the inline size of the primary key to that of the indexed field.
+
+
+Below is an example of index inline size calculation for a cache where both key and value are complex objects.
+
+[source, java]
+----
+public class Key {
+    @QuerySqlField
+    private long id;
+
+    @QuerySqlField
+    @AffinityKeyMapped
+    private long affinityKey;
+}
+
+public class Value {
+    @QuerySqlField(index = true)
+    private long longField;
+
+    @QuerySqlField(index = true)
+    private int intField;
+
+    @QuerySqlField(index = true)
+    private String stringField; // we suppose that 95% of the values are 10 symbols
+}
+----
+
+The following table summarizes the inline index sizes for the indexes defined in the example above.
+
+[cols="1,1,1,2",opts="stretch,header"]
+|===
+|Index | Kind | Recommended Inline Size | Comment
+
+| (_key)
+|Primary key index
+| 5
+|Inlined hash of a binary object (5)
+
+|(affinityKey, _key)
+|Affinity key index
+|14
+|Inlined long (9) + binary object's hash (5)
+
+|(longField, _key)
+|Secondary index
+|14
+|Inlined long (9) + binary object's hash (5)
+
+|(intField, _key)
+|Secondary index
+|10
+|Inlined int (5) + binary object's hash (5)
+
+|(stringField, _key)
+|Secondary index
+|18
+|Inlined string (13) + binary object's hash (5) (assuming that the string is {tilde}10 symbols)
+
+|===
+//_
+
+//The inline size for the first two indexes is set via `CacheConfiguration.sqlIndexMaxInlineSize = 29` (because a single property is responsible for two indexes, we set it to the largest value).
+//The inline size for the rest of the indexes is set when you define a corresponding index.
+Note that you will only have to set the inline size for the index on `stringField`. For other indexes, Ignite calculates the inline size automatically.
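+
+For illustration, the following sketch shows one way to define that index programmatically with an explicit inline size; the cache name and the surrounding configuration are assumptions made for the example:
+
+[source, java]
+----
+QueryEntity entity = new QueryEntity(Key.class, Value.class);
+
+// Index on stringField with an explicit inline size:
+// 13 bytes for the string part + 5 bytes for the appended key hash.
+QueryIndex stringIdx = new QueryIndex("stringField");
+stringIdx.setInlineSize(18);
+
+entity.setIndexes(Collections.singletonList(stringIdx));
+
+CacheConfiguration<Key, Value> cacheCfg = new CacheConfiguration<>("personCache");
+cacheCfg.setQueryEntities(Collections.singletonList(entity));
+----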
+
+Refer to the link:SQL/indexes#configuring-index-inline-size[Configuring Index Inline Size] section for the information on how to change the inline size.
+
+You can check the inline size of an existing index in the link:monitoring-metrics/system-views#indexes[INDEXES] system view.
+
+[WARNING]
+====
+Note that since Ignite encodes strings to `UTF-8`, some characters use more than 1 byte.
+====
+
+== Query Parallelism
+
+By default, a SQL query is executed in a single thread on each participating node. This approach is optimal for queries returning small result sets involving index search. For example:
+
+[source,sql]
+----
+SELECT * FROM Person WHERE p.id = ?;
+----
+
+Certain queries might benefit from being executed in multiple threads.
+This relates to queries with table scans and aggregations, which is often the case for HTAP and OLAP workloads.
+For example:
+
+[source,sql]
+----
+SELECT SUM(salary) FROM Person;
+----
+
+The number of threads created on a single node for query execution is configured per cache and by default equals 1.
+You can change the value by setting the `CacheConfiguration.queryParallelism` parameter.
+If you create SQL tables using the CREATE TABLE command, you can use a link:configuring-caches/configuration-overview#cache-templates[cache template] to set this parameter.
+
+If a query contains `JOINs`, then all the participating caches must have the same degree of parallelism.
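+
+For example, here is a minimal sketch of setting the parameter programmatically; the cache name and value type are assumptions made for the example:
+
+[source, java]
+----
+CacheConfiguration<Long, Person> cacheCfg = new CacheConfiguration<>("Person");
+
+// Execute SQL queries against this cache with up to 4 threads per node.
+cacheCfg.setQueryParallelism(4);
+----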
+
+== Index Hints
+
+Index hints are useful in scenarios when you know that one index is more suitable for certain queries than another.
+You can use them to instruct the query optimizer to choose a more efficient execution plan.
+To do this, use the `USE INDEX(indexA,...,indexN)` clause as shown in the following example.
+
+
+[source,sql]
+----
+SELECT * FROM Person USE INDEX(index_age)
+WHERE salary > 150000 AND age < 35;
+----
+
+
+== Partition Pruning
+
+Partition pruning is a technique that optimizes queries that use affinity keys in the `WHERE` condition.
+When executing such a query, Ignite scans only those partitions where the requested data is stored.
+This reduces query time because the query is sent only to the nodes that store the requested partitions.
+
+In the following example, the employee objects are colocated by the `id` field (if an affinity key is not set
+explicitly then the primary key is used as the affinity key):
+
+
+[source,sql]
+----
+CREATE TABLE employee (id BIGINT PRIMARY KEY, department_id INT, name VARCHAR)
+
+/* This query is sent to the node where the requested key is stored */
+SELECT * FROM employee WHERE id=10;
+
+/* This query is sent to all nodes */
+SELECT * FROM employee WHERE department_id=10;
+----
+
+In the next example, the affinity key is set explicitly and, therefore, will be used to colocate data and direct
+queries to the nodes that keep primary copies of the data:
+
+
+[source,sql]
+----
+CREATE TABLE employee (id BIGINT PRIMARY KEY, department_id INT, name VARCHAR) WITH "AFFINITY_KEY=department_id"
+
+/* This query is sent to all nodes */
+SELECT * FROM employee WHERE id=10;
+
+/* This query is sent to the node where the requested key is stored */
+SELECT * FROM employee WHERE department_id=10;
+----
+
+
+[NOTE]
+====
+Refer to the link:data-modeling/affinity-collocation[affinity colocation] page for more details
+on how data gets colocated and how it helps boost performance in distributed storages like Ignite.
+====
+
+== Skip Reducer on Update
+
+When Ignite executes a DML operation, it first fetches all the affected intermediate rows to the reducer node (usually your application) for analysis, and only then prepares batches of updated values that are sent to the remote nodes.
+
+This approach might affect performance and saturate the network if a DML operation has to move many entries.
+
+Use the `skipReducerOnUpdate` flag as a hint for the SQL engine to perform the intermediate row analysis and the updates "in place" on the server nodes. The hint is supported for JDBC and ODBC connections.
+
+
+[tabs]
+--
+tab:JDBC Connection String[]
+[source,text]
+----
+//jdbc connection string
+jdbc:ignite:thin://192.168.0.15?skipReducerOnUpdate=true
+----
+--
+
+== SQL On-heap Row Cache
+
+Ignite stores data and indexes in its own memory space outside of Java heap. This means that with every data
+access, a part of the data will be copied from the off-heap space to Java heap, potentially deserialized, and kept in
+the heap as long as your application or server node references it.
+
+The SQL on-heap row cache is intended to store hot rows (key-value objects) in Java heap, minimizing resources
+spent for data copying and deserialization. Each cached row refers to an entry in the off-heap region and can be
+invalidated when one of the following happens:
+
+* The master entry stored in the off-heap region is updated or removed.
+* The data page that stores the master entry is evicted from RAM.
+
+The on-heap row cache can be enabled for a specific cache/table (if you use `CREATE TABLE` to create SQL tables and caches, then the parameter can be passed via a link:configuring-caches/configuration-overview#cache-templates[cache template]):
+
+
+[source,xml]
+----
+include::code-snippets/xml/sql-on-heap-cache.xml[tags=ignite-config;!discovery,indent=0]
+----
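+
+A programmatic equivalent might look like the following sketch; the cache name and types are assumptions made for the example:
+
+[source, java]
+----
+CacheConfiguration<Long, Person> cacheCfg = new CacheConfiguration<>("Person");
+
+// Keep hot rows deserialized on the Java heap for this cache.
+cacheCfg.setSqlOnheapCacheEnabled(true);
+----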
+
+////
+*TODO* Add tabs for ODBC/JDBC and other programming languages - Java C# and C++
+////
+
+If the row cache is enabled, you might be able to trade RAM for performance. You might get up to a 2x performance increase for some SQL queries and use cases by allocating more RAM for rows caching purposes.
+
+[WARNING]
+====
+[discrete]
+=== SQL On-Heap Row Cache Size
+
+Presently, the cache is unlimited and can occupy as much RAM as allocated to your memory data regions. Make sure to:
+
+* Set the JVM max heap size equal to the total size of all the data regions that store caches for which this on-heap row cache is enabled.
+
+* link:perf-troubleshooting-guide/memory-tuning#java-heap-and-gc-tuning[Tune] JVM garbage collection accordingly.
+====
+
+== Using TIMESTAMP instead of DATE
+
+//TODO: is this still valid?
+Use the `TIMESTAMP` type instead of `DATE` whenever possible. Presently, the `DATE` type is serialized and deserialized very inefficiently, resulting in performance degradation.
diff --git a/docs/_docs/binary-client-protocol/binary-client-protocol.adoc b/docs/_docs/binary-client-protocol/binary-client-protocol.adoc
new file mode 100644
index 0000000..9caf373
--- /dev/null
+++ b/docs/_docs/binary-client-protocol/binary-client-protocol.adoc
@@ -0,0 +1,286 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Binary Client Protocol
+
+== Overview
+
+The Ignite binary client protocol enables user applications to communicate with an existing Ignite cluster without starting a full-fledged Ignite node. An application can connect to the cluster through a raw TCP socket. Once the connection is established, the application can communicate with the Ignite cluster and perform cache operations using the established format.
+
+To communicate with the Ignite cluster, a client must obey the data format and communication details explained below.
+
+== Data Format
+
+=== Byte Ordering
+
+The Ignite binary client protocol uses little-endian byte ordering.
+
+=== Data Objects
+
+User data, such as cache keys and values, is represented in the Ignite link:key-value-api/binary-objects[Binary Object] format. A data object can be a standard (predefined) type or a complex object. For the complete list of supported data types, see the link:binary-client-protocol/data-format[Data Format] section.
+
+== Message Format
+
+All messages, including requests, responses, and the handshake, start with an `int` message length (which does not count these first 4 bytes), followed by the payload (message body).
+
+=== Handshake
+
+The binary client protocol requires a connection handshake to ensure that client and server versions are compatible. The following tables show the structure of the handshake request and response messages. Refer to the <<Example>> section to see how to send and receive them.
+
+
+[cols="1,2",opts="header"]
+|===
+|Request Type|   Description
+|int| Length of handshake payload
+|byte|    Handshake code, always 1.
+|short|   Version major.
+|short|   Version minor.
+|short|   Version patch.
+|byte|    Client code, always 2.
+|String|  Username
+|String|  Password
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+| Response Type (success) |   Description
+|int| Success message length, 1.
+|byte|    Success flag, 1.
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type (failure)  |  Description
+|int| Error message length.
+|byte|    Success flag, 0.
+|short|   Server version major.
+|short|   Server version minor.
+|short|   Server version patch.
+|String|  Error message.
+|===
+
+
+=== Standard Message Header
+
+Client operation messages are composed of a header and operation-specific data. Each operation has its own <<Client Operations,data request and response format>>, with a common header.
+
+The following tables and examples show the request and response structure of a client operation message header:
+
+
+[cols="1,2",opts="header"]
+|===
+|Request Type |   Description
+|int| Length of payload.
+|short|   Operation code
+|long|    Request id, generated by client and returned as-is in response
+|===
+
+
+.Request header
+[source, java]
+----
+private static void writeRequestHeader(int reqLength, short opCode, long reqId, DataOutputStream out) throws IOException {
+  // Message length
+  writeIntLittleEndian(10 + reqLength, out);
+
+  // Op code
+  writeShortLittleEndian(opCode, out);
+
+  // Request id
+  writeLongLittleEndian(reqId, out);
+}
+----
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type | Description
+|int| Length of response message.
+|long|    Request id (see above)
+|int| Status code (0 for success, otherwise error code)
+|String|  Error message (present only when status is not 0)
+|===
+
+
+
+.Response header
+[source, java]
+----
+private static void readResponseHeader(DataInputStream in) throws IOException {
+  // Response length
+  final int len = readIntLittleEndian(in);
+
+  // Request id
+  long resReqId = readLongLittleEndian(in);
+
+  // Success code
+  int statusCode = readIntLittleEndian(in);
+}
+----
+
+
+== Connectivity
+
+=== TCP Socket
+
+Client applications should connect to server nodes with a TCP socket. By default, the connector is enabled on port 10800. You can configure the port number and other server-side connection parameters in the `clientConnectorConfiguration` property of `IgniteConfiguration` of your cluster, as shown below:
+
+[tabs]
+--
+tab:XML[]
+
+[source, xml]
+----
+<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
+    <!-- Thin client connection configuration. -->
+    <property name="clientConnectorConfiguration">
+        <bean class="org.apache.ignite.configuration.ClientConnectorConfiguration">
+            <property name="host" value="127.0.0.1"/>
+            <property name="port" value="10900"/>
+            <property name="portRange" value="30"/>
+        </bean>
+    </property>
+
+    <!-- Other Ignite Configurations. -->
+
+</bean>
+
+----
+
+
+tab:Java[]
+
+[source, java]
+----
+IgniteConfiguration cfg = new IgniteConfiguration();
+
+ClientConnectorConfiguration ccfg = new ClientConnectorConfiguration();
+ccfg.setHost("127.0.0.1");
+ccfg.setPort(10900);
+ccfg.setPortRange(30);
+
+// Set client connection configuration in IgniteConfiguration
+cfg.setClientConnectorConfiguration(ccfg);
+
+// Start Ignite node
+Ignition.start(cfg);
+----
+
+--
+
+=== Connection Handshake
+
+Besides the socket connection, the thin client protocol requires a connection handshake to ensure that client and server versions are compatible. Note that the handshake must be the first message sent after the connection is established.
+
+For the handshake message request and response structure, see the <<Handshake>> section above.
+
+
+=== Example
+
+
+.Socket and Handshake Connection
+[source, java]
+----
+Socket socket = new Socket();
+socket.connect(new InetSocketAddress("127.0.0.1", 10800));
+
+String username = "yourUsername";
+
+String password = "yourPassword";
+
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Message length
+writeIntLittleEndian(18 + username.length() + password.length(), out);
+
+// Handshake operation
+writeByteLittleEndian(1, out);
+
+// Protocol version 1.0.0
+writeShortLittleEndian(1, out);
+writeShortLittleEndian(1, out);
+writeShortLittleEndian(0, out);
+
+// Client code: thin client
+writeByteLittleEndian(2, out);
+
+// username
+writeString(username, out);
+
+// password
+writeString(password, out);
+
+// send request
+out.flush();
+
+// Receive handshake response
+DataInputStream in = new DataInputStream(socket.getInputStream());
+int length = readIntLittleEndian(in);
+int successFlag = readByteLittleEndian(in);
+
+// Since Ignite binary protocol uses little-endian byte order,
+// we need to implement big-endian to little-endian
+// conversion methods for write and read.
+
+// Write int in little-endian byte order
+private static void writeIntLittleEndian(int v, DataOutputStream out) throws IOException {
+  out.write((v >>> 0) & 0xFF);
+  out.write((v >>> 8) & 0xFF);
+  out.write((v >>> 16) & 0xFF);
+  out.write((v >>> 24) & 0xFF);
+}
+
+// Write short in little-endian byte order
+private static final void writeShortLittleEndian(int v, DataOutputStream out) throws IOException {
+  out.write((v >>> 0) & 0xFF);
+  out.write((v >>> 8) & 0xFF);
+}
+
+// Write byte in little-endian byte order
+private static void writeByteLittleEndian(int v, DataOutputStream out) throws IOException {
+  out.writeByte(v);
+}
+
+// Read int in little-endian byte order
+private static int readIntLittleEndian(DataInputStream in) throws IOException {
+  int ch1 = in.read();
+  int ch2 = in.read();
+  int ch3 = in.read();
+  int ch4 = in.read();
+  if ((ch1 | ch2 | ch3 | ch4) < 0)
+    throw new EOFException();
+  return ((ch4 << 24) + (ch3 << 16) + (ch2 << 8) + (ch1 << 0));
+}
+
+
+// Read byte in little-endian byte order
+private static byte readByteLittleEndian(DataInputStream in) throws IOException {
+  return in.readByte();
+}
+
+// Other write and read methods
+
+----
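+
+The example above also relies on string helpers that are not shown. Below is a minimal sketch of how they could be implemented, assuming the String data object layout from the link:binary-client-protocol/data-format[Data Format] section (type code 9, 4-byte length, UTF-8 bytes); the exact implementation in your client may differ:
+
+[source, java]
+----
+// Write a String data object: type code 9, length, then UTF-8 bytes.
+private static void writeString(String str, DataOutputStream out) throws IOException {
+  byte[] bytes = str.getBytes("UTF-8");
+
+  writeByteLittleEndian(9, out);            // String type code
+  writeIntLittleEndian(bytes.length, out);  // length in bytes
+  out.write(bytes);                         // UTF-8 payload
+}
+
+// Read a String data object written in the same format.
+private static String readString(DataInputStream in) throws IOException {
+  readByteLittleEndian(in);                 // type code, 9 is expected
+
+  int len = readIntLittleEndian(in);
+
+  byte[] buf = new byte[len];
+  in.readFully(buf);
+
+  return new String(buf, "UTF-8");
+}
+----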
+
+
+== Client Operations
+
+Upon successful handshake, a client can start performing various cache operations:
+
+* link:binary-client-protocol/key-value-queries[Key-Value Queries]
+* link:binary-client-protocol/sql-and-scan-queries[SQL and Scan Queries]
+* link:binary-client-protocol/binary-type-metadata[Binary-Type Operations]
+* link:binary-client-protocol/cache-configuration[Cache Configuration Operations]
diff --git a/docs/_docs/binary-client-protocol/binary-type-metadata.adoc b/docs/_docs/binary-client-protocol/binary-type-metadata.adoc
new file mode 100644
index 0000000..320a83c
--- /dev/null
+++ b/docs/_docs/binary-client-protocol/binary-type-metadata.adoc
@@ -0,0 +1,421 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Binary Type Metadata
+
+== Operation Codes
+
+Upon a successful handshake with an Ignite server node, a client can start performing binary-type-related operations by sending a request (see request/response structure below) with a specific operation code:
+
+
+
+[cols="2,1",opts="header"]
+|===
+|Operation  | OP_CODE
+|OP_GET_BINARY_TYPE_NAME| 3000
+|OP_REGISTER_BINARY_TYPE_NAME|    3001
+|OP_GET_BINARY_TYPE | 3002
+|OP_PUT_BINARY_TYPE|  3003
+|OP_RESOURCE_CLOSE|   0
+|===
+
+
+Note that the above mentioned op_codes are part of the request header, as explained link:binary-client-protocol/binary-client-protocol#standard-message-header[here].
+
+[NOTE]
+====
+[discrete]
+=== Custom Methods Used in the Sample Code Snippets
+
+Some of the code snippets below use `readDataObject(...)` introduced in link:binary-client-protocol/binary-client-protocol#data-objects[this section] and little-endian versions of methods for reading and writing multiple-byte values that are covered in link:binary-client-protocol/binary-client-protocol#data-objects[this example].
+====
+
+
+== OP_GET_BINARY_TYPE_NAME
+
+Gets the platform-specific full binary type name by id. For example, .NET and Java can map to the same type Foo, but classes will be Apache.Ignite.Foo in .NET and org.apache.ignite.Foo in Java.
+
+Names are registered with OP_REGISTER_BINARY_TYPE_NAME.
+
+
+[cols="1,2",opts="header"]
+|===
+|Request Type   | Description
+|Header |  Request header.
+|byte |    Platform id:
+JAVA = 0
+DOTNET = 1
+|int| Type id; Java-style hash code of the type name.
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type  |Description
+|Header |  Response header.
+|String |  Binary type name.
+|===
+
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+String type = "ignite.myexamples.model.Person";
+int typeLen = type.getBytes("UTF-8").length;
+
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(5, OP_GET_BINARY_TYPE_NAME, 1, out);
+
+// Platform id
+writeByteLittleEndian(0, out);
+
+// Type id
+writeIntLittleEndian(type.hashCode(), out);
+----
+
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+// Resulting String
+int typeCode = readByteLittleEndian(in); // type code
+int strLen = readIntLittleEndian(in); // length
+
+byte[] buf = new byte[strLen];
+
+readFully(in, buf, 0, strLen);
+
+String s = new String(buf);
+
+System.out.println(s);
+----
+
+
+--
+
+== OP_GET_BINARY_TYPE
+
+Gets the binary type information by id.
+
+
+[cols="1,2",opts="header"]
+|===
+|Request Type   | Description
+|Header |  Request header.
+|int | Type id; Java-style hash code of the type name.
+|===
+
+
+
+[cols="1,2",opts="header"]
+|===
+| Response Type | Description
+|Header|  Response header.
+|bool|    False: binary type does not exist, response end.
+True: binary type exists, response as follows.
+|int| Type id; Java-style hash code of the type name.
+|String|  Type name.
+|String|  Affinity key field name.
+|int| BinaryField count.
+|BinaryField * count| Structure of BinaryField:
+
+`String`  Field name
+
+`int` Type id; Java-style hash code of the type name.
+
+`int` Field id; Java-style hash code of the field name.
+
+|bool|    Is Enum or not.
+
+If set to true, the following 2 parameters are present. Otherwise, they are omitted.
+|int| _Present only if 'is enum' parameter is 'true'_.
+
+Enum field count.
+|String + int|    _Present only if 'is enum' parameter is 'true'_.
+
+Enum values. An enum value is a pair of a literal value (String) and numerical value (int).
+
+Repeat for as many times as the Enum field count that is obtained in the previous parameter.
+
+|int| Schema count.
+|BinarySchema|    Structure of BinarySchema:
+
+`int` Unique schema id.
+
+`int` Number of fields in the schema.
+
+`int` Field Id; Java-style hash code of the field name. Repeat for as many times as the total number of fields in the schema.
+
+Repeat for as many times as the BinarySchema count that is obtained in the previous parameter.
+|===
+
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+String type = "ignite.myexamples.model.Person";
+
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(4, OP_GET_BINARY_TYPE, 1, out);
+
+// Type id
+writeIntLittleEndian(type.hashCode(), out);
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+readResponseHeader(in);
+
+boolean typeExist = readBooleanLittleEndian(in);
+
+int typeId = readIntLittleEndian(in);
+
+String typeName = readString(in);
+
+String affinityFieldName = readString(in);
+
+int fieldCount = readIntLittleEndian(in);
+
+for (int i = 0; i < fieldCount; i++)
+    readBinaryTypeField(in);
+
+boolean isEnum = readBooleanLittleEndian(in);
+
+int schemaCount = readIntLittleEndian(in);
+
+// Read binary schemas
+for (int i = 0; i < schemaCount; i++) {
+  int schemaId = readIntLittleEndian(in); // Schema Id
+
+  int fieldCount = readIntLittleEndian(in); // field count
+
+  for (int j = 0; j < fieldCount; j++) {
+    System.out.println(readIntLittleEndian(in)); // field id
+  }
+}
+
+private static void readBinaryTypeField (DataInputStream in) throws IOException{
+  String fieldName = readString(in);
+  int fieldTypeId = readIntLittleEndian(in);
+  int fieldId = readIntLittleEndian(in);
+  System.out.println(fieldName);
+}
+----
+--
+
+
+== OP_REGISTER_BINARY_TYPE_NAME
+
+Registers the platform-specific full binary type name by id. For example, .NET and Java can map to the same type Foo, but classes will be Apache.Ignite.Foo in .NET and org.apache.ignite.Foo in Java.
+
+
+[cols="1,2",opts="header"]
+|===
+|Request Type  | Description
+|Header |  Request header.
+|byte|    Platform id:
+JAVA = 0
+DOTNET = 1
+|int| Type id; Java-style hash code of the type name.
+|String|  Type name.
+|===
+
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type  |Description
+|Header | Response header.
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+String type = "ignite.myexamples.model.Person";
+int typeLen = type.getBytes("UTF-8").length;
+
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(20 + typeLen, OP_REGISTER_BINARY_TYPE_NAME, 1, out);
+
+//Platform id
+writeByteLittleEndian(0, out);
+
+//Type id
+writeIntLittleEndian(type.hashCode(), out);
+
+// Type name
+writeString(type, out);
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+readResponseHeader(in);
+----
+
+--
+
+== OP_PUT_BINARY_TYPE
+
+Registers binary type information in the cluster.
+
+
+[cols="1,2",opts="header"]
+|===
+|Request Type |  Description
+|Header|  Request header.
+|int| Type id; Java-style hash code of the type name.
+|String|  Type name.
+|String|  Affinity key field name.
+|int| BinaryField count.
+|BinaryField| Structure of BinaryField:
+
+`String`  Field name
+
+`int` Type id; Java-style hash code of the type name.
+
+`int` Field id; Java-style hash code of the field name.
+
+Repeat for as many times as the BinaryField count that is passed in the previous parameter.
+|bool|    Is Enum or not.
+
+If set to true, then you have to pass the following 2 parameters. Otherwise, skip them.
+|int| Pass only if 'is enum' parameter is 'true'.
+
+Enum field count.
+|String + int|    Pass only if 'is enum' parameter is 'true'.
+
+Enum values. An enum value is a pair of a literal value (String) and numerical value (int).
+
+Repeat for as many times as the Enum field count that is passed in the previous parameter.
+|int| BinarySchema count.
+|BinarySchema|    Structure of BinarySchema:
+
+`int` Unique schema id.
+
+`int` Number of fields in the schema.
+
+`int` Field id; Java-style hash code of the field name. Repeat for as many times as the total number of fields in the schema.
+
+Repeat for as many times as the BinarySchema count that is passed in the previous parameter.
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+| Response Type | Description
+|Header |  Response header.
+|===
+
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+String type = "ignite.myexamples.model.Person";
+
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(120, OP_PUT_BINARY_TYPE, 1, out);
+
+// Type id
+writeIntLittleEndian(type.hashCode(), out);
+
+// Type name
+writeString(type, out);
+
+// Affinity key field name: null (type code 101)
+writeByteLittleEndian(101, out);
+
+// Field count
+writeIntLittleEndian(3, out);
+
+// Field 1
+String field1 = "id";
+writeBinaryTypeField(field1, "long", out);
+
+// Field 2
+String field2 = "name";
+writeBinaryTypeField(field2, "String", out);
+
+// Field 3
+String field3 = "salary";
+writeBinaryTypeField(field3, "int", out);
+
+// isEnum
+out.writeBoolean(false);
+
+// Schema count
+writeIntLittleEndian(1, out);
+
+// Schema
+writeIntLittleEndian(657, out);  // Schema id; can be any custom value
+writeIntLittleEndian(3, out);  // field count
+writeIntLittleEndian(field1.hashCode(), out);
+writeIntLittleEndian(field2.hashCode(), out);
+writeIntLittleEndian(field3.hashCode(), out);
+
+private static void writeBinaryTypeField (String field, String fieldType, DataOutputStream out) throws IOException{
+  writeString(field, out);
+  writeIntLittleEndian(fieldType.hashCode(), out);
+  writeIntLittleEndian(field.hashCode(), out);
+}
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+readResponseHeader(in);
+----
+
+--
+
diff --git a/docs/_docs/binary-client-protocol/cache-configuration.adoc b/docs/_docs/binary-client-protocol/cache-configuration.adoc
new file mode 100644
index 0000000..9c2a9b1
--- /dev/null
+++ b/docs/_docs/binary-client-protocol/cache-configuration.adoc
@@ -0,0 +1,714 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Cache Configuration
+
+== Operation Codes
+
+Upon successful handshake with an Ignite server node, a client can start performing various cache configuration operations by sending a request (see request/response structure below) with a specific operation code:
+
+
+[cols="2,1",opts="header"]
+|===
+| Operation | OP_CODE
+|OP_CACHE_GET_NAMES|  1050
+|OP_CACHE_CREATE_WITH_NAME|   1051
+|OP_CACHE_GET_OR_CREATE_WITH_NAME|    1052
+|OP_CACHE_CREATE_WITH_CONFIGURATION|  1053
+|OP_CACHE_GET_OR_CREATE_WITH_CONFIGURATION|   1054
+|OP_CACHE_GET_CONFIGURATION|  1055
+|OP_CACHE_DESTROY|    1056
+|OP_QUERY_SCAN|   2000
+|OP_QUERY_SCAN_CURSOR_GET_PAGE|   2001
+|OP_QUERY_SQL|    2002
+|OP_QUERY_SQL_CURSOR_GET_PAGE|    2003
+|OP_QUERY_SQL_FIELDS| 2004
+|OP_QUERY_SQL_FIELDS_CURSOR_GET_PAGE| 2005
+|OP_BINARY_TYPE_NAME_GET| 3000
+|OP_BINARY_TYPE_NAME_PUT| 3001
+|OP_BINARY_TYPE_GET|  3002
+|OP_BINARY_TYPE_PUT|  3003
+|===
+
+Note that the above mentioned op_codes are part of the request header, as explained link:binary-client-protocol/binary-client-protocol#standard-message-header[here].
+
+[NOTE]
+====
+[discrete]
+=== Custom Methods Used in the Sample Code Snippets
+
+Some of the code snippets below use `readDataObject(...)` introduced in link:binary-client-protocol/binary-client-protocol#data-objects[this section] and little-endian versions of methods for reading and writing multiple-byte values that are covered in link:binary-client-protocol/binary-client-protocol#data-objects[this example].
+====
+
+
+== OP_CACHE_CREATE_WITH_NAME
+
+Creates a cache with the given name. A cache template is applied if the cache name contains the '{asterisk}' character. Throws an exception if a cache with the specified name already exists.
+
+
+[cols="1,2",opts="header"]
+|===
+|Request Type  |  Description
+|Header|  Request header.
+|String|  Cache name.
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type |   Description
+|Header|  Response header.
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+String cacheName = "myNewCache";
+
+int nameLength = cacheName.getBytes("UTF-8").length;
+
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(5 + nameLength, OP_CACHE_CREATE_WITH_NAME, 1, out);
+
+// Cache name
+writeString(cacheName, out);
+
+// Send request
+out.flush();
+----
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+readResponseHeader(in);
+
+----
+--
+
+
+
+== OP_CACHE_GET_OR_CREATE_WITH_NAME
+
+Creates a cache with the given name. A cache template is applied if the cache name contains the '{asterisk}' character. Does nothing if the cache already exists.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type |    Description
+|Header|  Request header.
+|String|  Cache name.
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type |   Description
+|Header|  Response header.
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+String cacheName = "myNewCache";
+
+int nameLength = cacheName.getBytes("UTF-8").length;
+
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(5 + nameLength, OP_CACHE_GET_OR_CREATE_WITH_NAME, 1, out);
+
+// Cache name
+writeString(cacheName, out);
+
+// Send request
+out.flush();
+----
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+readResponseHeader(in);
+
+----
+--
+
+
+== OP_CACHE_GET_NAMES
+
+Gets existing cache names.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type |    Description
+|Header|  Request header.
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type |   Description
+|Header|  Response header.
+|int| Cache count.
+|String|  Cache name.
+
+Repeat for as many times as the cache count that is obtained in the previous parameter.
+|===
+
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(5, OP_CACHE_GET_NAMES, 1, out);
+----
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+readResponseHeader(in);
+
+// Cache count
+int cacheCount = readIntLittleEndian(in);
+
+// Cache names
+for (int i = 0; i < cacheCount; i++) {
+  int type = readByteLittleEndian(in); // type code
+
+  int strLen = readIntLittleEndian(in); // length
+
+  byte[] buf = new byte[strLen];
+
+  readFully(in, buf, 0, strLen);
+
+  String s = new String(buf); // cache name
+
+  System.out.println(s);
+}
+
+----
+--
+
+
+== OP_CACHE_GET_CONFIGURATION
+
+Gets configuration for the given cache.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type |    Description
+|Header|  Request header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Flag.
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type |   Description
+|Header|  Response header.
+|int| Length of the configuration in bytes (all the configuration parameters).
+|CacheConfiguration|  Structure of Cache configuration (See below).
+|===
+
+
+Cache Configuration
+
+[cols="1,2",opts="header"]
+|===
+|Type |    Description
+|int| Number of backups.
+|int| CacheMode:
+
+LOCAL = 0
+
+REPLICATED = 1
+
+PARTITIONED = 2
+
+|bool|    CopyOnRead
+|String|  DataRegionName
+|bool|    EagerTTL
+|bool|    StatisticsEnabled
+|String|  GroupName
+|bool|    Invalidate
+|long|    DefaultLockTimeout (milliseconds)
+|int| MaxQueryIterators
+|String|  Name
+|bool|    IsOnheapCacheEnabled
+|int| PartitionLossPolicy:
+
+READ_ONLY_SAFE = 0
+
+READ_ONLY_ALL = 1
+
+READ_WRITE_SAFE = 2
+
+READ_WRITE_ALL = 3
+
+IGNORE = 4
+
+|int| QueryDetailMetricsSize
+|int| QueryParallelism
+|bool|    ReadFromBackup
+|int| RebalanceBatchSize
+|long|    RebalanceBatchesPrefetchCount
+|long|    RebalanceDelay (milliseconds)
+|int| RebalanceMode:
+
+SYNC = 0
+
+ASYNC = 1
+
+NONE = 2
+
+|int| RebalanceOrder
+|long|    RebalanceThrottle (milliseconds)
+|long|    RebalanceTimeout (milliseconds)
+|bool|    SqlEscapeAll
+|int| SqlIndexInlineMaxSize
+|String|  SqlSchema
+|int| WriteSynchronizationMode:
+
+FULL_SYNC = 0
+
+FULL_ASYNC = 1
+
+PRIMARY_SYNC = 2
+
+|int| CacheKeyConfiguration count.
+|CacheKeyConfiguration|   Structure of CacheKeyConfiguration:
+
+`String` Type name
+
+`String` Affinity key field name
+
+Repeat for as many times as the CacheKeyConfiguration count that is obtained in the previous parameter.
+|int| QueryEntity count.
+|QueryEntity * count| Structure of QueryEntity (see below).
+|===
+
+
+QueryEntity
+
+[cols="1,2",opts="header"]
+|===
+|Type |    Description
+|String|  Key type name.
+|String|  Value type name.
+|String|  Table name.
+|String|  Key field name.
+|String|  Value field name.
+|int| QueryField count
+|QueryField * count|  Structure of QueryField:
+
+`String` Name
+
+`String` Type name
+
+`bool` Is key field
+
+`bool` Is notNull constraint field
+
+Repeat for as many times as the QueryField count that is obtained in the previous parameter.
+|int| Alias count
+|(String + String) * count|   Field name aliases.
+|int| QueryIndex count
+|QueryIndex * count | Structure of QueryIndex:
+
+`String`  Index name
+
+`byte`    Index type:
+
+SORTED = 0
+
+FULLTEXT = 1
+
+GEOSPATIAL = 2
+
+`int` Inline size
+
+`int` Field count
+
+`(string + bool) * count`  Fields (name + IsDescending)
+
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+String cacheName = "myCache";
+
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(5, OP_CACHE_GET_CONFIGURATION, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+----
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+readResponseHeader(in);
+
+// Config length
+int configLen = readIntLittleEndian(in);
+
+// CacheAtomicityMode
+int cacheAtomicityMode = readIntLittleEndian(in);
+
+// Backups
+int backups = readIntLittleEndian(in);
+
+// CacheMode
+int cacheMode = readIntLittleEndian(in);
+
+// CopyOnRead
+boolean copyOnRead = readBooleanLittleEndian(in);
+
+// Other configurations
+
+----
+--
+
+
+== OP_CACHE_CREATE_WITH_CONFIGURATION
+
+Creates a cache with the provided configuration. An exception is thrown if the name is already in use.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type |    Description
+|Header|  Request header.
+|int| Length of the configuration in bytes (all the used configuration parameters).
+|short|   Number of configuration parameters.
+|short + property type |   Configuration Property data.
+
+Repeat for as many times as the number of configuration parameters.
+|===
+
+
+Any number of configuration parameters can be provided. Note that `Name` is required.
+
+Cache configuration data is specified in key-value form, where the key is the `short` property id and the value is property-specific data. The table below describes all available parameters.
+
+
+[cols="1,1,3",opts="header"]
+|===
+|Property Code |   Property Type|   Description
+|2|   int| CacheAtomicityMode:
+
+TRANSACTIONAL = 0,
+
+ATOMIC = 1
+|3|   int| Backups
+|1|   int| CacheMode:
+LOCAL = 0, REPLICATED = 1, PARTITIONED = 2
+|5|   boolean| CopyOnRead
+|100| String|  DataRegionName
+|405| boolean| EagerTtl
+|406| boolean| StatisticsEnabled
+|400| String|  GroupName
+|402| long|    DefaultLockTimeout (milliseconds)
+|403| int| MaxConcurrentAsyncOperations
+|206| int| MaxQueryIterators
+|0|   String|  Name
+|101| bool|    IsOnheapCacheEnabled
+|404| int| PartitionLossPolicy:
+
+READ_ONLY_SAFE = 0,
+
+ READ_ONLY_ALL = 1,
+
+ READ_WRITE_SAFE = 2,
+
+ READ_WRITE_ALL = 3,
+
+ IGNORE = 4
+|202| int| QueryDetailMetricsSize
+|201| int| QueryParallelism
+|6|   bool|    ReadFromBackup
+|303| int| RebalanceBatchSize
+|304| long|    RebalanceBatchesPrefetchCount
+|301| long|    RebalanceDelay (milliseconds)
+|300| int| RebalanceMode: SYNC = 0, ASYNC = 1, NONE = 2
+|305| int| RebalanceOrder
+|306| long|    RebalanceThrottle (milliseconds)
+|302| long|    RebalanceTimeout (milliseconds)
+|205| bool|    SqlEscapeAll
+|204| int| SqlIndexInlineMaxSize
+|203| String|  SqlSchema
+|4|   int| WriteSynchronizationMode:
+
+FULL_SYNC = 0,
+
+ FULL_ASYNC = 1,
+
+PRIMARY_SYNC = 2
+|401| int + CacheKeyConfiguration * count| CacheKeyConfiguration count + CacheKeyConfiguration
+
+Structure of CacheKeyConfiguration:
+
+`String` Type name
+
+`String` Affinity key field name
+|200 | int + QueryEntity * count |  QueryEntity count + QueryEntity
+
+Structure of QueryEntity: (see below)
+|===
+
+
+
+QueryEntity
+
+[cols="1,2",opts="header"]
+|===
+|Type |    Description
+|String|  Key type name.
+|String|  Value type name.
+|String|  Table name.
+|String|  Key field name.
+|String|  Value field name.
+|int| QueryField count
+|QueryField|  Structure of QueryField:
+
+`String` Name
+
+`String` Type name
+
+`bool` Is key field
+
+`bool` Is notNull constraint field
+
+Repeat for as many times as the QueryField count.
+|int| Alias count
+|String + String| Field name alias.
+
+Repeat for as many times as the alias count.
+|int| QueryIndex count
+|QueryIndex|  Structure of QueryIndex:
+
+`String`  Index name
+
+`byte`    Index type:
+
+SORTED = 0
+
+FULLTEXT = 1
+
+GEOSPATIAL = 2
+
+`int` Inline size
+
+`int` Field count
+
+`string + bool` Fields (name + IsDescending)
+
+Repeat for as many times as the field count that is passed in the previous parameter.
+
+Repeat for as many times as the QueryIndex count.
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type |   Description
+|Header|  Response header.
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(30, OP_CACHE_CREATE_WITH_CONFIGURATION, 1, out);
+
+// Config length in bytes
+writeIntLittleEndian(16, out);
+
+// Number of properties
+writeShortLittleEndian(2, out);
+
+// Backups opcode
+writeShortLittleEndian(3, out);
+// Backups: 2
+writeIntLittleEndian(2, out);
+
+// Name opcode
+writeShortLittleEndian(0, out);
+// Name
+writeString("myNewCache", out);
+----
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+----
+--
+
+
+== OP_CACHE_GET_OR_CREATE_WITH_CONFIGURATION
+
+Creates a cache with the provided configuration. Does nothing if the name is already in use.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type |    Description
+|Header|  Request header.
+|CacheConfiguration|  Cache configuration (see format above).
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type |   Description
+|Header|  Response header.
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+writeRequestHeader(30, OP_CACHE_GET_OR_CREATE_WITH_CONFIGURATION, 1, out);
+
+// Config length in bytes
+writeIntLittleEndian(16, out);
+
+// Number of properties
+writeShortLittleEndian(2, out);
+
+// Backups opcode
+writeShortLittleEndian(3, out);
+
+// Backups: 2
+writeIntLittleEndian(2, out);
+
+// Name opcode
+writeShortLittleEndian(0, out);
+
+// Name
+writeString("myNewCache", out);
+----
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+----
+--
+
+
+== OP_CACHE_DESTROY
+
+Destroys the cache with a given name.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type |    Description
+|Header|  Request header.
+|int| Cache ID: Java-style hash code of the cache name.
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type |   Description
+|Header|  Response header.
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+String cacheName = "myCache";
+
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(4, OP_CACHE_DESTROY, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Send request
+out.flush();
+----
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+readResponseHeader(in);
+----
+--
+
diff --git a/docs/_docs/binary-client-protocol/data-format.adoc b/docs/_docs/binary-client-protocol/data-format.adoc
new file mode 100644
index 0000000..b56b8c0
--- /dev/null
+++ b/docs/_docs/binary-client-protocol/data-format.adoc
@@ -0,0 +1,1072 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Data Format
+
+Standard data types are represented as a combination of type code and value.
+
+:table_opts: cols="1,1,4",opts="header"
+
+[{table_opts}]
+|===
+|Field |  Size in bytes |  Description
+|`type_code` |  1 |   Signed one-byte integer code that indicates the type of the value.
+|`value` |  Variable|    Value itself. Its format and size depend on the type_code.
+|===
+
+
+Below you can find the description of the supported types and their formats.
+
+
+== Primitives
+
+Primitives are the very basic types, such as numbers.
+
+
+=== Byte
+
+Type code: 1;
+
+Single byte value.
+
+Structure:
+
+[{table_opts}]
+|===
+| Field  | Size in bytes  | Description
+| `value` | 1 | The value.
+|===
+
+=== Short
+
+Type code: 2;
+
+2-bytes long signed integer number. Little-endian.
+
+Structure:
+
+
+[{table_opts}]
+|===
+| Field |   Size in bytes | Description
+| `Value`  |  2|   The value.
+|===
+
+
+=== Int
+
+Type code: 3;
+
+4-bytes long signed integer number. Little-endian.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|`value`|   4|   The value.
+|===
+
+=== Long
+
+Type code: 4;
+
+8-bytes long signed integer number. Little-endian.
+
+Structure:
+
+
+[{table_opts}]
+|===
+|Field|   Size in bytes |  Description
+|`value` |   8  | The value.
+|===
+
+
+=== Float
+
+Type code: 5;
+
+4-byte long IEEE 754 floating-point number. Little-endian.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field |   Size in bytes|   Description
+| value|   4|   The value.
+|===
+
+=== Double
+Type code: 6;
+
+8-byte long IEEE 754 floating-point number. Little-endian.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|value  | 8|   The value.
+
+|===
+
+=== Char
+Type code: 7;
+
+Single UTF-16 code unit. Little-endian.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|value |   2 |   The UTF-16 code unit in little-endian.
+|===
+
+
+=== Bool
+
+Type code: 8;
+
+Boolean value. Zero for false and non-zero for true.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field |   Size in bytes |   Description
+
+|value |  1 |  The value. Zero for false and non-zero for true.
+
+|===
+
+=== NULL
+
+Type code: 101;
+
+This is not exactly a type. It's just a null value, which can be assigned to an object of any type.
+It has no payload and consists only of the type code.
+
+== Standard objects
+
+=== String
+
+Type code: 9;
+
+String in UTF-8 encoding. Should always be a valid UTF-8 string.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field |   Size in bytes |   Description
+|length|  4|   Signed integer number in little-endian. Length of the string in UTF-8 code units, i.e. in bytes.
+| data |    length |  String data in UTF-8 encoding. Without BOM.
+
+|===
+
+=== UUID (Guid)
+
+
+Type code: 10;
+
+A universally unique identifier (UUID) is a 128-bit number used to identify information in computer systems.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|most_significant_bits|   8|   64-bit number in little endian, representing 64 most significant bits of UUID.
+|least_significant_bits|  8|   64-bit number in little endian, representing 64 least significant bits of UUID.
+
+|===
+
+=== Timestamp
+
+Type code: 33;
+
+More precise than the Date data type. In addition to the milliseconds since epoch, it contains a nanosecond fraction of the last millisecond, whose value ranges from 0 to 999999. This means the full timestamp in nanoseconds can be obtained with the following expression: `msecs_since_epoch \* 1000000 + msec_fraction_in_nsecs`.
+
+NOTE: The nanosecond timestamp expression is provided for clarification purposes only. Do not use it as-is in production code, because in some languages it may result in integer overflow.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes  | Description
+|`msecs_since_epoch`|   8|   Signed integer number in little-endian. Number of milliseconds elapsed since 00:00:00 1 Jan 1970 UTC. This format is widely known as Unix or POSIX time.
+|`msec_fraction_in_nsecs`|  4|   Signed integer number in little-endian. Nanosecond fraction of a millisecond.
+
+|===
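+
+For illustration only, a Java client could combine the two fields without risking integer overflow by using the `java.time` API; the little-endian read helpers are assumed to be the ones from the protocol examples:
+
+[source, java]
+----
+// Read the two Timestamp fields and combine them into an Instant.
+long msecsSinceEpoch = readLongLittleEndian(in);
+int msecFractionInNsecs = readIntLittleEndian(in);
+
+Instant timestamp = Instant.ofEpochMilli(msecsSinceEpoch).plusNanos(msecFractionInNsecs);
+----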
+
+=== Date
+
+Type code: 11;
+
+Date, represented as the number of milliseconds elapsed since 00:00:00 1 Jan 1970 UTC. This format is widely known as Unix or POSIX time.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|`msecs_since_epoch`|   8|   The value. Signed integer number in little-endian.
+|===
+
+=== Time
+
+Type code: 36;
+
+Time, represented as a number of milliseconds elapsed since midnight, i.e. 00:00:00 UTC.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|value|   8|   Signed integer number in little-endian. Number of milliseconds elapsed since 00:00:00 UTC.
+
+|===
+
+=== Decimal
+
+Type code: 30;
+
+Numeric value of any desired precision and scale.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field |   Size in bytes|   Description
+|scale|   4|   Signed integer number in little-endian. Effectively, a power of ten by which the unscaled value should be divided. For example, 42 with scale 3 is 0.042, 42 with scale -3 is 42000, and 42 with scale 1 is 4.2.
+|length|  4|   Signed integer number in little-endian. Length of the number in bytes.
+|data|    length|  First bit is the flag of negativity. If it's set to 1, then value is negative. Other bits form signed integer number of variable length in big-endian format.
+
+|===
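+
+For illustration, below is a minimal decoding sketch (not part of the specification, assuming the `readIntLittleEndian(...)` helper from the examples at the end of this page) that turns a Decimal payload into a `java.math.BigDecimal`:
+
+[source, java]
+----
+// Reads a Decimal payload; the type code (30) is assumed to be already consumed.
+static java.math.BigDecimal readDecimal(DataInputStream in) throws IOException {
+  int scale = readIntLittleEndian(in);
+  int length = readIntLittleEndian(in);
+
+  byte[] data = new byte[length];
+  in.readFully(data);
+
+  boolean negative = (data[0] & 0x80) != 0; // the first bit is the negativity flag
+  data[0] &= 0x7F;                          // strip the flag, keeping the magnitude
+
+  java.math.BigInteger unscaled = new java.math.BigInteger(1, data); // big-endian magnitude
+  if (negative)
+    unscaled = unscaled.negate();
+
+  return new java.math.BigDecimal(unscaled, scale); // value = unscaled / 10^scale
+}
+----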
+
+=== Enum
+
+Type code: 28;
+
+Value of an enumerable type. Such types define only a finite number of named values.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|type_id| 4|   Signed integer number in little-endian. See <<Type ID>> for details.
+|ordinal| 4|   Signed integer number stored in little-endian. Enumeration value ordinal: its position in its enum declaration, where the initial constant is assigned an ordinal of zero.
+
+|===
+
+== Arrays of primitives
+
+Arrays of this kind contain only payloads of values as elements; they all share the format described in the table below. Note that such an array contains only payloads, not type codes.
+
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|`length`|  4|   Signed integer number. Number of elements in the array.
+|`element_0_payload`|   Depends on the type.|    Payload of the value 0.
+|`element_1_payload`|   Depends on the type.|    Payload of the value 1.
+|... |... |...
+|`element_N_payload`|   Depends on the type. |   Payload of the value N.
+
+|===
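+
+For example, an "int array" payload (see the Int array section below) can be read with the following minimal sketch, assuming the `readIntLittleEndian(...)` helper from the examples at the end of this page:
+
+[source, java]
+----
+// Reads an int array payload; the type code (14) is assumed to be already consumed.
+static int[] readIntArray(DataInputStream in) throws IOException {
+  int length = readIntLittleEndian(in);
+
+  int[] result = new int[length];
+
+  for (int i = 0; i < length; i++)
+    result[i] = readIntLittleEndian(in); // elements are payloads only, no type codes
+
+  return result;
+}
+----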
+
+=== Byte array
+
+Type code: 12;
+
+Array of bytes. May be either a piece of raw data or an array of small signed integer numbers.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|length|  4|   Signed integer number. Number of elements in the array.
+|elements|    length|  Elements sequence. Every element is a payload of type "byte".
+
+|===
+
+=== Short array
+
+Type code: 13;
+
+Array of short signed integer numbers.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field |   Size in bytes|   Description
+|length|  4|   Signed integer number. Number of elements in the array.
+|elements|    `length * 2`|  Elements sequence. Every element is a payload of type "short".
+
+|===
+
+=== Int array
+
+Type code: 14;
+
+Array of signed integer numbers.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|length|  4|   Signed integer number. Number of elements in the array.
+|elements|    `length * 4`|  Elements sequence. Every element is a payload of type "int".
+
+|===
+
+=== Long array
+
+Type code: 15;
+
+Array of long signed integer numbers.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|length|  4|   Signed integer number. Number of elements in the array.
+|elements|    `length * 8`|  Elements sequence. Every element is a payload of type "long".
+
+|===
+
+=== Float array
+
+Type code: 16;
+
+Array of floating point numbers.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|length|  4|   Signed integer number. Number of elements in the array.
+|elements|    `length * 4` | Elements sequence. Every element is a payload of type "float".
+
+|===
+
+=== Double array
+
+Type code: 17;
+
+Array of floating point numbers with double precision.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes |  Description
+|length|  4|   Signed integer number. Number of elements in the array.
+|elements|    `length * 8`|  Elements sequence. Every element is a payload of type "double".
+
+|===
+
+=== Char array
+
+Type code: 18;
+
+Array of UTF-16 code units. Unlike a string, this type does not necessarily contain valid UTF-16 text.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field |   Size in bytes|   Description
+|length | 4|   Signed integer number. Number of elements in the array.
+|elements|    length * 2|  Elements sequence. Every element is a payload of type "char".
+
+|===
+
+=== Bool array
+
+Type code: 19;
+
+Array of boolean values.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes |  Description
+|length|  4|   Signed integer number. Number of elements in the array.
+|elements|    length|  Elements sequence. Every element is a payload of type "bool".
+
+|===
+
+== Arrays of standard objects
+
+Arrays of this kind contain full values as elements, i.e. every element includes a type code as well as a payload. This format allows elements of such collections to be NULL values, which is why they are called "objects". They all share the format described in the table below.
+
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|`length` | 4|   Signed integer number.  Number of elements in the array.
+|`element_0_full_value`|    Depends on value type.|  Full value of the element 0. Consists of a type code and a payload. Can also be NULL.
+|`element_1_full_value`|    Depends on value type.|  Full value of the element 1 or NULL.
+|... |...| ...
+|`element_N_full_value`|    Depends on value type.|  Full value of the element N or NULL.
+
+|===
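+
+For example, a "string array" payload (see the String array section below) can be read with the following minimal sketch; it is an illustration only and assumes the `readIntLittleEndian(...)` helper from the examples at the end of this page:
+
+[source, java]
+----
+// Reads a string array payload; the type code (20) is assumed to be already consumed.
+// Every element is a full value: a type code (9 for string, 101 for NULL) plus a payload.
+static String[] readStringArray(DataInputStream in) throws IOException {
+  int length = readIntLittleEndian(in);
+
+  String[] result = new String[length];
+
+  for (int i = 0; i < length; i++) {
+    byte typeCode = in.readByte();
+
+    if (typeCode == 101) { // NULL has no payload
+      result[i] = null;
+      continue;
+    }
+
+    int strLen = readIntLittleEndian(in);
+    byte[] buf = new byte[strLen];
+    in.readFully(buf);
+
+    result[i] = new String(buf, "UTF-8");
+  }
+
+  return result;
+}
+----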
+
+=== String array
+
+Type code: 20;
+
+Array of UTF-8 string values.
+
+Structure:
+
+
+[{table_opts}]
+|===
+|Field |   Size in bytes|   Description
+|length|  4|   Signed integer number. Number of elements in the array.
+|elements|    Variable. Depends on every string length. Every element size is either `5 + value_length` for string, or 1 for `NULL`.|  Elements sequence. Every element is a full value of type "string", including type code, or `NULL`.
+
+|===
+
+=== UUID (Guid) array
+
+Type code: 21;
+
+Array of UUIDs (Guids).
+
+Structure:
+
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|length|  4|   Signed integer number. Number of elements in the array.
+|elements|    Variable. Every element size is either 17 for UUID, or 1 for NULL.|  Elements sequence. Every element is a full value of type "UUID", including type code, or NULL.
+
+|===
+
+=== Timestamp array
+
+Type code: 34;
+
+Array of timestamp values.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes |  Description
+|length|  4|   Signed integer number. Number of elements in the array.
+|elements|    Variable. Every element size is either 13 for Timestamp, or 1 for NULL.| Elements sequence. Every element is a full value of type "timestamp", including type code, or NULL.
+
+|===
+
+=== Date array
+
+Type code: 22;
+
+Array of dates.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|length|  4|   Signed integer number. Number of elements in the array.
+|elements|    Variable. Every element size is either 9 for Date, or 1 for NULL.|   Elements sequence. Every element is a full value of type "date", including type code, or NULL.
+
+|===
+
+=== Time array
+
+Type code: 37;
+
+Array of time values.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field |   Size in bytes|   Description
+|length|  4|   Signed integer number. Number of elements in the array.
+|elements   | Variable. Every element size is either 9 for Time, or 1 for NULL.|   Elements sequence. Every element is a full value of type "time", including type code, or NULL.
+
+|===
+
+=== Decimal array
+
+Type code: 31;
+
+Array of decimal values.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|length|  4|   Signed integer number. Number of elements in the array.
+|elements|    Variable. Every element size is either `9 + value_length` for Decimal, or 1 for NULL.| Elements sequence. Every element is a full value of type "decimal", including type code, or NULL.
+
+|===
+
+== Object collections
+
+=== Object array
+
+Type code: 23;
+
+Array of objects of any type: standard objects of any type, complex objects of various types, NULL values, and any combination of them. This also means that collections may contain other collections.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|type_id |4|   Type identifier of the contained objects. For example, in Java this type is used to deserialize to a Type[]. All values in the array should have Type as a parent. The parent type of any object type (for example, java.lang.Object in Java) has Type ID -1. See <<Type ID>> for details.
+|length|  4|   Signed integer number. Number of elements in the array.
+|elements|    Variable. Depends on sizes of the objects.|  Elements sequence. Every element is a full value of any type or NULL.
+
+|===
+
+=== Collection
+
+Type code: 24;
+
+General collection type. Just like an object array, it contains objects, but unlike an array it carries a hint for deserialization to a platform-specific collection of a certain type, not just an array. The following collection types are defined:
+
+
+*  `USER_SET` = -1. This is a general set type, which cannot be mapped to a more specific set type; still, it is known to be a set. It makes sense to deserialize such a collection to the basic and most widely used set-like type on your platform, e.g. hash set.
+*    `USER_COL` = 0. This is a general collection type, which cannot be mapped to a more specific collection type. It makes sense to deserialize such a collection to the basic and most widely used collection type on your platform, e.g. resizable array.
+*    `ARR_LIST` = 1. This is in fact a resizable array type.
+*    `LINKED_LIST` = 2. This is a linked list type.
+*    `HASH_SET` = 3. This is a basic hash set type.
+*    `LINKED_HASH_SET` = 4. This is a hash set type, which maintains element order.
+*    `SINGLETON_LIST` = 5. This is a collection that only contains a single element, but behaves as a collection. Could be used by platforms for optimization purposes. If not applicable, any collection type could be used.
+
+[NOTE]
+====
+The collection type byte is used as a hint to deserialize a collection to the most suitable platform type. For example, in Java HASH_SET is deserialized to java.util.HashSet, while LINKED_HASH_SET is deserialized to java.util.LinkedHashSet. It is recommended for a thin client implementation to try and use the most suitable collection type on serialization and deserialization. Still, it is only a hint, which the user can ignore if it is not relevant or not applicable for the platform.
+====
+
+Structure:
+
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|length|  4|   Signed integer number. Number of elements in the collection.
+|type|    1|   Type of the collection. See description for details.
+|elements|    Variable. Depends on sizes of the objects.|  Elements sequence. Every element is a full value of any type or NULL.
+
+|===
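+
+As an illustration of how a client might honor the hint, the sketch below (an assumption-laden example, not a prescribed mapping) picks a Java collection for the type byte; any unknown hint falls back to a resizable array:
+
+[source, java]
+----
+// Maps the collection type hint to a Java collection; the hint may be ignored if not applicable.
+static java.util.Collection<Object> collectionForHint(byte typeHint) {
+  switch (typeHint) {
+    case -1: return new java.util.HashSet<>();       // USER_SET
+    case 1:  return new java.util.ArrayList<>();     // ARR_LIST
+    case 2:  return new java.util.LinkedList<>();    // LINKED_LIST
+    case 3:  return new java.util.HashSet<>();       // HASH_SET
+    case 4:  return new java.util.LinkedHashSet<>(); // LINKED_HASH_SET
+    case 5:  return new java.util.ArrayList<>();     // SINGLETON_LIST: any collection works
+    default: return new java.util.ArrayList<>();     // USER_COL and anything unknown
+  }
+}
+----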
+
+=== Map
+
+Type code: 25;
+
+Map-like collection type. Contains pairs of key and value objects. Both keys and values can be objects of various types, including standard objects, complex objects, and any combination of them. Carries a hint for deserialization to a map of a certain type. The following map types are defined:
+
+*   `HASH_MAP` = 1. This is a basic hash map.
+*   `LINKED_HASH_MAP` = 2. This is a hash map, which maintains element order.
+
+[NOTE]
+====
+The map type byte is used as a hint to deserialize the collection to the most suitable platform type. It is recommended for a thin client implementation to try and use the most suitable map type on serialization and deserialization. Still, it is only a hint, which the user can ignore if it is not relevant or not applicable for the platform.
+====
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|length|  4|   Signed integer number. Number of elements in the collection.
+|type|    1|   Type of the collection. See description for details.
+|elements|    Variable. Depends on sizes of the objects.|  Elements sequence. Elements here are keys and values, followed one by one in pairs. Every element is a full value of any type or NULL.
+
+|===
+
+=== Enum array
+
+Type code: 29;
+
+Array of enumerable type values. An element can be either an enumerable value or null, so every element occupies either 9 bytes or 1 byte.
+
+Structure:
+
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|type_id| 4|   Type identifier of the contained objects. For example, in Java this type is used to deserialize to an EnumType[]. All values in the array should have EnumType as a parent, which is the parent type of any enumerable object type. See <<Type ID>> for details.
+|length|  4|   Signed integer number. Number of elements in the collection.
+|elements|    Variable. Depends on sizes of the objects. | Elements sequence. Every element is a full value of enum type or NULL.
+
+|===
+
+== Complex object
+
+Type code: 103;
+
+A complex object consists of a 24-byte header, a set of fields (data objects), and a schema (field IDs and positions). Depending on the operation and your data model, a data object can be of a primitive type or a complex type (a set of fields).
+
+Structure:
+
+[{table_opts}]
+|===
+|Field |   Size in bytes|   Optionality
+|`version`| 1|   Mandatory
+|`flags`|   2|   Mandatory
+|`type_id`| 4|   Mandatory
+|`hash_code`|   4|   Mandatory
+|`length`|  4|   Mandatory
+|`schema_id`|   4|   Mandatory
+|`object_fields`|   Variable length.|    Optional
+|`schema`|  Variable length.|    Optional
+|`raw_data_offset`| 4|   Optional
+
+|===
+
+
+== Version
+
+This field indicates the complex object layout version. It is needed for backward compatibility. Clients should check this field and report an error to the user if the object layout version is unknown to them, to prevent data corruption and unpredictable deserialization results.
+
+== Flags
+
+This field is a 16-bit little-endian bitmask. It contains object flags, which indicate how the object instance should be handled by a reader. The following flags are defined (a minimal sketch of decoding them follows the list):
+
+*    `USER_TYPE = 0x0001` - Indicates that the type is a user type. Should always be set for any client type. Can be ignored on deserialization.
+*    `HAS_SCHEMA = 0x0002` - Indicates that the object layout contains a schema in the footer. See <<Schema>> for details.
+*    `HAS_RAW_DATA = 0x0004` - Indicates that the object has raw data. See <<Raw data offset>> for details.
+*    `OFFSET_ONE_BYTE = 0x0008` - Indicates that a schema field offset is one byte long. See <<Schema>> for details.
+*    `OFFSET_TWO_BYTES = 0x0010` - Indicates that a schema field offset is two bytes long. See <<Schema>> for details.
+*    `COMPACT_FOOTER = 0x0020` - Indicates that the footer does not contain field IDs, only offsets. See <<Schema>> for details.
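+
+The sketch below (illustration only) shows one way to decode the bitmask; the constant values are the flags listed above:
+
+[source, java]
+----
+// Flag masks, as listed above.
+static final int HAS_SCHEMA       = 0x0002;
+static final int HAS_RAW_DATA     = 0x0004;
+static final int OFFSET_ONE_BYTE  = 0x0008;
+static final int OFFSET_TWO_BYTES = 0x0010;
+static final int COMPACT_FOOTER   = 0x0020;
+
+// Checks whether the object carries a schema in its footer.
+static boolean hasSchema(short flags) {
+  return (flags & HAS_SCHEMA) != 0;
+}
+
+// Determines the size of a schema field offset in bytes.
+static int schemaFieldOffsetSize(short flags) {
+  if ((flags & OFFSET_ONE_BYTE) != 0)
+    return 1;
+
+  if ((flags & OFFSET_TWO_BYTES) != 0)
+    return 2;
+
+  return 4;
+}
+----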
+
+== Type ID
+
+This field contains a unique type identifier. It is 4 bytes long and stored in little-endian. By default, Type ID is obtained as a Java-style hash code of the type name. Type ID evaluation algorithm should be the same across all platforms in the cluster for all platforms to be able to operate with objects of this type. Default type ID calculation algorithm, which is recommended for use by all thin clients, can be found below.
+
+[tabs]
+--
+
+tab:Java[]
+[source, java]
+----
+static int hashCode(String str) {
+  int len = str.length();
+
+  int h = 0;
+
+  for (int i = 0; i < len; i++) {
+    int c = str.charAt(i);
+
+    c = Character.toLowerCase(c);
+
+    h = 31 * h + c;
+  }
+
+  return h;
+}
+----
+
+tab:C[]
+
+[source, c]
+----
+int32_t HashCode(const char* val, size_t size)
+{
+  if (!val && size == 0)
+    return 0;
+
+  int32_t hash = 0;
+
+  for (size_t i = 0; i < size; ++i)
+  {
+    char c = val[i];
+
+    if ('A' <= c && c <= 'Z')
+      c |= 0x20;
+
+    hash = 31 * hash + c;
+  }
+
+  return hash;
+}
+----
+
+--
+
+
+
+
+
+== Hash code
+
+Hash code of the value. It is stored as a 4-byte long little-endian value and calculated as a Java-style hash of the contents without the header. It is used by the Ignite engine for comparisons, for example, to compare keys. The hash calculation algorithm can be found below.
+
+[tabs]
+--
+tab:Java[]
+[source, java]
+----
+static int dataHashCode(byte[] data) {
+  int len = data.length;
+
+  int h = 0;
+
+  for (int i = 0; i < len; i++)
+    h = 31 * h + data[i];
+
+  return h;
+}
+----
+tab:C[]
+
+[source, c]
+----
+int32_t GetDataHashCode(const void* data, size_t size)
+{
+  if (!data)
+    return 0;
+
+  int32_t hash = 1;
+  const int8_t* bytes = static_cast<const int8_t*>(data);
+
+  for (int i = 0; i < size; ++i)
+    hash = 31 * hash + bytes[i];
+
+  return hash;
+}
+----
+
+--
+
+
+
+
+== Length
+
+This field contains full length of the object including header. It is stored as a 4-byte long little-endian integer number. Using this field you can easily skip the whole object by simply increasing current data stream position by the value of this field.
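+
+For example, a reader that has just consumed the length field (16 bytes into the object: type code, version, flags, type_id, hash_code and length) can skip the remainder with the following minimal sketch; it assumes the length covers the object starting from its type code byte, consistent with the reading example at the end of this page:
+
+[source, java]
+----
+// Skips the rest of a complex object; "length" is the value of the length field.
+static void skipComplexObject(DataInputStream in, int length) throws IOException {
+  in.readFully(new byte[length - 16]); // 16 bytes of the object have already been read
+}
+----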
+
+== Schema ID
+
+Object schema identifier. It is stored as a 4-byte long little-endian value and calculated as a hash of all object field IDs. It is used for complex object size optimization: Ignite uses the schema ID to avoid writing the whole schema to the end of every complex object value. Instead, it stores all schemas in the binary metadata store and only writes field offsets to the object. This optimization helps to significantly reduce the size of a complex object containing a lot of short field [...]
+
+If the schema is missing (e.g. the whole object is written in raw mode, or has no fields at all), the schema ID field is 0.
+
+See <<Schema>> for details on schema structure.
+
+[NOTE]
+====
+The schema ID cannot be determined using the Type ID, because objects of the same type (and thus with the same Type ID) can have multiple schemas, i.e. field sequences.
+====
+
+Schema ID calculation algorithm can be found below:
+
+[tabs]
+--
+
+tab:Java[]
+
+[source, java]
+----
+/** FNV1 hash offset basis. */
+private static final int FNV1_OFFSET_BASIS = 0x811C9DC5;
+
+/** FNV1 hash prime. */
+private static final int FNV1_PRIME = 0x01000193;
+
+static int calculateSchemaId(int fieldIds[])
+{
+  if (fieldIds == null || fieldIds.length == 0)
+    return 0;
+
+  int len = fieldIds.length;
+
+  int schemaId = FNV1_OFFSET_BASIS;
+
+  for (int i = 0; i < len; ++i)
+  {
+    int fieldId = fieldIds[i];
+
+    schemaId = schemaId ^ (fieldId & 0xFF);
+    schemaId = schemaId * FNV1_PRIME;
+    schemaId = schemaId ^ ((fieldId >> 8) & 0xFF);
+    schemaId = schemaId * FNV1_PRIME;
+    schemaId = schemaId ^ ((fieldId >> 16) & 0xFF);
+    schemaId = schemaId * FNV1_PRIME;
+    schemaId = schemaId ^ ((fieldId >> 24) & 0xFF);
+    schemaId = schemaId * FNV1_PRIME;
+  }
+
+  return schemaId;
+}
+----
+
+
+tab:C[]
+
+[source, c]
+----
+/** FNV1 hash offset basis. */
+enum { FNV1_OFFSET_BASIS = 0x811C9DC5 };
+
+/** FNV1 hash prime. */
+enum { FNV1_PRIME = 0x01000193 };
+
+int32_t CalculateSchemaId(const int32_t* fieldIds, size_t num)
+{
+  if (!fieldIds || num == 0)
+    return 0;
+
+  int32_t schemaId = FNV1_OFFSET_BASIS;
+
+  for (size_t i = 0; i < num; ++i)
+  {
+    int32_t fieldId = fieldIds[i];
+
+    schemaId ^= fieldId & 0xFF;
+    schemaId *= FNV1_PRIME;
+    schemaId ^= (fieldId >> 8) & 0xFF;
+    schemaId *= FNV1_PRIME;
+    schemaId ^= (fieldId >> 16) & 0xFF;
+    schemaId *= FNV1_PRIME;
+    schemaId ^= (fieldId >> 24) & 0xFF;
+    schemaId *= FNV1_PRIME;
+  }
+
+  return schemaId;
+}
+----
+
+
+--
+
+
+
+== Object Fields
+
+Object fields. Every field is a binary object of either a complex or a standard type. Note that a complex object that has no fields at all is still a valid object and may be encountered. A field may or may not have a name. For named fields, an offset is written to the object schema, by which they can be located in the object without deserializing the whole object. Fields without a name are always stored after the named fields and are written in a so-called "raw mode".
+
+Thus, fields written in raw mode can only be accessed by sequential reads in the same order as they were written, while named fields can be read in random order.
+
+== Schema
+
+Object schema. A complex object may or may not have a schema, so this field is optional. The schema is not present if the object has no named fields, including the case when the object has no fields at all. Check the HAS_SCHEMA object flag to determine whether the object has a schema.
+
+The main purpose of a schema is to allow for fast lookup of object fields. For this purpose, the schema contains a sequence of offsets of object fields in the object payload. The field offsets themselves can be of different sizes. The size is determined on write by the maximum offset value: if it is in the range of [24..255] bytes, 1-byte offsets are used; if it is in the range of [256..65535] bytes, 2-byte offsets are used; in all other cases 4-byte offsets are used. To determine the offset size on read, check the OFFSET_ONE_BYTE and OFFSET_TWO_BYTES object flags.
+
+There are two formats of schema supported:
+
+* Full schema approach - simpler to implement but uses more resources.
+*  Compact footer approach - harder to implement, but provides better performance and reduces memory consumption; thus it is recommended for new clients to implement this approach.
+
+You can find more details on both formats below.
+
+Note that the flag COMPACT_FOOTER should be checked by clients to determine which approach is used in every specific object.
+
+=== Full schema approach
+
+When this approach is used, the COMPACT_FOOTER flag is not set and the whole object schema is written to the footer of the object. In this case only the complex object itself is needed for deserialization: the schema_id field is ignored and no additional data is required. The structure of the schema field of the complex object in this case can be found below:
+
+[cols="1,1,2",opts="header"]
+|===
+|Field |  Size in bytes |  Description
+|`field_id_0`|  4|   ID of the field with the index 0. 4-byte long hash stored in little-endian. The field ID is calculated from the field name in the same way as a <<Type ID>>.
+|`field_offset_0`|  Variable, depending on the size of the object: 1, 2 or 4. |  Unsigned integer number stored in little-endian. Offset of the field in the object, starting from the very first byte of the full object value (i.e. the type_code position).
+|`field_id_1`|  4|   4-byte long hash stored in little-endian. ID of the field with the index 1.
+|`field_offset_1` | Variable, depending on the size of the object: 1, 2 or 4.|   Unsigned integer number stored in little-endian. Offset of the field in object.
+|...| ...| ...
+|`field_id_N`|  4|   4-byte long hash stored in little-endian. ID of the field with the index N.
+|`field_offset_N`|  Variable, depending on the size of the object: 1, 2 or 4. |   Unsigned integer number stored in little-endian. Offset of the field in object.
+
+|===
+
+=== Compact footer approach
+
+In this approach, the COMPACT_FOOTER flag is set and only the field offset sequence is written to the object footer. In this case the client uses the schema_id field to look up the object schema in a previously stored metadata store, to find out the field order and associate each field with its offset.
+
+If this approach is used, the client needs to keep schemas in a special metadata store and send them to / retrieve them from Ignite servers. See link:check[Binary Types] for details.
+
+The structure of the schema in this case can be found below:
+
+[cols="1,1,2",opts="header"]
+|===
+|Field |  Size in bytes |  Description
+|`field_offset_0` | Variable, depending on the size of the object: 1, 2 or 4. |  Unsigned integer number stored in little-endian. Offset of the field 0 in the object, starting from the very first byte of the full object value (i.e. type_code position).
+|`field_offset_1`|  Variable, depending on the size of the object: 1, 2 or 4. |  Unsigned integer number stored in little-endian. Offset of the field 1 in the object.
+|...| ...| ...
+|`field_offset_N`|  Variable, depending on the size of the object: 1, 2 or 4.  | Unsigned integer number stored in little-endian. Offset of the field N in the object.
+
+|===
+
+== Raw data offset
+
+Optional field. Present only if the object has fields written in raw mode. In this case, the HAS_RAW_DATA flag is set and the raw data offset field is present. It is stored as a 4-byte long little-endian value that points to the offset of the raw data within the complex object, starting from the very first byte of the header (i.e. this field is always greater than the header length).
+
+This field is used to position the stream so that the user can start reading in raw mode.
+
+== Special types
+
+=== Wrapped Data
+
+Type code: 27;
+
+One or more binary objects can be wrapped in an array. This allows objects to be read, stored, passed and written efficiently, without understanding their contents, by performing a simple byte copy.
+All cache operations return complex objects inside a wrapper (but not primitives).
+
+Structure:
+
+[{table_opts}]
+|===
+|Field |   Size |    Description
+|length|  4|   Signed integer number stored in little-endian. Size of the wrapped data in bytes.
+|payload| length|  Payload.
+|offset|  4|   Signed integer number stored in little-endian. Offset of the object within the array. The array can contain an object graph; this offset points to the root object.
+
+|===
+
+=== Binary enum
+
+Type code: 38
+
+Wrapped enumerable type. This type can be returned by the engine in place of the ordinary enum type. Enums should be written in this form when the Binary API is used.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field |  Size  |  Description
+|type_id| 4|   Signed integer number in little-endian. See <<Type ID>> for details.
+|ordinal| 4|   Signed integer number stored in little-endian. Enumeration value ordinal: its position in its enum declaration, where the initial constant is assigned an ordinal of zero.
+
+|===
+
+== Serialization and Deserialization examples
+
+=== Reading objects
+
+A code template below shows how to read data of various types from an input byte stream:
+
+
+[source, java]
+----
+private static Object readDataObject(DataInputStream in) throws IOException {
+  byte code = in.readByte();
+
+  switch (code) {
+    case 1:
+      return in.readByte();
+    case 2:
+      return readShortLittleEndian(in);
+    case 3:
+      return readIntLittleEndian(in);
+    case 4:
+      return readLongLittleEndian(in);
+    case 27: {
+      int len = readIntLittleEndian(in);
+      // Assume 0 offset for simplicity
+      Object res = readDataObject(in);
+      int offset = readIntLittleEndian(in);
+      return res;
+    }
+    case 103:
+      byte ver = in.readByte();
+      assert ver == 1; // version
+      short flags = readShortLittleEndian(in);
+      int typeId = readIntLittleEndian(in);
+      int hash = readIntLittleEndian(in);
+      int len = readIntLittleEndian(in);
+      int schemaId = readIntLittleEndian(in);
+      int schemaOffset = readIntLittleEndian(in);
+      byte[] data = new byte[len - 24];
+      in.readFully(data); // read the remainder of the object
+      return "Binary Object: " + typeId;
+    default:
+      throw new Error("Unsupported type: " + code);
+  }
+}
+----
+
+=== Int
+
+The following code snippet shows how to write and read a data object of type int, using a socket based output/input stream.
+
+
+[source, java]
+----
+// Write int data object
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+int val = 11;
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(val, out);
+
+// Read int data object
+DataInputStream in = new DataInputStream(socket.getInputStream());
+int typeCode = readByteLittleEndian(in);
+int resVal = readIntLittleEndian(in);
+----
+
+Refer to the link:example[example section] for the implementation of the `write...()` and `read...()` methods shown above.
+
+As another example, for String type, the structure would be:
+
+
+
+[cols="1,2",opts="header"]
+|===
+|Type |    Description
+| byte |    String type code, 9.
+|int | String length in UTF-8 bytes.
+|bytes |   Actual string.
+|===
+
+=== String
+
+The code snippet below shows how to write and read a String value following this format:
+
+
+[source, java]
+----
+private static void writeString (String str, DataOutputStream out) throws IOException {
+  writeByteLittleEndian(9, out); // type code for String
+
+  int strLen = str.getBytes("UTF-8").length; // length of the string
+  writeIntLittleEndian(strLen, out);
+
+  out.write(str.getBytes("UTF-8")); // write the UTF-8 bytes, not the low-order bytes of chars
+}
+
+private static String readString(DataInputStream in) throws IOException {
+  int type = readByteLittleEndian(in); // type code
+
+  int strLen = readIntLittleEndian(in); // length of the string
+
+  byte[] buf = new byte[strLen];
+
+  in.readFully(buf, 0, strLen);
+
+  return new String(buf, "UTF-8");
+}
+----
+
+
+
+
+
diff --git a/docs/_docs/binary-client-protocol/key-value-queries.adoc b/docs/_docs/binary-client-protocol/key-value-queries.adoc
new file mode 100644
index 0000000..1acabc5
--- /dev/null
+++ b/docs/_docs/binary-client-protocol/key-value-queries.adoc
@@ -0,0 +1,1416 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Key-Value Queries
+
+This page describes the key-value operations that you can perform with a cache. The key-value operations are equivalent to Ignite's native cache operations. Each operation has a link:binary-client-protocol/binary-client-protocol#standard-message-header[header] and operation-specific data.
+
+Refer to the Data Format page for a list of available data types and data format specification.
+
+== Operation Codes
+
+Upon successful handshake with an Ignite server node, a client can start performing various key-value operations by sending a request (see request/response structure below) with a specific operation code:
+
+
+[cols="2,1",opts="header"]
+|===
+
+
+|Operation|   OP_CODE
+|OP_CACHE_GET|    1000
+|OP_CACHE_PUT|    1001
+|OP_CACHE_PUT_IF_ABSENT|  1002
+|OP_CACHE_GET_ALL|    1003
+|OP_CACHE_PUT_ALL|    1004
+|OP_CACHE_GET_AND_PUT|    1005
+|OP_CACHE_GET_AND_REPLACE|    1006
+|OP_CACHE_GET_AND_REMOVE| 1007
+|OP_CACHE_GET_AND_PUT_IF_ABSENT|  1008
+|OP_CACHE_REPLACE|    1009
+|OP_CACHE_REPLACE_IF_EQUALS|  1010
+|OP_CACHE_CONTAINS_KEY|   1011
+|OP_CACHE_CONTAINS_KEYS|  1012
+|OP_CACHE_CLEAR|  1013
+|OP_CACHE_CLEAR_KEY|  1014
+|OP_CACHE_CLEAR_KEYS| 1015
+|OP_CACHE_REMOVE_KEY| 1016
+|OP_CACHE_REMOVE_IF_EQUALS|   1017
+|OP_CACHE_REMOVE_KEYS|    1018
+|OP_CACHE_REMOVE_ALL| 1019
+|OP_CACHE_GET_SIZE|   1020
+
+|===
+
+
+Note that the above-mentioned op_codes are part of the request header, as explained link:binary-client-protocol/binary-client-protocol#standard-message-header[here].
+
+[NOTE]
+====
+[discrete]
+=== Custom Methods Used in the Sample Code Snippets
+
+Some of the code snippets below use `readDataObject(...)` introduced in link:binary-client-protocol/binary-client-protocol#data-objects[this section] and little-endian versions of methods for reading and writing multiple-byte values that are covered in link:binary-client-protocol/binary-client-protocol#data-objects[this example].
+====
+
+== OP_CACHE_GET
+
+Retrieves a value from a cache by key. If the cache does not contain the key, null is returned.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type|    Description
+|Header|  Request header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|Data Object| The key of the cache entry to be returned.
+|===
+
+[cols="1,2",opts="header"]
+|===
+|Response Type |   Description
+|Header|  Response header.
+|Data Object| The value that corresponds to the given key. null if the cache does not contain the key.
+
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+
+// Request header
+writeRequestHeader(10, OP_CACHE_GET, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+
+// Data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key, out);   // Cache key
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+// Resulting cache value (Data Object)
+int resTypeCode = readByteLittleEndian(in);
+int value = readIntLittleEndian(in);
+
+----
+--
+
+
+== OP_CACHE_GET_ALL
+
+Retrieves multiple key-value pairs from a cache.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type|    Description
+|Header|  Request header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|int| Key count.
+|Data Object| Key for the cache entry.
+
+Repeat for as many times as the key count that is passed in the previous parameter.
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type  | Description
+|Header|  Response header.
+|int| Result count.
+|Key Data Object + Value Data Object| Resulting key-value pairs. Keys that are not present in the cache are not included.
+
+Repeat for as many times as the result count that is obtained in the previous parameter.
+
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(19, OP_CACHE_GET_ALL, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+
+// Key count
+writeIntLittleEndian(2, out);
+
+// Data object 1
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key1, out);   // Cache key
+
+// Data object 2
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key2, out);   // Cache key
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+// Result count
+int resCount = readIntLittleEndian(in);
+
+for (int i = 0; i < resCount; i++) {
+  // Resulting data object
+  int resKeyTypeCode = readByteLittleEndian(in); // Integer type code
+  int resKey = readIntLittleEndian(in); // Cache key
+
+  // Resulting data object
+  int resValTypeCode = readByteLittleEndian(in); // Integer type code
+  int resValue = readIntLittleEndian(in); // Cache value
+}
+
+----
+--
+
+
+== OP_CACHE_PUT
+
+Puts a value with a given key to a cache (overwriting existing value if any).
+
+[cols="1,2",opts="header"]
+|===
+|Request Type  |  Description
+|Header|  Request Header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|Data Object| Key for the cache entry.
+|Data Object| Value for the key.
+
+|===
+
+[cols="1,2",opts="header"]
+|===
+|Response Type |   Description
+|Header|  Response Header
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(15, OP_CACHE_PUT, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+
+// Cache key data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key, out);   // Cache key
+
+// Cache value data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(value, out);   // Cache value
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+----
+--
+
+
+== OP_CACHE_PUT_ALL
+
+Puts multiple key-value pairs to cache (overwriting existing associations if any).
+
+[cols="1,2",opts="header"]
+|===
+|Request Type |    Description
+|Header|  Request Header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|int| Key-value pair count
+|Key Data Object + Value Data Object | Key-value pairs.
+
+Repeat for as many times as the key-value pair count that is passed in the previous parameter.
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type |   Description
+|Header|  Response header.
+
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(29, OP_CACHE_PUT_ALL, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+
+// Entry Count
+writeIntLittleEndian(2, out);
+
+// Cache key data object 1
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key1, out);   // Cache key
+
+// Cache value data object 1
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(value1, out);   // Cache value
+
+// Cache key data object 2
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key2, out);   // Cache key
+
+// Cache value data object 2
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(value2, out);   // Cache value
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+----
+--
+
+
+== OP_CACHE_CONTAINS_KEY
+
+Returns a value indicating whether the given key is present in the cache.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type|    Description
+|Header|  Request Header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|Data Object| Key for the cache entry.
+|===
+
+[cols="1,2",opts="header"]
+|===
+|Response Type|   Description
+|Header | Response header.
+|bool  |  True when key is present, false otherwise.
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(10, OP_CACHE_CONTAINS_KEY, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+
+// Cache key data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key, out);   // Cache key
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+// Result
+boolean res = readBooleanLittleEndian(in);
+
+----
+--
+
+
+== OP_CACHE_CONTAINS_KEYS
+
+Returns a value indicating whether all given keys are present in the cache.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type|    Description
+|Header|  Request Header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|int| Key count.
+|Data Object |Key obtained from cache.
+
+Repeat for as many times as the key count that is passed in the previous parameter.
+|===
+
+[cols="1,2",opts="header"]
+|===
+|Response Type|   Description
+|Header|  Response header.
+|bool|    True when keys are present, false otherwise.
+
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(19, OP_CACHE_CONTAINS_KEYS, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+
+//Count
+writeIntLittleEndian(2, out);
+
+// Cache key data object 1
+int key1 = 11;
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key1, out);   // Cache key
+
+// Cache key data object 2
+int key2 = 22;
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key2, out);   // Cache key
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+// Resulting boolean value
+boolean res = readBooleanLittleEndian(in);
+
+----
+--
+
+
+== OP_CACHE_GET_AND_PUT
+
+Puts a key and an associated value into a cache and returns the previous value for that key. If the cache does not contain the key, a new entry is created and null is returned.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type |    Description
+|Header|  Request Header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|Data Object| The key to be updated.
+|Data Object| The new value for the specified key.
+|===
+
+[cols="1,2",opts="header"]
+|===
+|Response Type |  Description
+|Header|  Response header.
+|Data Object| The existing value associated with the specified key, or null.
+
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(15, OP_CACHE_GET_AND_PUT, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+
+// Cache key data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key, out);   // Cache key
+
+// Cache value data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(value, out);   // Cache value
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+// Resulting cache value (Data Object)
+int resTypeCode = readByteLittleEndian(in);
+int value = readIntLittleEndian(in);
+
+----
+--
+
+
+== OP_CACHE_GET_AND_REPLACE
+
+
+Replaces the value associated with the given key in the specified cache and returns the previous value. If the cache does not contain the key, the operation returns null without changing the cache.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type  |  Description
+|Header|  Request Header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|Data Object| The key whose value is to be replaced.
+|Data Object| The new value to be associated with the specified key.
+
+|===
+
+[cols="1,2",opts="header"]
+|===
+| Response Type |  Description
+|Header|  Response header.
+|Data Object| The previous value associated with the given key, or null if the key does not exist.
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(15, OP_CACHE_GET_AND_REPLACE, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+
+// Cache key data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key, out);   // Cache key
+
+// Cache value data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(value, out);   // Cache value
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+// Resulting cache value (Data Object)
+int resTypeCode = readByteLittleEndian(in);
+int value = readIntLittleEndian(in);
+
+----
+--
+
+
+== OP_CACHE_GET_AND_REMOVE
+
+Removes a specific entry from a cache and returns the entry's value. If the key does not exist, null is returned.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type|    Description
+|Header|  Request Header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|Data Object| The key to be removed.
+
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type  | Description
+|Header|  Response header.
+|Data Object| The existing value associated with the specified key or null, if the key does not exist.
+
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(10, OP_CACHE_GET_AND_REMOVE, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+
+// Cache key data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key, out);   // Cache key
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+// Resulting cache value (Data Object)
+int resTypeCode = readByteLittleEndian(in);
+int value = readIntLittleEndian(in);
+
+----
+--
+
+
+== OP_CACHE_PUT_IF_ABSENT
+
+Puts an entry to a cache if that entry does not exist.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type |    Description
+|Header|  Request Header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|Data Object| The key of the entry to be added.
+|Data Object| The value of the key to be added.
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type |   Description
+|Header|  Response header.
+|bool|    true if the new entry is created, false if the entry already exists.
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(15, OP_CACHE_PUT_IF_ABSENT, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+
+// Cache key data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key, out);   // Cache key
+
+// Cache value data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(value, out);   // Cache Value
+----
+
+tab:Response[]
+
... 69012 lines suppressed ...