Posted to commits@cassandra.apache.org by mc...@apache.org on 2022/01/05 23:21:13 UTC

[cassandra] branch cassandra-3.11 updated: Migrate documentation to AsciiDoc

This is an automated email from the ASF dual-hosted git repository.

mck pushed a commit to branch cassandra-3.11
in repository https://gitbox.apache.org/repos/asf/cassandra.git


The following commit(s) were added to refs/heads/cassandra-3.11 by this push:
     new 05b0eae  Migrate documentation to AsciiDoc
05b0eae is described below

commit 05b0eaecad5e40390352a4e182179a29ac784372
Author: Lorina Poland <lo...@gmail.com>
AuthorDate: Mon Jun 28 10:19:46 2021 -0700

    Migrate documentation to AsciiDoc
    
    This commit sets up a new documentation structure and format:
    
    * The directory structure changes from a Sphinx project to an Antora module layout.
    * The formatting of the content changes from reStructuredText to AsciiDoc.
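
    For illustration, here is how the two markup languages express a section heading and a source block (an illustrative snippet, not taken from this commit). reStructuredText:

        Getting Started
        ===============

        .. code-block:: sql

           SELECT * FROM users;

    and the AsciiDoc equivalent:

        == Getting Started

        [source,sql]
        ----
        SELECT * FROM users;
        ----
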
    
    The documentation must now be built and published with Antora, initially only from the cassandra-website repository.
    
    This change was made to ease the maintenance of versioned website documentation, as Antora is designed to generate versioned documentation.
    
    The old directory structure was:
    
    <ROOT>
      - doc/
        - cql3/
          - CQL.css
          - CQL.textile
        - source/
          - _static/
          - _templates/
          - _theme/
          - _util/
          - <other directory sections>/
          - conf.py
          - index.rst
          - <other *.rst pages>
        - make.bat
        - Makefile
        - README.md
        - SASI.md
        - <*.spec files>
        - <generation scripts>
    
    The new directory structure organises the documentation into modules:
    
    <ROOT>
      - doc/
        - cql3
        - modules/
          - cassandra/
            - assets/
            - examples/
            - pages/
            - partials/
            - nav.adoc
          - ROOT/
            - pages/
            - nav.adoc
        - scripts/
        - antora.yml
        - README.md
        - SASI.md
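
    The antora.yml at the doc/ root is the Antora component descriptor that ties the module layout above to a component name and version. A minimal sketch of what such a descriptor contains (field values are illustrative, not copied from this commit):

        name: cassandra            # component name used in page URLs and xrefs
        version: '3.11'            # documentation version published from this branch
        title: Apache Cassandra
        nav:
        - modules/ROOT/nav.adoc
        - modules/cassandra/nav.adoc

    Antora discovers this file in each content branch, which is how one site build can publish several versions side by side.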
    
     patch by Lorina Poland, Anthony Grasso; reviewed by Anthony Grasso, Mick Semb Wever for CASSANDRA-16763
    
    Co-authored-by: Anthony Grasso <an...@thelastpickle.com>
---
 .build/build-rat.xml                               |    3 +
 .gitignore                                         |    4 +-
 build.xml                                          |   11 +-
 doc/Makefile                                       |  288 +-
 doc/README.md                                      |   36 +-
 doc/SASI.md                                        |  798 ----
 doc/antora.yml                                     |   18 +
 doc/make.bat                                       |  299 --
 doc/modules/ROOT/nav.adoc                          |    4 +
 doc/modules/ROOT/pages/index.adoc                  |   48 +
 .../cassandra/assets/images/Figure_1_backups.jpg   |  Bin 0 -> 38551 bytes
 .../assets/images/Figure_1_data_model.jpg          |  Bin 0 -> 17469 bytes
 .../assets/images/Figure_1_guarantees.jpg          |  Bin 0 -> 17993 bytes
 .../assets/images/Figure_1_read_repair.jpg         |  Bin 0 -> 36919 bytes
 .../assets/images/Figure_2_data_model.jpg          |  Bin 0 -> 20925 bytes
 .../assets/images/Figure_2_read_repair.jpg         |  Bin 0 -> 45595 bytes
 .../assets/images/Figure_3_read_repair.jpg         |  Bin 0 -> 43021 bytes
 .../assets/images/Figure_4_read_repair.jpg         |  Bin 0 -> 43021 bytes
 .../assets/images/Figure_5_read_repair.jpg         |  Bin 0 -> 42560 bytes
 .../assets/images/Figure_6_read_repair.jpg         |  Bin 0 -> 57489 bytes
 .../images/data_modeling_chebotko_logical.png      |  Bin 0 -> 87366 bytes
 .../images/data_modeling_chebotko_physical.png     |  Bin 0 -> 4553809 bytes
 .../images/data_modeling_hotel_bucketing.png       |  Bin 0 -> 22009 bytes
 .../assets/images/data_modeling_hotel_erd.png      |  Bin 0 -> 233309 bytes
 .../assets/images/data_modeling_hotel_logical.png  |  Bin 0 -> 116998 bytes
 .../assets/images/data_modeling_hotel_physical.png |  Bin 0 -> 119795 bytes
 .../assets/images/data_modeling_hotel_queries.png  |  Bin 0 -> 103940 bytes
 .../images/data_modeling_hotel_relational.png      |  Bin 0 -> 102656 bytes
 .../images/data_modeling_reservation_logical.png   |  Bin 0 -> 121750 bytes
 .../images/data_modeling_reservation_physical.png  |  Bin 0 -> 142416 bytes
 .../cassandra/assets/images/docs_commit.png        |  Bin 0 -> 104667 bytes
 .../cassandra/assets/images/docs_create_branch.png |  Bin 0 -> 181860 bytes
 .../cassandra/assets/images/docs_create_file.png   |  Bin 0 -> 209110 bytes
 .../cassandra/assets/images/docs_editor.png        |  Bin 0 -> 106175 bytes
 doc/modules/cassandra/assets/images/docs_fork.png  |  Bin 0 -> 76159 bytes
 doc/modules/cassandra/assets/images/docs_pr.png    |  Bin 0 -> 156081 bytes
 .../cassandra/assets/images/docs_preview.png       |  Bin 0 -> 123826 bytes
 .../cassandra/assets}/images/eclipse_debug0.png    |  Bin
 .../cassandra/assets}/images/eclipse_debug1.png    |  Bin
 .../cassandra/assets}/images/eclipse_debug2.png    |  Bin
 .../cassandra/assets}/images/eclipse_debug3.png    |  Bin
 .../cassandra/assets}/images/eclipse_debug4.png    |  Bin
 .../cassandra/assets}/images/eclipse_debug5.png    |  Bin
 .../cassandra/assets}/images/eclipse_debug6.png    |  Bin
 .../assets/images/example-stress-graph.png         |  Bin 0 -> 359103 bytes
 doc/modules/cassandra/assets/images/hints.svg      |    9 +
 doc/modules/cassandra/assets/images/ring.svg       |   11 +
 doc/modules/cassandra/assets/images/vnodes.svg     |   11 +
 .../cassandra/examples/BASH/add_repo_keys.sh       |    1 +
 .../cassandra/examples/BASH/apt-get_cass.sh        |    1 +
 .../cassandra/examples/BASH/apt-get_update.sh      |    1 +
 .../cassandra/examples/BASH/check_backups.sh       |    1 +
 .../cassandra/examples/BASH/cqlsh_localhost.sh     |    1 +
 .../cassandra/examples/BASH/curl_install.sh        |    1 +
 .../cassandra/examples/BASH/curl_verify_sha.sh     |    1 +
 .../cassandra/examples/BASH/docker_cqlsh.sh        |    1 +
 doc/modules/cassandra/examples/BASH/docker_pull.sh |    1 +
 .../cassandra/examples/BASH/docker_remove.sh       |    1 +
 doc/modules/cassandra/examples/BASH/docker_run.sh  |    1 +
 .../cassandra/examples/BASH/docker_run_qs.sh       |    3 +
 .../cassandra/examples/BASH/find_backups.sh        |    1 +
 .../cassandra/examples/BASH/find_snapshots.sh      |    1 +
 .../cassandra/examples/BASH/find_sstables.sh       |    1 +
 .../cassandra/examples/BASH/find_two_snapshots.sh  |    1 +
 .../cassandra/examples/BASH/flush_and_check.sh     |    2 +
 .../cassandra/examples/BASH/get_deb_package.sh     |    2 +
 doc/modules/cassandra/examples/BASH/java_verify.sh |    1 +
 .../examples/BASH/nodetool_clearsnapshot.sh        |    1 +
 .../examples/BASH/nodetool_clearsnapshot_all.sh    |    1 +
 .../cassandra/examples/BASH/nodetool_flush.sh      |    3 +
 .../examples/BASH/nodetool_flush_table.sh          |    1 +
 .../examples/BASH/nodetool_list_snapshots.sh       |    1 +
 .../cassandra/examples/BASH/nodetool_snapshot.sh   |    1 +
 .../cassandra/examples/BASH/nodetool_status.sh     |    1 +
 .../examples/BASH/nodetool_status_nobin.sh         |    1 +
 doc/modules/cassandra/examples/BASH/run_cqlsh.sh   |    1 +
 .../cassandra/examples/BASH/run_cqlsh_nobin.sh     |    1 +
 .../cassandra/examples/BASH/snapshot_backup2.sh    |    1 +
 .../examples/BASH/snapshot_both_backups.sh         |    1 +
 .../cassandra/examples/BASH/snapshot_files.sh      |    1 +
 .../cassandra/examples/BASH/snapshot_mult_ks.sh    |    1 +
 .../examples/BASH/snapshot_mult_tables.sh          |    1 +
 .../examples/BASH/snapshot_mult_tables_again.sh    |    1 +
 .../cassandra/examples/BASH/snapshot_one_table.sh  |    1 +
 .../cassandra/examples/BASH/snapshot_one_table2.sh |    1 +
 .../cassandra/examples/BASH/start_tarball.sh       |    1 +
 doc/modules/cassandra/examples/BASH/tail_syslog.sh |    1 +
 .../cassandra/examples/BASH/tail_syslog_package.sh |    1 +
 doc/modules/cassandra/examples/BASH/tarball.sh     |    1 +
 doc/modules/cassandra/examples/BASH/verify_gpg.sh  |    1 +
 doc/modules/cassandra/examples/BASH/yum_cass.sh    |    1 +
 doc/modules/cassandra/examples/BASH/yum_start.sh   |    1 +
 doc/modules/cassandra/examples/BASH/yum_update.sh  |    1 +
 .../cassandra/examples/BNF/aggregate_name.bnf      |    1 +
 doc/modules/cassandra/examples/BNF/alter_ks.bnf    |    2 +
 .../cassandra/examples/BNF/alter_mv_statement.bnf  |    1 +
 .../examples/BNF/alter_role_statement.bnf          |    1 +
 doc/modules/cassandra/examples/BNF/alter_table.bnf |    4 +
 .../cassandra/examples/BNF/alter_udt_statement.bnf |    3 +
 .../examples/BNF/alter_user_statement.bnf          |    1 +
 .../cassandra/examples/BNF/batch_statement.bnf     |    5 +
 .../cassandra/examples/BNF/collection_literal.bnf  |    4 +
 .../cassandra/examples/BNF/collection_type.bnf     |    3 +
 doc/modules/cassandra/examples/BNF/column.bnf      |    1 +
 doc/modules/cassandra/examples/BNF/constant.bnf    |    8 +
 .../cassandra/examples/BNF/cql_statement.bnf       |   48 +
 doc/modules/cassandra/examples/BNF/cql_type.bnf    |    1 +
 .../examples/BNF/create_aggregate_statement.bnf    |    6 +
 .../examples/BNF/create_function_statement.bnf     |    6 +
 .../examples/BNF/create_index_statement.bnf        |    5 +
 doc/modules/cassandra/examples/BNF/create_ks.bnf   |    2 +
 .../cassandra/examples/BNF/create_mv_statement.bnf |    4 +
 .../examples/BNF/create_role_statement.bnf         |    9 +
 .../cassandra/examples/BNF/create_table.bnf        |   12 +
 .../examples/BNF/create_trigger_statement.bnf      |    3 +
 doc/modules/cassandra/examples/BNF/create_type.bnf |    3 +
 .../examples/BNF/create_user_statement.bnf         |    4 +
 doc/modules/cassandra/examples/BNF/custom_type.bnf |    1 +
 .../cassandra/examples/BNF/delete_statement.bnf    |    5 +
 .../examples/BNF/describe_aggregate_statement.bnf  |    1 +
 .../examples/BNF/describe_aggregates_statement.bnf |    1 +
 .../examples/BNF/describe_cluster_statement.bnf    |    1 +
 .../examples/BNF/describe_function_statement.bnf   |    1 +
 .../examples/BNF/describe_functions_statement.bnf  |    1 +
 .../examples/BNF/describe_index_statement.bnf      |    1 +
 .../examples/BNF/describe_keyspace_statement.bnf   |    1 +
 .../examples/BNF/describe_keyspaces_statement.bnf  |    1 +
 .../BNF/describe_materialized_view_statement.bnf   |    1 +
 .../examples/BNF/describe_object_statement.bnf     |    1 +
 .../examples/BNF/describe_schema_statement.bnf     |    1 +
 .../examples/BNF/describe_table_statement.bnf      |    1 +
 .../examples/BNF/describe_tables_statement.bnf     |    1 +
 .../examples/BNF/describe_type_statement.bnf       |    1 +
 .../examples/BNF/describe_types_statement.bnf      |    1 +
 .../examples/BNF/drop_aggregate_statement.bnf      |    2 +
 .../examples/BNF/drop_function_statement.bnf       |    2 +
 .../examples/BNF/drop_index_statement.bnf          |    1 +
 doc/modules/cassandra/examples/BNF/drop_ks.bnf     |    1 +
 .../cassandra/examples/BNF/drop_mv_statement.bnf   |    1 +
 .../cassandra/examples/BNF/drop_role_statement.bnf |    1 +
 doc/modules/cassandra/examples/BNF/drop_table.bnf  |    1 +
 .../examples/BNF/drop_trigger_statement.bnf        |    1 +
 .../cassandra/examples/BNF/drop_udt_statement.bnf  |    1 +
 .../cassandra/examples/BNF/drop_user_statement.bnf |    1 +
 doc/modules/cassandra/examples/BNF/function.bnf    |    1 +
 .../examples/BNF/grant_permission_statement.bnf    |   12 +
 .../examples/BNF/grant_role_statement.bnf          |    1 +
 doc/modules/cassandra/examples/BNF/identifier.bnf  |    3 +
 doc/modules/cassandra/examples/BNF/index.bnf       |    1 +
 doc/modules/cassandra/examples/BNF/index_name.bnf  |    1 +
 .../cassandra/examples/BNF/insert_statement.bnf    |    6 +
 doc/modules/cassandra/examples/BNF/ks_table.bnf    |    5 +
 .../examples/BNF/list_permissions_statement.bnf    |    1 +
 .../examples/BNF/list_roles_statement.bnf          |    1 +
 .../examples/BNF/list_users_statement.bnf          |    1 +
 .../cassandra/examples/BNF/materialized_view.bnf   |    1 +
 doc/modules/cassandra/examples/BNF/native_type.bnf |    4 +
 doc/modules/cassandra/examples/BNF/options.bnf     |    4 +
 .../examples/BNF/revoke_permission_statement.bnf   |    1 +
 .../examples/BNF/revoke_role_statement.bnf         |    1 +
 doc/modules/cassandra/examples/BNF/role_name.bnf   |    1 +
 .../cassandra/examples/BNF/select_statement.bnf    |   21 +
 doc/modules/cassandra/examples/BNF/term.bnf        |    6 +
 .../cassandra/examples/BNF/trigger_name.bnf        |    1 +
 .../cassandra/examples/BNF/truncate_table.bnf      |    1 +
 doc/modules/cassandra/examples/BNF/tuple.bnf       |    2 +
 doc/modules/cassandra/examples/BNF/udt.bnf         |    2 +
 doc/modules/cassandra/examples/BNF/udt_literal.bnf |    1 +
 .../cassandra/examples/BNF/update_statement.bnf    |   13 +
 doc/modules/cassandra/examples/BNF/use_ks.bnf      |    1 +
 doc/modules/cassandra/examples/BNF/view_name.bnf   |    1 +
 .../cassandra/examples/CQL/allow_filtering.cql     |    9 +
 doc/modules/cassandra/examples/CQL/alter_ks.cql    |    2 +
 doc/modules/cassandra/examples/CQL/alter_role.cql  |    1 +
 .../examples/CQL/alter_table_add_column.cql        |    1 +
 .../examples/CQL/alter_table_spec_retry.cql        |    1 +
 .../CQL/alter_table_spec_retry_percent.cql         |    1 +
 .../examples/CQL/alter_table_with_comment.cql      |    2 +
 doc/modules/cassandra/examples/CQL/alter_user.cql  |    2 +
 doc/modules/cassandra/examples/CQL/as.cql          |   13 +
 .../examples/CQL/autoexpand_exclude_dc.cql         |    4 +
 .../cassandra/examples/CQL/autoexpand_ks.cql       |    4 +
 .../examples/CQL/autoexpand_ks_override.cql        |    4 +
 doc/modules/cassandra/examples/CQL/avg.cql         |    1 +
 .../cassandra/examples/CQL/batch_statement.cql     |    6 +
 .../cassandra/examples/CQL/caching_option.cql      |    6 +
 .../cassandra/examples/CQL/chunk_length.cql        |    6 +
 doc/modules/cassandra/examples/CQL/count.cql       |    2 +
 .../cassandra/examples/CQL/count_nonnull.cql       |    1 +
 .../cassandra/examples/CQL/create_function.cql     |   15 +
 .../cassandra/examples/CQL/create_index.cql        |    8 +
 doc/modules/cassandra/examples/CQL/create_ks.cql   |    6 +
 .../cassandra/examples/CQL/create_ks2_backup.cql   |    2 +
 .../cassandra/examples/CQL/create_ks_backup.cql    |    2 +
 .../examples/CQL/create_ks_trans_repl.cql          |    2 +
 .../cassandra/examples/CQL/create_mv_statement.cql |    5 +
 doc/modules/cassandra/examples/CQL/create_role.cql |    6 +
 .../examples/CQL/create_role_ifnotexists.cql       |    2 +
 .../examples/CQL/create_static_column.cql          |    7 +
 .../cassandra/examples/CQL/create_table.cql        |   23 +
 .../examples/CQL/create_table2_backup.cql          |   14 +
 .../cassandra/examples/CQL/create_table_backup.cql |   13 +
 .../examples/CQL/create_table_clustercolumn.cql    |    7 +
 .../examples/CQL/create_table_compound_pk.cql      |    7 +
 .../cassandra/examples/CQL/create_table_simple.cql |    4 +
 .../examples/CQL/create_table_single_pk.cql        |    1 +
 .../cassandra/examples/CQL/create_trigger.cql      |    1 +
 doc/modules/cassandra/examples/CQL/create_user.cql |    2 +
 .../cassandra/examples/CQL/create_user_role.cql    |   14 +
 doc/modules/cassandra/examples/CQL/currentdate.cql |    1 +
 .../cassandra/examples/CQL/datetime_arithmetic.cql |    1 +
 .../examples/CQL/delete_all_elements_list.cql      |    1 +
 .../cassandra/examples/CQL/delete_element_list.cql |    1 +
 doc/modules/cassandra/examples/CQL/delete_map.cql  |    2 +
 doc/modules/cassandra/examples/CQL/delete_set.cql  |    1 +
 .../cassandra/examples/CQL/delete_statement.cql    |    5 +
 .../cassandra/examples/CQL/drop_aggregate.cql      |    4 +
 .../cassandra/examples/CQL/drop_function.cql       |    4 +
 doc/modules/cassandra/examples/CQL/drop_ks.cql     |    1 +
 .../cassandra/examples/CQL/drop_trigger.cql        |    1 +
 .../cassandra/examples/CQL/function_dollarsign.cql |   15 +
 .../cassandra/examples/CQL/function_overload.cql   |    2 +
 .../cassandra/examples/CQL/function_udfcontext.cql |   11 +
 .../cassandra/examples/CQL/grant_describe.cql      |    1 +
 doc/modules/cassandra/examples/CQL/grant_drop.cql  |    1 +
 .../cassandra/examples/CQL/grant_execute.cql       |    1 +
 .../cassandra/examples/CQL/grant_modify.cql        |    1 +
 doc/modules/cassandra/examples/CQL/grant_perm.cql  |    1 +
 doc/modules/cassandra/examples/CQL/grant_role.cql  |    1 +
 .../cassandra/examples/CQL/insert_data2_backup.cql |    5 +
 .../cassandra/examples/CQL/insert_data_backup.cql  |    6 +
 .../cassandra/examples/CQL/insert_duration.cql     |    6 +
 doc/modules/cassandra/examples/CQL/insert_json.cql |    1 +
 .../cassandra/examples/CQL/insert_statement.cql    |    5 +
 .../cassandra/examples/CQL/insert_static_data.cql  |    2 +
 .../examples/CQL/insert_table_cc_addl.cql          |    1 +
 .../examples/CQL/insert_table_clustercolumn.cql    |    5 +
 .../examples/CQL/insert_table_clustercolumn2.cql   |    5 +
 .../examples/CQL/insert_table_compound_pk.cql      |    5 +
 doc/modules/cassandra/examples/CQL/insert_udt.cql  |   17 +
 doc/modules/cassandra/examples/CQL/list.cql        |   12 +
 .../cassandra/examples/CQL/list_all_perm.cql       |    1 +
 doc/modules/cassandra/examples/CQL/list_perm.cql   |    1 +
 doc/modules/cassandra/examples/CQL/list_roles.cql  |    1 +
 .../examples/CQL/list_roles_nonrecursive.cql       |    1 +
 .../cassandra/examples/CQL/list_roles_of.cql       |    1 +
 .../cassandra/examples/CQL/list_select_perm.cql    |    1 +
 doc/modules/cassandra/examples/CQL/map.cql         |   11 +
 doc/modules/cassandra/examples/CQL/min_max.cql     |    1 +
 .../cassandra/examples/CQL/mv_table_def.cql        |    8 +
 .../cassandra/examples/CQL/mv_table_error.cql      |   13 +
 .../cassandra/examples/CQL/mv_table_from_base.cql  |    9 +
 doc/modules/cassandra/examples/CQL/no_revoke.cql   |    5 +
 .../cassandra/examples/CQL/qs_create_ks.cql        |    2 +
 .../cassandra/examples/CQL/qs_create_table.cql     |    6 +
 .../cassandra/examples/CQL/qs_insert_data.cql      |    7 +
 .../examples/CQL/qs_insert_data_again.cql          |    1 +
 .../cassandra/examples/CQL/qs_select_data.cql      |    1 +
 .../examples/CQL/query_allow_filtering.cql         |    5 +
 .../examples/CQL/query_fail_allow_filtering.cql    |    1 +
 .../examples/CQL/query_nofail_allow_filtering.cql  |    1 +
 .../cassandra/examples/CQL/rename_udt_field.cql    |    1 +
 doc/modules/cassandra/examples/CQL/revoke_perm.cql |    5 +
 doc/modules/cassandra/examples/CQL/revoke_role.cql |    1 +
 doc/modules/cassandra/examples/CQL/role_error.cql  |    6 +
 .../cassandra/examples/CQL/select_data2_backup.cql |    2 +
 .../cassandra/examples/CQL/select_data_backup.cql  |    2 +
 .../cassandra/examples/CQL/select_range.cql        |    1 +
 .../cassandra/examples/CQL/select_statement.cql    |   11 +
 .../cassandra/examples/CQL/select_static_data.cql  |    1 +
 .../examples/CQL/select_table_clustercolumn.cql    |    1 +
 .../examples/CQL/select_table_compound_pk.cql      |    1 +
 doc/modules/cassandra/examples/CQL/set.cql         |   11 +
 .../cassandra/examples/CQL/spec_retry_values.cql   |    6 +
 doc/modules/cassandra/examples/CQL/sum.cql         |    1 +
 .../cassandra/examples/CQL/table_for_where.cql     |    9 +
 .../cassandra/examples/CQL/timeuuid_min_max.cql    |    3 +
 .../cassandra/examples/CQL/timeuuid_now.cql        |    1 +
 doc/modules/cassandra/examples/CQL/token.cql       |    2 +
 doc/modules/cassandra/examples/CQL/tuple.cql       |    6 +
 doc/modules/cassandra/examples/CQL/uda.cql         |   41 +
 doc/modules/cassandra/examples/CQL/udt.cql         |   16 +
 doc/modules/cassandra/examples/CQL/update_list.cql |    2 +
 doc/modules/cassandra/examples/CQL/update_map.cql  |    2 +
 .../CQL/update_particular_list_element.cql         |    1 +
 doc/modules/cassandra/examples/CQL/update_set.cql  |    1 +
 .../cassandra/examples/CQL/update_statement.cql    |   10 +
 .../cassandra/examples/CQL/update_ttl_map.cql      |    1 +
 doc/modules/cassandra/examples/CQL/use_ks.cql      |    1 +
 doc/modules/cassandra/examples/CQL/where.cql       |    4 +
 doc/modules/cassandra/examples/CQL/where_fail.cql  |    5 +
 .../examples/CQL/where_group_cluster_columns.cql   |    3 +
 .../cassandra/examples/CQL/where_in_tuple.cql      |    3 +
 .../CQL/where_no_group_cluster_columns.cql         |    4 +
 .../cassandra/examples/JAVA/udf_imports.java       |    8 +
 .../cassandra/examples/JAVA/udfcontext.java        |   11 +
 .../examples/RESULTS/add_repo_keys.result          |    4 +
 .../cassandra/examples/RESULTS/add_yum_repo.result |    6 +
 .../examples/RESULTS/autoexpand_exclude_dc.result  |    1 +
 .../examples/RESULTS/autoexpand_ks.result          |    1 +
 .../examples/RESULTS/autoexpand_ks_override.result |    1 +
 .../examples/RESULTS/cqlsh_localhost.result        |   11 +
 .../examples/RESULTS/curl_verify_sha.result        |    1 +
 .../cassandra/examples/RESULTS/find_backups.result |    4 +
 .../examples/RESULTS/find_backups_table.result     |    1 +
 .../examples/RESULTS/find_two_snapshots.result     |    3 +
 .../examples/RESULTS/flush_and_check.result        |    9 +
 .../examples/RESULTS/flush_and_check2.result       |   17 +
 .../examples/RESULTS/insert_data2_backup.result    |   13 +
 .../examples/RESULTS/insert_table_cc_addl.result   |    9 +
 .../cassandra/examples/RESULTS/java_verify.result  |    3 +
 .../cassandra/examples/RESULTS/no_bups.result      |    1 +
 .../RESULTS/nodetool_list_snapshots.result         |   13 +
 .../examples/RESULTS/nodetool_snapshot_help.result |   54 +
 .../examples/RESULTS/select_data2_backup.result    |   13 +
 .../examples/RESULTS/select_data_backup.result     |   15 +
 .../cassandra/examples/RESULTS/select_range.result |    6 +
 .../examples/RESULTS/select_static_data.result     |    4 +
 .../RESULTS/select_table_clustercolumn.result      |    9 +
 .../RESULTS/select_table_compound_pk.result        |    9 +
 .../cassandra/examples/RESULTS/snapshot_all.result |    4 +
 .../examples/RESULTS/snapshot_backup2.result       |    3 +
 .../examples/RESULTS/snapshot_backup2_find.result  |    2 +
 .../examples/RESULTS/snapshot_files.result         |   11 +
 .../examples/RESULTS/snapshot_mult_ks.result       |    3 +
 .../examples/RESULTS/snapshot_mult_tables.result   |    3 +
 .../RESULTS/snapshot_mult_tables_again.result      |    3 +
 .../examples/RESULTS/snapshot_one_table2.result    |    3 +
 .../cassandra/examples/RESULTS/tail_syslog.result  |    1 +
 .../cassandra/examples/RESULTS/verify_gpg.result   |    2 +
 .../examples/TEXT/tarball_install_dirs.txt         |   11 +
 .../cassandra/examples/YAML/auto_snapshot.yaml     |    1 +
 .../cassandra/examples/YAML/incremental_bups.yaml  |    1 +
 .../examples/YAML/snapshot_before_compaction.yaml  |    1 +
 .../cassandra/examples/YAML/stress-example.yaml    |   62 +
 .../examples/YAML/stress-lwt-example.yaml          |   88 +
 doc/modules/cassandra/nav.adoc                     |   97 +
 .../cassandra/pages/architecture/dynamo.adoc       |  531 +++
 .../cassandra/pages/architecture/guarantees.adoc   |  108 +
 .../cassandra/pages/architecture/images/ring.svg   |   11 +
 .../cassandra/pages/architecture/images/vnodes.svg |   11 +
 .../cassandra/pages/architecture/index.adoc        |    9 +
 .../cassandra/pages/architecture/overview.adoc     |  101 +
 .../cassandra/pages/architecture/snitch.adoc       |   74 +
 .../pages/architecture/storage_engine.adoc         |  225 ++
 .../pages/configuration/cass_cl_archive_file.adoc  |   48 +
 .../pages/configuration/cass_env_sh_file.adoc      |  162 +
 .../pages/configuration/cass_jvm_options_file.adoc |   22 +
 .../pages/configuration/cass_logback_xml_file.adoc |  166 +
 .../pages/configuration/cass_rackdc_file.adoc      |   79 +
 .../pages/configuration/cass_topo_file.adoc        |   53 +
 .../cassandra/pages/configuration/index.adoc       |   11 +
 doc/modules/cassandra/pages/cql/SASI.adoc          |  809 ++++
 doc/modules/cassandra/pages/cql/appendices.adoc    |  179 +
 doc/modules/cassandra/pages/cql/changes.adoc       |  215 ++
 .../cassandra/pages/cql/cql_singlefile.adoc        | 3904 ++++++++++++++++++++
 doc/modules/cassandra/pages/cql/ddl.adoc           |  799 ++++
 doc/modules/cassandra/pages/cql/definitions.adoc   |  187 +
 doc/modules/cassandra/pages/cql/dml.adoc           |  458 +++
 doc/modules/cassandra/pages/cql/functions.adoc     |  504 +++
 doc/modules/cassandra/pages/cql/index.adoc         |   24 +
 doc/modules/cassandra/pages/cql/indexes.adoc       |   63 +
 doc/modules/cassandra/pages/cql/json.adoc          |  125 +
 doc/modules/cassandra/pages/cql/mvs.adoc           |  158 +
 doc/modules/cassandra/pages/cql/operators.adoc     |   68 +
 doc/modules/cassandra/pages/cql/security.adoc      |  611 +++
 doc/modules/cassandra/pages/cql/triggers.adoc      |   50 +
 doc/modules/cassandra/pages/cql/types.adoc         |  539 +++
 .../data_modeling/data_modeling_conceptual.adoc    |   44 +
 .../pages/data_modeling/data_modeling_logical.adoc |  195 +
 .../data_modeling/data_modeling_physical.adoc      |   96 +
 .../pages/data_modeling/data_modeling_queries.adoc |   60 +
 .../pages/data_modeling/data_modeling_rdbms.adoc   |  144 +
 .../data_modeling/data_modeling_refining.adoc      |  201 +
 .../pages/data_modeling/data_modeling_schema.adoc  |  130 +
 .../pages/data_modeling/data_modeling_tools.adoc   |   44 +
 .../data_modeling/images/Figure_1_data_model.jpg   |  Bin 0 -> 17469 bytes
 .../data_modeling/images/Figure_2_data_model.jpg   |  Bin 0 -> 20925 bytes
 .../images/data_modeling_chebotko_logical.png      |  Bin 0 -> 87366 bytes
 .../images/data_modeling_chebotko_physical.png     |  Bin 0 -> 4553809 bytes
 .../images/data_modeling_hotel_bucketing.png       |  Bin 0 -> 22009 bytes
 .../images/data_modeling_hotel_erd.png             |  Bin 0 -> 233309 bytes
 .../images/data_modeling_hotel_logical.png         |  Bin 0 -> 116998 bytes
 .../images/data_modeling_hotel_physical.png        |  Bin 0 -> 119795 bytes
 .../images/data_modeling_hotel_queries.png         |  Bin 0 -> 103940 bytes
 .../images/data_modeling_hotel_relational.png      |  Bin 0 -> 102656 bytes
 .../images/data_modeling_reservation_logical.png   |  Bin 0 -> 121750 bytes
 .../images/data_modeling_reservation_physical.png  |  Bin 0 -> 142416 bytes
 .../cassandra/pages/data_modeling/index.adoc       |   11 +
 .../cassandra/pages/data_modeling/intro.adoc       |  220 ++
 doc/modules/cassandra/pages/faq/index.adoc         |  290 ++
 .../pages/getting_started/configuring.adoc         |   84 +
 .../cassandra/pages/getting_started/drivers.adoc   |   90 +
 .../cassandra/pages/getting_started/index.adoc     |   30 +
 .../pages/getting_started/installing.adoc          |  344 ++
 .../pages/getting_started/production.adoc          |  163 +
 .../cassandra/pages/getting_started/querying.adoc  |   31 +
 .../pages/getting_started/quickstart.adoc          |  100 +
 .../cassandra/pages/operating/audit_logging.adoc   |  224 ++
 doc/modules/cassandra/pages/operating/backups.adoc |  517 +++
 .../cassandra/pages/operating/bloom_filters.adoc   |   64 +
 .../cassandra/pages/operating/bulk_loading.adoc    |  842 +++++
 doc/modules/cassandra/pages/operating/cdc.adoc     |   86 +
 .../pages/operating/compaction/index.adoc          |  339 ++
 .../cassandra/pages/operating/compaction/lcs.adoc  |   81 +
 .../cassandra/pages/operating/compaction/stcs.adoc |   42 +
 .../cassandra/pages/operating/compaction/twcs.adoc |   75 +
 .../cassandra/pages/operating/compression.adoc     |  187 +
 .../cassandra/pages/operating/hardware.adoc        |  100 +
 doc/modules/cassandra/pages/operating/hints.adoc   |  248 ++
 doc/modules/cassandra/pages/operating/index.adoc   |   15 +
 doc/modules/cassandra/pages/operating/metrics.adoc | 1088 ++++++
 .../cassandra/pages/operating/read_repair.adoc     |  264 ++
 doc/modules/cassandra/pages/operating/repair.adoc  |  222 ++
 .../cassandra/pages/operating/security.adoc        |  527 +++
 .../cassandra/pages/operating/topo_changes.adoc    |  133 +
 doc/modules/cassandra/pages/plugins/index.adoc     |   36 +
 .../cassandra/pages/tools/cassandra_stress.adoc    |  326 ++
 doc/modules/cassandra/pages/tools/cqlsh.adoc       |  482 +++
 doc/modules/cassandra/pages/tools/index.adoc       |    9 +
 .../cassandra/pages/tools/sstable/index.adoc       |   20 +
 .../cassandra/pages/tools/sstable/sstabledump.adoc |  286 ++
 .../tools/sstable/sstableexpiredblockers.adoc      |   42 +
 .../pages/tools/sstable/sstablelevelreset.adoc     |   69 +
 .../pages/tools/sstable/sstableloader.adoc         |  316 ++
 .../pages/tools/sstable/sstablemetadata.adoc       |  320 ++
 .../pages/tools/sstable/sstableofflinerelevel.adoc |   94 +
 .../pages/tools/sstable/sstablerepairedset.adoc    |   83 +
 .../pages/tools/sstable/sstablescrub.adoc          |  102 +
 .../pages/tools/sstable/sstablesplit.adoc          |   96 +
 .../pages/tools/sstable/sstableupgrade.adoc        |  136 +
 .../cassandra/pages/tools/sstable/sstableutil.adoc |  102 +
 .../pages/tools/sstable/sstableverify.adoc         |   82 +
 .../pages/troubleshooting/finding_nodes.adoc       |  133 +
 .../cassandra/pages/troubleshooting/index.adoc     |   19 +
 .../pages/troubleshooting/reading_logs.adoc        |  247 ++
 .../pages/troubleshooting/use_nodetool.adoc        |  242 ++
 .../cassandra/pages/troubleshooting/use_tools.adoc |  578 +++
 doc/modules/cassandra/partials/java_version.adoc   |   23 +
 .../cassandra/partials/nodetool_and_cqlsh.adoc     |   21 +
 .../partials/nodetool_and_cqlsh_nobin.adoc         |   21 +
 .../cassandra/partials/package_versions.adoc       |    5 +
 doc/modules/cassandra/partials/tail_syslog.adoc    |   25 +
 .../convert_yaml_to_adoc.py}                       |   24 +-
 doc/scripts/gen-nodetool-docs.py                   |   83 +
 doc/source/_static/extra.css                       |   77 -
 doc/source/_templates/indexcontent.html            |   89 -
 doc/source/_theme/cassandra_theme/defindex.html    |   40 -
 doc/source/_theme/cassandra_theme/layout.html      |  108 -
 doc/source/_theme/cassandra_theme/search.html      |   67 -
 doc/source/_theme/cassandra_theme/theme.conf       |    3 -
 doc/source/_util/cql.py                            |  283 --
 doc/source/architecture/dynamo.rst                 |  139 -
 doc/source/architecture/guarantees.rst             |   20 -
 doc/source/architecture/index.rst                  |   29 -
 doc/source/architecture/overview.rst               |   20 -
 doc/source/architecture/storage_engine.rst         |  129 -
 doc/source/bugs.rst                                |   30 -
 doc/source/conf.py                                 |  441 ---
 doc/source/configuration/index.rst                 |   25 -
 doc/source/contactus.rst                           |   53 -
 doc/source/cql/appendices.rst                      |  333 --
 doc/source/cql/changes.rst                         |  204 -
 doc/source/cql/ddl.rst                             |  649 ----
 doc/source/cql/definitions.rst                     |  232 --
 doc/source/cql/dml.rst                             |  522 ---
 doc/source/cql/functions.rst                       |  558 ---
 doc/source/cql/index.rst                           |   47 -
 doc/source/cql/indexes.rst                         |   83 -
 doc/source/cql/json.rst                            |  115 -
 doc/source/cql/mvs.rst                             |  166 -
 doc/source/cql/security.rst                        |  502 ---
 doc/source/cql/triggers.rst                        |   63 -
 doc/source/cql/types.rst                           |  559 ---
 doc/source/data_modeling/index.rst                 |   20 -
 doc/source/development/code_style.rst              |   94 -
 doc/source/development/how_to_commit.rst           |  151 -
 doc/source/development/how_to_review.rst           |   71 -
 doc/source/development/ide.rst                     |  161 -
 doc/source/development/index.rst                   |   29 -
 doc/source/development/license_compliance.rst      |   37 -
 doc/source/development/patches.rst                 |  125 -
 doc/source/development/testing.rst                 |  170 -
 doc/source/faq/index.rst                           |  298 --
 doc/source/getting_started/configuring.rst         |   67 -
 doc/source/getting_started/drivers.rst             |  107 -
 doc/source/getting_started/index.rst               |   33 -
 doc/source/getting_started/installing.rst          |  106 -
 doc/source/getting_started/querying.rst            |   52 -
 doc/source/index.rst                               |   41 -
 doc/source/operating/backups.rst                   |   22 -
 doc/source/operating/bloom_filters.rst             |   65 -
 doc/source/operating/bulk_loading.rst              |   24 -
 doc/source/operating/cdc.rst                       |   89 -
 doc/source/operating/compaction.rst                |  443 ---
 doc/source/operating/compression.rst               |   94 -
 doc/source/operating/error_codes.txt               |   31 -
 doc/source/operating/hardware.rst                  |   87 -
 doc/source/operating/hints.rst                     |   22 -
 doc/source/operating/index.rst                     |   39 -
 doc/source/operating/metrics.rst                   |  710 ----
 doc/source/operating/read_repair.rst               |   22 -
 doc/source/operating/repair.rst                    |   22 -
 doc/source/operating/security.rst                  |  410 --
 doc/source/operating/snitch.rst                    |   78 -
 doc/source/operating/topo_changes.rst              |  124 -
 doc/source/tools/cqlsh.rst                         |  455 ---
 doc/source/tools/index.rst                         |   26 -
 doc/source/tools/nodetool.rst                      |   22 -
 doc/source/troubleshooting/index.rst               |   20 -
 510 files changed, 23216 insertions(+), 11347 deletions(-)

diff --git a/.build/build-rat.xml b/.build/build-rat.xml
index d8268e4..da9c13d 100644
--- a/.build/build-rat.xml
+++ b/.build/build-rat.xml
@@ -53,6 +53,7 @@
                  <exclude name="**/cassandra.yaml"/>
                  <exclude name="**/cassandra-murmur.yaml"/>
                  <exclude name="**/cassandra-seeds.yaml"/>
+                 <exclude NAME="**/doc/antora.yml"/>
                  <exclude name="**/test/conf/cassandra.yaml"/>
                  <exclude name="**/test/conf/cassandra_encryption.yaml"/>
                  <exclude name="**/test/conf/cdc.yaml"/>
@@ -67,6 +68,8 @@
                  <exclude name="**/tools/cqlstress-example.yaml"/>
                  <exclude name="**/tools/cqlstress-insanity-example.yaml"/>
                  <exclude name="**/tools/cqlstress-lwt-example.yaml"/>
+                 <!-- Documentation files -->
+                 <exclude NAME="**/doc/modules/**/*"/>
                  <!-- NOTICE files -->
                  <exclude NAME="**/NOTICE.md"/>
                  <!-- LICENSE files -->
diff --git a/.gitignore b/.gitignore
index 584ace1..9d9d4dc 100644
--- a/.gitignore
+++ b/.gitignore
@@ -68,7 +68,9 @@ Thumbs.db
 .ant_targets
 
 # Generated files from the documentation
-doc/source/configuration/cassandra_config_file.rst
+doc/modules/cassandra/pages/configuration/cass_yaml_file.adoc
+doc/modules/cassandra/pages/tools/nodetool/
+doc/modules/cassandra/examples/TEXT/NODETOOL/
 
 # Python virtual environment
 venv/
diff --git a/build.xml b/build.xml
index 23fe5b0..9b4b086 100644
--- a/build.xml
+++ b/build.xml
@@ -254,13 +254,14 @@
         </wikitext-to-html>
     </target>
 
-    <target name="gen-doc" description="Generate documentation" depends="jar" unless="ant.gen-doc.skip">
+    <target name="gen-asciidoc" description="Generate dynamic asciidoc pages" depends="jar" unless="ant.gen-doc.skip">
         <exec executable="make" osfamily="unix" dir="${doc.dir}">
-            <arg value="html"/>
+            <arg value="gen-asciidoc"/>
         </exec>
-        <exec executable="cmd" osfamily="dos" dir="${doc.dir}">
-            <arg value="/c"/>
-            <arg value="make.bat"/>
+    </target>
+
+    <target name="gen-doc" description="Generate documentation" depends="gen-asciidoc,generate-cql-html" unless="ant.gen-doc.skip">
+        <exec executable="make" osfamily="unix" dir="${doc.dir}">
             <arg value="html"/>
         </exec>
     </target>
diff --git a/doc/Makefile b/doc/Makefile
index c6632a5..43acc1e 100644
--- a/doc/Makefile
+++ b/doc/Makefile
@@ -1,268 +1,26 @@
-# Makefile for Sphinx documentation
-#
-
-# You can set these variables from the command line.
-SPHINXOPTS    =
-SPHINXBUILD   = sphinx-build
-PAPER         =
-BUILDDIR      = build
-
-# Internal variables.
-PAPEROPT_a4     = -D latex_paper_size=a4
-PAPEROPT_letter = -D latex_paper_size=letter
-ALLSPHINXOPTS   = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
-# the i18n builder cannot share the environment and doctrees with the others
-I18NSPHINXOPTS  = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
-
-YAML_DOC_INPUT=../conf/cassandra.yaml
-YAML_DOC_OUTPUT=source/configuration/cassandra_config_file.rst
-
-MAKE_CASSANDRA_YAML = python convert_yaml_to_rst.py $(YAML_DOC_INPUT) $(YAML_DOC_OUTPUT)
-
-WEB_SITE_PRESENCE_FILE='source/.build_for_website'
-
-.PHONY: help
-help:
-	@echo "Please use \`make <target>' where <target> is one of"
-	@echo "  html       to make standalone HTML files"
-	@echo "  website    to make HTML files for the Cassandra website"
-	@echo "  dirhtml    to make HTML files named index.html in directories"
-	@echo "  singlehtml to make a single large HTML file"
-	@echo "  pickle     to make pickle files"
-	@echo "  json       to make JSON files"
-	@echo "  htmlhelp   to make HTML files and a HTML help project"
-	@echo "  qthelp     to make HTML files and a qthelp project"
-	@echo "  applehelp  to make an Apple Help Book"
-	@echo "  devhelp    to make HTML files and a Devhelp project"
-	@echo "  epub       to make an epub"
-	@echo "  epub3      to make an epub3"
-	@echo "  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
-	@echo "  latexpdf   to make LaTeX files and run them through pdflatex"
-	@echo "  latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
-	@echo "  text       to make text files"
-	@echo "  man        to make manual pages"
-	@echo "  texinfo    to make Texinfo files"
-	@echo "  info       to make Texinfo files and run them through makeinfo"
-	@echo "  gettext    to make PO message catalogs"
-	@echo "  changes    to make an overview of all changed/added/deprecated items"
-	@echo "  xml        to make Docutils-native XML files"
-	@echo "  pseudoxml  to make pseudoxml-XML files for display purposes"
-	@echo "  linkcheck  to check all external links for integrity"
-	@echo "  doctest    to run all doctests embedded in the documentation (if enabled)"
-	@echo "  coverage   to run coverage check of the documentation (if enabled)"
-	@echo "  dummy      to check syntax errors of document sources"
-
-.PHONY: clean
-clean:
-	rm -rf $(BUILDDIR)/*
-	rm -f $(YAML_DOC_OUTPUT)
+# Licensed to the Apache Software Foundation (ASF) under one or more
+#  contributor license agreements.  See the NOTICE file distributed with
+#  this work for additional information regarding copyright ownership.
+#  The ASF licenses this file to You under the Apache License, Version 2.0
+#  (the "License"); you may not use this file except in compliance with
+#  the License.  You may obtain a copy of the License at
+#      http://www.apache.org/licenses/LICENSE-2.0
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+#  limitations under the License.
+
+GENERATE_NODETOOL_DOCS = ./scripts/gen-nodetool-docs.py
+MAKE_CASSANDRA_YAML = ./scripts/convert_yaml_to_adoc.py ../conf/cassandra.yaml ./modules/cassandra/pages/configuration/cass_yaml_file.adoc
 
 .PHONY: html
 html:
-	$(MAKE_CASSANDRA_YAML)
-	$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
-	@echo
-	@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
-
-.PHONY: website
-website: clean
-	@touch $(WEB_SITE_PRESENCE_FILE)
-	$(MAKE_CASSANDRA_YAML)
-	$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
-	@rm $(WEB_SITE_PRESENCE_FILE)
-	@echo
-	@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
-
-.PHONY: dirhtml
-dirhtml:
-	$(MAKE_CASSANDRA_YAML)
-	$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
-	@echo
-	@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
-
-.PHONY: singlehtml
-singlehtml:
-	$(MAKE_CASSANDRA_YAML)
-	$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
-	@echo
-	@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
-
-.PHONY: pickle
-pickle:
-	$(MAKE_CASSANDRA_YAML)
-	$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
-	@echo
-	@echo "Build finished; now you can process the pickle files."
-
-.PHONY: json
-json:
-	$(MAKE_CASSANDRA_YAML)
-	$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
-	@echo
-	@echo "Build finished; now you can process the JSON files."
-
-.PHONY: htmlhelp
-htmlhelp:
-	$(MAKE_CASSANDRA_YAML)
-	$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
-	@echo
-	@echo "Build finished; now you can run HTML Help Workshop with the" \
-	      ".hhp project file in $(BUILDDIR)/htmlhelp."
-
-.PHONY: qthelp
-qthelp:
-	$(MAKE_CASSANDRA_YAML)
-	$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
-	@echo
-	@echo "Build finished; now you can run "qcollectiongenerator" with the" \
-	      ".qhcp project file in $(BUILDDIR)/qthelp, like this:"
-	@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/ApacheCassandraDocumentation.qhcp"
-	@echo "To view the help file:"
-	@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/ApacheCassandraDocumentation.qhc"
-
-.PHONY: applehelp
-applehelp:
-	$(MAKE_CASSANDRA_YAML)
-	$(SPHINXBUILD) -b applehelp $(ALLSPHINXOPTS) $(BUILDDIR)/applehelp
-	@echo
-	@echo "Build finished. The help book is in $(BUILDDIR)/applehelp."
-	@echo "N.B. You won't be able to view it unless you put it in" \
-	      "~/Library/Documentation/Help or install it in your application" \
-	      "bundle."
-
-.PHONY: devhelp
-devhelp:
-	$(MAKE_CASSANDRA_YAML)
-	$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
-	@echo
-	@echo "Build finished."
-	@echo "To view the help file:"
-	@echo "# mkdir -p $$HOME/.local/share/devhelp/ApacheCassandraDocumentation"
-	@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/ApacheCassandraDocumentation"
-	@echo "# devhelp"
-
-.PHONY: epub
-epub:
-	$(MAKE_CASSANDRA_YAML)
-	$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
-	@echo
-	@echo "Build finished. The epub file is in $(BUILDDIR)/epub."
-
-.PHONY: epub3
-epub3:
-	$(MAKE_CASSANDRA_YAML)
-	$(SPHINXBUILD) -b epub3 $(ALLSPHINXOPTS) $(BUILDDIR)/epub3
-	@echo
-	@echo "Build finished. The epub3 file is in $(BUILDDIR)/epub3."
-
-.PHONY: latex
-latex:
-	$(MAKE_CASSANDRA_YAML)
-	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
-	@echo
-	@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
-	@echo "Run \`make' in that directory to run these through (pdf)latex" \
-	      "(use \`make latexpdf' here to do that automatically)."
-
-.PHONY: latexpdf
-latexpdf:
-	$(MAKE_CASSANDRA_YAML)
-	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
-	@echo "Running LaTeX files through pdflatex..."
-	$(MAKE) -C $(BUILDDIR)/latex all-pdf
-	@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
-
-.PHONY: latexpdfja
-latexpdfja:
-	$(MAKE_CASSANDRA_YAML)
-	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
-	@echo "Running LaTeX files through platex and dvipdfmx..."
-	$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
-	@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
-
-.PHONY: text
-text:
-	$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
-	@echo
-	@echo "Build finished. The text files are in $(BUILDDIR)/text."
-
-.PHONY: man
-man:
-	$(MAKE_CASSANDRA_YAML)
-	$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
-	@echo
-	@echo "Build finished. The manual pages are in $(BUILDDIR)/man."
-
-.PHONY: texinfo
-texinfo:
-	$(MAKE_CASSANDRA_YAML)
-	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
-	@echo
-	@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
-	@echo "Run \`make' in that directory to run these through makeinfo" \
-	      "(use \`make info' here to do that automatically)."
-
-.PHONY: info
-info:
-	$(MAKE_CASSANDRA_YAML)
-	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
-	@echo "Running Texinfo files through makeinfo..."
-	make -C $(BUILDDIR)/texinfo info
-	@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
-
-.PHONY: gettext
-gettext:
-	$(MAKE_CASSANDRA_YAML)
-	$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
-	@echo
-	@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
-
-.PHONY: changes
-changes:
-	$(MAKE_CASSANDRA_YAML)
-	$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
-	@echo
-	@echo "The overview file is in $(BUILDDIR)/changes."
-
-.PHONY: linkcheck
-linkcheck:
-	$(MAKE_CASSANDRA_YAML)
-	$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
-	@echo
-	@echo "Link check complete; look for any errors in the above output " \
-	      "or in $(BUILDDIR)/linkcheck/output.txt."
-
-.PHONY: doctest
-doctest:
-	$(MAKE_CASSANDRA_YAML)
-	$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
-	@echo "Testing of doctests in the sources finished, look at the " \
-	      "results in $(BUILDDIR)/doctest/output.txt."
-
-.PHONY: coverage
-coverage:
-	$(MAKE_CASSANDRA_YAML)
-	$(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) $(BUILDDIR)/coverage
-	@echo "Testing of coverage in the sources finished, look at the " \
-	      "results in $(BUILDDIR)/coverage/python.txt."
-
-.PHONY: xml
-xml:
-	$(MAKE_CASSANDRA_YAML)
-	$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
-	@echo
-	@echo "Build finished. The XML files are in $(BUILDDIR)/xml."
-
-.PHONY: pseudoxml
-pseudoxml:
-	$(MAKE_CASSANDRA_YAML)
-	$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
-	@echo
-	@echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."
-
-.PHONY: dummy
-dummy:
-	$(MAKE_CASSANDRA_YAML)
-	$(SPHINXBUILD) -b dummy $(ALLSPHINXOPTS) $(BUILDDIR)/dummy
-	@echo
-	@echo "Build finished. Dummy builder generates no files."
+	@# hack until a local basic antora build is put in
+
+.PHONY: gen-asciidoc
+gen-asciidoc:
+	@mkdir -p modules/cassandra/pages/tools/nodetool
+	@mkdir -p modules/cassandra/examples/TEXT/NODETOOL
+	python3 $(GENERATE_NODETOOL_DOCS)
+	python3 $(MAKE_CASSANDRA_YAML)
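The new Makefile targets above drive two Python generator scripts. As a purely illustrative sketch of the kind of transformation `scripts/convert_yaml_to_adoc.py` performs (this is hypothetical code, not the actual script), a converter can lift the `#` comment block above each top-level `cassandra.yaml` key into an AsciiDoc section:

```python
# Hypothetical sketch of a cassandra.yaml -> AsciiDoc converter.
# Not the actual scripts/convert_yaml_to_adoc.py; shown only to illustrate
# turning comment blocks above top-level keys into documentation sections.
def yaml_to_adoc(yaml_text):
    """Render '# comment' blocks above top-level keys as AsciiDoc sections."""
    out = ["= cassandra.yaml configuration", ""]
    comment = []
    for line in yaml_text.splitlines():
        stripped = line.strip()
        if stripped.startswith("#"):
            # Accumulate the description for the next key.
            comment.append(stripped.lstrip("#").strip())
        elif ":" in stripped and not line.startswith(" "):
            # A top-level key: emit a section, its default, and its description.
            key, _, value = stripped.partition(":")
            out.append(f"== `{key.strip()}`")
            out.append("")
            if value.strip():
                out.append(f"_Default value:_ `{value.strip()}`")
                out.append("")
            out.extend(comment)
            out.append("")
            comment = []
        else:
            # Blank or nested line: the pending comment no longer applies.
            comment = []
    return "\n".join(out)
```

The real script handles far more (nested settings, commented-out defaults, formatting rules); the sketch only shows the comment-to-section idea.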
diff --git a/doc/README.md b/doc/README.md
index 931db7d..608d236 100644
--- a/doc/README.md
+++ b/doc/README.md
@@ -23,29 +23,39 @@ Apache Cassandra documentation directory
 
 This directory contains the documentation maintained in-tree for Apache
 Cassandra. This directory contains the following documents:
-- The source of the official Cassandra documentation, in the `source/`
+- The source of the official Cassandra documentation, in the `modules/`
   subdirectory. See below for more details on how to edit/build that
   documentation.
 - The specification(s) for the supported versions of native transport protocol.
-- Additional documentation on the SASI implementation (`SASI.md`). TODO: we
-  should probably move the first half of that documentation to the general
-  documentation, and the implementation explanation parts into the wiki.
 
 
 Official documentation
 ----------------------
 
 The source for the official documentation for Apache Cassandra can be found in
-the `source` subdirectory. The documentation uses [sphinx](http://www.sphinx-doc.org/)
-and is thus written in [reStructuredText](http://docutils.sourceforge.net/rst.html).
+the `modules/cassandra/pages` subdirectory. The documentation uses [Antora](http://www.antora.org/)
+and is thus written in [AsciiDoc](http://asciidoc.org).
 
-To build the HTML documentation, you will need to first install sphinx and the
-[sphinx ReadTheDocs theme](the https://pypi.python.org/pypi/sphinx_rtd_theme), which
-on unix you can do with:
+To generate the asciidoc files for cassandra.yaml and the nodetool commands, run (from project root):
+```bash
+ant gen-asciidoc
 ```
-pip install sphinx sphinx_rtd_theme
+or (from this directory):
+
+```bash
+make gen-asciidoc
+```
+
+
+(The following has not yet been implemented; for now, see the build instructions in the [cassandra-website](https://github.com/apache/cassandra-website) repo.)
+To build the documentation, run (from project root):
+
+```bash
+ant gen-doc
+```
+or (from this directory):
+
+```bash
+make html
 ```
 
-The documentation can then be built from this directory by calling `make html`
-(or `make.bat html` on windows). Alternatively, the top-level `ant gen-doc`
-target can be used.
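The `build-rat.xml` exclusion earlier in this commit references `doc/antora.yml`, the Antora component descriptor that names and versions this documentation component. A minimal descriptor looks roughly like the following (field values are illustrative, not copied from the actual file):

```yaml
# Hypothetical minimal Antora component descriptor (doc/antora.yml).
# Values are illustrative only.
name: cassandra              # component name used in page URLs and xrefs
title: Apache Cassandra      # display name in the site navigation
version: '3.11'              # the docs version this branch publishes
nav:
- modules/cassandra/nav.adoc # navigation file listing the pages
```

It is this per-branch `version` key that lets Antora build versioned documentation from multiple branches at once, which is the motivation stated in the commit message.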
diff --git a/doc/SASI.md b/doc/SASI.md
deleted file mode 100644
index a2fa717..0000000
--- a/doc/SASI.md
+++ /dev/null
@@ -1,798 +0,0 @@
-<!--
-#
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
--->
-
-# SASIIndex
-
-[`SASIIndex`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/SASIIndex.java),
-or "SASI" for short, is an implementation of Cassandra's
-`Index` interface that can be used as an alternative to the
-existing implementations. SASI's indexing and querying improves on
-existing implementations by tailoring it specifically to Cassandra's
-needs. SASI has superior performance in cases where queries would
-previously require filtering. In achieving this performance, SASI aims
-to be significantly less resource intensive than existing
-implementations, in memory, disk, and CPU usage. In addition, SASI
-supports prefix and contains queries on strings (similar to SQL's
-`LIKE = "foo*"` or `LIKE = "*foo*"`).
-
-The following describes how to get up and running with SASI,
-demonstrates usage with examples, and provides some details on its
-implementation.
-
-## Using SASI
-
-The examples below walk through creating a table and indexes on its
-columns, and performing queries on some inserted data. The patchset in
-this repository includes support for the Thrift and CQL3 interfaces.
-
-The examples below assume the `demo` keyspace has been created and is
-in use.
-
-```
-cqlsh> CREATE KEYSPACE demo WITH replication = {
-   ... 'class': 'SimpleStrategy',
-   ... 'replication_factor': '1'
-   ... };
-cqlsh> USE demo;
-```
-
-All examples are performed on the `sasi` table:
-
-```
-cqlsh:demo> CREATE TABLE sasi (id uuid, first_name text, last_name text,
-        ... age int, height int, created_at bigint, primary key (id));
-```
-
-#### Creating Indexes
-
-To create SASI indexes, use CQL's `CREATE CUSTOM INDEX` statement:
-
-```
-cqlsh:demo> CREATE CUSTOM INDEX ON sasi (first_name) USING 'org.apache.cassandra.index.sasi.SASIIndex'
-        ... WITH OPTIONS = {
-        ... 'analyzer_class':
-        ...   'org.apache.cassandra.index.sasi.analyzer.NonTokenizingAnalyzer',
-        ... 'case_sensitive': 'false'
-        ... };
-
-cqlsh:demo> CREATE CUSTOM INDEX ON sasi (last_name) USING 'org.apache.cassandra.index.sasi.SASIIndex'
-        ... WITH OPTIONS = {'mode': 'CONTAINS'};
-
-cqlsh:demo> CREATE CUSTOM INDEX ON sasi (age) USING 'org.apache.cassandra.index.sasi.SASIIndex';
-
-cqlsh:demo> CREATE CUSTOM INDEX ON sasi (created_at) USING 'org.apache.cassandra.index.sasi.SASIIndex'
-        ...  WITH OPTIONS = {'mode': 'SPARSE'};
-```
-
-The indexes created have some options specified that customize their
-behaviour and potentially performance. The index on `first_name` is
-case-insensitive. The analyzers are discussed more in a subsequent
-example. The `NonTokenizingAnalyzer` performs no analysis on the
-text. Each index has a mode: `PREFIX`, `CONTAINS`, or `SPARSE`, the
-first being the default. The `last_name` index is created with the
-mode `CONTAINS` which matches terms on suffixes instead of prefix
-only. Examples of this are available below and more detail can be
-found in the section on
-[OnDiskIndex](#ondiskindexbuilder). The
-`created_at` column is created with its mode set to `SPARSE`, which is
-meant to improve performance of querying large, dense number ranges
-like timestamps for data inserted every millisecond. Details of the
-`SPARSE` implementation can also be found in the section on the
-[OnDiskIndex](#ondiskindexbuilder). The `age`
-index is created with the default `PREFIX` mode and no
-case-sensitivity or text analysis options are specified since the
-field is numeric.
-
-After inserting the following data and performing a `nodetool flush`,
-SASI's index flushes to disk can be seen in Cassandra's logs
--- although the direct call to flush is not required (see
-[IndexMemtable](#indexmemtable) for more details).
-
-```
-cqlsh:demo> INSERT INTO sasi (id, first_name, last_name, age, height, created_at)
-        ... VALUES (556ebd54-cbe5-4b75-9aae-bf2a31a24500, 'Pavel', 'Yaskevich', 27, 181, 1442959315018);
-
-cqlsh:demo> INSERT INTO sasi (id, first_name, last_name, age, height, created_at)
-        ... VALUES (5770382a-c56f-4f3f-b755-450e24d55217, 'Jordan', 'West', 26, 173, 1442959315019);
-
-cqlsh:demo> INSERT INTO sasi (id, first_name, last_name, age, height, created_at)
-        ... VALUES (96053844-45c3-4f15-b1b7-b02c441d3ee1, 'Mikhail', 'Stepura', 36, 173, 1442959315020);
-
-cqlsh:demo> INSERT INTO sasi (id, first_name, last_name, age, height, created_at)
-        ... VALUES (f5dfcabe-de96-4148-9b80-a1c41ed276b4, 'Michael', 'Kjellman', 26, 180, 1442959315021);
-
-cqlsh:demo> INSERT INTO sasi (id, first_name, last_name, age, height, created_at)
-        ... VALUES (2970da43-e070-41a8-8bcb-35df7a0e608a, 'Johnny', 'Zhang', 32, 175, 1442959315022);
-
-cqlsh:demo> INSERT INTO sasi (id, first_name, last_name, age, height, created_at)
-        ... VALUES (6b757016-631d-4fdb-ac62-40b127ccfbc7, 'Jason', 'Brown', 40, 182, 1442959315023);
-
-cqlsh:demo> INSERT INTO sasi (id, first_name, last_name, age, height, created_at)
-        ... VALUES (8f909e8a-008e-49dd-8d43-1b0df348ed44, 'Vijay', 'Parthasarathy', 34, 183, 1442959315024);
-
-cqlsh:demo> SELECT first_name, last_name, age, height, created_at FROM sasi;
-
- first_name | last_name     | age | height | created_at
-------------+---------------+-----+--------+---------------
-    Michael |      Kjellman |  26 |    180 | 1442959315021
-    Mikhail |       Stepura |  36 |    173 | 1442959315020
-      Jason |         Brown |  40 |    182 | 1442959315023
-      Pavel |     Yaskevich |  27 |    181 | 1442959315018
-      Vijay | Parthasarathy |  34 |    183 | 1442959315024
-     Jordan |          West |  26 |    173 | 1442959315019
-     Johnny |         Zhang |  32 |    175 | 1442959315022
-
-(7 rows)
-```
-
-#### Equality & Prefix Queries
-
-SASI supports all queries already supported by CQL, including LIKE statement
-for PREFIX, CONTAINS and SUFFIX searches.
-
-```
-cqlsh:demo> SELECT first_name, last_name, age, height, created_at FROM sasi
-        ... WHERE first_name = 'Pavel';
-
-  first_name | last_name | age | height | created_at
--------------+-----------+-----+--------+---------------
-       Pavel | Yaskevich |  27 |    181 | 1442959315018
-
-(1 rows)
-```
-
-```
-cqlsh:demo> SELECT first_name, last_name, age, height, created_at FROM sasi
-       ... WHERE first_name = 'pavel';
-
-  first_name | last_name | age | height | created_at
--------------+-----------+-----+--------+---------------
-       Pavel | Yaskevich |  27 |    181 | 1442959315018
-
-(1 rows)
-```
-
-```
-cqlsh:demo> SELECT first_name, last_name, age, height, created_at FROM sasi
-        ... WHERE first_name LIKE 'M%';
-
- first_name | last_name | age | height | created_at
-------------+-----------+-----+--------+---------------
-    Michael |  Kjellman |  26 |    180 | 1442959315021
-    Mikhail |   Stepura |  36 |    173 | 1442959315020
-
-(2 rows)
-```
-
-Of course, the case of the query does not matter for the `first_name`
-column because of the options provided at index creation time.
-
-```
-cqlsh:demo> SELECT first_name, last_name, age, height, created_at FROM sasi
-        ... WHERE first_name LIKE 'm%';
-
- first_name | last_name | age | height | created_at
-------------+-----------+-----+--------+---------------
-    Michael |  Kjellman |  26 |    180 | 1442959315021
-    Mikhail |   Stepura |  36 |    173 | 1442959315020
-
-(2 rows)
-```
-
-#### Compound Queries
-
-SASI supports queries with multiple predicates, however, due to the
-nature of the default indexing implementation, CQL requires the user
-to specify `ALLOW FILTERING` to opt-in to the potential performance
-pitfalls of such a query. With SASI, while the requirement to include
-`ALLOW FILTERING` remains, to reduce modifications to the grammar, the
-performance pitfalls do not exist because filtering is not
-performed. Details on how SASI joins data from multiple predicates are
-available below in the
-[Implementation Details](#implementation-details)
-section.
-
-```
-cqlsh:demo> SELECT first_name, last_name, age, height, created_at FROM sasi
-        ... WHERE first_name LIKE 'M%' and age < 30 ALLOW FILTERING;
-
- first_name | last_name | age | height | created_at
-------------+-----------+-----+--------+---------------
-    Michael |  Kjellman |  26 |    180 | 1442959315021
-
-(1 rows)
-```
-
-#### Suffix Queries
-
-The next example demonstrates `CONTAINS` mode on the `last_name`
-column. By using this mode predicates can search for any strings
-containing the search string as a sub-string. In this case the strings
-containing "a" or "an".
-
-```
-cqlsh:demo> SELECT * FROM sasi WHERE last_name LIKE '%a%';
-
- id                                   | age | created_at    | first_name | height | last_name
---------------------------------------+-----+---------------+------------+--------+---------------
- f5dfcabe-de96-4148-9b80-a1c41ed276b4 |  26 | 1442959315021 |    Michael |    180 |      Kjellman
- 96053844-45c3-4f15-b1b7-b02c441d3ee1 |  36 | 1442959315020 |    Mikhail |    173 |       Stepura
- 556ebd54-cbe5-4b75-9aae-bf2a31a24500 |  27 | 1442959315018 |      Pavel |    181 |     Yaskevich
- 8f909e8a-008e-49dd-8d43-1b0df348ed44 |  34 | 1442959315024 |      Vijay |    183 | Parthasarathy
- 2970da43-e070-41a8-8bcb-35df7a0e608a |  32 | 1442959315022 |     Johnny |    175 |         Zhang
-
-(5 rows)
-
-cqlsh:demo> SELECT * FROM sasi WHERE last_name LIKE '%an%';
-
- id                                   | age | created_at    | first_name | height | last_name
---------------------------------------+-----+---------------+------------+--------+-----------
- f5dfcabe-de96-4148-9b80-a1c41ed276b4 |  26 | 1442959315021 |    Michael |    180 |  Kjellman
- 2970da43-e070-41a8-8bcb-35df7a0e608a |  32 | 1442959315022 |     Johnny |    175 |     Zhang
-
-(2 rows)
-```
-
-#### Expressions on Non-Indexed Columns
-
-SASI also supports filtering on non-indexed columns like `height`. The
-expression can only narrow down an existing query using `AND`.
-
-```
-cqlsh:demo> SELECT * FROM sasi WHERE last_name LIKE '%a%' AND height >= 175 ALLOW FILTERING;
-
- id                                   | age | created_at    | first_name | height | last_name
---------------------------------------+-----+---------------+------------+--------+---------------
- f5dfcabe-de96-4148-9b80-a1c41ed276b4 |  26 | 1442959315021 |    Michael |    180 |      Kjellman
- 556ebd54-cbe5-4b75-9aae-bf2a31a24500 |  27 | 1442959315018 |      Pavel |    181 |     Yaskevich
- 8f909e8a-008e-49dd-8d43-1b0df348ed44 |  34 | 1442959315024 |      Vijay |    183 | Parthasarathy
- 2970da43-e070-41a8-8bcb-35df7a0e608a |  32 | 1442959315022 |     Johnny |    175 |         Zhang
-
-(4 rows)
-```
-
-#### Text Analysis (Tokenization and Stemming)
-
-Lastly, to demonstrate text analysis an additional column is needed on
-the table. Its definition, index, and statements to update rows are shown below.
-
-```
-cqlsh:demo> ALTER TABLE sasi ADD bio text;
-cqlsh:demo> CREATE CUSTOM INDEX ON sasi (bio) USING 'org.apache.cassandra.index.sasi.SASIIndex'
-        ... WITH OPTIONS = {
-        ... 'analyzer_class': 'org.apache.cassandra.index.sasi.analyzer.StandardAnalyzer',
-        ... 'tokenization_enable_stemming': 'true',
-        ... 'analyzed': 'true',
-        ... 'tokenization_normalize_lowercase': 'true',
-        ... 'tokenization_locale': 'en'
-        ... };
-cqlsh:demo> UPDATE sasi SET bio = 'Software Engineer, who likes distributed systems, doesnt like to argue.' WHERE id = 5770382a-c56f-4f3f-b755-450e24d55217;
-cqlsh:demo> UPDATE sasi SET bio = 'Software Engineer, works on the freight distribution at nights and likes arguing' WHERE id = 556ebd54-cbe5-4b75-9aae-bf2a31a24500;
-cqlsh:demo> SELECT * FROM sasi;
-
- id                                   | age | bio                                                                              | created_at    | first_name | height | last_name
---------------------------------------+-----+----------------------------------------------------------------------------------+---------------+------------+--------+---------------
- f5dfcabe-de96-4148-9b80-a1c41ed276b4 |  26 |                                                                             null | 1442959315021 |    Michael |    180 |      Kjellman
- 96053844-45c3-4f15-b1b7-b02c441d3ee1 |  36 |                                                                             null | 1442959315020 |    Mikhail |    173 |       Stepura
- 6b757016-631d-4fdb-ac62-40b127ccfbc7 |  40 |                                                                             null | 1442959315023 |      Jason |    182 |         Brown
- 556ebd54-cbe5-4b75-9aae-bf2a31a24500 |  27 | Software Engineer, works on the freight distribution at nights and likes arguing | 1442959315018 |      Pavel |    181 |     Yaskevich
- 8f909e8a-008e-49dd-8d43-1b0df348ed44 |  34 |                                                                             null | 1442959315024 |      Vijay |    183 | Parthasarathy
- 5770382a-c56f-4f3f-b755-450e24d55217 |  26 |          Software Engineer, who likes distributed systems, doesnt like to argue. | 1442959315019 |     Jordan |    173 |          West
- 2970da43-e070-41a8-8bcb-35df7a0e608a |  32 |                                                                             null | 1442959315022 |     Johnny |    175 |         Zhang
-
-(7 rows)
-```
-
-Index terms and query search strings are stemmed for the `bio` column
-because it was configured to use the
-[`StandardAnalyzer`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/analyzer/StandardAnalyzer.java)
-and `analyzed` is set to `true`. The
-`tokenization_normalize_lowercase` is similar to the `case_sensitive`
-property but for the
-[`StandardAnalyzer`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/analyzer/StandardAnalyzer.java). The following
-queries demonstrate the stemming applied by [`StandardAnalyzer`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/analyzer/StandardAnalyzer.java).
-
-```
-cqlsh:demo> SELECT * FROM sasi WHERE bio LIKE 'distributing';
-
- id                                   | age | bio                                                                              | created_at    | first_name | height | last_name
---------------------------------------+-----+----------------------------------------------------------------------------------+---------------+------------+--------+-----------
- 556ebd54-cbe5-4b75-9aae-bf2a31a24500 |  27 | Software Engineer, works on the freight distribution at nights and likes arguing | 1442959315018 |      Pavel |    181 | Yaskevich
- 5770382a-c56f-4f3f-b755-450e24d55217 |  26 |          Software Engineer, who likes distributed systems, doesnt like to argue. | 1442959315019 |     Jordan |    173 |      West
-
-(2 rows)
-
-cqlsh:demo> SELECT * FROM sasi WHERE bio LIKE 'they argued';
-
- id                                   | age | bio                                                                              | created_at    | first_name | height | last_name
---------------------------------------+-----+----------------------------------------------------------------------------------+---------------+------------+--------+-----------
- 556ebd54-cbe5-4b75-9aae-bf2a31a24500 |  27 | Software Engineer, works on the freight distribution at nights and likes arguing | 1442959315018 |      Pavel |    181 | Yaskevich
- 5770382a-c56f-4f3f-b755-450e24d55217 |  26 |          Software Engineer, who likes distributed systems, doesnt like to argue. | 1442959315019 |     Jordan |    173 |      West
-
-(2 rows)
-
-cqlsh:demo> SELECT * FROM sasi WHERE bio LIKE 'working at the company';
-
- id                                   | age | bio                                                                              | created_at    | first_name | height | last_name
---------------------------------------+-----+----------------------------------------------------------------------------------+---------------+------------+--------+-----------
- 556ebd54-cbe5-4b75-9aae-bf2a31a24500 |  27 | Software Engineer, works on the freight distribution at nights and likes arguing | 1442959315018 |      Pavel |    181 | Yaskevich
-
-(1 rows)
-
-cqlsh:demo> SELECT * FROM sasi WHERE bio LIKE 'soft eng';
-
- id                                   | age | bio                                                                              | created_at    | first_name | height | last_name
---------------------------------------+-----+----------------------------------------------------------------------------------+---------------+------------+--------+-----------
- 556ebd54-cbe5-4b75-9aae-bf2a31a24500 |  27 | Software Engineer, works on the freight distribution at nights and likes arguing | 1442959315018 |      Pavel |    181 | Yaskevich
- 5770382a-c56f-4f3f-b755-450e24d55217 |  26 |          Software Engineer, who likes distributed systems, doesnt like to argue. | 1442959315019 |     Jordan |    173 |      West
-
-(2 rows)
-```
-
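The matches above can be reproduced outside Cassandra with a toy analyzer. This is a hedged sketch, not the real `StandardAnalyzer` (which uses a full Porter-style stemmer); `toy_stem` and its suffix list are invented for illustration, but they show why `distributing`, `distribution`, and `distributed` all resolve to a common index term once tokens are lowercased and stemmed:

```python
# Toy analyzer pipeline: tokenize on whitespace, lowercase
# (cf. tokenization_normalize_lowercase), then crudely stem.
# The suffix list here is hypothetical; the real StandardAnalyzer
# applies a proper Porter-style stemming algorithm.
def toy_stem(word: str) -> str:
    for suffix in ("ution", "uting", "uted", "ing", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def analyze(text: str) -> list[str]:
    return [toy_stem(tok.lower()) for tok in text.split()]

# All three variants stem to the same root, so they hit the same index term:
# analyze("Distributing distribution distributed")
#   -> ['distrib', 'distrib', 'distrib']
```

Query terms go through the same pipeline, which is why `LIKE 'distributing'` matches rows whose `bio` contains "distribution" or "distributed".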
-## Implementation Details
-
-While SASI, on the surface, is simply an implementation of the
-`Index` interface, at its core there are several data
-structures and algorithms used to satisfy it. These are described
-here. Additionally, the changes internal to Cassandra to support SASI's
-integration are described.
-
-The `Index` interface divides responsibility of the
-implementer into two parts: Indexing and Querying. Further, Cassandra
-makes it possible to divide those responsibilities into the memory and
-disk components. SASI takes advantage of Cassandra's write-once,
-immutable, ordered data model to build indexes along with the flushing
-of the memtable to disk -- this is the origin of the name "SSTable
-Attached Secondary Index".
-
-The SASI index data structures are built in memory as the SSTable is
-being written and they are flushed to disk before the writing of the
-SSTable completes. The writing of each index file only requires
-sequential writes to disk. In some cases, partial flushes are
-performed, and later stitched back together, to reduce memory
-usage. These data structures are optimized for this use case.
-
-Taking advantage of Cassandra's ordered data model, at query time,
-candidate indexes are narrowed down for searching to minimize the amount
-of work done. Searching is then performed using an efficient method
-that streams data off disk as needed.
-
-### Indexing
-
-Per SSTable, SASI writes an index file for each indexed column. The
-data for these files is built in memory using the
-[`OnDiskIndexBuilder`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/disk/OnDiskIndexBuilder.java). Once
-flushed to disk, the data is read using the
-[`OnDiskIndex`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/disk/OnDiskIndex.java)
-class. These are composed of bytes representing indexed terms,
-organized for efficient writing or searching respectively. The keys
-and values they hold represent tokens and positions in an SSTable and
-these are stored per-indexed term in
-[`TokenTreeBuilder`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/disk/TokenTreeBuilder.java)s
-for writing, and
-[`TokenTree`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/disk/TokenTree.java)s
-for querying. These index files are memory mapped after being written
-to disk, for quicker access. For indexing data in the memtable SASI
-uses its
-[`IndexMemtable`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/memory/IndexMemtable.java)
-class.
-
-#### OnDiskIndex(Builder)
-
-Each
-[`OnDiskIndex`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/disk/OnDiskIndex.java)
-is an instance of a modified
-[Suffix Array](https://en.wikipedia.org/wiki/Suffix_array) data
-structure. The
-[`OnDiskIndex`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/disk/OnDiskIndex.java)
-is comprised of page-size blocks of sorted terms and pointers to the
-terms' associated data, as well as the data itself, stored also in one
-or more page-sized blocks. The
-[`OnDiskIndex`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/disk/OnDiskIndex.java)
-is structured as a tree of arrays, where each level describes the
-terms in the level below, the final level being the terms
-themselves. The `PointerLevel`s and their `PointerBlock`s contain
-terms and pointers to other blocks that *end* with those terms. The
-`DataLevel`, the final level, and its `DataBlock`s contain terms and
-point to the data itself, contained in [`TokenTree`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/disk/TokenTree.java)s.
-
-The terms written to the
-[`OnDiskIndex`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/disk/OnDiskIndex.java)
-vary depending on its "mode": either `PREFIX`, `CONTAINS`, or
-`SPARSE`. In the `PREFIX` and `SPARSE` cases, terms' exact values are
-written exactly once per `OnDiskIndex`. For example, in a `PREFIX` index
-with terms `Jason`, `Jordan`, and `Pavel`, all three will be included in
-the index. A `CONTAINS` index writes additional terms for each suffix of
-each term recursively. Continuing with the example, a `CONTAINS` index
-storing the previous terms would also store `ason`, `ordan`, `avel`,
-`son`, `rdan`, `vel`, etc. This allows for queries on the suffix of
-strings. The `SPARSE` mode differs from `PREFIX` in that for every 64
-blocks of terms a
-[`TokenTree`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/disk/TokenTree.java)
-is built merging all the
-[`TokenTree`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/disk/TokenTree.java)s
-for each term into a single one. This copy of the data is used for
-efficient iteration of large ranges of values, e.g. timestamps. The index
-"mode" is configurable per column at index creation time.
-
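The `CONTAINS` suffix expansion described above can be sketched in a few lines (a hypothetical illustration; the minimum suffix length is an assumption for the sketch, not a documented SASI parameter):

```python
def contains_terms(term: str, min_len: int = 3) -> set[str]:
    """All suffixes of `term` of at least `min_len` characters.

    A CONTAINS-mode index stores every one of these as an index term,
    which is what makes suffix queries (e.g. LIKE '%son') possible.
    """
    return {term[i:] for i in range(len(term) - min_len + 1)}

# contains_terms("Jason") -> {'Jason', 'ason', 'son'}
```

The space cost of this expansion is the trade-off for suffix-query support, which is why `CONTAINS` is a distinct mode rather than the default.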
-#### TokenTree(Builder)
-
-The
-[`TokenTree`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/disk/TokenTree.java)
-is an implementation of the well-known
-[B+-tree](https://en.wikipedia.org/wiki/B%2B_tree) that has been
-modified to optimize for its use-case. In particular, it has been
-optimized to associate tokens (longs) with a set of positions in an
-SSTable (also longs). Allowing a set of long values accommodates
-the possibility of a hash collision in the token, but the data
-structure is optimized for the unlikely possibility of such a
-collision.
-
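A minimal model of the leaf data a `TokenTree` holds might look like the following (a sketch under the assumption that tokens and SSTable offsets are both 64-bit values; the names are invented, none come from the Cassandra source):

```python
from collections import defaultdict

# token (long) -> set of SSTable offsets (longs). A set is used because
# two partition keys can, rarely, hash to the same token; the structure
# is nonetheless optimized for the common case of one offset per token.
token_to_offsets: dict[int, set[int]] = defaultdict(set)

def index_row(token: int, sstable_offset: int) -> None:
    token_to_offsets[token].add(sstable_offset)
```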
-To optimize for its write-once environment the
-[`TokenTreeBuilder`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/disk/TokenTreeBuilder.java)
-completely loads its interior nodes as the tree is built and it uses
-the well-known algorithm optimized for bulk-loading the data
-structure.
-
-[`TokenTree`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/disk/TokenTree.java)s provide the means to iterate over the tokens, and file
-positions, that match a given term, and to skip forward in that
-iteration, an operation used heavily at query time.
-
-#### IndexMemtable
-
-The
-[`IndexMemtable`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/memory/IndexMemtable.java)
-handles indexing the in-memory data held in the memtable. The
-[`IndexMemtable`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/memory/IndexMemtable.java)
-in turn manages either a
-[`TrieMemIndex`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/memory/TrieMemIndex.java)
-or a
-[`SkipListMemIndex`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/memory/SkipListMemIndex.java)
-per-column. The choice of which index type is used is data
-dependent. The
-[`TrieMemIndex`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/memory/TrieMemIndex.java)
-is used for literal types. `AsciiType` and `UTF8Type` are literal
-types by default but any column can be configured as a literal type
-using the `is_literal` option at index creation time. For non-literal
-types the
-[`SkipListMemIndex`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/memory/SkipListMemIndex.java)
-is used. The
-[`TrieMemIndex`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/memory/TrieMemIndex.java)
-is an implementation that can efficiently support prefix queries on
-character-like data. The
-[`SkipListMemIndex`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/memory/SkipListMemIndex.java),
-conversely, is better suited for Cassandra's other data types like
-numbers.
-
-The
-[`TrieMemIndex`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/memory/TrieMemIndex.java)
-is built using either the `ConcurrentRadixTree` or
-`ConcurrentSuffixTree` from the `com.googlecode.concurrenttrees`
-package. The choice between the two is based on the indexing mode: the
-`ConcurrentRadixTree` is used for `PREFIX` (and other) modes, and the
-`ConcurrentSuffixTree` for `CONTAINS` mode.
-
-The
-[`SkipListMemIndex`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/memory/SkipListMemIndex.java)
-is built on top of `java.util.concurrent.ConcurrentSkipListSet`.
-
-### Querying
-
-Responsible for converting the internal `IndexExpression`
-representation into SASI's
-[`Operation`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/Operation.java)
-and
-[`Expression`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/Expression.java)
-tree, optimizing the tree to reduce the amount of work done, and
-driving the query itself, the
-[`QueryPlan`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/QueryPlan.java)
-is the workhorse of SASI's querying implementation. To efficiently
-perform union and intersection operations SASI provides several
-iterators similar to Cassandra's `MergeIterator` but tailored
-specifically for SASI's use, and with more features. The
-[`RangeUnionIterator`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/utils/RangeUnionIterator.java),
-like its name suggests, performs set union over sets of tokens/keys
-matching the query, only reading as much data as it needs from each
-set to satisfy the query. The
-[`RangeIntersectionIterator`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/utils/RangeIntersectionIterator.java),
-similar to its counterpart, performs set intersection over its data.
-
-#### QueryPlan
-
-The
-[`QueryPlan`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/QueryPlan.java)
-instantiated per search query is at the core of SASI's querying
-implementation. Its work can be divided in two stages: analysis and
-execution.
-
-During the analysis phase,
-[`QueryPlan`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/QueryPlan.java)
-converts Cassandra's internal representation of
-`IndexExpression`s, which has also been modified to support encoding
-queries that contain ORs and groupings of expressions using
-parentheses (see the
-[Cassandra Internal Changes](#cassandra-internal-changes)
-section below for more details). This process produces a tree of
-[`Operation`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/Operation.java)s, which in turn may contain [`Expression`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/Expression.java)s, all of which
-provide an alternative, more efficient, representation of the query.
-
-During execution the
-[`QueryPlan`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/QueryPlan.java)
-uses the `DecoratedKey`-generating iterator created from the
-[`Operation`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/Operation.java) tree. These keys are read from disk and a final check to
-ensure they satisfy the query is made, once again using the
-[`Operation`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/Operation.java) tree. Once the desired amount of matching data has
-been found, or there is no more matching data, the result set is
-returned to the coordinator through the existing internal components.
-
-The number of queries (total/failed/timed-out), and their latencies,
-are maintained per-table/column family.
-
-SASI also supports concurrently iterating terms for the same index
-across SSTables. The concurrency factor is controlled by the
-`cassandra.search_concurrency_factor` system property. The default is
-`1`.
-
-##### QueryController
-
-Each
-[`QueryPlan`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/QueryPlan.java)
-references a
-[`QueryController`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/QueryController.java)
-used throughout the execution phase. The
-[`QueryController`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/QueryController.java)
-has two responsibilities: to manage and ensure the proper cleanup of
-resources (indexes), and to strictly enforce the time bound for the query,
-specified by the user via the range slice timeout. All indexes are
-accessed via the
-[`QueryController`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/QueryController.java)
-so that they can be safely released by it later. The
-[`QueryController`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/QueryController.java)'s
-`checkpoint` function is called in specific places in the execution
-path to ensure the time-bound is enforced.
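The checkpoint mechanism amounts to a deadline check at well-known points in the execution path. Below is a toy illustration of that idea; the class and exception names are invented, not taken from the Cassandra source:

```python
import time

class QueryTimedOut(Exception):
    pass

class ToyQueryController:
    """Enforces the query's time bound, in the spirit of the checkpoint
    role SASI's QueryController plays during execution."""

    def __init__(self, timeout_ms: float):
        # The deadline is fixed when the query starts.
        self.deadline = time.monotonic() + timeout_ms / 1000.0

    def checkpoint(self) -> None:
        # Called at specific points in the execution path; aborts the
        # query once the configured timeout has elapsed.
        if time.monotonic() > self.deadline:
            raise QueryTimedOut()
```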
-
-##### QueryPlan Optimizations
-
-While in the analysis phase, the
-[`QueryPlan`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/QueryPlan.java)
-performs several potential optimizations to the query. The goal of
-these optimizations is to reduce the amount of work performed during
-the execution phase.
-
-The simplest optimization performed is compacting multiple expressions
-joined by logical intersection (`AND`) into a single [`Operation`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/Operation.java) with
-three or more [`Expression`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/Expression.java)s. For example, the query `WHERE age < 100 AND
-fname = 'p*' AND fname != 'pa*' AND age > 21` would,
-without modification, have the following tree:
-
-                          ┌───────┐
-                 ┌────────│  AND  │──────┐
-                 │        └───────┘      │
-                 ▼                       ▼
-              ┌───────┐             ┌──────────┐
-        ┌─────│  AND  │─────┐       │age < 100 │
-        │     └───────┘     │       └──────────┘
-        ▼                   ▼
-    ┌──────────┐          ┌───────┐
-    │ fname=p* │        ┌─│  AND  │───┐
-    └──────────┘        │ └───────┘   │
-                        ▼             ▼
-                    ┌──────────┐  ┌──────────┐
-                    │fname!=pa*│  │ age > 21 │
-                    └──────────┘  └──────────┘
-
-[`QueryPlan`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/QueryPlan.java)
-will remove the redundant right branch whose root is the final `AND`
-and has leaves `fname != pa*` and `age > 21`. These [`Expression`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/Expression.java)s will
-be compacted into the parent `AND`, a safe operation due to `AND`
-being associative and commutative. The resulting tree looks like the
-following:
-
-                                  ┌───────┐
-                         ┌────────│  AND  │──────┐
-                         │        └───────┘      │
-                         ▼                       ▼
-                      ┌───────┐             ┌──────────┐
-          ┌───────────│  AND  │────────┐    │age < 100 │
-          │           └───────┘        │    └──────────┘
-          ▼               │            ▼
-    ┌──────────┐          │      ┌──────────┐
-    │ fname=p* │          ▼      │ age > 21 │
-    └──────────┘    ┌──────────┐ └──────────┘
-                    │fname!=pa*│
-                    └──────────┘
-
-When excluding results from the result set, using `!=`, the
-[`QueryPlan`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/QueryPlan.java)
-determines the best method for handling it. For range queries, for
-example, it may be optimal to divide the range into multiple parts
-with a hole for the exclusion. For string queries, such as this one,
-it is more optimal, however, to simply note which data to skip, or
-exclude, while scanning the index. Following this optimization the
-tree looks like this:
-
-                                   ┌───────┐
-                          ┌────────│  AND  │──────┐
-                          │        └───────┘      │
-                          ▼                       ▼
-                       ┌───────┐             ┌──────────┐
-               ┌───────│  AND  │────────┐    │age < 100 │
-               │       └───────┘        │    └──────────┘
-               ▼                        ▼
-        ┌──────────────────┐         ┌──────────┐
-        │     fname=p*     │         │ age > 21 │
-        │ exclusions=[pa*] │         └──────────┘
-        └──────────────────┘
-
-The last type of optimization applied, for this query, is to merge
-range expressions across branches of the tree -- without modifying the
-meaning of the query, of course. In this case, because the query
-contains all `AND`s the `age` expressions can be collapsed. Along with
-this optimization, the initial collapsing of unneeded `AND`s can also
-be applied once more, resulting in this final tree used to execute the
-query:
-
-                            ┌───────┐
-                     ┌──────│  AND  │───────┐
-                     │      └───────┘       │
-                     ▼                      ▼
-           ┌──────────────────┐    ┌────────────────┐
-           │     fname=p*     │    │ 21 < age < 100 │
-           │ exclusions=[pa*] │    └────────────────┘
-           └──────────────────┘
-
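Two of the rewrites above, flattening nested `AND`s and merging range bounds per column, can be sketched compactly. This assumes a toy representation where an `AND` node is the tuple `("AND", children)` and a leaf expression is a `(column, op, value)` tuple; the representation and function names are invented, and SASI's actual `Operation`/`Expression` classes are considerably richer:

```python
def flatten_and(node):
    """Collapse a tree of ANDs into one flat expression list.

    Safe because AND is associative and commutative."""
    if not isinstance(node, tuple) or node[0] != "AND":
        return [node]          # a leaf (column, op, value) expression
    out = []
    for child in node[1]:
        out.extend(flatten_and(child))
    return out

def merge_ranges(exprs):
    """Collapse per-column > / < bounds into single range expressions."""
    lo, hi, rest = {}, {}, []
    for col, op, val in exprs:
        if op == ">":
            lo[col] = max(lo.get(col, val), val)
        elif op == "<":
            hi[col] = min(hi.get(col, val), val)
        else:
            rest.append((col, op, val))
    for col in set(lo) | set(hi):
        rest.append((col, "range", (lo.get(col), hi.get(col))))
    return rest
```

Applied to the example query, `age > 21` and `age < 100` collapse into a single `21 < age < 100` range, matching the final tree above.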
-#### Operations and Expressions
-
-As discussed, the
-[`QueryPlan`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/QueryPlan.java)
-optimizes a tree represented by
-[`Operation`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/Operation.java)s
-as interior nodes, and
-[`Expression`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/Expression.java)s
-as leaves. The
-[`Operation`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/Operation.java)
-class, more specifically, can have zero, one, or two
-[`Operation`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/Operation.java)s
-as children and an unlimited number of expressions. The iterators used
-to perform the queries, discussed below in the
-"Range(Union|Intersection)Iterator" section, implement the necessary
-logic to merge results transparently regardless of the
-[`Operation`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/Operation.java)'s
-children.
-
-Besides participating in the optimizations performed by the
-[`QueryPlan`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/QueryPlan.java),
-[`Operation`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/Operation.java)
-is also responsible for taking a row that has been returned by the
-query and making a final validation that it in fact does match. This
-`satisfiedBy` operation is performed recursively from the root of the
-[`Operation`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/Operation.java)
-tree for a given query. These checks are performed directly on the
-data in a given row. For more details on how `satisfiedBy` works see
-the documentation
-[in the code](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/Operation.java#L87-L123).
-
-#### Range(Union|Intersection)Iterator
-
-The abstract `RangeIterator` class provides a unified interface over
-the two main operations performed by SASI at various layers in the
-execution path: set intersection and union. These operations are
-performed in an iterated, or "streaming", fashion to prevent unneeded
-reads of elements from either set. In both the intersection and union
-cases the algorithms take advantage of the data being pre-sorted using
-the same sort order, e.g. term or token order.
-
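Both operations reduce to a merge over pre-sorted streams. The sketch below captures that streaming behaviour (it is not the Cassandra implementation, which additionally supports skipping forward and operates over token ranges rather than single values):

```python
import heapq

def union(*sorted_iters):
    """Streaming set union over already-sorted streams, with
    duplicates collapsed, in the spirit of RangeUnionIterator."""
    last = object()                     # sentinel: never equals a real value
    for tok in heapq.merge(*sorted_iters):
        if tok != last:
            yield tok
            last = tok

def intersection(a, b):
    """Streaming set intersection over two sorted streams: the
    merge-join step of a sort-merge join, with inner-join semantics."""
    a, b = iter(a), iter(b)
    try:
        x, y = next(a), next(b)
        while True:
            if x == y:
                yield x
                x, y = next(a), next(b)
            elif x < y:
                x = next(a)             # advance only the lagging stream
            else:
                y = next(b)
    except StopIteration:
        return                          # either stream exhausted: done
```

Because both functions only pull the next element they need, neither reads more of either input than the query requires.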
-The
-[`RangeUnionIterator`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/utils/RangeUnionIterator.java)
-performs the "Merge-Join" portion of the
-[Sort-Merge-Join](https://en.wikipedia.org/wiki/Sort-merge_join)
-algorithm, with the properties of an outer-join, or union. It is
-implemented with several optimizations to improve its performance over
-a large number of iterators -- sets to union. Specifically, the
-iterator exploits the likely case of the data having many sub-groups
-of overlapping ranges and the unlikely case that all ranges will
-overlap each other. For more details see the
-[javadoc](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/utils/RangeUnionIterator.java#L9-L21).
-
-The
-[`RangeIntersectionIterator`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/utils/RangeIntersectionIterator.java)
-itself is not a subclass of `RangeIterator`. It is a container for
-several classes, one of which, `AbstractIntersectionIterator`,
-sub-classes `RangeIterator`. SASI supports two methods of performing
-the intersection operation, and the ability to be adaptive in choosing
-between them based on some properties of the data.
-
-`BounceIntersectionIterator`, and the `BOUNCE` strategy, works like
-the
-[`RangeUnionIterator`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/utils/RangeUnionIterator.java)
-in that it performs a "Merge-Join", however, its nature is similar to
-an inner-join, where like values are merged by a data-specific merge
-function (e.g. merging two tokens in a list to lookup in a SSTable
-later). See the
-[javadoc](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/utils/RangeIntersectionIterator.java#L88-L101)
-for more details on its implementation.
-
-`LookupIntersectionIterator`, and the `LOOKUP` strategy, performs a
-different operation, more similar to a lookup in an associative data
-structure, or "hash lookup" in database terminology. Once again,
-details on the implementation can be found in the
-[javadoc](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/utils/RangeIntersectionIterator.java#L199-L208).
-
-The choice between the two iterators, or the `ADAPTIVE` strategy, is
-based upon the ratio of the data set sizes of the minimum and maximum
-ranges of the sets being intersected. If the number of elements in the
-minimum range divided by the number of elements in the maximum range
-is less than or equal to `0.01`, then the `ADAPTIVE` strategy chooses
-the `LookupIntersectionIterator`, otherwise the
-`BounceIntersectionIterator` is chosen.
-
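That selection rule is small enough to state directly (a sketch; the `0.01` threshold is the one given above, while the function name is invented):

```python
def pick_intersection_strategy(min_range_size: int, max_range_size: int) -> str:
    """ADAPTIVE strategy: use a lookup when the smallest set is at most
    1% of the largest, since probing the large set once per element of
    the small one is then cheaper than a full merge-join."""
    if min_range_size / max_range_size <= 0.01:
        return "LOOKUP"    # LookupIntersectionIterator
    return "BOUNCE"        # BounceIntersectionIterator
```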
-### The SASIIndex Class
-
-The above components are glued together by the
-[`SASIIndex`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/SASIIndex.java)
-class which implements `Index`, and is instantiated
-per-table containing SASI indexes. It manages all indexes for a table
-via the
-[`sasi.conf.DataTracker`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/conf/DataTracker.java)
-and
-[`sasi.conf.view.View`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/conf/view/View.java)
-components, controls writing of all indexes for an SSTable via its
-[`PerSSTableIndexWriter`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/disk/PerSSTableIndexWriter.java), and initiates searches with
-`Searcher`. These classes glue the previously
-mentioned indexing components together with Cassandra's SSTable
-life-cycle, ensuring indexes are not only written when Memtables flush
-but also as SSTables are compacted. For querying, the
-`Searcher` does little but defer to
-[`QueryPlan`](https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/QueryPlan.java)
-and update e.g. latency metrics exposed by SASI.
-
-### Cassandra Internal Changes
-
-To support the above changes and integrate them into Cassandra a few
-minor internal changes were made to Cassandra itself. These are
-described here.
-
-#### SSTable Write Life-cycle Notifications
-
-The `SSTableFlushObserver` is an observer pattern-like interface,
-whose sub-classes can register to be notified about events in the
-life-cycle of writing out an SSTable. Sub-classes can be notified when a
-flush begins and ends, as well as when each row, and each column within
-it, is about to be written. SASI's `PerSSTableIndexWriter`,
-discussed above, is the only current subclass.
-
-### Limitations and Caveats
-
-The following are items that can be addressed in future updates but are not
-available in this repository or are not currently implemented.
-
-* The cluster must be configured to use a partitioner that produces
-  `LongToken`s, e.g. `Murmur3Partitioner`. Other existing partitioners which
-  don't produce `LongToken`s, e.g. `ByteOrderedPartitioner` and
-  `RandomPartitioner`, will not work with SASI.
-* Not Equals and OR support have been removed in this release while
-  changes are made to Cassandra itself to support them.
-
-### Contributors
-
-* [Pavel Yaskevich](https://github.com/xedin)
-* [Jordan West](https://github.com/jrwest)
-* [Michael Kjellman](https://github.com/mkjellman)
-* [Jason Brown](https://github.com/jasobrown)
-* [Mikhail Stepura](https://github.com/mishail)
diff --git a/doc/antora.yml b/doc/antora.yml
new file mode 100644
index 0000000..4ce17d5
--- /dev/null
+++ b/doc/antora.yml
@@ -0,0 +1,18 @@
+name: Cassandra
+title: Cassandra
+version: '3.11'
+display_version: '3.11'
+asciidoc:
+  attributes:
+    sectanchors: ''
+    sectlinks: ''
+    cass_url: 'http://cassandra.apache.org/'
+    cass-docker-tag-3x: latest
+    cass-tag-3x: '3.11'
+    311_version: '3.11.10'
+    30_version: '3.0.24'
+    22_version: '2.2.19'
+    21_version: '2.1.22'
+nav:
+- modules/ROOT/nav.adoc
+- modules/cassandra/nav.adoc
diff --git a/doc/make.bat b/doc/make.bat
deleted file mode 100644
index cbd1d1d..0000000
--- a/doc/make.bat
+++ /dev/null
@@ -1,299 +0,0 @@
-@ECHO OFF
-
-REM
-REM Licensed to the Apache Software Foundation (ASF) under one
-REM or more contributor license agreements.  See the NOTICE file
-REM distributed with this work for additional information
-REM regarding copyright ownership.  The ASF licenses this file
-REM to you under the Apache License, Version 2.0 (the
-REM "License"); you may not use this file except in compliance
-REM with the License.  You may obtain a copy of the License at
-REM
-REM     http://www.apache.org/licenses/LICENSE-2.0
-REM
-REM Unless required by applicable law or agreed to in writing, software
-REM distributed under the License is distributed on an "AS IS" BASIS,
-REM WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-REM See the License for the specific language governing permissions and
-REM limitations under the License.
-REM
-
-REM Command file for Sphinx documentation
-
-if "%SPHINXBUILD%" == "" (
-	set SPHINXBUILD=sphinx-build
-)
-set BUILDDIR=build
-set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% .
-set I18NSPHINXOPTS=%SPHINXOPTS% .
-if NOT "%PAPER%" == "" (
-	set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS%
-	set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS%
-)
-
-if "%1" == "" goto help
-
-if "%1" == "help" (
-	:help
-	echo.Please use `make ^<target^>` where ^<target^> is one of
-	echo.  html       to make standalone HTML files
-	echo.  dirhtml    to make HTML files named index.html in directories
-	echo.  singlehtml to make a single large HTML file
-	echo.  pickle     to make pickle files
-	echo.  json       to make JSON files
-	echo.  htmlhelp   to make HTML files and a HTML help project
-	echo.  qthelp     to make HTML files and a qthelp project
-	echo.  devhelp    to make HTML files and a Devhelp project
-	echo.  epub       to make an epub
-	echo.  epub3      to make an epub3
-	echo.  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter
-	echo.  text       to make text files
-	echo.  man        to make manual pages
-	echo.  texinfo    to make Texinfo files
-	echo.  gettext    to make PO message catalogs
-	echo.  changes    to make an overview over all changed/added/deprecated items
-	echo.  xml        to make Docutils-native XML files
-	echo.  pseudoxml  to make pseudoxml-XML files for display purposes
-	echo.  linkcheck  to check all external links for integrity
-	echo.  doctest    to run all doctests embedded in the documentation if enabled
-	echo.  coverage   to run coverage check of the documentation if enabled
-	echo.  dummy      to check syntax errors of document sources
-	goto end
-)
-
-if "%1" == "clean" (
-	for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i
-	del /q /s %BUILDDIR%\*
-	goto end
-)
-
-
-REM Check if sphinx-build is available and fallback to Python version if any
-%SPHINXBUILD% 1>NUL 2>NUL
-if errorlevel 9009 goto sphinx_python
-goto sphinx_ok
-
-:sphinx_python
-
-set SPHINXBUILD=python -m sphinx.__init__
-%SPHINXBUILD% 2> nul
-if errorlevel 9009 (
-	echo.
-	echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
-	echo.installed, then set the SPHINXBUILD environment variable to point
-	echo.to the full path of the 'sphinx-build' executable. Alternatively you
-	echo.may add the Sphinx directory to PATH.
-	echo.
-	echo.If you don't have Sphinx installed, grab it from
-	echo.http://sphinx-doc.org/
-	exit /b 1
-)
-
-:sphinx_ok
-
-
-if "%1" == "html" (
-	%SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. The HTML pages are in %BUILDDIR%/html.
-	goto end
-)
-
-if "%1" == "dirhtml" (
-	%SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml.
-	goto end
-)
-
-if "%1" == "singlehtml" (
-	%SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml.
-	goto end
-)
-
-if "%1" == "pickle" (
-	%SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished; now you can process the pickle files.
-	goto end
-)
-
-if "%1" == "json" (
-	%SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished; now you can process the JSON files.
-	goto end
-)
-
-if "%1" == "htmlhelp" (
-	%SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished; now you can run HTML Help Workshop with the ^
-.hhp project file in %BUILDDIR%/htmlhelp.
-	goto end
-)
-
-if "%1" == "qthelp" (
-	%SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished; now you can run "qcollectiongenerator" with the ^
-.qhcp project file in %BUILDDIR%/qthelp, like this:
-	echo.^> qcollectiongenerator %BUILDDIR%\qthelp\Foo.qhcp
-	echo.To view the help file:
-	echo.^> assistant -collectionFile %BUILDDIR%\qthelp\Foo.ghc
-	goto end
-)
-
-if "%1" == "devhelp" (
-	%SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished.
-	goto end
-)
-
-if "%1" == "epub" (
-	%SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. The epub file is in %BUILDDIR%/epub.
-	goto end
-)
-
-if "%1" == "epub3" (
-	%SPHINXBUILD% -b epub3 %ALLSPHINXOPTS% %BUILDDIR%/epub3
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. The epub3 file is in %BUILDDIR%/epub3.
-	goto end
-)
-
-if "%1" == "latex" (
-	%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished; the LaTeX files are in %BUILDDIR%/latex.
-	goto end
-)
-
-if "%1" == "latexpdf" (
-	%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
-	cd %BUILDDIR%/latex
-	make all-pdf
-	cd %~dp0
-	echo.
-	echo.Build finished; the PDF files are in %BUILDDIR%/latex.
-	goto end
-)
-
-if "%1" == "latexpdfja" (
-	%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
-	cd %BUILDDIR%/latex
-	make all-pdf-ja
-	cd %~dp0
-	echo.
-	echo.Build finished; the PDF files are in %BUILDDIR%/latex.
-	goto end
-)
-
-if "%1" == "text" (
-	%SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. The text files are in %BUILDDIR%/text.
-	goto end
-)
-
-if "%1" == "man" (
-	%SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. The manual pages are in %BUILDDIR%/man.
-	goto end
-)
-
-if "%1" == "texinfo" (
-	%SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo.
-	goto end
-)
-
-if "%1" == "gettext" (
-	%SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. The message catalogs are in %BUILDDIR%/locale.
-	goto end
-)
-
-if "%1" == "changes" (
-	%SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.The overview file is in %BUILDDIR%/changes.
-	goto end
-)
-
-if "%1" == "linkcheck" (
-	%SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Link check complete; look for any errors in the above output ^
-or in %BUILDDIR%/linkcheck/output.txt.
-	goto end
-)
-
-if "%1" == "doctest" (
-	%SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Testing of doctests in the sources finished, look at the ^
-results in %BUILDDIR%/doctest/output.txt.
-	goto end
-)
-
-if "%1" == "coverage" (
-	%SPHINXBUILD% -b coverage %ALLSPHINXOPTS% %BUILDDIR%/coverage
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Testing of coverage in the sources finished, look at the ^
-results in %BUILDDIR%/coverage/python.txt.
-	goto end
-)
-
-if "%1" == "xml" (
-	%SPHINXBUILD% -b xml %ALLSPHINXOPTS% %BUILDDIR%/xml
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. The XML files are in %BUILDDIR%/xml.
-	goto end
-)
-
-if "%1" == "pseudoxml" (
-	%SPHINXBUILD% -b pseudoxml %ALLSPHINXOPTS% %BUILDDIR%/pseudoxml
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. The pseudo-XML files are in %BUILDDIR%/pseudoxml.
-	goto end
-)
-
-if "%1" == "dummy" (
-	%SPHINXBUILD% -b dummy %ALLSPHINXOPTS% %BUILDDIR%/dummy
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. Dummy builder generates no files.
-	goto end
-)
-
-:end
diff --git a/doc/modules/ROOT/nav.adoc b/doc/modules/ROOT/nav.adoc
new file mode 100644
index 0000000..4c80eca
--- /dev/null
+++ b/doc/modules/ROOT/nav.adoc
@@ -0,0 +1,4 @@
+* xref:index.adoc[Main]
+** xref:master@_:ROOT:glossary.adoc[Glossary]
+** xref:master@_:ROOT:bugs.adoc[How to report bugs]
+** xref:master@_:ROOT:contactus.adoc[Contact us]
\ No newline at end of file
diff --git a/doc/modules/ROOT/pages/index.adoc b/doc/modules/ROOT/pages/index.adoc
new file mode 100644
index 0000000..183bf9e
--- /dev/null
+++ b/doc/modules/ROOT/pages/index.adoc
@@ -0,0 +1,48 @@
+= Welcome to Apache Cassandra's documentation!
+
+:description: Starting page for Apache Cassandra documentation.
+:keywords: Apache, Cassandra, NoSQL, database
+:cass-url: http://cassandra.apache.org
+:cass-contrib-url: https://wiki.apache.org/cassandra/HowToContribute
+
+This is the official documentation for {cass-url}[Apache Cassandra]. 
+If you would like to contribute to this documentation, you are welcome 
+to do so by submitting your contribution like any other patch following
+{cass-contrib-url}[these instructions].
+
+== Main documentation
+
+[cols="a,a"]
+|===
+
+| xref:cassandra:getting_started/index.adoc[Getting started] | Newbie starting point
+
+| xref:cassandra:architecture/index.adoc[Architecture] | Cassandra's big picture
+
+| xref:cassandra:data_modeling/index.adoc[Data modeling] | Hint: it's not relational
+
+| xref:cassandra:cql/index.adoc[Cassandra Query Language (CQL)] | CQL reference documentation
+
+| xref:cassandra:configuration/index.adoc[Configuration] | Cassandra's handles and knobs
+
+| xref:cassandra:operating/index.adoc[Operation] | The operator's corner
+
+| xref:cassandra:tools/index.adoc[Tools] | cqlsh, nodetool, and others
+
+| xref:cassandra:troubleshooting/index.adoc[Troubleshooting] | What to look for when you have a problem
+
+| xref:cassandra:faq/index.adoc[FAQ] | Frequently asked questions
+
+| xref:cassandra:plugins/index.adoc[Plug-ins] | Third-party plug-ins
+
+| xref:master@_:ROOT:native_protocol.adoc[Native Protocols] | Native Cassandra protocol specifications
+
+|===
+
+== Meta information
+* xref:master@_:ROOT:bugs.adoc[Reporting bugs]
+* xref:master@_:ROOT:contactus.adoc[Contact us]
+* xref:master@_:ROOT:development/index.adoc[Contributing code]
+* xref:master@_:ROOT:docdev/index.adoc[Contributing to the docs]
+* xref:master@_:ROOT:community.adoc[Community]
+* xref:master@_:ROOT:download.adoc[Download]
diff --git a/doc/modules/cassandra/assets/images/Figure_1_backups.jpg b/doc/modules/cassandra/assets/images/Figure_1_backups.jpg
new file mode 100644
index 0000000..160013d
Binary files /dev/null and b/doc/modules/cassandra/assets/images/Figure_1_backups.jpg differ
diff --git a/doc/modules/cassandra/assets/images/Figure_1_data_model.jpg b/doc/modules/cassandra/assets/images/Figure_1_data_model.jpg
new file mode 100644
index 0000000..a3b330e
Binary files /dev/null and b/doc/modules/cassandra/assets/images/Figure_1_data_model.jpg differ
diff --git a/doc/modules/cassandra/assets/images/Figure_1_guarantees.jpg b/doc/modules/cassandra/assets/images/Figure_1_guarantees.jpg
new file mode 100644
index 0000000..859342d
Binary files /dev/null and b/doc/modules/cassandra/assets/images/Figure_1_guarantees.jpg differ
diff --git a/doc/modules/cassandra/assets/images/Figure_1_read_repair.jpg b/doc/modules/cassandra/assets/images/Figure_1_read_repair.jpg
new file mode 100644
index 0000000..d771550
Binary files /dev/null and b/doc/modules/cassandra/assets/images/Figure_1_read_repair.jpg differ
diff --git a/doc/modules/cassandra/assets/images/Figure_2_data_model.jpg b/doc/modules/cassandra/assets/images/Figure_2_data_model.jpg
new file mode 100644
index 0000000..7acdeac
Binary files /dev/null and b/doc/modules/cassandra/assets/images/Figure_2_data_model.jpg differ
diff --git a/doc/modules/cassandra/assets/images/Figure_2_read_repair.jpg b/doc/modules/cassandra/assets/images/Figure_2_read_repair.jpg
new file mode 100644
index 0000000..29a912b
Binary files /dev/null and b/doc/modules/cassandra/assets/images/Figure_2_read_repair.jpg differ
diff --git a/doc/modules/cassandra/assets/images/Figure_3_read_repair.jpg b/doc/modules/cassandra/assets/images/Figure_3_read_repair.jpg
new file mode 100644
index 0000000..f5cc189
Binary files /dev/null and b/doc/modules/cassandra/assets/images/Figure_3_read_repair.jpg differ
diff --git a/doc/modules/cassandra/assets/images/Figure_4_read_repair.jpg b/doc/modules/cassandra/assets/images/Figure_4_read_repair.jpg
new file mode 100644
index 0000000..25bdb34
Binary files /dev/null and b/doc/modules/cassandra/assets/images/Figure_4_read_repair.jpg differ
diff --git a/doc/modules/cassandra/assets/images/Figure_5_read_repair.jpg b/doc/modules/cassandra/assets/images/Figure_5_read_repair.jpg
new file mode 100644
index 0000000..d9c0485
Binary files /dev/null and b/doc/modules/cassandra/assets/images/Figure_5_read_repair.jpg differ
diff --git a/doc/modules/cassandra/assets/images/Figure_6_read_repair.jpg b/doc/modules/cassandra/assets/images/Figure_6_read_repair.jpg
new file mode 100644
index 0000000..6bb4d1e
Binary files /dev/null and b/doc/modules/cassandra/assets/images/Figure_6_read_repair.jpg differ
diff --git a/doc/modules/cassandra/assets/images/data_modeling_chebotko_logical.png b/doc/modules/cassandra/assets/images/data_modeling_chebotko_logical.png
new file mode 100755
index 0000000..e54b5f2
Binary files /dev/null and b/doc/modules/cassandra/assets/images/data_modeling_chebotko_logical.png differ
diff --git a/doc/modules/cassandra/assets/images/data_modeling_chebotko_physical.png b/doc/modules/cassandra/assets/images/data_modeling_chebotko_physical.png
new file mode 100644
index 0000000..bfdaec5
Binary files /dev/null and b/doc/modules/cassandra/assets/images/data_modeling_chebotko_physical.png differ
diff --git a/doc/modules/cassandra/assets/images/data_modeling_hotel_bucketing.png b/doc/modules/cassandra/assets/images/data_modeling_hotel_bucketing.png
new file mode 100644
index 0000000..8b53e38
Binary files /dev/null and b/doc/modules/cassandra/assets/images/data_modeling_hotel_bucketing.png differ
diff --git a/doc/modules/cassandra/assets/images/data_modeling_hotel_erd.png b/doc/modules/cassandra/assets/images/data_modeling_hotel_erd.png
new file mode 100755
index 0000000..e86fe68
Binary files /dev/null and b/doc/modules/cassandra/assets/images/data_modeling_hotel_erd.png differ
diff --git a/doc/modules/cassandra/assets/images/data_modeling_hotel_logical.png b/doc/modules/cassandra/assets/images/data_modeling_hotel_logical.png
new file mode 100755
index 0000000..e920f12
Binary files /dev/null and b/doc/modules/cassandra/assets/images/data_modeling_hotel_logical.png differ
diff --git a/doc/modules/cassandra/assets/images/data_modeling_hotel_physical.png b/doc/modules/cassandra/assets/images/data_modeling_hotel_physical.png
new file mode 100644
index 0000000..2d20a6d
Binary files /dev/null and b/doc/modules/cassandra/assets/images/data_modeling_hotel_physical.png differ
diff --git a/doc/modules/cassandra/assets/images/data_modeling_hotel_queries.png b/doc/modules/cassandra/assets/images/data_modeling_hotel_queries.png
new file mode 100755
index 0000000..2434db3
Binary files /dev/null and b/doc/modules/cassandra/assets/images/data_modeling_hotel_queries.png differ
diff --git a/doc/modules/cassandra/assets/images/data_modeling_hotel_relational.png b/doc/modules/cassandra/assets/images/data_modeling_hotel_relational.png
new file mode 100755
index 0000000..43e784e
Binary files /dev/null and b/doc/modules/cassandra/assets/images/data_modeling_hotel_relational.png differ
diff --git a/doc/modules/cassandra/assets/images/data_modeling_reservation_logical.png b/doc/modules/cassandra/assets/images/data_modeling_reservation_logical.png
new file mode 100755
index 0000000..0460633
Binary files /dev/null and b/doc/modules/cassandra/assets/images/data_modeling_reservation_logical.png differ
diff --git a/doc/modules/cassandra/assets/images/data_modeling_reservation_physical.png b/doc/modules/cassandra/assets/images/data_modeling_reservation_physical.png
new file mode 100755
index 0000000..1e6e76c
Binary files /dev/null and b/doc/modules/cassandra/assets/images/data_modeling_reservation_physical.png differ
diff --git a/doc/modules/cassandra/assets/images/docs_commit.png b/doc/modules/cassandra/assets/images/docs_commit.png
new file mode 100644
index 0000000..d90d96a
Binary files /dev/null and b/doc/modules/cassandra/assets/images/docs_commit.png differ
diff --git a/doc/modules/cassandra/assets/images/docs_create_branch.png b/doc/modules/cassandra/assets/images/docs_create_branch.png
new file mode 100644
index 0000000..a04cb54
Binary files /dev/null and b/doc/modules/cassandra/assets/images/docs_create_branch.png differ
diff --git a/doc/modules/cassandra/assets/images/docs_create_file.png b/doc/modules/cassandra/assets/images/docs_create_file.png
new file mode 100644
index 0000000..b51e370
Binary files /dev/null and b/doc/modules/cassandra/assets/images/docs_create_file.png differ
diff --git a/doc/modules/cassandra/assets/images/docs_editor.png b/doc/modules/cassandra/assets/images/docs_editor.png
new file mode 100644
index 0000000..5b9997b
Binary files /dev/null and b/doc/modules/cassandra/assets/images/docs_editor.png differ
diff --git a/doc/modules/cassandra/assets/images/docs_fork.png b/doc/modules/cassandra/assets/images/docs_fork.png
new file mode 100644
index 0000000..20a592a
Binary files /dev/null and b/doc/modules/cassandra/assets/images/docs_fork.png differ
diff --git a/doc/modules/cassandra/assets/images/docs_pr.png b/doc/modules/cassandra/assets/images/docs_pr.png
new file mode 100644
index 0000000..211eb25
Binary files /dev/null and b/doc/modules/cassandra/assets/images/docs_pr.png differ
diff --git a/doc/modules/cassandra/assets/images/docs_preview.png b/doc/modules/cassandra/assets/images/docs_preview.png
new file mode 100644
index 0000000..207f0ac
Binary files /dev/null and b/doc/modules/cassandra/assets/images/docs_preview.png differ
diff --git a/doc/source/development/images/eclipse_debug0.png b/doc/modules/cassandra/assets/images/eclipse_debug0.png
similarity index 100%
rename from doc/source/development/images/eclipse_debug0.png
rename to doc/modules/cassandra/assets/images/eclipse_debug0.png
diff --git a/doc/source/development/images/eclipse_debug1.png b/doc/modules/cassandra/assets/images/eclipse_debug1.png
similarity index 100%
rename from doc/source/development/images/eclipse_debug1.png
rename to doc/modules/cassandra/assets/images/eclipse_debug1.png
diff --git a/doc/source/development/images/eclipse_debug2.png b/doc/modules/cassandra/assets/images/eclipse_debug2.png
similarity index 100%
rename from doc/source/development/images/eclipse_debug2.png
rename to doc/modules/cassandra/assets/images/eclipse_debug2.png
diff --git a/doc/source/development/images/eclipse_debug3.png b/doc/modules/cassandra/assets/images/eclipse_debug3.png
similarity index 100%
rename from doc/source/development/images/eclipse_debug3.png
rename to doc/modules/cassandra/assets/images/eclipse_debug3.png
diff --git a/doc/source/development/images/eclipse_debug4.png b/doc/modules/cassandra/assets/images/eclipse_debug4.png
similarity index 100%
rename from doc/source/development/images/eclipse_debug4.png
rename to doc/modules/cassandra/assets/images/eclipse_debug4.png
diff --git a/doc/source/development/images/eclipse_debug5.png b/doc/modules/cassandra/assets/images/eclipse_debug5.png
similarity index 100%
rename from doc/source/development/images/eclipse_debug5.png
rename to doc/modules/cassandra/assets/images/eclipse_debug5.png
diff --git a/doc/source/development/images/eclipse_debug6.png b/doc/modules/cassandra/assets/images/eclipse_debug6.png
similarity index 100%
rename from doc/source/development/images/eclipse_debug6.png
rename to doc/modules/cassandra/assets/images/eclipse_debug6.png
diff --git a/doc/modules/cassandra/assets/images/example-stress-graph.png b/doc/modules/cassandra/assets/images/example-stress-graph.png
new file mode 100644
index 0000000..a65b08b
Binary files /dev/null and b/doc/modules/cassandra/assets/images/example-stress-graph.png differ
diff --git a/doc/modules/cassandra/assets/images/hints.svg b/doc/modules/cassandra/assets/images/hints.svg
new file mode 100644
index 0000000..5e952e7
--- /dev/null
+++ b/doc/modules/cassandra/assets/images/hints.svg
@@ -0,0 +1,9 @@
+<svg xmlns="http://www.w3.org/2000/svg" width="661.2000122070312" height="422.26666259765625" style="
+        width:661.2000122070312px;
+        height:422.26666259765625px;
+        background: transparent;
+        fill: none;
+">
+        <svg xmlns="http://www.w3.org/2000/svg" class="role-diagram-draw-area"><g class="shapes-region" style="stroke: black; fill: none;"><g class="composite-shape"><path class="real" d=" M40,60 C40,43.43 53.43,30 70,30 C86.57,30 100,43.43 100,60 C100,76.57 86.57,90 70,90 C53.43,90 40,76.57 40,60 Z" style="stroke-width: 1px; stroke: rgb(0, 0, 0); fill: none;"/></g><g class="arrow-line"><path class="connection real" stroke-dasharray="" d="  M70,300 L70,387" style="stroke: rgb(0, 0, 0); s [...]
+        <svg xmlns="http://www.w3.org/2000/svg" width="660" height="421.066650390625" style="width:660px;height:421.066650390625px;font-family:Asana-Math, Asana;background:transparent;"><g><g><g style="transform:matrix(1,0,0,1,47.266693115234375,65.81666564941406);"><path d="M342 330L365 330C373 395 380 432 389 458C365 473 330 482 293 482C248 483 175 463 118 400C64 352 25 241 25 136C25 40 67 -11 147 -11C201 -11 249 9 304 54L354 95L346 115L331 105C259 57 221 40 186 40C130 40 101 80 101 15 [...]
+</svg>
diff --git a/doc/modules/cassandra/assets/images/ring.svg b/doc/modules/cassandra/assets/images/ring.svg
new file mode 100644
index 0000000..d0db8c5
--- /dev/null
+++ b/doc/modules/cassandra/assets/images/ring.svg
@@ -0,0 +1,11 @@
+<svg xmlns="http://www.w3.org/2000/svg" width="651" height="709.4583740234375" style="
+        width:651px;
+        height:709.4583740234375px;
+        background: transparent;
+        fill: none;
+">
+        
+        
+        <svg xmlns="http://www.w3.org/2000/svg" class="role-diagram-draw-area"><g class="shapes-region" style="stroke: black; fill: none;"><g class="composite-shape"><path class="real" d=" M223.5,655 C223.5,634.84 239.84,618.5 260,618.5 C280.16,618.5 296.5,634.84 296.5,655 C296.5,675.16 280.16,691.5 260,691.5 C239.84,691.5 223.5,675.16 223.5,655 Z" style="stroke-width: 1; stroke: rgb(103, 148, 135); fill: rgb(103, 148, 135);"/></g><g class="composite-shape"><path class="real" d=" M229.26 [...]
+        <svg xmlns="http://www.w3.org/2000/svg" width="649" height="707.4583740234375" style="width:649px;height:707.4583740234375px;font-family:Asana-Math, Asana;background:transparent;"><g><g><g><g><g><g style="transform:matrix(1,0,0,1,12.171875,40.31333587646485);"><path d="M175 386L316 386L316 444L175 444L175 571L106 571L106 444L19 444L19 386L103 386L103 119C103 59 117 -11 186 -11C256 -11 307 14 332 27L316 86C290 65 258 53 226 53C189 53 175 83 175 136ZM829 220C829 354 729 461 610 461 [...]
+</svg>
diff --git a/doc/modules/cassandra/assets/images/vnodes.svg b/doc/modules/cassandra/assets/images/vnodes.svg
new file mode 100644
index 0000000..71b4fa2
--- /dev/null
+++ b/doc/modules/cassandra/assets/images/vnodes.svg
@@ -0,0 +1,11 @@
+<svg xmlns="http://www.w3.org/2000/svg" width="651" height="384.66668701171875" style="
+        width:651px;
+        height:384.66668701171875px;
+        background: transparent;
+        fill: none;
+">
+        
+        
+        <svg xmlns="http://www.w3.org/2000/svg" class="role-diagram-draw-area"><g class="shapes-region" style="stroke: black; fill: none;"><g class="composite-shape"><path class="real" d=" M40.4,190 C40.4,107.38 107.38,40.4 190,40.4 C272.62,40.4 339.6,107.38 339.6,190 C339.6,272.62 272.62,339.6 190,339.6 C107.38,339.6 40.4,272.62 40.4,190 Z" style="stroke-width: 1; stroke: rgba(0, 0, 0, 0.52); fill: none; stroke-dasharray: 1.125, 3.35;"/></g><g class="composite-shape"><path class="real"  [...]
+        <svg xmlns="http://www.w3.org/2000/svg" width="649" height="382.66668701171875" style="width:649px;height:382.66668701171875px;font-family:Asana-Math, Asana;background:transparent;"><g><g><g><g><g><g style="transform:matrix(1,0,0,1,178.65625,348.9985620117188);"><path d="M125 390L69 107C68 99 56 61 56 31C56 6 67 -9 86 -9C121 -9 156 11 234 74L265 99L255 117L210 86C181 66 161 56 150 56C141 56 136 64 136 76C136 102 150 183 179 328L192 390L299 390L310 440C272 436 238 434 200 434C216  [...]
+</svg>
diff --git a/doc/modules/cassandra/examples/BASH/add_repo_keys.sh b/doc/modules/cassandra/examples/BASH/add_repo_keys.sh
new file mode 100644
index 0000000..cdb5881
--- /dev/null
+++ b/doc/modules/cassandra/examples/BASH/add_repo_keys.sh
@@ -0,0 +1 @@
+$ curl https://www.apache.org/dist/cassandra/KEYS | sudo apt-key add -
diff --git a/doc/modules/cassandra/examples/BASH/apt-get_cass.sh b/doc/modules/cassandra/examples/BASH/apt-get_cass.sh
new file mode 100644
index 0000000..9614b29
--- /dev/null
+++ b/doc/modules/cassandra/examples/BASH/apt-get_cass.sh
@@ -0,0 +1 @@
+$ sudo apt-get install cassandra
diff --git a/doc/modules/cassandra/examples/BASH/apt-get_update.sh b/doc/modules/cassandra/examples/BASH/apt-get_update.sh
new file mode 100644
index 0000000..b50b7ac
--- /dev/null
+++ b/doc/modules/cassandra/examples/BASH/apt-get_update.sh
@@ -0,0 +1 @@
+$ sudo apt-get update
diff --git a/doc/modules/cassandra/examples/BASH/check_backups.sh b/doc/modules/cassandra/examples/BASH/check_backups.sh
new file mode 100644
index 0000000..212c3d2
--- /dev/null
+++ b/doc/modules/cassandra/examples/BASH/check_backups.sh
@@ -0,0 +1 @@
+$ cd ./cassandra/data/data/cqlkeyspace/t-d132e240c21711e9bbee19821dcea330/backups && ls -l
diff --git a/doc/modules/cassandra/examples/BASH/cqlsh_localhost.sh b/doc/modules/cassandra/examples/BASH/cqlsh_localhost.sh
new file mode 100644
index 0000000..7bc1c39
--- /dev/null
+++ b/doc/modules/cassandra/examples/BASH/cqlsh_localhost.sh
@@ -0,0 +1 @@
+$ bin/cqlsh localhost
diff --git a/doc/modules/cassandra/examples/BASH/curl_install.sh b/doc/modules/cassandra/examples/BASH/curl_install.sh
new file mode 100644
index 0000000..23e7c01
--- /dev/null
+++ b/doc/modules/cassandra/examples/BASH/curl_install.sh
@@ -0,0 +1 @@
+$ curl -OL http://apache.mirror.digitalpacific.com.au/cassandra/{cass-tag-3x}/apache-cassandra-{cass-tag-3x}-bin.tar.gz
diff --git a/doc/modules/cassandra/examples/BASH/curl_verify_sha.sh b/doc/modules/cassandra/examples/BASH/curl_verify_sha.sh
new file mode 100644
index 0000000..bde80ca
--- /dev/null
+++ b/doc/modules/cassandra/examples/BASH/curl_verify_sha.sh
@@ -0,0 +1 @@
+$ curl -L https://downloads.apache.org/cassandra/{cass-tag-3x}/apache-cassandra-{cass-tag-3x}-bin.tar.gz.sha256
diff --git a/doc/modules/cassandra/examples/BASH/docker_cqlsh.sh b/doc/modules/cassandra/examples/BASH/docker_cqlsh.sh
new file mode 100644
index 0000000..92a4a8f
--- /dev/null
+++ b/doc/modules/cassandra/examples/BASH/docker_cqlsh.sh
@@ -0,0 +1 @@
+docker exec -it cass_cluster cqlsh
diff --git a/doc/modules/cassandra/examples/BASH/docker_pull.sh b/doc/modules/cassandra/examples/BASH/docker_pull.sh
new file mode 100644
index 0000000..67e5e22
--- /dev/null
+++ b/doc/modules/cassandra/examples/BASH/docker_pull.sh
@@ -0,0 +1 @@
+docker pull cassandra:{cass-docker-tag-3x}
diff --git a/doc/modules/cassandra/examples/BASH/docker_remove.sh b/doc/modules/cassandra/examples/BASH/docker_remove.sh
new file mode 100644
index 0000000..bf95630
--- /dev/null
+++ b/doc/modules/cassandra/examples/BASH/docker_remove.sh
@@ -0,0 +1 @@
+docker rm cassandra
diff --git a/doc/modules/cassandra/examples/BASH/docker_run.sh b/doc/modules/cassandra/examples/BASH/docker_run.sh
new file mode 100644
index 0000000..bb4ecdb
--- /dev/null
+++ b/doc/modules/cassandra/examples/BASH/docker_run.sh
@@ -0,0 +1 @@
+docker run --name cass_cluster cassandra:{cass-docker-tag-3x}
diff --git a/doc/modules/cassandra/examples/BASH/docker_run_qs.sh b/doc/modules/cassandra/examples/BASH/docker_run_qs.sh
new file mode 100644
index 0000000..7416f5d
--- /dev/null
+++ b/doc/modules/cassandra/examples/BASH/docker_run_qs.sh
@@ -0,0 +1,3 @@
+docker run --rm -it -v /<currentdir>/scripts:/scripts  \
+-v /<currentdir>/cqlshrc:/.cassandra/cqlshrc  \
+--env CQLSH_HOST=host.docker.internal --env CQLSH_PORT=9042  nuvo/docker-cqlsh
diff --git a/doc/modules/cassandra/examples/BASH/find_backups.sh b/doc/modules/cassandra/examples/BASH/find_backups.sh
new file mode 100644
index 0000000..56744bb
--- /dev/null
+++ b/doc/modules/cassandra/examples/BASH/find_backups.sh
@@ -0,0 +1 @@
+$ find -name backups
diff --git a/doc/modules/cassandra/examples/BASH/find_snapshots.sh b/doc/modules/cassandra/examples/BASH/find_snapshots.sh
new file mode 100644
index 0000000..7abae2b4
--- /dev/null
+++ b/doc/modules/cassandra/examples/BASH/find_snapshots.sh
@@ -0,0 +1 @@
+$ find -name snapshots
diff --git a/doc/modules/cassandra/examples/BASH/find_sstables.sh b/doc/modules/cassandra/examples/BASH/find_sstables.sh
new file mode 100644
index 0000000..5156903
--- /dev/null
+++ b/doc/modules/cassandra/examples/BASH/find_sstables.sh
@@ -0,0 +1 @@
+find /var/lib/cassandra/data/ -type f | grep -v -- -ib- | grep -v "/snapshots"
diff --git a/doc/modules/cassandra/examples/BASH/find_two_snapshots.sh b/doc/modules/cassandra/examples/BASH/find_two_snapshots.sh
new file mode 100644
index 0000000..6e97b4b
--- /dev/null
+++ b/doc/modules/cassandra/examples/BASH/find_two_snapshots.sh
@@ -0,0 +1 @@
+$ cd ./cassandra/data/data/catalogkeyspace/journal-296a2d30c22a11e9b1350d927649052c/snapshots && ls -l
diff --git a/doc/modules/cassandra/examples/BASH/flush_and_check.sh b/doc/modules/cassandra/examples/BASH/flush_and_check.sh
new file mode 100644
index 0000000..5f966e3
--- /dev/null
+++ b/doc/modules/cassandra/examples/BASH/flush_and_check.sh
@@ -0,0 +1,2 @@
+$ nodetool flush cqlkeyspace t
+$ cd ./cassandra/data/data/cqlkeyspace/t-d132e240c21711e9bbee19821dcea330/backups && ls -l
diff --git a/doc/modules/cassandra/examples/BASH/get_deb_package.sh b/doc/modules/cassandra/examples/BASH/get_deb_package.sh
new file mode 100644
index 0000000..f52e72c
--- /dev/null
+++ b/doc/modules/cassandra/examples/BASH/get_deb_package.sh
@@ -0,0 +1,2 @@
+$ echo "deb http://www.apache.org/dist/cassandra/debian 311x main" | sudo tee -a /etc/apt/sources.list.d/cassandra.sources.list
+deb http://www.apache.org/dist/cassandra/debian 311x main
diff --git a/doc/modules/cassandra/examples/BASH/java_verify.sh b/doc/modules/cassandra/examples/BASH/java_verify.sh
new file mode 100644
index 0000000..da7832f
--- /dev/null
+++ b/doc/modules/cassandra/examples/BASH/java_verify.sh
@@ -0,0 +1 @@
+$ java -version
diff --git a/doc/modules/cassandra/examples/BASH/nodetool_clearsnapshot.sh b/doc/modules/cassandra/examples/BASH/nodetool_clearsnapshot.sh
new file mode 100644
index 0000000..a327ad1
--- /dev/null
+++ b/doc/modules/cassandra/examples/BASH/nodetool_clearsnapshot.sh
@@ -0,0 +1 @@
+$ nodetool clearsnapshot -t magazine cqlkeyspace
diff --git a/doc/modules/cassandra/examples/BASH/nodetool_clearsnapshot_all.sh b/doc/modules/cassandra/examples/BASH/nodetool_clearsnapshot_all.sh
new file mode 100644
index 0000000..a22841d
--- /dev/null
+++ b/doc/modules/cassandra/examples/BASH/nodetool_clearsnapshot_all.sh
@@ -0,0 +1 @@
+$ nodetool clearsnapshot --all cqlkeyspace
diff --git a/doc/modules/cassandra/examples/BASH/nodetool_flush.sh b/doc/modules/cassandra/examples/BASH/nodetool_flush.sh
new file mode 100644
index 0000000..960b852
--- /dev/null
+++ b/doc/modules/cassandra/examples/BASH/nodetool_flush.sh
@@ -0,0 +1,3 @@
+$ nodetool flush cqlkeyspace t
+$ nodetool flush cqlkeyspace t2
+$ nodetool flush catalogkeyspace journal magazine
diff --git a/doc/modules/cassandra/examples/BASH/nodetool_flush_table.sh b/doc/modules/cassandra/examples/BASH/nodetool_flush_table.sh
new file mode 100644
index 0000000..2c236de
--- /dev/null
+++ b/doc/modules/cassandra/examples/BASH/nodetool_flush_table.sh
@@ -0,0 +1 @@
+$ nodetool flush cqlkeyspace t
diff --git a/doc/modules/cassandra/examples/BASH/nodetool_list_snapshots.sh b/doc/modules/cassandra/examples/BASH/nodetool_list_snapshots.sh
new file mode 100644
index 0000000..76633f0
--- /dev/null
+++ b/doc/modules/cassandra/examples/BASH/nodetool_list_snapshots.sh
@@ -0,0 +1 @@
+$ nodetool listsnapshots
diff --git a/doc/modules/cassandra/examples/BASH/nodetool_snapshot.sh b/doc/modules/cassandra/examples/BASH/nodetool_snapshot.sh
new file mode 100644
index 0000000..c74e467
--- /dev/null
+++ b/doc/modules/cassandra/examples/BASH/nodetool_snapshot.sh
@@ -0,0 +1 @@
+$ nodetool help snapshot
diff --git a/doc/modules/cassandra/examples/BASH/nodetool_status.sh b/doc/modules/cassandra/examples/BASH/nodetool_status.sh
new file mode 100644
index 0000000..a9b768d
--- /dev/null
+++ b/doc/modules/cassandra/examples/BASH/nodetool_status.sh
@@ -0,0 +1 @@
+$ bin/nodetool status
diff --git a/doc/modules/cassandra/examples/BASH/nodetool_status_nobin.sh b/doc/modules/cassandra/examples/BASH/nodetool_status_nobin.sh
new file mode 100644
index 0000000..d7adbd3
--- /dev/null
+++ b/doc/modules/cassandra/examples/BASH/nodetool_status_nobin.sh
@@ -0,0 +1 @@
+$ nodetool status
diff --git a/doc/modules/cassandra/examples/BASH/run_cqlsh.sh b/doc/modules/cassandra/examples/BASH/run_cqlsh.sh
new file mode 100644
index 0000000..ae8cbbd
--- /dev/null
+++ b/doc/modules/cassandra/examples/BASH/run_cqlsh.sh
@@ -0,0 +1 @@
+$ bin/cqlsh
diff --git a/doc/modules/cassandra/examples/BASH/run_cqlsh_nobin.sh b/doc/modules/cassandra/examples/BASH/run_cqlsh_nobin.sh
new file mode 100644
index 0000000..5517fbf
--- /dev/null
+++ b/doc/modules/cassandra/examples/BASH/run_cqlsh_nobin.sh
@@ -0,0 +1 @@
+$ cqlsh
diff --git a/doc/modules/cassandra/examples/BASH/snapshot_backup2.sh b/doc/modules/cassandra/examples/BASH/snapshot_backup2.sh
new file mode 100644
index 0000000..6d29f0a
--- /dev/null
+++ b/doc/modules/cassandra/examples/BASH/snapshot_backup2.sh
@@ -0,0 +1 @@
+$ nodetool snapshot --tag catalog-ks catalogkeyspace
diff --git a/doc/modules/cassandra/examples/BASH/snapshot_both_backups.sh b/doc/modules/cassandra/examples/BASH/snapshot_both_backups.sh
new file mode 100644
index 0000000..0966070
--- /dev/null
+++ b/doc/modules/cassandra/examples/BASH/snapshot_both_backups.sh
@@ -0,0 +1 @@
+$ nodetool snapshot --tag catalog-cql-ks catalogkeyspace cqlkeyspace
diff --git a/doc/modules/cassandra/examples/BASH/snapshot_files.sh b/doc/modules/cassandra/examples/BASH/snapshot_files.sh
new file mode 100644
index 0000000..916f0e5
--- /dev/null
+++ b/doc/modules/cassandra/examples/BASH/snapshot_files.sh
@@ -0,0 +1 @@
+$ cd catalog-ks && ls -l
diff --git a/doc/modules/cassandra/examples/BASH/snapshot_mult_ks.sh b/doc/modules/cassandra/examples/BASH/snapshot_mult_ks.sh
new file mode 100644
index 0000000..fed3d3c
--- /dev/null
+++ b/doc/modules/cassandra/examples/BASH/snapshot_mult_ks.sh
@@ -0,0 +1 @@
+$ nodetool snapshot --kt-list catalogkeyspace.journal,cqlkeyspace.t --tag multi-ks
diff --git a/doc/modules/cassandra/examples/BASH/snapshot_mult_tables.sh b/doc/modules/cassandra/examples/BASH/snapshot_mult_tables.sh
new file mode 100644
index 0000000..ad3a0d2
--- /dev/null
+++ b/doc/modules/cassandra/examples/BASH/snapshot_mult_tables.sh
@@ -0,0 +1 @@
+$ nodetool snapshot --kt-list cqlkeyspace.t,cqlkeyspace.t2 --tag multi-table
diff --git a/doc/modules/cassandra/examples/BASH/snapshot_mult_tables_again.sh b/doc/modules/cassandra/examples/BASH/snapshot_mult_tables_again.sh
new file mode 100644
index 0000000..f676f5b
--- /dev/null
+++ b/doc/modules/cassandra/examples/BASH/snapshot_mult_tables_again.sh
@@ -0,0 +1 @@
+$ nodetool snapshot --kt-list cqlkeyspace.t, cqlkeyspace.t2 --tag multi-table-2
diff --git a/doc/modules/cassandra/examples/BASH/snapshot_one_table.sh b/doc/modules/cassandra/examples/BASH/snapshot_one_table.sh
new file mode 100644
index 0000000..05484a9
--- /dev/null
+++ b/doc/modules/cassandra/examples/BASH/snapshot_one_table.sh
@@ -0,0 +1 @@
+$ nodetool snapshot --tag <tag> --table <table> -- <keyspace>
diff --git a/doc/modules/cassandra/examples/BASH/snapshot_one_table2.sh b/doc/modules/cassandra/examples/BASH/snapshot_one_table2.sh
new file mode 100644
index 0000000..7387710
--- /dev/null
+++ b/doc/modules/cassandra/examples/BASH/snapshot_one_table2.sh
@@ -0,0 +1 @@
+$ nodetool snapshot --tag magazine --table magazine  catalogkeyspace
diff --git a/doc/modules/cassandra/examples/BASH/start_tarball.sh b/doc/modules/cassandra/examples/BASH/start_tarball.sh
new file mode 100644
index 0000000..6331270
--- /dev/null
+++ b/doc/modules/cassandra/examples/BASH/start_tarball.sh
@@ -0,0 +1 @@
+$ cd apache-cassandra-{cass-tag-3x}/ && bin/cassandra
diff --git a/doc/modules/cassandra/examples/BASH/tail_syslog.sh b/doc/modules/cassandra/examples/BASH/tail_syslog.sh
new file mode 100644
index 0000000..b475750
--- /dev/null
+++ b/doc/modules/cassandra/examples/BASH/tail_syslog.sh
@@ -0,0 +1 @@
+$ tail -f logs/system.log
diff --git a/doc/modules/cassandra/examples/BASH/tail_syslog_package.sh b/doc/modules/cassandra/examples/BASH/tail_syslog_package.sh
new file mode 100644
index 0000000..c9f00ed
--- /dev/null
+++ b/doc/modules/cassandra/examples/BASH/tail_syslog_package.sh
@@ -0,0 +1 @@
+$ tail -f /var/log/cassandra/system.log
diff --git a/doc/modules/cassandra/examples/BASH/tarball.sh b/doc/modules/cassandra/examples/BASH/tarball.sh
new file mode 100644
index 0000000..0ef448a
--- /dev/null
+++ b/doc/modules/cassandra/examples/BASH/tarball.sh
@@ -0,0 +1 @@
+$ tar xzvf apache-cassandra-{cass-tag-3x}-bin.tar.gz
diff --git a/doc/modules/cassandra/examples/BASH/verify_gpg.sh b/doc/modules/cassandra/examples/BASH/verify_gpg.sh
new file mode 100644
index 0000000..9a503da
--- /dev/null
+++ b/doc/modules/cassandra/examples/BASH/verify_gpg.sh
@@ -0,0 +1 @@
+$ gpg --print-md SHA256 apache-cassandra-{cass-tag-3x}-bin.tar.gz
diff --git a/doc/modules/cassandra/examples/BASH/yum_cass.sh b/doc/modules/cassandra/examples/BASH/yum_cass.sh
new file mode 100644
index 0000000..cd8217b
--- /dev/null
+++ b/doc/modules/cassandra/examples/BASH/yum_cass.sh
@@ -0,0 +1 @@
+$ sudo yum install cassandra
diff --git a/doc/modules/cassandra/examples/BASH/yum_start.sh b/doc/modules/cassandra/examples/BASH/yum_start.sh
new file mode 100644
index 0000000..4930d1a
--- /dev/null
+++ b/doc/modules/cassandra/examples/BASH/yum_start.sh
@@ -0,0 +1 @@
+$ sudo service cassandra start
diff --git a/doc/modules/cassandra/examples/BASH/yum_update.sh b/doc/modules/cassandra/examples/BASH/yum_update.sh
new file mode 100644
index 0000000..2e815b2
--- /dev/null
+++ b/doc/modules/cassandra/examples/BASH/yum_update.sh
@@ -0,0 +1 @@
+$ sudo yum update
diff --git a/doc/modules/cassandra/examples/BNF/aggregate_name.bnf b/doc/modules/cassandra/examples/BNF/aggregate_name.bnf
new file mode 100644
index 0000000..a7ccdc3
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/aggregate_name.bnf
@@ -0,0 +1 @@
+aggregate_name::= [keyspace_name '.' ] name
\ No newline at end of file
diff --git a/doc/modules/cassandra/examples/BNF/alter_ks.bnf b/doc/modules/cassandra/examples/BNF/alter_ks.bnf
new file mode 100644
index 0000000..5f82d34
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/alter_ks.bnf
@@ -0,0 +1,2 @@
+alter_keyspace_statement::= ALTER KEYSPACE keyspace_name
+	WITH options
diff --git a/doc/modules/cassandra/examples/BNF/alter_mv_statement.bnf b/doc/modules/cassandra/examples/BNF/alter_mv_statement.bnf
new file mode 100644
index 0000000..ff97edb
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/alter_mv_statement.bnf
@@ -0,0 +1 @@
+alter_materialized_view_statement::= ALTER MATERIALIZED VIEW view_name WITH table_options
diff --git a/doc/modules/cassandra/examples/BNF/alter_role_statement.bnf b/doc/modules/cassandra/examples/BNF/alter_role_statement.bnf
new file mode 100644
index 0000000..36958d7
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/alter_role_statement.bnf
@@ -0,0 +1 @@
+alter_role_statement ::= ALTER ROLE role_name WITH role_options
diff --git a/doc/modules/cassandra/examples/BNF/alter_table.bnf b/doc/modules/cassandra/examples/BNF/alter_table.bnf
new file mode 100644
index 0000000..bf1b4b7
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/alter_table.bnf
@@ -0,0 +1,4 @@
+alter_table_statement::= ALTER TABLE table_name alter_table_instruction 
+alter_table_instruction::= ADD column_name cql_type ( ',' column_name cql_type )* 
+	| DROP column_name ( column_name )*  
+	| WITH options
diff --git a/doc/modules/cassandra/examples/BNF/alter_udt_statement.bnf b/doc/modules/cassandra/examples/BNF/alter_udt_statement.bnf
new file mode 100644
index 0000000..4f409e6
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/alter_udt_statement.bnf
@@ -0,0 +1,3 @@
+alter_type_statement::= ALTER TYPE udt_name alter_type_modification
+alter_type_modification::= ADD field_definition
+        | RENAME identifier TO identifier( identifier TO identifier )*
diff --git a/doc/modules/cassandra/examples/BNF/alter_user_statement.bnf b/doc/modules/cassandra/examples/BNF/alter_user_statement.bnf
new file mode 100644
index 0000000..129607c
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/alter_user_statement.bnf
@@ -0,0 +1 @@
+alter_user_statement ::= ALTER USER role_name [ WITH PASSWORD string ] [ user_option ]
diff --git a/doc/modules/cassandra/examples/BNF/batch_statement.bnf b/doc/modules/cassandra/examples/BNF/batch_statement.bnf
new file mode 100644
index 0000000..2cc2559
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/batch_statement.bnf
@@ -0,0 +1,5 @@
+batch_statement ::=     BEGIN [ UNLOGGED | COUNTER ] BATCH
+                        [ USING update_parameter ( AND update_parameter )* ]
+                        modification_statement ( ';' modification_statement )*
+                        APPLY BATCH
+modification_statement ::= insert_statement | update_statement | delete_statement
diff --git a/doc/modules/cassandra/examples/BNF/collection_literal.bnf b/doc/modules/cassandra/examples/BNF/collection_literal.bnf
new file mode 100644
index 0000000..83a46a2
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/collection_literal.bnf
@@ -0,0 +1,4 @@
+collection_literal::= map_literal | set_literal | list_literal 
+map_literal::= '\{' [ term ':' term (',' term ':' term)* ] '}' 
+set_literal::= '\{' [ term (',' term)* ] '}' 
+list_literal::= '[' [ term (',' term)* ] ']'
diff --git a/doc/modules/cassandra/examples/BNF/collection_type.bnf b/doc/modules/cassandra/examples/BNF/collection_type.bnf
new file mode 100644
index 0000000..37e6cd1
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/collection_type.bnf
@@ -0,0 +1,3 @@
+collection_type::= MAP '<' cql_type ',' cql_type '>' 
+	| SET '<' cql_type '>' 
+	| LIST '<' cql_type '>'
diff --git a/doc/modules/cassandra/examples/BNF/column.bnf b/doc/modules/cassandra/examples/BNF/column.bnf
new file mode 100644
index 0000000..136a45c
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/column.bnf
@@ -0,0 +1 @@
+column_name::= identifier
diff --git a/doc/modules/cassandra/examples/BNF/constant.bnf b/doc/modules/cassandra/examples/BNF/constant.bnf
new file mode 100644
index 0000000..4a2953a
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/constant.bnf
@@ -0,0 +1,8 @@
+constant::= string | integer | float | boolean | uuid | blob | NULL
+string::= ''' (any character where ' can appear if doubled)+ ''' | '$$' (any character other than '$$') '$$'
+integer::= re('-?[0-9]+')
+float::= re('-?[0-9]+(.[0-9]*)?([eE][+-]?[0-9+])?') | NAN | INFINITY
+boolean::= TRUE | FALSE
+uuid::= hex\{8}-hex\{4}-hex\{4}-hex\{4}-hex\{12}
+hex::= re("[0-9a-fA-F]")
+blob::= '0' ('x' | 'X') hex+
diff --git a/doc/modules/cassandra/examples/BNF/cql_statement.bnf b/doc/modules/cassandra/examples/BNF/cql_statement.bnf
new file mode 100644
index 0000000..8d4ae21
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/cql_statement.bnf
@@ -0,0 +1,48 @@
+cql_statement::= statement [ ';' ]
+statement::= ddl_statement
+        | dml_statement
+        | secondary_index_statement
+        | materialized_view_statement
+        | role_or_permission_statement
+        | udf_statement
+        | udt_statement
+        | trigger_statement
+ddl_statement::= use_statement
+        | create_keyspace_statement
+        | alter_keyspace_statement
+        | drop_keyspace_statement
+        | create_table_statement
+        | alter_table_statement
+        | drop_table_statement
+        | truncate_statement
+dml_statement::= select_statement
+        | insert_statement
+        | update_statement
+        | delete_statement
+        | batch_statement
+secondary_index_statement::= create_index_statement
+        | drop_index_statement
+materialized_view_statement::= create_materialized_view_statement
+        | drop_materialized_view_statement
+role_or_permission_statement::= create_role_statement
+        | alter_role_statement
+        | drop_role_statement
+        | grant_role_statement
+        | revoke_role_statement
+        | list_roles_statement
+        | grant_permission_statement
+        | revoke_permission_statement
+        | list_permissions_statement
+        | create_user_statement
+        | alter_user_statement
+        | drop_user_statement
+        | list_users_statement
+udf_statement::= create_function_statement
+        | drop_function_statement
+        | create_aggregate_statement
+        | drop_aggregate_statement
+udt_statement::= create_type_statement
+        | alter_type_statement
+        | drop_type_statement
+trigger_statement::= create_trigger_statement
+        | drop_trigger_statement
diff --git a/doc/modules/cassandra/examples/BNF/cql_type.bnf b/doc/modules/cassandra/examples/BNF/cql_type.bnf
new file mode 100644
index 0000000..4e2e5d1
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/cql_type.bnf
@@ -0,0 +1 @@
+cql_type::= native_type | collection_type | user_defined_type | tuple_type | custom_type
diff --git a/doc/modules/cassandra/examples/BNF/create_aggregate_statement.bnf b/doc/modules/cassandra/examples/BNF/create_aggregate_statement.bnf
new file mode 100644
index 0000000..c0126a2
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/create_aggregate_statement.bnf
@@ -0,0 +1,6 @@
+create_aggregate_statement ::= CREATE [ OR REPLACE ] AGGREGATE [ IF NOT EXISTS ]
+                                function_name '(' arguments_signature ')'
+                                SFUNC function_name
+                                STYPE cql_type
+                                [ FINALFUNC function_name ]
+                                [ INITCOND term ]
diff --git a/doc/modules/cassandra/examples/BNF/create_function_statement.bnf b/doc/modules/cassandra/examples/BNF/create_function_statement.bnf
new file mode 100644
index 0000000..0da769a
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/create_function_statement.bnf
@@ -0,0 +1,7 @@
+create_function_statement::= CREATE [ OR REPLACE ] FUNCTION [ IF NOT EXISTS ]
+	function_name '(' arguments_declaration ')'
+	[ CALLED | RETURNS NULL ] ON NULL INPUT
+	RETURNS cql_type
+	LANGUAGE identifier
+	AS string
+arguments_declaration::= identifier cql_type ( ',' identifier cql_type )*
diff --git a/doc/modules/cassandra/examples/BNF/create_index_statement.bnf b/doc/modules/cassandra/examples/BNF/create_index_statement.bnf
new file mode 100644
index 0000000..6e76947
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/create_index_statement.bnf
@@ -0,0 +1,5 @@
+create_index_statement::= CREATE [ CUSTOM ] INDEX [ IF NOT EXISTS ] [ index_name ] 
+	ON table_name '(' index_identifier ')' 
+	[ USING string [ WITH OPTIONS = map_literal ] ] 
+index_identifier::= column_name 
+	| ( KEYS | VALUES | ENTRIES | FULL ) '(' column_name ')'
diff --git a/doc/modules/cassandra/examples/BNF/create_ks.bnf b/doc/modules/cassandra/examples/BNF/create_ks.bnf
new file mode 100644
index 0000000..ba3e240
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/create_ks.bnf
@@ -0,0 +1,2 @@
+create_keyspace_statement::= CREATE KEYSPACE [ IF NOT EXISTS ] keyspace_name 
+	WITH options
diff --git a/doc/modules/cassandra/examples/BNF/create_mv_statement.bnf b/doc/modules/cassandra/examples/BNF/create_mv_statement.bnf
new file mode 100644
index 0000000..9bdb60d
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/create_mv_statement.bnf
@@ -0,0 +1,4 @@
+create_materialized_view_statement::= CREATE MATERIALIZED VIEW [ IF NOT EXISTS ] view_name
+	AS select_statement
+	PRIMARY KEY '(' primary_key ')' 
+	WITH table_options
diff --git a/doc/modules/cassandra/examples/BNF/create_role_statement.bnf b/doc/modules/cassandra/examples/BNF/create_role_statement.bnf
new file mode 100644
index 0000000..bc93fbc
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/create_role_statement.bnf
@@ -0,0 +1,9 @@
+create_role_statement ::= CREATE ROLE [ IF NOT EXISTS ] role_name
+                          [ WITH role_options ]
+role_options ::= role_option ( AND role_option)*
+role_option ::= PASSWORD '=' string
+                | LOGIN '=' boolean
+                | SUPERUSER '=' boolean
+                | OPTIONS '=' map_literal
+                | ACCESS TO DATACENTERS set_literal
+                | ACCESS TO ALL DATACENTERS
diff --git a/doc/modules/cassandra/examples/BNF/create_table.bnf b/doc/modules/cassandra/examples/BNF/create_table.bnf
new file mode 100644
index 0000000..840573c
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/create_table.bnf
@@ -0,0 +1,12 @@
+create_table_statement::= CREATE TABLE [ IF NOT EXISTS ] table_name '(' 
+	column_definition  ( ',' column_definition )*  
+	[ ',' PRIMARY KEY '(' primary_key ')' ] 
+	 ')' [ WITH table_options ] 
+column_definition::= column_name cql_type [ STATIC ] [ PRIMARY KEY] 
+primary_key::= partition_key [ ',' clustering_columns ] 
+partition_key::= column_name  | '(' column_name ( ',' column_name )* ')' 
+clustering_columns::= column_name ( ',' column_name )* 
+table_options::= COMPACT STORAGE [ AND table_options ]  
+	| CLUSTERING ORDER BY '(' clustering_order ')' 
+	[ AND table_options ]  | options
+clustering_order::= column_name (ASC | DESC) ( ',' column_name (ASC | DESC) )*
diff --git a/doc/modules/cassandra/examples/BNF/create_trigger_statement.bnf b/doc/modules/cassandra/examples/BNF/create_trigger_statement.bnf
new file mode 100644
index 0000000..f7442da
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/create_trigger_statement.bnf
@@ -0,0 +1,3 @@
+create_trigger_statement ::= CREATE TRIGGER [ IF NOT EXISTS ] trigger_name
+	ON table_name
+	USING string
diff --git a/doc/modules/cassandra/examples/BNF/create_type.bnf b/doc/modules/cassandra/examples/BNF/create_type.bnf
new file mode 100644
index 0000000..aebe9eb
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/create_type.bnf
@@ -0,0 +1,3 @@
+create_type_statement::= CREATE TYPE [ IF NOT EXISTS ] udt_name
+        '(' field_definition ( ',' field_definition)* ')'
+field_definition::= identifier cql_type
diff --git a/doc/modules/cassandra/examples/BNF/create_user_statement.bnf b/doc/modules/cassandra/examples/BNF/create_user_statement.bnf
new file mode 100644
index 0000000..19f9903
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/create_user_statement.bnf
@@ -0,0 +1,4 @@
+create_user_statement ::= CREATE USER [ IF NOT EXISTS ] role_name
+                          [ WITH PASSWORD string ]
+                          [ user_option ]
+user_option::= SUPERUSER | NOSUPERUSER
diff --git a/doc/modules/cassandra/examples/BNF/custom_type.bnf b/doc/modules/cassandra/examples/BNF/custom_type.bnf
new file mode 100644
index 0000000..ce4890f
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/custom_type.bnf
@@ -0,0 +1 @@
+custom_type::= string
diff --git a/doc/modules/cassandra/examples/BNF/delete_statement.bnf b/doc/modules/cassandra/examples/BNF/delete_statement.bnf
new file mode 100644
index 0000000..5f456ba
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/delete_statement.bnf
@@ -0,0 +1,5 @@
+delete_statement::= DELETE [ simple_selection ( ',' simple_selection )* ] 
+	FROM table_name 
+	[ USING update_parameter ( AND update_parameter )* ] 
+	WHERE where_clause 
+	[ IF ( EXISTS | condition ( AND condition)*) ]
diff --git a/doc/modules/cassandra/examples/BNF/describe_aggregate_statement.bnf b/doc/modules/cassandra/examples/BNF/describe_aggregate_statement.bnf
new file mode 100644
index 0000000..b94526b
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/describe_aggregate_statement.bnf
@@ -0,0 +1 @@
+describe_aggregate_statement::= DESCRIBE AGGREGATE aggregate_name;
\ No newline at end of file
diff --git a/doc/modules/cassandra/examples/BNF/describe_aggregates_statement.bnf b/doc/modules/cassandra/examples/BNF/describe_aggregates_statement.bnf
new file mode 100644
index 0000000..049afef
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/describe_aggregates_statement.bnf
@@ -0,0 +1 @@
+describe_aggregates_statement::= DESCRIBE AGGREGATES;
\ No newline at end of file
diff --git a/doc/modules/cassandra/examples/BNF/describe_cluster_statement.bnf b/doc/modules/cassandra/examples/BNF/describe_cluster_statement.bnf
new file mode 100644
index 0000000..8f58ac8
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/describe_cluster_statement.bnf
@@ -0,0 +1 @@
+describe_cluster_statement::= DESCRIBE CLUSTER;
\ No newline at end of file
diff --git a/doc/modules/cassandra/examples/BNF/describe_function_statement.bnf b/doc/modules/cassandra/examples/BNF/describe_function_statement.bnf
new file mode 100644
index 0000000..9145e92
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/describe_function_statement.bnf
@@ -0,0 +1 @@
+describe_function_statement::= DESCRIBE FUNCTION function_name;
\ No newline at end of file
diff --git a/doc/modules/cassandra/examples/BNF/describe_functions_statement.bnf b/doc/modules/cassandra/examples/BNF/describe_functions_statement.bnf
new file mode 100644
index 0000000..4e3b822
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/describe_functions_statement.bnf
@@ -0,0 +1 @@
+describe_functions_statement::= DESCRIBE FUNCTIONS;
\ No newline at end of file
diff --git a/doc/modules/cassandra/examples/BNF/describe_index_statement.bnf b/doc/modules/cassandra/examples/BNF/describe_index_statement.bnf
new file mode 100644
index 0000000..907c175
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/describe_index_statement.bnf
@@ -0,0 +1 @@
+describe_index_statement::= DESCRIBE INDEX index;
\ No newline at end of file
diff --git a/doc/modules/cassandra/examples/BNF/describe_keyspace_statement.bnf b/doc/modules/cassandra/examples/BNF/describe_keyspace_statement.bnf
new file mode 100644
index 0000000..771e755
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/describe_keyspace_statement.bnf
@@ -0,0 +1 @@
+describe_keyspace_statement::= DESCRIBE [ONLY] KEYSPACE [keyspace_name] [WITH INTERNALS];
\ No newline at end of file
diff --git a/doc/modules/cassandra/examples/BNF/describe_keyspaces_statement.bnf b/doc/modules/cassandra/examples/BNF/describe_keyspaces_statement.bnf
new file mode 100644
index 0000000..51b3c26
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/describe_keyspaces_statement.bnf
@@ -0,0 +1 @@
+describe_keyspaces_statement::= DESCRIBE KEYSPACES;
\ No newline at end of file
diff --git a/doc/modules/cassandra/examples/BNF/describe_materialized_view_statement.bnf b/doc/modules/cassandra/examples/BNF/describe_materialized_view_statement.bnf
new file mode 100644
index 0000000..3297c0e
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/describe_materialized_view_statement.bnf
@@ -0,0 +1 @@
+describe_materialized_view_statement::= DESCRIBE MATERIALIZED VIEW materialized_view;
\ No newline at end of file
diff --git a/doc/modules/cassandra/examples/BNF/describe_object_statement.bnf b/doc/modules/cassandra/examples/BNF/describe_object_statement.bnf
new file mode 100644
index 0000000..d8addae
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/describe_object_statement.bnf
@@ -0,0 +1 @@
+describe_object_statement::= DESCRIBE object_name [WITH INTERNALS];
\ No newline at end of file
diff --git a/doc/modules/cassandra/examples/BNF/describe_schema_statement.bnf b/doc/modules/cassandra/examples/BNF/describe_schema_statement.bnf
new file mode 100644
index 0000000..7344081
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/describe_schema_statement.bnf
@@ -0,0 +1 @@
+describe_schema_statement::= DESCRIBE [FULL] SCHEMA [WITH INTERNALS];
\ No newline at end of file
diff --git a/doc/modules/cassandra/examples/BNF/describe_table_statement.bnf b/doc/modules/cassandra/examples/BNF/describe_table_statement.bnf
new file mode 100644
index 0000000..1a0cd73
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/describe_table_statement.bnf
@@ -0,0 +1 @@
+describe_table_statement::= DESCRIBE TABLE table_name [WITH INTERNALS];
\ No newline at end of file
diff --git a/doc/modules/cassandra/examples/BNF/describe_tables_statement.bnf b/doc/modules/cassandra/examples/BNF/describe_tables_statement.bnf
new file mode 100644
index 0000000..061452c
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/describe_tables_statement.bnf
@@ -0,0 +1 @@
+describe_tables_statement::= DESCRIBE TABLES;
\ No newline at end of file
diff --git a/doc/modules/cassandra/examples/BNF/describe_type_statement.bnf b/doc/modules/cassandra/examples/BNF/describe_type_statement.bnf
new file mode 100644
index 0000000..f592af4
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/describe_type_statement.bnf
@@ -0,0 +1 @@
+describe_type_statement::= DESCRIBE TYPE udt_name;
\ No newline at end of file
diff --git a/doc/modules/cassandra/examples/BNF/describe_types_statement.bnf b/doc/modules/cassandra/examples/BNF/describe_types_statement.bnf
new file mode 100644
index 0000000..73f2827
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/describe_types_statement.bnf
@@ -0,0 +1 @@
+describe_types_statement::= DESCRIBE TYPES;
\ No newline at end of file
diff --git a/doc/modules/cassandra/examples/BNF/drop_aggregate_statement.bnf b/doc/modules/cassandra/examples/BNF/drop_aggregate_statement.bnf
new file mode 100644
index 0000000..28e8a4f
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/drop_aggregate_statement.bnf
@@ -0,0 +1 @@
+drop_aggregate_statement::= DROP AGGREGATE [ IF EXISTS ] function_name [ '(' arguments_signature ')' ]
diff --git a/doc/modules/cassandra/examples/BNF/drop_function_statement.bnf b/doc/modules/cassandra/examples/BNF/drop_function_statement.bnf
new file mode 100644
index 0000000..2639bd0
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/drop_function_statement.bnf
@@ -0,0 +1,2 @@
+drop_function_statement::= DROP FUNCTION [ IF EXISTS ] function_name [ '(' arguments_signature ')' ] 
+arguments_signature::= cql_type ( ',' cql_type )*
diff --git a/doc/modules/cassandra/examples/BNF/drop_index_statement.bnf b/doc/modules/cassandra/examples/BNF/drop_index_statement.bnf
new file mode 100644
index 0000000..49f36d1
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/drop_index_statement.bnf
@@ -0,0 +1 @@
+drop_index_statement::= DROP INDEX [ IF EXISTS ] index_name
diff --git a/doc/modules/cassandra/examples/BNF/drop_ks.bnf b/doc/modules/cassandra/examples/BNF/drop_ks.bnf
new file mode 100644
index 0000000..4e21b7b
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/drop_ks.bnf
@@ -0,0 +1 @@
+drop_keyspace_statement::= DROP KEYSPACE [ IF EXISTS ] keyspace_name
diff --git a/doc/modules/cassandra/examples/BNF/drop_mv_statement.bnf b/doc/modules/cassandra/examples/BNF/drop_mv_statement.bnf
new file mode 100644
index 0000000..1a9d8dc
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/drop_mv_statement.bnf
@@ -0,0 +1 @@
+drop_materialized_view_statement::= DROP MATERIALIZED VIEW [ IF EXISTS ] view_name;
diff --git a/doc/modules/cassandra/examples/BNF/drop_role_statement.bnf b/doc/modules/cassandra/examples/BNF/drop_role_statement.bnf
new file mode 100644
index 0000000..15e1791
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/drop_role_statement.bnf
@@ -0,0 +1 @@
+drop_role_statement ::= DROP ROLE [ IF EXISTS ] role_name
diff --git a/doc/modules/cassandra/examples/BNF/drop_table.bnf b/doc/modules/cassandra/examples/BNF/drop_table.bnf
new file mode 100644
index 0000000..cabd17a
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/drop_table.bnf
@@ -0,0 +1 @@
+drop_table_statement::= DROP TABLE [ IF EXISTS ] table_name
diff --git a/doc/modules/cassandra/examples/BNF/drop_trigger_statement.bnf b/doc/modules/cassandra/examples/BNF/drop_trigger_statement.bnf
new file mode 100644
index 0000000..c1d3e59
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/drop_trigger_statement.bnf
@@ -0,0 +1 @@
+drop_trigger_statement ::= DROP TRIGGER [ IF EXISTS ] trigger_name ON table_name
diff --git a/doc/modules/cassandra/examples/BNF/drop_udt_statement.bnf b/doc/modules/cassandra/examples/BNF/drop_udt_statement.bnf
new file mode 100644
index 0000000..276b57c
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/drop_udt_statement.bnf
@@ -0,0 +1 @@
+drop_type_statement::= DROP TYPE [ IF EXISTS ] udt_name
diff --git a/doc/modules/cassandra/examples/BNF/drop_user_statement.bnf b/doc/modules/cassandra/examples/BNF/drop_user_statement.bnf
new file mode 100644
index 0000000..9b22608
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/drop_user_statement.bnf
@@ -0,0 +1 @@
+drop_user_statement ::= DROP USER [ IF EXISTS ] role_name
diff --git a/doc/modules/cassandra/examples/BNF/function.bnf b/doc/modules/cassandra/examples/BNF/function.bnf
new file mode 100644
index 0000000..7e05430
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/function.bnf
@@ -0,0 +1 @@
+function_name ::= [ keyspace_name '.' ] name
diff --git a/doc/modules/cassandra/examples/BNF/grant_permission_statement.bnf b/doc/modules/cassandra/examples/BNF/grant_permission_statement.bnf
new file mode 100644
index 0000000..40f1df3
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/grant_permission_statement.bnf
@@ -0,0 +1,12 @@
+grant_permission_statement ::= GRANT permissions ON resource TO role_name
+permissions ::= ALL [ PERMISSIONS ] | permission [ PERMISSION ]
+permission ::= CREATE | ALTER | DROP | SELECT | MODIFY | AUTHORIZE | DESCRIBE | EXECUTE
+resource ::=    ALL KEYSPACES
+                | KEYSPACE keyspace_name
+                | [ TABLE ] table_name
+                | ALL ROLES
+                | ROLE role_name
+                | ALL FUNCTIONS [ IN KEYSPACE keyspace_name ]
+                | FUNCTION function_name '(' [ cql_type ( ',' cql_type )* ] ')'
+                | ALL MBEANS
+                | ( MBEAN | MBEANS ) string
diff --git a/doc/modules/cassandra/examples/BNF/grant_role_statement.bnf b/doc/modules/cassandra/examples/BNF/grant_role_statement.bnf
new file mode 100644
index 0000000..d965cc2
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/grant_role_statement.bnf
@@ -0,0 +1 @@
+grant_role_statement ::= GRANT role_name TO role_name
diff --git a/doc/modules/cassandra/examples/BNF/identifier.bnf b/doc/modules/cassandra/examples/BNF/identifier.bnf
new file mode 100644
index 0000000..7bc3431
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/identifier.bnf
@@ -0,0 +1,3 @@
+identifier::= unquoted_identifier | quoted_identifier
+unquoted_identifier::= re('[a-zA-Z][a-zA-Z0-9_]*')
+quoted_identifier::= '"' (any character where " can appear if doubled)+ '"'
diff --git a/doc/modules/cassandra/examples/BNF/index.bnf b/doc/modules/cassandra/examples/BNF/index.bnf
new file mode 100644
index 0000000..7083501
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/index.bnf
@@ -0,0 +1 @@
+index::= [keyspace_name '.' ] index_name
\ No newline at end of file
diff --git a/doc/modules/cassandra/examples/BNF/index_name.bnf b/doc/modules/cassandra/examples/BNF/index_name.bnf
new file mode 100644
index 0000000..c322755
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/index_name.bnf
@@ -0,0 +1 @@
+index_name::= re('[a-zA-Z_0-9]+')
diff --git a/doc/modules/cassandra/examples/BNF/insert_statement.bnf b/doc/modules/cassandra/examples/BNF/insert_statement.bnf
new file mode 100644
index 0000000..ed80c3e
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/insert_statement.bnf
@@ -0,0 +1,6 @@
+insert_statement::= INSERT INTO table_name ( names_values | json_clause ) 
+	[ IF NOT EXISTS ] 
+	[ USING update_parameter ( AND update_parameter )* ] 
+names_values::= names VALUES tuple_literal 
+json_clause::= JSON string [ DEFAULT ( NULL | UNSET ) ] 
+names::= '(' column_name ( ',' column_name )* ')'
diff --git a/doc/modules/cassandra/examples/BNF/ks_table.bnf b/doc/modules/cassandra/examples/BNF/ks_table.bnf
new file mode 100644
index 0000000..20ee6da
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/ks_table.bnf
@@ -0,0 +1,5 @@
+keyspace_name::= name
+table_name::= [keyspace_name '.' ] name
+name::= unquoted_name | quoted_name
+unquoted_name::= re('[a-zA-Z_0-9]\{1,48}')
+quoted_name::= '"' unquoted_name '"'
diff --git a/doc/modules/cassandra/examples/BNF/list_permissions_statement.bnf b/doc/modules/cassandra/examples/BNF/list_permissions_statement.bnf
new file mode 100644
index 0000000..a11e2cc
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/list_permissions_statement.bnf
@@ -0,0 +1 @@
+list_permissions_statement ::= LIST permissions [ ON resource ] [ OF role_name [ NORECURSIVE ] ]
diff --git a/doc/modules/cassandra/examples/BNF/list_roles_statement.bnf b/doc/modules/cassandra/examples/BNF/list_roles_statement.bnf
new file mode 100644
index 0000000..bbe3d9b
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/list_roles_statement.bnf
@@ -0,0 +1 @@
+list_roles_statement ::= LIST ROLES [ OF role_name ] [ NORECURSIVE ]
diff --git a/doc/modules/cassandra/examples/BNF/list_users_statement.bnf b/doc/modules/cassandra/examples/BNF/list_users_statement.bnf
new file mode 100644
index 0000000..5750de6
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/list_users_statement.bnf
@@ -0,0 +1 @@
+list_users_statement::= LIST USERS
diff --git a/doc/modules/cassandra/examples/BNF/materialized_view.bnf b/doc/modules/cassandra/examples/BNF/materialized_view.bnf
new file mode 100644
index 0000000..48543a3
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/materialized_view.bnf
@@ -0,0 +1 @@
+materialized_view::= [keyspace_name '.' ] view_name
\ No newline at end of file
diff --git a/doc/modules/cassandra/examples/BNF/native_type.bnf b/doc/modules/cassandra/examples/BNF/native_type.bnf
new file mode 100644
index 0000000..c4e9c26
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/native_type.bnf
@@ -0,0 +1,4 @@
+native_type::= ASCII | BIGINT | BLOB | BOOLEAN | COUNTER | DATE
+	| DECIMAL | DOUBLE | DURATION | FLOAT | INET | INT
+	| SMALLINT | TEXT | TIME | TIMESTAMP | TIMEUUID | TINYINT
+	| UUID | VARCHAR | VARINT
diff --git a/doc/modules/cassandra/examples/BNF/options.bnf b/doc/modules/cassandra/examples/BNF/options.bnf
new file mode 100644
index 0000000..9887165
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/options.bnf
@@ -0,0 +1,4 @@
+options::= option ( AND option )* 
+option::= identifier '=' ( identifier 
+	| constant 
+	| map_literal )
diff --git a/doc/modules/cassandra/examples/BNF/revoke_permission_statement.bnf b/doc/modules/cassandra/examples/BNF/revoke_permission_statement.bnf
new file mode 100644
index 0000000..fd061f9
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/revoke_permission_statement.bnf
@@ -0,0 +1 @@
+revoke_permission_statement ::= REVOKE permissions ON resource FROM role_name
diff --git a/doc/modules/cassandra/examples/BNF/revoke_role_statement.bnf b/doc/modules/cassandra/examples/BNF/revoke_role_statement.bnf
new file mode 100644
index 0000000..c344eb0
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/revoke_role_statement.bnf
@@ -0,0 +1 @@
+revoke_role_statement ::= REVOKE role_name FROM role_name
diff --git a/doc/modules/cassandra/examples/BNF/role_name.bnf b/doc/modules/cassandra/examples/BNF/role_name.bnf
new file mode 100644
index 0000000..103f84b
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/role_name.bnf
@@ -0,0 +1 @@
+role_name ::= identifier | string
diff --git a/doc/modules/cassandra/examples/BNF/select_statement.bnf b/doc/modules/cassandra/examples/BNF/select_statement.bnf
new file mode 100644
index 0000000..f53da41
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/select_statement.bnf
@@ -0,0 +1,21 @@
+select_statement::= SELECT [ JSON | DISTINCT ] ( select_clause | '*' )
+	FROM table_name
+	[ WHERE where_clause ]
+	[ GROUP BY group_by_clause ]
+	[ ORDER BY ordering_clause ]
+	[ PER PARTITION LIMIT ( integer | bind_marker ) ]
+	[ LIMIT ( integer | bind_marker ) ]
+	[ ALLOW FILTERING ]
+select_clause::= selector [ AS identifier ] ( ',' selector [ AS identifier ] )*
+selector::= column_name
+	| term
+	| CAST '(' selector AS cql_type ')'
+	| function_name '(' [ selector ( ',' selector )* ] ')'
+	| COUNT '(' '*' ')'
+where_clause::= relation ( AND relation )*
+relation::= column_name operator term
+	| '(' column_name ( ',' column_name )* ')' operator tuple_literal
+	| TOKEN '(' column_name ( ',' column_name )* ')' operator term
+operator::= '=' | '<' | '>' | '<=' | '>=' | '!=' | IN | CONTAINS | CONTAINS KEY
+group_by_clause::= column_name ( ',' column_name )*
+ordering_clause::= column_name [ ASC | DESC ] ( ',' column_name [ ASC | DESC ] )*
diff --git a/doc/modules/cassandra/examples/BNF/term.bnf b/doc/modules/cassandra/examples/BNF/term.bnf
new file mode 100644
index 0000000..504c4c4
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/term.bnf
@@ -0,0 +1,6 @@
+term::= constant | literal | function_call | arithmetic_operation | type_hint | bind_marker
+literal::= collection_literal | udt_literal | tuple_literal
+function_call::= identifier '(' [ term (',' term)* ] ')'
+arithmetic_operation::= '-' term | term ('+' | '-' | '*' | '/' | '%') term
+type_hint::= '(' cql_type ')' term
+bind_marker::= '?' | ':' identifier
diff --git a/doc/modules/cassandra/examples/BNF/trigger_name.bnf b/doc/modules/cassandra/examples/BNF/trigger_name.bnf
new file mode 100644
index 0000000..18a4a7e
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/trigger_name.bnf
@@ -0,0 +1 @@
+trigger_name ::= identifier
diff --git a/doc/modules/cassandra/examples/BNF/truncate_table.bnf b/doc/modules/cassandra/examples/BNF/truncate_table.bnf
new file mode 100644
index 0000000..9c7d301
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/truncate_table.bnf
@@ -0,0 +1 @@
+truncate_statement::= TRUNCATE [ TABLE ] table_name
diff --git a/doc/modules/cassandra/examples/BNF/tuple.bnf b/doc/modules/cassandra/examples/BNF/tuple.bnf
new file mode 100644
index 0000000..f339d57
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/tuple.bnf
@@ -0,0 +1,2 @@
+tuple_type::= TUPLE '<' cql_type( ',' cql_type)* '>'
+tuple_literal::= '(' term( ',' term )* ')'
diff --git a/doc/modules/cassandra/examples/BNF/udt.bnf b/doc/modules/cassandra/examples/BNF/udt.bnf
new file mode 100644
index 0000000..c06a5f6
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/udt.bnf
@@ -0,0 +1,2 @@
+user_defined_type::= udt_name
+udt_name::= [ keyspace_name '.' ] identifier
diff --git a/doc/modules/cassandra/examples/BNF/udt_literal.bnf b/doc/modules/cassandra/examples/BNF/udt_literal.bnf
new file mode 100644
index 0000000..8c996e5
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/udt_literal.bnf
@@ -0,0 +1 @@
+udt_literal::= '{' identifier ':' term ( ',' identifier ':' term)* '}'
diff --git a/doc/modules/cassandra/examples/BNF/update_statement.bnf b/doc/modules/cassandra/examples/BNF/update_statement.bnf
new file mode 100644
index 0000000..1a9bdb4
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/update_statement.bnf
@@ -0,0 +1,13 @@
+update_statement ::=    UPDATE table_name
+                        [ USING update_parameter ( AND update_parameter )* ]
+                        SET assignment ( ',' assignment )*
+                        WHERE where_clause
+                        [ IF ( EXISTS | condition ( AND condition )* ) ]
+update_parameter ::= ( TIMESTAMP | TTL ) ( integer | bind_marker )
+assignment ::= simple_selection '=' term
+                | column_name '=' column_name ( '+' | '-' ) term
+                | column_name '=' list_literal '+' column_name
+simple_selection ::= column_name
+                        | column_name '[' term ']'
+                        | column_name '.' field_name
+condition ::= simple_selection operator term
diff --git a/doc/modules/cassandra/examples/BNF/use_ks.bnf b/doc/modules/cassandra/examples/BNF/use_ks.bnf
new file mode 100644
index 0000000..0347e52
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/use_ks.bnf
@@ -0,0 +1 @@
+use_statement::= USE keyspace_name
diff --git a/doc/modules/cassandra/examples/BNF/view_name.bnf b/doc/modules/cassandra/examples/BNF/view_name.bnf
new file mode 100644
index 0000000..6925367
--- /dev/null
+++ b/doc/modules/cassandra/examples/BNF/view_name.bnf
@@ -0,0 +1 @@
+view_name::= re('[a-zA-Z_0-9]+')
diff --git a/doc/modules/cassandra/examples/CQL/allow_filtering.cql b/doc/modules/cassandra/examples/CQL/allow_filtering.cql
new file mode 100644
index 0000000..c3bf3c6
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/allow_filtering.cql
@@ -0,0 +1,9 @@
+CREATE TABLE users (
+    username text PRIMARY KEY,
+    firstname text,
+    lastname text,
+    birth_year int,
+    country text
+);
+
+CREATE INDEX ON users(birth_year);
diff --git a/doc/modules/cassandra/examples/CQL/alter_ks.cql b/doc/modules/cassandra/examples/CQL/alter_ks.cql
new file mode 100644
index 0000000..319ed24
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/alter_ks.cql
@@ -0,0 +1,2 @@
+ALTER KEYSPACE excelsior
+    WITH replication = {'class': 'SimpleStrategy', 'replication_factor' : 4};
diff --git a/doc/modules/cassandra/examples/CQL/alter_role.cql b/doc/modules/cassandra/examples/CQL/alter_role.cql
new file mode 100644
index 0000000..c5f7d3d
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/alter_role.cql
@@ -0,0 +1 @@
+ALTER ROLE bob WITH PASSWORD = 'PASSWORD_B' AND SUPERUSER = false;
diff --git a/doc/modules/cassandra/examples/CQL/alter_table_add_column.cql b/doc/modules/cassandra/examples/CQL/alter_table_add_column.cql
new file mode 100644
index 0000000..e7703ed
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/alter_table_add_column.cql
@@ -0,0 +1 @@
+ALTER TABLE addamsFamily ADD gravesite varchar;
diff --git a/doc/modules/cassandra/examples/CQL/alter_table_spec_retry.cql b/doc/modules/cassandra/examples/CQL/alter_table_spec_retry.cql
new file mode 100644
index 0000000..bb9aa61
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/alter_table_spec_retry.cql
@@ -0,0 +1 @@
+ALTER TABLE users WITH speculative_retry = '10ms';
diff --git a/doc/modules/cassandra/examples/CQL/alter_table_spec_retry_percent.cql b/doc/modules/cassandra/examples/CQL/alter_table_spec_retry_percent.cql
new file mode 100644
index 0000000..a5351c6
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/alter_table_spec_retry_percent.cql
@@ -0,0 +1 @@
+ALTER TABLE users WITH speculative_retry = '99PERCENTILE';
diff --git a/doc/modules/cassandra/examples/CQL/alter_table_with_comment.cql b/doc/modules/cassandra/examples/CQL/alter_table_with_comment.cql
new file mode 100644
index 0000000..9b82d72
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/alter_table_with_comment.cql
@@ -0,0 +1,2 @@
+ALTER TABLE addamsFamily
+   WITH comment = 'A most excellent and useful table';
diff --git a/doc/modules/cassandra/examples/CQL/alter_user.cql b/doc/modules/cassandra/examples/CQL/alter_user.cql
new file mode 100644
index 0000000..97de7ba
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/alter_user.cql
@@ -0,0 +1,2 @@
+ALTER USER alice WITH PASSWORD 'PASSWORD_A';
+ALTER USER bob SUPERUSER;
diff --git a/doc/modules/cassandra/examples/CQL/as.cql b/doc/modules/cassandra/examples/CQL/as.cql
new file mode 100644
index 0000000..a8b9f03
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/as.cql
@@ -0,0 +1,13 @@
+// Without alias
+SELECT intAsBlob(4) FROM t;
+
+//  intAsBlob(4)
+// --------------
+//  0x00000004
+
+// With alias
+SELECT intAsBlob(4) AS four FROM t;
+
+//  four
+// ------------
+//  0x00000004
diff --git a/doc/modules/cassandra/examples/CQL/autoexpand_exclude_dc.cql b/doc/modules/cassandra/examples/CQL/autoexpand_exclude_dc.cql
new file mode 100644
index 0000000..c320c52
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/autoexpand_exclude_dc.cql
@@ -0,0 +1,4 @@
+CREATE KEYSPACE excalibur
+   WITH replication = {'class': 'NetworkTopologyStrategy', 'replication_factor' : 3, 'DC2': 0};
+
+DESCRIBE KEYSPACE excalibur;
diff --git a/doc/modules/cassandra/examples/CQL/autoexpand_ks.cql b/doc/modules/cassandra/examples/CQL/autoexpand_ks.cql
new file mode 100644
index 0000000..d5bef55
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/autoexpand_ks.cql
@@ -0,0 +1,4 @@
+CREATE KEYSPACE excalibur
+    WITH replication = {'class': 'NetworkTopologyStrategy', 'replication_factor' : 3};
+
+DESCRIBE KEYSPACE excalibur;
diff --git a/doc/modules/cassandra/examples/CQL/autoexpand_ks_override.cql b/doc/modules/cassandra/examples/CQL/autoexpand_ks_override.cql
new file mode 100644
index 0000000..d6800fb
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/autoexpand_ks_override.cql
@@ -0,0 +1,4 @@
+CREATE KEYSPACE excalibur
+   WITH replication = {'class': 'NetworkTopologyStrategy', 'replication_factor' : 3, 'DC2': 2};
+
+DESCRIBE KEYSPACE excalibur;
diff --git a/doc/modules/cassandra/examples/CQL/avg.cql b/doc/modules/cassandra/examples/CQL/avg.cql
new file mode 100644
index 0000000..2882327
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/avg.cql
@@ -0,0 +1 @@
+SELECT AVG (players) FROM plays;
diff --git a/doc/modules/cassandra/examples/CQL/batch_statement.cql b/doc/modules/cassandra/examples/CQL/batch_statement.cql
new file mode 100644
index 0000000..e9148e8
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/batch_statement.cql
@@ -0,0 +1,6 @@
+BEGIN BATCH
+   INSERT INTO users (userid, password, name) VALUES ('user2', 'ch@ngem3b', 'second user');
+   UPDATE users SET password = 'ps22dhds' WHERE userid = 'user3';
+   INSERT INTO users (userid, password) VALUES ('user4', 'ch@ngem3c');
+   DELETE name FROM users WHERE userid = 'user1';
+APPLY BATCH;
diff --git a/doc/modules/cassandra/examples/CQL/caching_option.cql b/doc/modules/cassandra/examples/CQL/caching_option.cql
new file mode 100644
index 0000000..b48b171
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/caching_option.cql
@@ -0,0 +1,6 @@
+CREATE TABLE simple (
+id int,
+key text,
+value text,
+PRIMARY KEY (key, value)
+) WITH caching = {'keys': 'ALL', 'rows_per_partition': 10};
diff --git a/doc/modules/cassandra/examples/CQL/chunk_length.cql b/doc/modules/cassandra/examples/CQL/chunk_length.cql
new file mode 100644
index 0000000..b3504fe
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/chunk_length.cql
@@ -0,0 +1,6 @@
+CREATE TABLE simple (
+   id int,
+   key text,
+   value text,
+   PRIMARY KEY (key, value)
+) WITH compression = {'class': 'LZ4Compressor', 'chunk_length_in_kb': 4};
diff --git a/doc/modules/cassandra/examples/CQL/count.cql b/doc/modules/cassandra/examples/CQL/count.cql
new file mode 100644
index 0000000..1993c0e
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/count.cql
@@ -0,0 +1,2 @@
+SELECT COUNT (*) FROM plays;
+SELECT COUNT (1) FROM plays;
diff --git a/doc/modules/cassandra/examples/CQL/count_nonnull.cql b/doc/modules/cassandra/examples/CQL/count_nonnull.cql
new file mode 100644
index 0000000..6543b99
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/count_nonnull.cql
@@ -0,0 +1 @@
+SELECT COUNT (scores) FROM plays;
diff --git a/doc/modules/cassandra/examples/CQL/create_function.cql b/doc/modules/cassandra/examples/CQL/create_function.cql
new file mode 100644
index 0000000..e7d5823
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/create_function.cql
@@ -0,0 +1,15 @@
+CREATE OR REPLACE FUNCTION somefunction(somearg int, anotherarg text, complexarg frozen<someUDT>, listarg list)
+    RETURNS NULL ON NULL INPUT
+    RETURNS text
+    LANGUAGE java
+    AS $$
+        // some Java code
+    $$;
+
+CREATE FUNCTION IF NOT EXISTS akeyspace.fname(someArg int)
+    CALLED ON NULL INPUT
+    RETURNS text
+    LANGUAGE java
+    AS $$
+        // some Java code
+    $$;
diff --git a/doc/modules/cassandra/examples/CQL/create_index.cql b/doc/modules/cassandra/examples/CQL/create_index.cql
new file mode 100644
index 0000000..f84452a
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/create_index.cql
@@ -0,0 +1,8 @@
+CREATE INDEX userIndex ON NerdMovies (user);
+CREATE INDEX ON Mutants (abilityId);
+CREATE INDEX ON users (keys(favs));
+CREATE CUSTOM INDEX ON users (email) 
+   USING 'path.to.the.IndexClass';
+CREATE CUSTOM INDEX ON users (email) 
+   USING 'path.to.the.IndexClass' 
+   WITH OPTIONS = {'storage': '/mnt/ssd/indexes/'};
diff --git a/doc/modules/cassandra/examples/CQL/create_ks.cql b/doc/modules/cassandra/examples/CQL/create_ks.cql
new file mode 100644
index 0000000..e81d7f7
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/create_ks.cql
@@ -0,0 +1,6 @@
+CREATE KEYSPACE excelsior
+   WITH replication = {'class': 'SimpleStrategy', 'replication_factor' : 3};
+
+CREATE KEYSPACE excalibur
+   WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1' : 1, 'DC2' : 3}
+   AND durable_writes = false;
diff --git a/doc/modules/cassandra/examples/CQL/create_ks2_backup.cql b/doc/modules/cassandra/examples/CQL/create_ks2_backup.cql
new file mode 100644
index 0000000..52f9308
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/create_ks2_backup.cql
@@ -0,0 +1,2 @@
+CREATE KEYSPACE catalogkeyspace
+   WITH replication = {'class': 'SimpleStrategy', 'replication_factor' : 3};
diff --git a/doc/modules/cassandra/examples/CQL/create_ks_backup.cql b/doc/modules/cassandra/examples/CQL/create_ks_backup.cql
new file mode 100644
index 0000000..5934904
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/create_ks_backup.cql
@@ -0,0 +1,2 @@
+CREATE KEYSPACE cqlkeyspace
+   WITH replication = {'class': 'SimpleStrategy', 'replication_factor' : 3};
diff --git a/doc/modules/cassandra/examples/CQL/create_ks_trans_repl.cql b/doc/modules/cassandra/examples/CQL/create_ks_trans_repl.cql
new file mode 100644
index 0000000..afff433
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/create_ks_trans_repl.cql
@@ -0,0 +1,2 @@
+CREATE KEYSPACE some_keyspace
+   WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1' : '3/1', 'DC2' : '5/2'};
diff --git a/doc/modules/cassandra/examples/CQL/create_mv_statement.cql b/doc/modules/cassandra/examples/CQL/create_mv_statement.cql
new file mode 100644
index 0000000..0792c3e
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/create_mv_statement.cql
@@ -0,0 +1,5 @@
+CREATE MATERIALIZED VIEW monkeySpecies_by_population AS
+   SELECT * FROM monkeySpecies
+   WHERE population IS NOT NULL AND species IS NOT NULL
+   PRIMARY KEY (population, species)
+   WITH comment='Allow query by population instead of species';
diff --git a/doc/modules/cassandra/examples/CQL/create_role.cql b/doc/modules/cassandra/examples/CQL/create_role.cql
new file mode 100644
index 0000000..c8d0d64
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/create_role.cql
@@ -0,0 +1,6 @@
+CREATE ROLE new_role;
+CREATE ROLE alice WITH PASSWORD = 'password_a' AND LOGIN = true;
+CREATE ROLE bob WITH PASSWORD = 'password_b' AND LOGIN = true AND SUPERUSER = true;
+CREATE ROLE carlos WITH OPTIONS = { 'custom_option1' : 'option1_value', 'custom_option2' : 99 };
+CREATE ROLE alice WITH PASSWORD = 'password_a' AND LOGIN = true AND ACCESS TO DATACENTERS {'DC1', 'DC3'};
+CREATE ROLE alice WITH PASSWORD = 'password_a' AND LOGIN = true AND ACCESS TO ALL DATACENTERS;
diff --git a/doc/modules/cassandra/examples/CQL/create_role_ifnotexists.cql b/doc/modules/cassandra/examples/CQL/create_role_ifnotexists.cql
new file mode 100644
index 0000000..0b9600f
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/create_role_ifnotexists.cql
@@ -0,0 +1,2 @@
+CREATE ROLE other_role;
+CREATE ROLE IF NOT EXISTS other_role;
diff --git a/doc/modules/cassandra/examples/CQL/create_static_column.cql b/doc/modules/cassandra/examples/CQL/create_static_column.cql
new file mode 100644
index 0000000..95e8ff2
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/create_static_column.cql
@@ -0,0 +1,7 @@
+CREATE TABLE t (
+    pk int,
+    t int,
+    v text,
+    s text static,
+    PRIMARY KEY (pk, t)
+);
diff --git a/doc/modules/cassandra/examples/CQL/create_table.cql b/doc/modules/cassandra/examples/CQL/create_table.cql
new file mode 100644
index 0000000..57b557d
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/create_table.cql
@@ -0,0 +1,23 @@
+CREATE TABLE monkey_species (
+    species text PRIMARY KEY,
+    common_name text,
+    population varint,
+    average_size int
+) WITH comment='Important biological records';
+
+CREATE TABLE timeline (
+    userid uuid,
+    posted_month int,
+    posted_time uuid,
+    body text,
+    posted_by text,
+    PRIMARY KEY (userid, posted_month, posted_time)
+) WITH compaction = { 'class' : 'LeveledCompactionStrategy' };
+
+CREATE TABLE loads (
+    machine inet,
+    cpu int,
+    mtime timeuuid,
+    load float,
+    PRIMARY KEY ((machine, cpu), mtime)
+) WITH CLUSTERING ORDER BY (mtime DESC);
diff --git a/doc/modules/cassandra/examples/CQL/create_table2_backup.cql b/doc/modules/cassandra/examples/CQL/create_table2_backup.cql
new file mode 100644
index 0000000..f339300
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/create_table2_backup.cql
@@ -0,0 +1,14 @@
+USE catalogkeyspace;
+CREATE TABLE journal (
+   id int,
+   name text,
+   publisher text,
+   PRIMARY KEY (id)
+);
+
+CREATE TABLE magazine (
+   id int,
+   name text,
+   publisher text,
+   PRIMARY KEY (id)
+);
diff --git a/doc/modules/cassandra/examples/CQL/create_table_backup.cql b/doc/modules/cassandra/examples/CQL/create_table_backup.cql
new file mode 100644
index 0000000..c80b999
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/create_table_backup.cql
@@ -0,0 +1,13 @@
+USE cqlkeyspace;
+CREATE TABLE t (
+   id int,
+   k int,
+   v text,
+   PRIMARY KEY (id)
+);
+CREATE TABLE t2 (
+   id int,
+   k int,
+   v text,
+   PRIMARY KEY (id)
+);
diff --git a/doc/modules/cassandra/examples/CQL/create_table_clustercolumn.cql b/doc/modules/cassandra/examples/CQL/create_table_clustercolumn.cql
new file mode 100644
index 0000000..f7de266
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/create_table_clustercolumn.cql
@@ -0,0 +1,7 @@
+CREATE TABLE t2 (
+    a int,
+    b int,
+    c int,
+    d int,
+    PRIMARY KEY (a, b, c)
+);
diff --git a/doc/modules/cassandra/examples/CQL/create_table_compound_pk.cql b/doc/modules/cassandra/examples/CQL/create_table_compound_pk.cql
new file mode 100644
index 0000000..eb199c7
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/create_table_compound_pk.cql
@@ -0,0 +1,7 @@
+CREATE TABLE t (
+    a int,
+    b int,
+    c int,
+    d int,
+    PRIMARY KEY ((a, b), c, d)
+);
diff --git a/doc/modules/cassandra/examples/CQL/create_table_simple.cql b/doc/modules/cassandra/examples/CQL/create_table_simple.cql
new file mode 100644
index 0000000..0ebe747
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/create_table_simple.cql
@@ -0,0 +1,4 @@
+CREATE TABLE users (
+    userid text PRIMARY KEY,
+    username text,
+);
diff --git a/doc/modules/cassandra/examples/CQL/create_table_single_pk.cql b/doc/modules/cassandra/examples/CQL/create_table_single_pk.cql
new file mode 100644
index 0000000..ce6fff8
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/create_table_single_pk.cql
@@ -0,0 +1 @@
+CREATE TABLE t (k text PRIMARY KEY);
diff --git a/doc/modules/cassandra/examples/CQL/create_trigger.cql b/doc/modules/cassandra/examples/CQL/create_trigger.cql
new file mode 100644
index 0000000..9bbf2f2
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/create_trigger.cql
@@ -0,0 +1 @@
+CREATE TRIGGER myTrigger ON myTable USING 'org.apache.cassandra.triggers.InvertedIndex';
diff --git a/doc/modules/cassandra/examples/CQL/create_user.cql b/doc/modules/cassandra/examples/CQL/create_user.cql
new file mode 100644
index 0000000..b6531eb
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/create_user.cql
@@ -0,0 +1,2 @@
+CREATE USER alice WITH PASSWORD 'password_a' SUPERUSER;
+CREATE USER bob WITH PASSWORD 'password_b' NOSUPERUSER;
diff --git a/doc/modules/cassandra/examples/CQL/create_user_role.cql b/doc/modules/cassandra/examples/CQL/create_user_role.cql
new file mode 100644
index 0000000..810f76c
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/create_user_role.cql
@@ -0,0 +1,14 @@
+CREATE USER alice WITH PASSWORD 'password_a' SUPERUSER;
+CREATE ROLE alice WITH PASSWORD = 'password_a' AND LOGIN = true AND SUPERUSER = true;
+
+CREATE USER IF NOT EXISTS alice WITH PASSWORD 'password_a' SUPERUSER;
+CREATE ROLE IF NOT EXISTS alice WITH PASSWORD = 'password_a' AND LOGIN = true AND SUPERUSER = true;
+
+CREATE USER alice WITH PASSWORD 'password_a' NOSUPERUSER;
+CREATE ROLE alice WITH PASSWORD = 'password_a' AND LOGIN = true AND SUPERUSER = false;
+
+CREATE USER alice WITH PASSWORD 'password_a' NOSUPERUSER;
+CREATE ROLE alice WITH PASSWORD = 'password_a' AND LOGIN = true;
+
+CREATE USER alice WITH PASSWORD 'password_a';
+CREATE ROLE alice WITH PASSWORD = 'password_a' AND LOGIN = true;
diff --git a/doc/modules/cassandra/examples/CQL/currentdate.cql b/doc/modules/cassandra/examples/CQL/currentdate.cql
new file mode 100644
index 0000000..0bed1b2
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/currentdate.cql
@@ -0,0 +1 @@
+SELECT * FROM myTable WHERE date >= currentDate() - 2d;
diff --git a/doc/modules/cassandra/examples/CQL/datetime_arithmetic.cql b/doc/modules/cassandra/examples/CQL/datetime_arithmetic.cql
new file mode 100644
index 0000000..310bf3b
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/datetime_arithmetic.cql
@@ -0,0 +1 @@
+SELECT * FROM myTable WHERE t = '2017-01-01' - 2d;
diff --git a/doc/modules/cassandra/examples/CQL/delete_all_elements_list.cql b/doc/modules/cassandra/examples/CQL/delete_all_elements_list.cql
new file mode 100644
index 0000000..3d02668
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/delete_all_elements_list.cql
@@ -0,0 +1 @@
+UPDATE plays SET scores = scores - [ 12, 21 ] WHERE id = '123-afde';
diff --git a/doc/modules/cassandra/examples/CQL/delete_element_list.cql b/doc/modules/cassandra/examples/CQL/delete_element_list.cql
new file mode 100644
index 0000000..26b3e58
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/delete_element_list.cql
@@ -0,0 +1 @@
+DELETE scores[1] FROM plays WHERE id = '123-afde';
diff --git a/doc/modules/cassandra/examples/CQL/delete_map.cql b/doc/modules/cassandra/examples/CQL/delete_map.cql
new file mode 100644
index 0000000..e16b134
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/delete_map.cql
@@ -0,0 +1,2 @@
+DELETE favs['author'] FROM users WHERE id = 'jsmith';
+UPDATE users SET favs = favs - { 'movie', 'band'} WHERE id = 'jsmith';
diff --git a/doc/modules/cassandra/examples/CQL/delete_set.cql b/doc/modules/cassandra/examples/CQL/delete_set.cql
new file mode 100644
index 0000000..308da3c
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/delete_set.cql
@@ -0,0 +1 @@
+UPDATE images SET tags = tags - { 'cat' } WHERE name = 'cat.jpg';
diff --git a/doc/modules/cassandra/examples/CQL/delete_statement.cql b/doc/modules/cassandra/examples/CQL/delete_statement.cql
new file mode 100644
index 0000000..b574e71
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/delete_statement.cql
@@ -0,0 +1,5 @@
+DELETE FROM NerdMovies USING TIMESTAMP 1240003134
+ WHERE movie = 'Serenity';
+
+DELETE phone FROM Users
+ WHERE userid IN (C73DE1D3-AF08-40F3-B124-3FF3E5109F22, B70DE1D0-9908-4AE3-BE34-5573E5B09F14);
diff --git a/doc/modules/cassandra/examples/CQL/drop_aggregate.cql b/doc/modules/cassandra/examples/CQL/drop_aggregate.cql
new file mode 100644
index 0000000..f05b69a
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/drop_aggregate.cql
@@ -0,0 +1,4 @@
+DROP AGGREGATE myAggregate;
+DROP AGGREGATE myKeyspace.anAggregate;
+DROP AGGREGATE someAggregate ( int );
+DROP AGGREGATE someAggregate ( text );
diff --git a/doc/modules/cassandra/examples/CQL/drop_function.cql b/doc/modules/cassandra/examples/CQL/drop_function.cql
new file mode 100644
index 0000000..6d444c1
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/drop_function.cql
@@ -0,0 +1,4 @@
+DROP FUNCTION myfunction;
+DROP FUNCTION mykeyspace.afunction;
+DROP FUNCTION afunction ( int );
+DROP FUNCTION afunction ( text );
diff --git a/doc/modules/cassandra/examples/CQL/drop_ks.cql b/doc/modules/cassandra/examples/CQL/drop_ks.cql
new file mode 100644
index 0000000..46a920d
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/drop_ks.cql
@@ -0,0 +1 @@
+DROP KEYSPACE excelsior;
diff --git a/doc/modules/cassandra/examples/CQL/drop_trigger.cql b/doc/modules/cassandra/examples/CQL/drop_trigger.cql
new file mode 100644
index 0000000..05a7a95
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/drop_trigger.cql
@@ -0,0 +1 @@
+DROP TRIGGER myTrigger ON myTable;
diff --git a/doc/modules/cassandra/examples/CQL/function_dollarsign.cql b/doc/modules/cassandra/examples/CQL/function_dollarsign.cql
new file mode 100644
index 0000000..878d044
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/function_dollarsign.cql
@@ -0,0 +1,15 @@
+CREATE FUNCTION some_function ( arg int )
+    RETURNS NULL ON NULL INPUT
+    RETURNS int
+    LANGUAGE java
+    AS $$ return arg; $$;
+
+SELECT some_function(column) FROM atable ...;
+UPDATE atable SET col = some_function(?) ...;
+
+CREATE TYPE custom_type (txt text, i int);
+CREATE FUNCTION fct_using_udt ( udtarg frozen )
+    RETURNS NULL ON NULL INPUT
+    RETURNS text
+    LANGUAGE java
+    AS $$ return udtarg.getString("txt"); $$;
diff --git a/doc/modules/cassandra/examples/CQL/function_overload.cql b/doc/modules/cassandra/examples/CQL/function_overload.cql
new file mode 100644
index 0000000..d70e8e9
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/function_overload.cql
@@ -0,0 +1,2 @@
+CREATE FUNCTION sample ( arg int ) ...;
+CREATE FUNCTION sample ( arg text ) ...;
diff --git a/doc/modules/cassandra/examples/CQL/function_udfcontext.cql b/doc/modules/cassandra/examples/CQL/function_udfcontext.cql
new file mode 100644
index 0000000..87f89fe
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/function_udfcontext.cql
@@ -0,0 +1,11 @@
+CREATE TYPE custom_type (txt text, i int);
+CREATE FUNCTION fct_using_udt ( somearg int )
+    RETURNS NULL ON NULL INPUT
+    RETURNS custom_type
+    LANGUAGE java
+    AS $$
+        UDTValue udt = udfContext.newReturnUDTValue();
+        udt.setString("txt", "some string");
+        udt.setInt("i", 42);
+        return udt;
+    $$;
diff --git a/doc/modules/cassandra/examples/CQL/grant_describe.cql b/doc/modules/cassandra/examples/CQL/grant_describe.cql
new file mode 100644
index 0000000..7218145
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/grant_describe.cql
@@ -0,0 +1 @@
+GRANT DESCRIBE ON ALL ROLES TO role_admin;
diff --git a/doc/modules/cassandra/examples/CQL/grant_drop.cql b/doc/modules/cassandra/examples/CQL/grant_drop.cql
new file mode 100644
index 0000000..745369d
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/grant_drop.cql
@@ -0,0 +1 @@
+GRANT DROP ON keyspace1.table1 TO schema_owner;
diff --git a/doc/modules/cassandra/examples/CQL/grant_execute.cql b/doc/modules/cassandra/examples/CQL/grant_execute.cql
new file mode 100644
index 0000000..96b34de
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/grant_execute.cql
@@ -0,0 +1 @@
+GRANT EXECUTE ON FUNCTION keyspace1.user_function( int ) TO report_writer;
diff --git a/doc/modules/cassandra/examples/CQL/grant_modify.cql b/doc/modules/cassandra/examples/CQL/grant_modify.cql
new file mode 100644
index 0000000..7f9a30b
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/grant_modify.cql
@@ -0,0 +1 @@
+GRANT MODIFY ON KEYSPACE keyspace1 TO data_writer;
diff --git a/doc/modules/cassandra/examples/CQL/grant_perm.cql b/doc/modules/cassandra/examples/CQL/grant_perm.cql
new file mode 100644
index 0000000..1dc9a7b
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/grant_perm.cql
@@ -0,0 +1 @@
+GRANT SELECT ON ALL KEYSPACES TO data_reader;
diff --git a/doc/modules/cassandra/examples/CQL/grant_role.cql b/doc/modules/cassandra/examples/CQL/grant_role.cql
new file mode 100644
index 0000000..1adffb3
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/grant_role.cql
@@ -0,0 +1 @@
+GRANT report_writer TO alice;
diff --git a/doc/modules/cassandra/examples/CQL/insert_data2_backup.cql b/doc/modules/cassandra/examples/CQL/insert_data2_backup.cql
new file mode 100644
index 0000000..35e20a3
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/insert_data2_backup.cql
@@ -0,0 +1,5 @@
+INSERT INTO journal (id, name, publisher) VALUES (0, 'Apache Cassandra Magazine', 'Apache Cassandra');
+INSERT INTO journal (id, name, publisher) VALUES (1, 'Couchbase Magazine', 'Couchbase');
+
+INSERT INTO magazine (id, name, publisher) VALUES (0, 'Apache Cassandra Magazine', 'Apache Cassandra');
+INSERT INTO magazine (id, name, publisher) VALUES (1, 'Couchbase Magazine', 'Couchbase');
diff --git a/doc/modules/cassandra/examples/CQL/insert_data_backup.cql b/doc/modules/cassandra/examples/CQL/insert_data_backup.cql
new file mode 100644
index 0000000..15eb375
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/insert_data_backup.cql
@@ -0,0 +1,6 @@
+INSERT INTO t (id, k, v) VALUES (0, 0, 'val0');
+INSERT INTO t (id, k, v) VALUES (1, 1, 'val1');
+
+INSERT INTO t2 (id, k, v) VALUES (0, 0, 'val0');
+INSERT INTO t2 (id, k, v) VALUES (1, 1, 'val1');
+INSERT INTO t2 (id, k, v) VALUES (2, 2, 'val2');
diff --git a/doc/modules/cassandra/examples/CQL/insert_duration.cql b/doc/modules/cassandra/examples/CQL/insert_duration.cql
new file mode 100644
index 0000000..b52801b
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/insert_duration.cql
@@ -0,0 +1,6 @@
+INSERT INTO RiderResults (rider, race, result)
+   VALUES ('Christopher Froome', 'Tour de France', 89h4m48s);
+INSERT INTO RiderResults (rider, race, result)
+   VALUES ('BARDET Romain', 'Tour de France', PT89H8M53S);
+INSERT INTO RiderResults (rider, race, result)
+   VALUES ('QUINTANA Nairo', 'Tour de France', P0000-00-00T89:09:09);
diff --git a/doc/modules/cassandra/examples/CQL/insert_json.cql b/doc/modules/cassandra/examples/CQL/insert_json.cql
new file mode 100644
index 0000000..d3a5dec
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/insert_json.cql
@@ -0,0 +1 @@
+INSERT INTO mytable JSON '{ "\"myKey\"": 0, "value": 0}';
diff --git a/doc/modules/cassandra/examples/CQL/insert_statement.cql b/doc/modules/cassandra/examples/CQL/insert_statement.cql
new file mode 100644
index 0000000..0f7a943
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/insert_statement.cql
@@ -0,0 +1,5 @@
+INSERT INTO NerdMovies (movie, director, main_actor, year)
+   VALUES ('Serenity', 'Joss Whedon', 'Nathan Fillion', 2005)
+   USING TTL 86400;
+
+INSERT INTO NerdMovies JSON '{"movie": "Serenity", "director": "Joss Whedon", "year": 2005}';
diff --git a/doc/modules/cassandra/examples/CQL/insert_static_data.cql b/doc/modules/cassandra/examples/CQL/insert_static_data.cql
new file mode 100644
index 0000000..c6a588f
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/insert_static_data.cql
@@ -0,0 +1,2 @@
+INSERT INTO t (pk, t, v, s) VALUES (0, 0, 'val0', 'static0');
+INSERT INTO t (pk, t, v, s) VALUES (0, 1, 'val1', 'static1');
diff --git a/doc/modules/cassandra/examples/CQL/insert_table_cc_addl.cql b/doc/modules/cassandra/examples/CQL/insert_table_cc_addl.cql
new file mode 100644
index 0000000..f574d53
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/insert_table_cc_addl.cql
@@ -0,0 +1 @@
+INSERT INTO t3 (a,b,c,d) VALUES (0,0,0,9);
diff --git a/doc/modules/cassandra/examples/CQL/insert_table_clustercolumn.cql b/doc/modules/cassandra/examples/CQL/insert_table_clustercolumn.cql
new file mode 100644
index 0000000..449f921
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/insert_table_clustercolumn.cql
@@ -0,0 +1,5 @@
+INSERT INTO t2 (a, b, c, d) VALUES (0,0,0,0);
+INSERT INTO t2 (a, b, c, d) VALUES (0,0,1,1);
+INSERT INTO t2 (a, b, c, d) VALUES (0,1,2,2);
+INSERT INTO t2 (a, b, c, d) VALUES (0,1,3,3);
+INSERT INTO t2 (a, b, c, d) VALUES (1,1,4,4);
diff --git a/doc/modules/cassandra/examples/CQL/insert_table_clustercolumn2.cql b/doc/modules/cassandra/examples/CQL/insert_table_clustercolumn2.cql
new file mode 100644
index 0000000..a048c9f
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/insert_table_clustercolumn2.cql
@@ -0,0 +1,5 @@
+INSERT INTO t3 (a, b, c, d) VALUES (0,0,0,0);
+INSERT INTO t3 (a, b, c, d) VALUES (0,0,1,1);
+INSERT INTO t3 (a, b, c, d) VALUES (0,1,2,2);
+INSERT INTO t3 (a, b, c, d) VALUES (0,1,3,3);
+INSERT INTO t3 (a, b, c, d) VALUES (1,1,4,4);
diff --git a/doc/modules/cassandra/examples/CQL/insert_table_compound_pk.cql b/doc/modules/cassandra/examples/CQL/insert_table_compound_pk.cql
new file mode 100644
index 0000000..3ce1953
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/insert_table_compound_pk.cql
@@ -0,0 +1,5 @@
+INSERT INTO t (a, b, c, d) VALUES (0,0,0,0);
+INSERT INTO t (a, b, c, d) VALUES (0,0,1,1);
+INSERT INTO t (a, b, c, d) VALUES (0,1,2,2);
+INSERT INTO t (a, b, c, d) VALUES (0,1,3,3);
+INSERT INTO t (a, b, c, d) VALUES (1,1,4,4);
diff --git a/doc/modules/cassandra/examples/CQL/insert_udt.cql b/doc/modules/cassandra/examples/CQL/insert_udt.cql
new file mode 100644
index 0000000..5c6f176
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/insert_udt.cql
@@ -0,0 +1,17 @@
+INSERT INTO user (name, addresses)
+   VALUES ('z3 Pr3z1den7', {
+     'home' : {
+        street: '1600 Pennsylvania Ave NW',
+        city: 'Washington',
+        zip: '20500',
+        phones: { 'cell' : { country_code: 1, number: '202 456-1111' },
+                  'landline' : { country_code: 1, number: '...' } }
+     },
+     'work' : {
+        street: '1600 Pennsylvania Ave NW',
+        city: 'Washington',
+        zip: '20500',
+        phones: { 'fax' : { country_code: 1, number: '...' } }
+     }
+  }
+);
diff --git a/doc/modules/cassandra/examples/CQL/list.cql b/doc/modules/cassandra/examples/CQL/list.cql
new file mode 100644
index 0000000..4d1ef13
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/list.cql
@@ -0,0 +1,12 @@
+CREATE TABLE plays (
+    id text PRIMARY KEY,
+    game text,
+    players int,
+    scores list<int> // A list of integers
+);
+
+INSERT INTO plays (id, game, players, scores)
+           VALUES ('123-afde', 'quake', 3, [17, 4, 2]);
+
+// Replace the existing list entirely
+UPDATE plays SET scores = [ 3, 9, 4] WHERE id = '123-afde';
diff --git a/doc/modules/cassandra/examples/CQL/list_all_perm.cql b/doc/modules/cassandra/examples/CQL/list_all_perm.cql
new file mode 100644
index 0000000..efbcfc8
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/list_all_perm.cql
@@ -0,0 +1 @@
+LIST ALL PERMISSIONS ON keyspace1.table1 OF bob;
diff --git a/doc/modules/cassandra/examples/CQL/list_perm.cql b/doc/modules/cassandra/examples/CQL/list_perm.cql
new file mode 100644
index 0000000..094bf09
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/list_perm.cql
@@ -0,0 +1 @@
+LIST ALL PERMISSIONS OF alice;
diff --git a/doc/modules/cassandra/examples/CQL/list_roles.cql b/doc/modules/cassandra/examples/CQL/list_roles.cql
new file mode 100644
index 0000000..5c0f063
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/list_roles.cql
@@ -0,0 +1 @@
+LIST ROLES;
diff --git a/doc/modules/cassandra/examples/CQL/list_roles_nonrecursive.cql b/doc/modules/cassandra/examples/CQL/list_roles_nonrecursive.cql
new file mode 100644
index 0000000..eea6218
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/list_roles_nonrecursive.cql
@@ -0,0 +1 @@
+LIST ROLES OF bob NORECURSIVE;
diff --git a/doc/modules/cassandra/examples/CQL/list_roles_of.cql b/doc/modules/cassandra/examples/CQL/list_roles_of.cql
new file mode 100644
index 0000000..c338ca3
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/list_roles_of.cql
@@ -0,0 +1 @@
+LIST ROLES OF alice;
diff --git a/doc/modules/cassandra/examples/CQL/list_select_perm.cql b/doc/modules/cassandra/examples/CQL/list_select_perm.cql
new file mode 100644
index 0000000..c085df4
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/list_select_perm.cql
@@ -0,0 +1 @@
+LIST SELECT PERMISSIONS OF carlos;
diff --git a/doc/modules/cassandra/examples/CQL/map.cql b/doc/modules/cassandra/examples/CQL/map.cql
new file mode 100644
index 0000000..ca9ca5e
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/map.cql
@@ -0,0 +1,11 @@
+CREATE TABLE users (
+   id text PRIMARY KEY,
+   name text,
+   favs map<text, text> // A map of text keys, and text values
+);
+
+INSERT INTO users (id, name, favs)
+   VALUES ('jsmith', 'John Smith', { 'fruit' : 'Apple', 'band' : 'Beatles' });
+
+// Replace the existing map entirely.
+UPDATE users SET favs = { 'fruit' : 'Banana' } WHERE id = 'jsmith';
diff --git a/doc/modules/cassandra/examples/CQL/min_max.cql b/doc/modules/cassandra/examples/CQL/min_max.cql
new file mode 100644
index 0000000..3f31cc5
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/min_max.cql
@@ -0,0 +1 @@
+SELECT MIN (players), MAX (players) FROM plays WHERE game = 'quake';
diff --git a/doc/modules/cassandra/examples/CQL/mv_table_def.cql b/doc/modules/cassandra/examples/CQL/mv_table_def.cql
new file mode 100644
index 0000000..106fe11
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/mv_table_def.cql
@@ -0,0 +1,8 @@
+CREATE TABLE t (
+    k int,
+    c1 int,
+    c2 int,
+    v1 int,
+    v2 int,
+    PRIMARY KEY (k, c1, c2)
+);
diff --git a/doc/modules/cassandra/examples/CQL/mv_table_error.cql b/doc/modules/cassandra/examples/CQL/mv_table_error.cql
new file mode 100644
index 0000000..e7560f9
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/mv_table_error.cql
@@ -0,0 +1,13 @@
+// Error: cannot include both v1 and v2 in the primary key as both are not in the base table primary key
+
+CREATE MATERIALIZED VIEW mv1 AS
+   SELECT * FROM t 
+   WHERE k IS NOT NULL AND c1 IS NOT NULL AND c2 IS NOT NULL AND v1 IS NOT NULL
+   PRIMARY KEY (v1, v2, k, c1, c2);
+
+// Error: must include k in the primary as it's a base table primary key column
+
+CREATE MATERIALIZED VIEW mv1 AS
+   SELECT * FROM t 
+   WHERE c1 IS NOT NULL AND c2 IS NOT NULL
+   PRIMARY KEY (c1, c2);
diff --git a/doc/modules/cassandra/examples/CQL/mv_table_from_base.cql b/doc/modules/cassandra/examples/CQL/mv_table_from_base.cql
new file mode 100644
index 0000000..bd2f9f2
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/mv_table_from_base.cql
@@ -0,0 +1,9 @@
+CREATE MATERIALIZED VIEW mv1 AS
+   SELECT * FROM t 
+   WHERE k IS NOT NULL AND c1 IS NOT NULL AND c2 IS NOT NULL
+   PRIMARY KEY (c1, k, c2);
+
+CREATE MATERIALIZED VIEW mv2 AS
+  SELECT * FROM t
+  WHERE v1 IS NOT NULL AND k IS NOT NULL AND c1 IS NOT NULL AND c2 IS NOT NULL
+  PRIMARY KEY (v1, k, c1, c2);
diff --git a/doc/modules/cassandra/examples/CQL/no_revoke.cql b/doc/modules/cassandra/examples/CQL/no_revoke.cql
new file mode 100644
index 0000000..b6a044c
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/no_revoke.cql
@@ -0,0 +1,5 @@
+* `system_schema.keyspaces`
+* `system_schema.columns`
+* `system_schema.tables`
+* `system.local`
+* `system.peers`
diff --git a/doc/modules/cassandra/examples/CQL/qs_create_ks.cql b/doc/modules/cassandra/examples/CQL/qs_create_ks.cql
new file mode 100644
index 0000000..2dba1bd
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/qs_create_ks.cql
@@ -0,0 +1,2 @@
+-- Create a keyspace
+CREATE KEYSPACE IF NOT EXISTS store WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : '1' };
diff --git a/doc/modules/cassandra/examples/CQL/qs_create_table.cql b/doc/modules/cassandra/examples/CQL/qs_create_table.cql
new file mode 100644
index 0000000..daeef5f
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/qs_create_table.cql
@@ -0,0 +1,6 @@
+-- Create a table
+CREATE TABLE IF NOT EXISTS store.shopping_cart  (
+	userid text PRIMARY KEY,
+	item_count int,
+	last_update_timestamp timestamp
+);
diff --git a/doc/modules/cassandra/examples/CQL/qs_insert_data.cql b/doc/modules/cassandra/examples/CQL/qs_insert_data.cql
new file mode 100644
index 0000000..130f901
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/qs_insert_data.cql
@@ -0,0 +1,7 @@
+-- Insert some data
+INSERT INTO store.shopping_cart
+(userid, item_count, last_update_timestamp)
+VALUES ('9876', 2, toTimeStamp(toDate(now())));
+INSERT INTO store.shopping_cart
+(userid, item_count, last_update_timestamp)
+VALUES ('1234', 5, toTimeStamp(toDate(now())));
diff --git a/doc/modules/cassandra/examples/CQL/qs_insert_data_again.cql b/doc/modules/cassandra/examples/CQL/qs_insert_data_again.cql
new file mode 100644
index 0000000..b95473f
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/qs_insert_data_again.cql
@@ -0,0 +1 @@
+INSERT INTO store.shopping_cart (userid, item_count) VALUES ('4567', 20);
diff --git a/doc/modules/cassandra/examples/CQL/qs_select_data.cql b/doc/modules/cassandra/examples/CQL/qs_select_data.cql
new file mode 100644
index 0000000..e9e55db
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/qs_select_data.cql
@@ -0,0 +1 @@
+SELECT * FROM store.shopping_cart;
diff --git a/doc/modules/cassandra/examples/CQL/query_allow_filtering.cql b/doc/modules/cassandra/examples/CQL/query_allow_filtering.cql
new file mode 100644
index 0000000..c4aaf39
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/query_allow_filtering.cql
@@ -0,0 +1,5 @@
+// All users are returned
+SELECT * FROM users;
+
+// All users with a particular birth year are returned
+SELECT * FROM users WHERE birth_year = 1981;
diff --git a/doc/modules/cassandra/examples/CQL/query_fail_allow_filtering.cql b/doc/modules/cassandra/examples/CQL/query_fail_allow_filtering.cql
new file mode 100644
index 0000000..2e6c63b
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/query_fail_allow_filtering.cql
@@ -0,0 +1 @@
+SELECT * FROM users WHERE birth_year = 1981 AND country = 'FR';
diff --git a/doc/modules/cassandra/examples/CQL/query_nofail_allow_filtering.cql b/doc/modules/cassandra/examples/CQL/query_nofail_allow_filtering.cql
new file mode 100644
index 0000000..88aed56
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/query_nofail_allow_filtering.cql
@@ -0,0 +1 @@
+SELECT * FROM users WHERE birth_year = 1981 AND country = 'FR' ALLOW FILTERING;
diff --git a/doc/modules/cassandra/examples/CQL/rename_udt_field.cql b/doc/modules/cassandra/examples/CQL/rename_udt_field.cql
new file mode 100644
index 0000000..7718788
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/rename_udt_field.cql
@@ -0,0 +1 @@
+ALTER TYPE address RENAME zip TO zipcode;
diff --git a/doc/modules/cassandra/examples/CQL/revoke_perm.cql b/doc/modules/cassandra/examples/CQL/revoke_perm.cql
new file mode 100644
index 0000000..d4ac1ed
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/revoke_perm.cql
@@ -0,0 +1,5 @@
+REVOKE SELECT ON ALL KEYSPACES FROM data_reader;
+REVOKE MODIFY ON KEYSPACE keyspace1 FROM data_writer;
+REVOKE DROP ON keyspace1.table1 FROM schema_owner;
+REVOKE EXECUTE ON FUNCTION keyspace1.user_function( int ) FROM report_writer;
+REVOKE DESCRIBE ON ALL ROLES FROM role_admin;
diff --git a/doc/modules/cassandra/examples/CQL/revoke_role.cql b/doc/modules/cassandra/examples/CQL/revoke_role.cql
new file mode 100644
index 0000000..acf5066
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/revoke_role.cql
@@ -0,0 +1 @@
+REVOKE report_writer FROM alice;
diff --git a/doc/modules/cassandra/examples/CQL/role_error.cql b/doc/modules/cassandra/examples/CQL/role_error.cql
new file mode 100644
index 0000000..fa061a2
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/role_error.cql
@@ -0,0 +1,6 @@
+GRANT role_a TO role_b;
+GRANT role_b TO role_a;
+
+GRANT role_a TO role_b;
+GRANT role_b TO role_c;
+GRANT role_c TO role_a;
diff --git a/doc/modules/cassandra/examples/CQL/select_data2_backup.cql b/doc/modules/cassandra/examples/CQL/select_data2_backup.cql
new file mode 100644
index 0000000..7a409d7
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/select_data2_backup.cql
@@ -0,0 +1,2 @@
+SELECT * FROM catalogkeyspace.journal;
+SELECT * FROM catalogkeyspace.magazine;
diff --git a/doc/modules/cassandra/examples/CQL/select_data_backup.cql b/doc/modules/cassandra/examples/CQL/select_data_backup.cql
new file mode 100644
index 0000000..4468467
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/select_data_backup.cql
@@ -0,0 +1,2 @@
+SELECT * FROM t;
+SELECT * FROM t2;
diff --git a/doc/modules/cassandra/examples/CQL/select_range.cql b/doc/modules/cassandra/examples/CQL/select_range.cql
new file mode 100644
index 0000000..fcf3bd5
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/select_range.cql
@@ -0,0 +1 @@
+SELECT * FROM t2 WHERE a = 0 AND b > 0 and b <= 3;
diff --git a/doc/modules/cassandra/examples/CQL/select_statement.cql b/doc/modules/cassandra/examples/CQL/select_statement.cql
new file mode 100644
index 0000000..cee5a19
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/select_statement.cql
@@ -0,0 +1,11 @@
+SELECT name, occupation FROM users WHERE userid IN (199, 200, 207);
+SELECT JSON name, occupation FROM users WHERE userid = 199;
+SELECT name AS user_name, occupation AS user_occupation FROM users;
+
+SELECT time, value
+FROM events
+WHERE event_type = 'myEvent'
+  AND time > '2011-02-03'
+  AND time <= '2012-01-01';
+
+SELECT COUNT (*) AS user_count FROM users;
diff --git a/doc/modules/cassandra/examples/CQL/select_static_data.cql b/doc/modules/cassandra/examples/CQL/select_static_data.cql
new file mode 100644
index 0000000..8bca937
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/select_static_data.cql
@@ -0,0 +1 @@
+SELECT * FROM t;
diff --git a/doc/modules/cassandra/examples/CQL/select_table_clustercolumn.cql b/doc/modules/cassandra/examples/CQL/select_table_clustercolumn.cql
new file mode 100644
index 0000000..60bb2cf
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/select_table_clustercolumn.cql
@@ -0,0 +1 @@
+SELECT * FROM t2;
diff --git a/doc/modules/cassandra/examples/CQL/select_table_compound_pk.cql b/doc/modules/cassandra/examples/CQL/select_table_compound_pk.cql
new file mode 100644
index 0000000..8bca937
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/select_table_compound_pk.cql
@@ -0,0 +1 @@
+SELECT * FROM t;
diff --git a/doc/modules/cassandra/examples/CQL/set.cql b/doc/modules/cassandra/examples/CQL/set.cql
new file mode 100644
index 0000000..607981b
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/set.cql
@@ -0,0 +1,11 @@
+CREATE TABLE images (
+   name text PRIMARY KEY,
+   owner text,
+   tags set<text> // A set of text values
+);
+
+INSERT INTO images (name, owner, tags)
+   VALUES ('cat.jpg', 'jsmith', { 'pet', 'cute' });
+
+// Replace the existing set entirely
+UPDATE images SET tags = { 'kitten', 'cat', 'lol' } WHERE name = 'cat.jpg';
diff --git a/doc/modules/cassandra/examples/CQL/spec_retry_values.cql b/doc/modules/cassandra/examples/CQL/spec_retry_values.cql
new file mode 100644
index 0000000..bcd8d26
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/spec_retry_values.cql
@@ -0,0 +1,6 @@
+min(99percentile,50ms)
+max(99p,50MS)
+MAX(99P,50ms)
+MIN(99.9PERCENTILE,50ms)
+max(90percentile,100MS)
+MAX(100.0PERCENTILE,60ms)
diff --git a/doc/modules/cassandra/examples/CQL/sum.cql b/doc/modules/cassandra/examples/CQL/sum.cql
new file mode 100644
index 0000000..bccfcbc
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/sum.cql
@@ -0,0 +1 @@
+SELECT SUM (players) FROM plays;
diff --git a/doc/modules/cassandra/examples/CQL/table_for_where.cql b/doc/modules/cassandra/examples/CQL/table_for_where.cql
new file mode 100644
index 0000000..f5ed500
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/table_for_where.cql
@@ -0,0 +1,9 @@
+CREATE TABLE posts (
+    userid text,
+    blog_title text,
+    posted_at timestamp,
+    entry_title text,
+    content text,
+    category int,
+    PRIMARY KEY (userid, blog_title, posted_at)
+);
diff --git a/doc/modules/cassandra/examples/CQL/timeuuid_min_max.cql b/doc/modules/cassandra/examples/CQL/timeuuid_min_max.cql
new file mode 100644
index 0000000..81353f5
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/timeuuid_min_max.cql
@@ -0,0 +1,3 @@
+SELECT * FROM myTable
+ WHERE t > maxTimeuuid('2013-01-01 00:05+0000')
+   AND t < minTimeuuid('2013-02-02 10:00+0000');
diff --git a/doc/modules/cassandra/examples/CQL/timeuuid_now.cql b/doc/modules/cassandra/examples/CQL/timeuuid_now.cql
new file mode 100644
index 0000000..54c2cc4
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/timeuuid_now.cql
@@ -0,0 +1 @@
+SELECT * FROM myTable WHERE t = now();
diff --git a/doc/modules/cassandra/examples/CQL/token.cql b/doc/modules/cassandra/examples/CQL/token.cql
new file mode 100644
index 0000000..b5c7f8b
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/token.cql
@@ -0,0 +1,2 @@
+SELECT * FROM posts
+ WHERE token(userid) > token('tom') AND token(userid) < token('bob');
diff --git a/doc/modules/cassandra/examples/CQL/tuple.cql b/doc/modules/cassandra/examples/CQL/tuple.cql
new file mode 100644
index 0000000..b612d07
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/tuple.cql
@@ -0,0 +1,6 @@
+CREATE TABLE durations (
+  event text,
+  duration tuple<int, text>,
+);
+
+INSERT INTO durations (event, duration) VALUES ('ev1', (3, 'hours'));
diff --git a/doc/modules/cassandra/examples/CQL/uda.cql b/doc/modules/cassandra/examples/CQL/uda.cql
new file mode 100644
index 0000000..b40dd11
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/uda.cql
@@ -0,0 +1,41 @@
+CREATE OR REPLACE FUNCTION test.averageState(state tuple<int,bigint>, val int)
+    CALLED ON NULL INPUT
+    RETURNS tuple<int,bigint>
+    LANGUAGE java
+    AS $$
+        if (val != null) {
+            state.setInt(0, state.getInt(0)+1);
+            state.setLong(1, state.getLong(1)+val.intValue());
+        }
+        return state;
+    $$;
+
+CREATE OR REPLACE FUNCTION test.averageFinal (state tuple<int,bigint>)
+    CALLED ON NULL INPUT
+    RETURNS double
+    LANGUAGE java
+    AS $$
+        double r = 0;
+        if (state.getInt(0) == 0) return null;
+        r = state.getLong(1);
+        r /= state.getInt(0);
+        return Double.valueOf(r);
+    $$;
+
+CREATE OR REPLACE AGGREGATE test.average(int)
+    SFUNC averageState
+    STYPE tuple<int,bigint>
+    FINALFUNC averageFinal
+    INITCOND (0, 0);
+
+CREATE TABLE test.atable (
+    pk int PRIMARY KEY,
+    val int
+);
+
+INSERT INTO test.atable (pk, val) VALUES (1,1);
+INSERT INTO test.atable (pk, val) VALUES (2,2);
+INSERT INTO test.atable (pk, val) VALUES (3,3);
+INSERT INTO test.atable (pk, val) VALUES (4,4);
+
+SELECT test.average(val) FROM test.atable;
diff --git a/doc/modules/cassandra/examples/CQL/udt.cql b/doc/modules/cassandra/examples/CQL/udt.cql
new file mode 100644
index 0000000..defcc82
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/udt.cql
@@ -0,0 +1,16 @@
+CREATE TYPE phone (
+    country_code int,
+    number text,
+);
+
+CREATE TYPE address (
+    street text,
+    city text,
+    zip text,
+    phones map<text, phone>
+);
+
+CREATE TABLE user (
+    name text PRIMARY KEY,
+    addresses map<text, frozen<address>>
+);
diff --git a/doc/modules/cassandra/examples/CQL/update_list.cql b/doc/modules/cassandra/examples/CQL/update_list.cql
new file mode 100644
index 0000000..70aacf5
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/update_list.cql
@@ -0,0 +1,2 @@
+UPDATE plays SET players = 5, scores = scores + [ 14, 21 ] WHERE id = '123-afde';
+UPDATE plays SET players = 6, scores = [ 3 ] + scores WHERE id = '123-afde';
diff --git a/doc/modules/cassandra/examples/CQL/update_map.cql b/doc/modules/cassandra/examples/CQL/update_map.cql
new file mode 100644
index 0000000..870f463
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/update_map.cql
@@ -0,0 +1,2 @@
+UPDATE users SET favs['author'] = 'Ed Poe' WHERE id = 'jsmith';
+UPDATE users SET favs = favs + { 'movie' : 'Casablanca', 'band' : 'ZZ Top' } WHERE id = 'jsmith';
diff --git a/doc/modules/cassandra/examples/CQL/update_particular_list_element.cql b/doc/modules/cassandra/examples/CQL/update_particular_list_element.cql
new file mode 100644
index 0000000..604ad34
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/update_particular_list_element.cql
@@ -0,0 +1 @@
+UPDATE plays SET scores[1] = 7 WHERE id = '123-afde';
diff --git a/doc/modules/cassandra/examples/CQL/update_set.cql b/doc/modules/cassandra/examples/CQL/update_set.cql
new file mode 100644
index 0000000..16e6eb2
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/update_set.cql
@@ -0,0 +1 @@
+UPDATE images SET tags = tags + { 'gray', 'cuddly' } WHERE name = 'cat.jpg';
diff --git a/doc/modules/cassandra/examples/CQL/update_statement.cql b/doc/modules/cassandra/examples/CQL/update_statement.cql
new file mode 100644
index 0000000..7e1cfa7
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/update_statement.cql
@@ -0,0 +1,10 @@
+UPDATE NerdMovies USING TTL 400
+   SET director   = 'Joss Whedon',
+       main_actor = 'Nathan Fillion',
+       year       = 2005
+ WHERE movie = 'Serenity';
+
+UPDATE UserActions
+   SET total = total + 2
+   WHERE user = B70DE1D0-9908-4AE3-BE34-5573E5B09F14
+     AND action = 'click';
diff --git a/doc/modules/cassandra/examples/CQL/update_ttl_map.cql b/doc/modules/cassandra/examples/CQL/update_ttl_map.cql
new file mode 100644
index 0000000..d2db9bd
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/update_ttl_map.cql
@@ -0,0 +1 @@
+UPDATE users USING TTL 10 SET favs['color'] = 'green' WHERE id = 'jsmith';
diff --git a/doc/modules/cassandra/examples/CQL/use_ks.cql b/doc/modules/cassandra/examples/CQL/use_ks.cql
new file mode 100644
index 0000000..b3aaaf3
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/use_ks.cql
@@ -0,0 +1 @@
+USE excelsior;
diff --git a/doc/modules/cassandra/examples/CQL/where.cql b/doc/modules/cassandra/examples/CQL/where.cql
new file mode 100644
index 0000000..22d4bca
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/where.cql
@@ -0,0 +1,4 @@
+SELECT entry_title, content FROM posts
+ WHERE userid = 'john doe'
+   AND blog_title='John''s Blog'
+   AND posted_at >= '2012-01-01' AND posted_at < '2012-01-31';
diff --git a/doc/modules/cassandra/examples/CQL/where_fail.cql b/doc/modules/cassandra/examples/CQL/where_fail.cql
new file mode 100644
index 0000000..57413df
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/where_fail.cql
@@ -0,0 +1,5 @@
+// Needs a blog_title to be set to select ranges of posted_at
+
+SELECT entry_title, content FROM posts
+ WHERE userid = 'john doe'
+   AND posted_at >= '2012-01-01' AND posted_at < '2012-01-31';
diff --git a/doc/modules/cassandra/examples/CQL/where_group_cluster_columns.cql b/doc/modules/cassandra/examples/CQL/where_group_cluster_columns.cql
new file mode 100644
index 0000000..1efb55e
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/where_group_cluster_columns.cql
@@ -0,0 +1,3 @@
+SELECT * FROM posts
+ WHERE userid = 'john doe'
+   AND (blog_title, posted_at) > ('John''s Blog', '2012-01-01');
diff --git a/doc/modules/cassandra/examples/CQL/where_in_tuple.cql b/doc/modules/cassandra/examples/CQL/where_in_tuple.cql
new file mode 100644
index 0000000..1d55804
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/where_in_tuple.cql
@@ -0,0 +1,3 @@
+SELECT * FROM posts
+ WHERE userid = 'john doe'
+   AND (blog_title, posted_at) IN (('John''s Blog', '2012-01-01'), ('Extreme Chess', '2014-06-01'));
diff --git a/doc/modules/cassandra/examples/CQL/where_no_group_cluster_columns.cql b/doc/modules/cassandra/examples/CQL/where_no_group_cluster_columns.cql
new file mode 100644
index 0000000..6681ba5
--- /dev/null
+++ b/doc/modules/cassandra/examples/CQL/where_no_group_cluster_columns.cql
@@ -0,0 +1,4 @@
+SELECT * FROM posts
+ WHERE userid = 'john doe'
+   AND blog_title > 'John''s Blog'
+   AND posted_at > '2012-01-01';
diff --git a/doc/modules/cassandra/examples/JAVA/udf_imports.java b/doc/modules/cassandra/examples/JAVA/udf_imports.java
new file mode 100644
index 0000000..6b883bf
--- /dev/null
+++ b/doc/modules/cassandra/examples/JAVA/udf_imports.java
@@ -0,0 +1,8 @@
+import java.nio.ByteBuffer;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import org.apache.cassandra.cql3.functions.UDFContext;
+import com.datastax.driver.core.TypeCodec;
+import com.datastax.driver.core.TupleValue;
+import com.datastax.driver.core.UDTValue;
diff --git a/doc/modules/cassandra/examples/JAVA/udfcontext.java b/doc/modules/cassandra/examples/JAVA/udfcontext.java
new file mode 100644
index 0000000..65e0c7f
--- /dev/null
+++ b/doc/modules/cassandra/examples/JAVA/udfcontext.java
@@ -0,0 +1,11 @@
+public interface UDFContext
+{
+    UDTValue newArgUDTValue(String argName);
+    UDTValue newArgUDTValue(int argNum);
+    UDTValue newReturnUDTValue();
+    UDTValue newUDTValue(String udtName);
+    TupleValue newArgTupleValue(String argName);
+    TupleValue newArgTupleValue(int argNum);
+    TupleValue newReturnTupleValue();
+    TupleValue newTupleValue(String cqlDefinition);
+}
diff --git a/doc/modules/cassandra/examples/RESULTS/add_repo_keys.result b/doc/modules/cassandra/examples/RESULTS/add_repo_keys.result
new file mode 100644
index 0000000..4736ece
--- /dev/null
+++ b/doc/modules/cassandra/examples/RESULTS/add_repo_keys.result
@@ -0,0 +1,4 @@
+% Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
+                                 Dload  Upload   Total   Spent    Left  Speed
+100  266k  100  266k    0     0   320k      0 --:--:-- --:--:-- --:--:--  320k
+OK
diff --git a/doc/modules/cassandra/examples/RESULTS/add_yum_repo.result b/doc/modules/cassandra/examples/RESULTS/add_yum_repo.result
new file mode 100644
index 0000000..8fdb78c
--- /dev/null
+++ b/doc/modules/cassandra/examples/RESULTS/add_yum_repo.result
@@ -0,0 +1,6 @@
+[cassandra]
+name=Apache Cassandra
+baseurl=https://downloads.apache.org/cassandra/redhat/311x/
+gpgcheck=1
+repo_gpgcheck=1
+gpgkey=https://downloads.apache.org/cassandra/KEYS
diff --git a/doc/modules/cassandra/examples/RESULTS/autoexpand_exclude_dc.result b/doc/modules/cassandra/examples/RESULTS/autoexpand_exclude_dc.result
new file mode 100644
index 0000000..6d5a8a4
--- /dev/null
+++ b/doc/modules/cassandra/examples/RESULTS/autoexpand_exclude_dc.result
@@ -0,0 +1 @@
+CREATE KEYSPACE excalibur WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': '3'} AND durable_writes = true;
diff --git a/doc/modules/cassandra/examples/RESULTS/autoexpand_ks.result b/doc/modules/cassandra/examples/RESULTS/autoexpand_ks.result
new file mode 100644
index 0000000..fcc8855
--- /dev/null
+++ b/doc/modules/cassandra/examples/RESULTS/autoexpand_ks.result
@@ -0,0 +1 @@
+CREATE KEYSPACE excalibur WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': '3', 'DC2': '3'} AND durable_writes = true;
diff --git a/doc/modules/cassandra/examples/RESULTS/autoexpand_ks_override.result b/doc/modules/cassandra/examples/RESULTS/autoexpand_ks_override.result
new file mode 100644
index 0000000..b76189d
--- /dev/null
+++ b/doc/modules/cassandra/examples/RESULTS/autoexpand_ks_override.result
@@ -0,0 +1 @@
+CREATE KEYSPACE excalibur WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': '3', 'DC2': '2'} AND durable_writes = true;
diff --git a/doc/modules/cassandra/examples/RESULTS/cqlsh_localhost.result b/doc/modules/cassandra/examples/RESULTS/cqlsh_localhost.result
new file mode 100644
index 0000000..b5a1908
--- /dev/null
+++ b/doc/modules/cassandra/examples/RESULTS/cqlsh_localhost.result
@@ -0,0 +1,11 @@
+Connected to Test Cluster at localhost:9042.
+[cqlsh 5.0.1 | Cassandra 3.8 | CQL spec 3.4.2 | Native protocol v4]
+Use HELP for help.
+cqlsh> SELECT cluster_name, listen_address FROM system.local;
+
+ cluster_name | listen_address
+--------------+----------------
+ Test Cluster |      127.0.0.1
+
+(1 rows)
+cqlsh>
diff --git a/doc/modules/cassandra/examples/RESULTS/curl_verify_sha.result b/doc/modules/cassandra/examples/RESULTS/curl_verify_sha.result
new file mode 100644
index 0000000..ac77d26
--- /dev/null
+++ b/doc/modules/cassandra/examples/RESULTS/curl_verify_sha.result
@@ -0,0 +1 @@
+28757dde589f70410f9a6a95c39ee7e6cde63440e2b06b91ae6b200614fa364d
diff --git a/doc/modules/cassandra/examples/RESULTS/find_backups.result b/doc/modules/cassandra/examples/RESULTS/find_backups.result
new file mode 100644
index 0000000..156b569
--- /dev/null
+++ b/doc/modules/cassandra/examples/RESULTS/find_backups.result
@@ -0,0 +1,4 @@
+./cassandra/data/data/cqlkeyspace/t-d132e240c21711e9bbee19821dcea330/backups
+./cassandra/data/data/cqlkeyspace/t2-d993a390c22911e9b1350d927649052c/backups
+./cassandra/data/data/catalogkeyspace/journal-296a2d30c22a11e9b1350d927649052c/backups
+./cassandra/data/data/catalogkeyspace/magazine-446eae30c22a11e9b1350d927649052c/backups
diff --git a/doc/modules/cassandra/examples/RESULTS/find_backups_table.result b/doc/modules/cassandra/examples/RESULTS/find_backups_table.result
new file mode 100644
index 0000000..7e01fa6
--- /dev/null
+++ b/doc/modules/cassandra/examples/RESULTS/find_backups_table.result
@@ -0,0 +1 @@
+./cassandra/data/data/cqlkeyspace/t-d132e240c21711e9bbee19821dcea330/backups
diff --git a/doc/modules/cassandra/examples/RESULTS/find_two_snapshots.result b/doc/modules/cassandra/examples/RESULTS/find_two_snapshots.result
new file mode 100644
index 0000000..9cfb693
--- /dev/null
+++ b/doc/modules/cassandra/examples/RESULTS/find_two_snapshots.result
@@ -0,0 +1,3 @@
+total 0
+drwxrwxr-x. 2 ec2-user ec2-user 265 Aug 19 02:44 catalog-ks
+drwxrwxr-x. 2 ec2-user ec2-user 265 Aug 19 02:52 multi-ks
diff --git a/doc/modules/cassandra/examples/RESULTS/flush_and_check.result b/doc/modules/cassandra/examples/RESULTS/flush_and_check.result
new file mode 100644
index 0000000..33863ad
--- /dev/null
+++ b/doc/modules/cassandra/examples/RESULTS/flush_and_check.result
@@ -0,0 +1,9 @@
+total 36
+-rw-rw-r--. 2 ec2-user ec2-user   47 Aug 19 00:32 na-1-big-CompressionInfo.db
+-rw-rw-r--. 2 ec2-user ec2-user   43 Aug 19 00:32 na-1-big-Data.db
+-rw-rw-r--. 2 ec2-user ec2-user   10 Aug 19 00:32 na-1-big-Digest.crc32
+-rw-rw-r--. 2 ec2-user ec2-user   16 Aug 19 00:32 na-1-big-Filter.db
+-rw-rw-r--. 2 ec2-user ec2-user    8 Aug 19 00:32 na-1-big-Index.db
+-rw-rw-r--. 2 ec2-user ec2-user 4673 Aug 19 00:32 na-1-big-Statistics.db
+-rw-rw-r--. 2 ec2-user ec2-user   56 Aug 19 00:32 na-1-big-Summary.db
+-rw-rw-r--. 2 ec2-user ec2-user   92 Aug 19 00:32 na-1-big-TOC.txt
diff --git a/doc/modules/cassandra/examples/RESULTS/flush_and_check2.result b/doc/modules/cassandra/examples/RESULTS/flush_and_check2.result
new file mode 100644
index 0000000..d89b991
--- /dev/null
+++ b/doc/modules/cassandra/examples/RESULTS/flush_and_check2.result
@@ -0,0 +1,17 @@
+total 72
+-rw-rw-r--. 2 ec2-user ec2-user   47 Aug 19 00:32 na-1-big-CompressionInfo.db
+-rw-rw-r--. 2 ec2-user ec2-user   43 Aug 19 00:32 na-1-big-Data.db
+-rw-rw-r--. 2 ec2-user ec2-user   10 Aug 19 00:32 na-1-big-Digest.crc32
+-rw-rw-r--. 2 ec2-user ec2-user   16 Aug 19 00:32 na-1-big-Filter.db
+-rw-rw-r--. 2 ec2-user ec2-user    8 Aug 19 00:32 na-1-big-Index.db
+-rw-rw-r--. 2 ec2-user ec2-user 4673 Aug 19 00:32 na-1-big-Statistics.db
+-rw-rw-r--. 2 ec2-user ec2-user   56 Aug 19 00:32 na-1-big-Summary.db
+-rw-rw-r--. 2 ec2-user ec2-user   92 Aug 19 00:32 na-1-big-TOC.txt
+-rw-rw-r--. 2 ec2-user ec2-user   47 Aug 19 00:35 na-2-big-CompressionInfo.db
+-rw-rw-r--. 2 ec2-user ec2-user   41 Aug 19 00:35 na-2-big-Data.db
+-rw-rw-r--. 2 ec2-user ec2-user   10 Aug 19 00:35 na-2-big-Digest.crc32
+-rw-rw-r--. 2 ec2-user ec2-user   16 Aug 19 00:35 na-2-big-Filter.db
+-rw-rw-r--. 2 ec2-user ec2-user    8 Aug 19 00:35 na-2-big-Index.db
+-rw-rw-r--. 2 ec2-user ec2-user 4673 Aug 19 00:35 na-2-big-Statistics.db
+-rw-rw-r--. 2 ec2-user ec2-user   56 Aug 19 00:35 na-2-big-Summary.db
+-rw-rw-r--. 2 ec2-user ec2-user   92 Aug 19 00:35 na-2-big-TOC.txt
diff --git a/doc/modules/cassandra/examples/RESULTS/insert_data2_backup.result b/doc/modules/cassandra/examples/RESULTS/insert_data2_backup.result
new file mode 100644
index 0000000..23e3902
--- /dev/null
+++ b/doc/modules/cassandra/examples/RESULTS/insert_data2_backup.result
@@ -0,0 +1,13 @@
+id | name                      | publisher
+----+---------------------------+------------------
+ 1 |        Couchbase Magazine |        Couchbase
+ 0 | Apache Cassandra Magazine | Apache Cassandra
+
+ (2 rows)
+
+id | name                      | publisher
+----+---------------------------+------------------
+ 1 |        Couchbase Magazine |        Couchbase
+ 0 | Apache Cassandra Magazine | Apache Cassandra
+
+ (2 rows)
diff --git a/doc/modules/cassandra/examples/RESULTS/insert_table_cc_addl.result b/doc/modules/cassandra/examples/RESULTS/insert_table_cc_addl.result
new file mode 100644
index 0000000..d9af0c6
--- /dev/null
+++ b/doc/modules/cassandra/examples/RESULTS/insert_table_cc_addl.result
@@ -0,0 +1,9 @@
+ a | b | c | d
+---+---+---+---
+ 1 | 1 | 4 | 4
+ 0 | 0 | 0 | 9	<1>
+ 0 | 0 | 1 | 1
+ 0 | 1 | 2 | 2
+ 0 | 1 | 3 | 3
+
+(5 rows)
diff --git a/doc/modules/cassandra/examples/RESULTS/java_verify.result b/doc/modules/cassandra/examples/RESULTS/java_verify.result
new file mode 100644
index 0000000..3ea9625
--- /dev/null
+++ b/doc/modules/cassandra/examples/RESULTS/java_verify.result
@@ -0,0 +1,3 @@
+openjdk version "1.8.0_222"							
+OpenJDK Runtime Environment (build 1.8.0_222-8u222-b10-1ubuntu1~16.04.1-b10)	
+OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode)				
diff --git a/doc/modules/cassandra/examples/RESULTS/no_bups.result b/doc/modules/cassandra/examples/RESULTS/no_bups.result
new file mode 100644
index 0000000..9281104
--- /dev/null
+++ b/doc/modules/cassandra/examples/RESULTS/no_bups.result
@@ -0,0 +1 @@
+total 0
diff --git a/doc/modules/cassandra/examples/RESULTS/nodetool_list_snapshots.result b/doc/modules/cassandra/examples/RESULTS/nodetool_list_snapshots.result
new file mode 100644
index 0000000..15503ed
--- /dev/null
+++ b/doc/modules/cassandra/examples/RESULTS/nodetool_list_snapshots.result
@@ -0,0 +1,13 @@
+Snapshot Details:
+Snapshot name Keyspace name   Column family name True size Size on disk
+multi-table   cqlkeyspace     t2                 4.86 KiB  5.67 KiB
+multi-table   cqlkeyspace     t                  4.89 KiB  5.7 KiB
+multi-ks      cqlkeyspace     t                  4.89 KiB  5.7 KiB
+multi-ks      catalogkeyspace journal            4.9 KiB   5.73 KiB
+magazine      catalogkeyspace magazine           4.9 KiB   5.73 KiB
+multi-table-2 cqlkeyspace     t2                 4.86 KiB  5.67 KiB
+multi-table-2 cqlkeyspace     t                  4.89 KiB  5.7 KiB
+catalog-ks    catalogkeyspace journal            4.9 KiB   5.73 KiB
+catalog-ks    catalogkeyspace magazine           4.9 KiB   5.73 KiB
+
+Total TrueDiskSpaceUsed: 44.02 KiB
diff --git a/doc/modules/cassandra/examples/RESULTS/nodetool_snapshot_help.result b/doc/modules/cassandra/examples/RESULTS/nodetool_snapshot_help.result
new file mode 100644
index 0000000..a583608
--- /dev/null
+++ b/doc/modules/cassandra/examples/RESULTS/nodetool_snapshot_help.result
@@ -0,0 +1,54 @@
+NAME
+       nodetool snapshot - Take a snapshot of specified keyspaces or a snapshot
+       of the specified table
+
+SYNOPSIS
+       nodetool [(-h <host> | --host <host>)] [(-p <port> | --port <port>)]
+               [(-pp | --print-port)] [(-pw <password> | --password <password>)]
+               [(-pwf <passwordFilePath> | --password-file <passwordFilePath>)]
+               [(-u <username> | --username <username>)] snapshot
+               [(-cf <table> | --column-family <table> | --table <table>)]
+               [(-kt <ktlist> | --kt-list <ktlist> | -kc <ktlist> | --kc.list <ktlist>)]
+               [(-sf | --skip-flush)] [(-t <tag> | --tag <tag>)] [--] [<keyspaces...>]
+
+OPTIONS
+       -cf <table>, --column-family <table>, --table <table>
+           The table name (you must specify one and only one keyspace for using
+           this option)
+
+       -h <host>, --host <host>
+           Node hostname or ip address
+
+       -kt <ktlist>, --kt-list <ktlist>, -kc <ktlist>, --kc.list <ktlist>
+           The list of Keyspace.table to take snapshot.(you must not specify
+           only keyspace)
+
+       -p <port>, --port <port>
+           Remote jmx agent port number
+
+       -pp, --print-port
+           Operate in 4.0 mode with hosts disambiguated by port number
+
+       -pw <password>, --password <password>
+           Remote jmx agent password
+
+       -pwf <passwordFilePath>, --password-file <passwordFilePath>
+           Path to the JMX password file
+
+       -sf, --skip-flush
+           Do not flush memtables before snapshotting (snapshot will not
+           contain unflushed data)
+
+       -t <tag>, --tag <tag>
+           The name of the snapshot
+
+       -u <username>, --username <username>
+           Remote jmx agent username
+
+       --
+           This option can be used to separate command-line options from the
+           list of argument, (useful when arguments might be mistaken for
+           command-line options
+
+       [<keyspaces...>]
+           List of keyspaces. By default, all keyspaces
diff --git a/doc/modules/cassandra/examples/RESULTS/select_data2_backup.result b/doc/modules/cassandra/examples/RESULTS/select_data2_backup.result
new file mode 100644
index 0000000..23e3902
--- /dev/null
+++ b/doc/modules/cassandra/examples/RESULTS/select_data2_backup.result
@@ -0,0 +1,13 @@
+id | name                      | publisher
+----+---------------------------+------------------
+ 1 |        Couchbase Magazine |        Couchbase
+ 0 | Apache Cassandra Magazine | Apache Cassandra
+
+ (2 rows)
+
+id | name                      | publisher
+----+---------------------------+------------------
+ 1 |        Couchbase Magazine |        Couchbase
+ 0 | Apache Cassandra Magazine | Apache Cassandra
+
+ (2 rows)
diff --git a/doc/modules/cassandra/examples/RESULTS/select_data_backup.result b/doc/modules/cassandra/examples/RESULTS/select_data_backup.result
new file mode 100644
index 0000000..5d6a9e3
--- /dev/null
+++ b/doc/modules/cassandra/examples/RESULTS/select_data_backup.result
@@ -0,0 +1,15 @@
+id | k | v
+----+---+------
+ 1 | 1 | val1
+ 0 | 0 | val0
+
+ (2 rows)
+
+
+id | k | v
+----+---+------
+ 1 | 1 | val1
+ 0 | 0 | val0
+ 2 | 2 | val2
+
+ (3 rows)
diff --git a/doc/modules/cassandra/examples/RESULTS/select_range.result b/doc/modules/cassandra/examples/RESULTS/select_range.result
new file mode 100644
index 0000000..a3d1c76
--- /dev/null
+++ b/doc/modules/cassandra/examples/RESULTS/select_range.result
@@ -0,0 +1,6 @@
+ a | b | c | d
+---+---+---+---
+ 0 | 1 | 2 | 2
+ 0 | 1 | 3 | 3
+
+(2 rows)
diff --git a/doc/modules/cassandra/examples/RESULTS/select_static_data.result b/doc/modules/cassandra/examples/RESULTS/select_static_data.result
new file mode 100644
index 0000000..f1e8dec
--- /dev/null
+++ b/doc/modules/cassandra/examples/RESULTS/select_static_data.result
@@ -0,0 +1,4 @@
+   pk | t | v      | s
+  ----+---+--------+-----------
+   0  | 0 | 'val0' | 'static1'
+   0  | 1 | 'val1' | 'static1'
diff --git a/doc/modules/cassandra/examples/RESULTS/select_table_clustercolumn.result b/doc/modules/cassandra/examples/RESULTS/select_table_clustercolumn.result
new file mode 100644
index 0000000..1d3899d
--- /dev/null
+++ b/doc/modules/cassandra/examples/RESULTS/select_table_clustercolumn.result
@@ -0,0 +1,9 @@
+ a | b | c | d
+---+---+---+---
+ 1 | 1 | 4 | 4	<1>
+ 0 | 0 | 0 | 0	
+ 0 | 0 | 1 | 1	
+ 0 | 1 | 2 | 2	
+ 0 | 1 | 3 | 3	
+
+(5 rows)
diff --git a/doc/modules/cassandra/examples/RESULTS/select_table_compound_pk.result b/doc/modules/cassandra/examples/RESULTS/select_table_compound_pk.result
new file mode 100644
index 0000000..d098516
--- /dev/null
+++ b/doc/modules/cassandra/examples/RESULTS/select_table_compound_pk.result
@@ -0,0 +1,9 @@
+ a | b | c | d
+---+---+---+---
+ 0 | 0 | 0 | 0 	<1>
+ 0 | 0 | 1 | 1	
+ 0 | 1 | 2 | 2	<2>
+ 0 | 1 | 3 | 3	
+ 1 | 1 | 4 | 4  <3>
+
+(5 rows)
diff --git a/doc/modules/cassandra/examples/RESULTS/snapshot_all.result b/doc/modules/cassandra/examples/RESULTS/snapshot_all.result
new file mode 100644
index 0000000..6ec55a0
--- /dev/null
+++ b/doc/modules/cassandra/examples/RESULTS/snapshot_all.result
@@ -0,0 +1,4 @@
+./cassandra/data/data/cqlkeyspace/t-d132e240c21711e9bbee19821dcea330/snapshots
+./cassandra/data/data/cqlkeyspace/t2-d993a390c22911e9b1350d927649052c/snapshots
+./cassandra/data/data/catalogkeyspace/journal-296a2d30c22a11e9b1350d927649052c/snapshots
+./cassandra/data/data/catalogkeyspace/magazine-446eae30c22a11e9b1350d927649052c/snapshots
diff --git a/doc/modules/cassandra/examples/RESULTS/snapshot_backup2.result b/doc/modules/cassandra/examples/RESULTS/snapshot_backup2.result
new file mode 100644
index 0000000..8276d52
--- /dev/null
+++ b/doc/modules/cassandra/examples/RESULTS/snapshot_backup2.result
@@ -0,0 +1,3 @@
+Requested creating snapshot(s) for [catalogkeyspace] with snapshot name [catalog-ks] and
+options {skipFlush=false}
+Snapshot directory: catalog-ks
diff --git a/doc/modules/cassandra/examples/RESULTS/snapshot_backup2_find.result b/doc/modules/cassandra/examples/RESULTS/snapshot_backup2_find.result
new file mode 100644
index 0000000..88b5499
--- /dev/null
+++ b/doc/modules/cassandra/examples/RESULTS/snapshot_backup2_find.result
@@ -0,0 +1,2 @@
+./cassandra/data/data/catalogkeyspace/journal-296a2d30c22a11e9b1350d927649052c/snapshots
+./cassandra/data/data/catalogkeyspace/magazine-446eae30c22a11e9b1350d927649052c/snapshots
diff --git a/doc/modules/cassandra/examples/RESULTS/snapshot_files.result b/doc/modules/cassandra/examples/RESULTS/snapshot_files.result
new file mode 100644
index 0000000..8dd91b5
--- /dev/null
+++ b/doc/modules/cassandra/examples/RESULTS/snapshot_files.result
@@ -0,0 +1,11 @@
+total 44
+-rw-rw-r--. 1 ec2-user ec2-user   31 Aug 19 02:44 manifest.json
+-rw-rw-r--. 4 ec2-user ec2-user   47 Aug 19 02:38 na-1-big-CompressionInfo.db
+-rw-rw-r--. 4 ec2-user ec2-user   97 Aug 19 02:38 na-1-big-Data.db
+-rw-rw-r--. 4 ec2-user ec2-user   10 Aug 19 02:38 na-1-big-Digest.crc32
+-rw-rw-r--. 4 ec2-user ec2-user   16 Aug 19 02:38 na-1-big-Filter.db
+-rw-rw-r--. 4 ec2-user ec2-user   16 Aug 19 02:38 na-1-big-Index.db
+-rw-rw-r--. 4 ec2-user ec2-user 4687 Aug 19 02:38 na-1-big-Statistics.db
+-rw-rw-r--. 4 ec2-user ec2-user   56 Aug 19 02:38 na-1-big-Summary.db
+-rw-rw-r--. 4 ec2-user ec2-user   92 Aug 19 02:38 na-1-big-TOC.txt
+-rw-rw-r--. 1 ec2-user ec2-user  814 Aug 19 02:44 schema.cql
diff --git a/doc/modules/cassandra/examples/RESULTS/snapshot_mult_ks.result b/doc/modules/cassandra/examples/RESULTS/snapshot_mult_ks.result
new file mode 100644
index 0000000..61dff93
--- /dev/null
+++ b/doc/modules/cassandra/examples/RESULTS/snapshot_mult_ks.result
@@ -0,0 +1,3 @@
+Requested creating snapshot(s) for [catalogkeyspace.journal,cqlkeyspace.t] with snapshot
+name [multi-ks] and options {skipFlush=false}
+Snapshot directory: multi-ks
diff --git a/doc/modules/cassandra/examples/RESULTS/snapshot_mult_tables.result b/doc/modules/cassandra/examples/RESULTS/snapshot_mult_tables.result
new file mode 100644
index 0000000..557a6a4
--- /dev/null
+++ b/doc/modules/cassandra/examples/RESULTS/snapshot_mult_tables.result
@@ -0,0 +1,3 @@
+Requested creating snapshot(s) for ["CQLKeyspace".t,"CQLKeyspace".t2] with snapshot name [multi-
+table] and options {skipFlush=false}
+Snapshot directory: multi-table
diff --git a/doc/modules/cassandra/examples/RESULTS/snapshot_mult_tables_again.result b/doc/modules/cassandra/examples/RESULTS/snapshot_mult_tables_again.result
new file mode 100644
index 0000000..6c09e71
--- /dev/null
+++ b/doc/modules/cassandra/examples/RESULTS/snapshot_mult_tables_again.result
@@ -0,0 +1,3 @@
+Requested creating snapshot(s) for ["CQLKeyspace".t,"CQLKeyspace".t2] with snapshot name [multi-
+table-2] and options {skipFlush=false}
+Snapshot directory: multi-table-2
diff --git a/doc/modules/cassandra/examples/RESULTS/snapshot_one_table2.result b/doc/modules/cassandra/examples/RESULTS/snapshot_one_table2.result
new file mode 100644
index 0000000..c147889
--- /dev/null
+++ b/doc/modules/cassandra/examples/RESULTS/snapshot_one_table2.result
@@ -0,0 +1,3 @@
+Requested creating snapshot(s) for [catalogkeyspace] with snapshot name [magazine] and
+options {skipFlush=false}
+Snapshot directory: magazine
diff --git a/doc/modules/cassandra/examples/RESULTS/tail_syslog.result b/doc/modules/cassandra/examples/RESULTS/tail_syslog.result
new file mode 100644
index 0000000..cb32dc0
--- /dev/null
+++ b/doc/modules/cassandra/examples/RESULTS/tail_syslog.result
@@ -0,0 +1 @@
+INFO  [main] 2019-12-17 03:03:37,526 Server.java:156 - Starting listening for CQL clients on localhost/127.0.0.1:9042 (unencrypted)...
diff --git a/doc/modules/cassandra/examples/RESULTS/verify_gpg.result b/doc/modules/cassandra/examples/RESULTS/verify_gpg.result
new file mode 100644
index 0000000..da62736
--- /dev/null
+++ b/doc/modules/cassandra/examples/RESULTS/verify_gpg.result
@@ -0,0 +1,2 @@
+apache-cassandra-3.11.10-bin.tar.gz: 28757DDE 589F7041 0F9A6A95 C39EE7E6
+                                   CDE63440 E2B06B91 AE6B2006 14FA364D
diff --git a/doc/modules/cassandra/examples/TEXT/tarball_install_dirs.txt b/doc/modules/cassandra/examples/TEXT/tarball_install_dirs.txt
new file mode 100644
index 0000000..99b1a14
--- /dev/null
+++ b/doc/modules/cassandra/examples/TEXT/tarball_install_dirs.txt
@@ -0,0 +1,11 @@
+<tarball_installation>/
+    bin/		<1>
+    conf/		<2>
+    data/		<3>
+    doc/
+    interface/
+    javadoc/
+    lib/
+    logs/		<4>
+    pylib/
+    tools/		<5>
diff --git a/doc/modules/cassandra/examples/YAML/auto_snapshot.yaml b/doc/modules/cassandra/examples/YAML/auto_snapshot.yaml
new file mode 100644
index 0000000..8f5033d
--- /dev/null
+++ b/doc/modules/cassandra/examples/YAML/auto_snapshot.yaml
@@ -0,0 +1 @@
+auto_snapshot: false
diff --git a/doc/modules/cassandra/examples/YAML/incremental_bups.yaml b/doc/modules/cassandra/examples/YAML/incremental_bups.yaml
new file mode 100644
index 0000000..95fccdb
--- /dev/null
+++ b/doc/modules/cassandra/examples/YAML/incremental_bups.yaml
@@ -0,0 +1 @@
+incremental_backups: true
diff --git a/doc/modules/cassandra/examples/YAML/snapshot_before_compaction.yaml b/doc/modules/cassandra/examples/YAML/snapshot_before_compaction.yaml
new file mode 100644
index 0000000..4ee1b17
--- /dev/null
+++ b/doc/modules/cassandra/examples/YAML/snapshot_before_compaction.yaml
@@ -0,0 +1 @@
+snapshot_before_compaction: false
diff --git a/doc/modules/cassandra/examples/YAML/stress-example.yaml b/doc/modules/cassandra/examples/YAML/stress-example.yaml
new file mode 100644
index 0000000..4a67102
--- /dev/null
+++ b/doc/modules/cassandra/examples/YAML/stress-example.yaml
@@ -0,0 +1,62 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+specname: example # identifier for this spec if running with multiple yaml files
+keyspace: example
+
+# Would almost always be network topology unless running something locally
+keyspace_definition: |
+  CREATE KEYSPACE example WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};
+
+table: staff_activities
+
+# The table under test. Start with a partition per staff member
+# Is this a good idea?
+table_definition: |
+  CREATE TABLE staff_activities (
+        name text,
+        when timeuuid,
+        what text,
+        PRIMARY KEY(name, when)
+  ) 
+
+columnspec:
+  - name: name
+    size: uniform(5..10) # The names of the staff members are between 5-10 characters
+    population: uniform(1..10) # 10 possible staff members to pick from 
+  - name: when
+    cluster: uniform(20..500) # Staff members do between 20 and 500 events
+  - name: what
+    size: normal(10..100,50)
+
+insert:
+  # we only update a single partition in any given insert 
+  partitions: fixed(1) 
+  # we want to insert a single row per partition and we have between 20 and 500
+  # rows per partition
+  select: fixed(1)/500 
+  batchtype: UNLOGGED             # Single partition unlogged batches are essentially noops
+
+queries:
+   events:
+      cql: select *  from staff_activities where name = ?
+      fields: samerow
+   latest_event:
+      cql: select * from staff_activities where name = ?  LIMIT 1
+      fields: samerow
+
diff --git a/doc/modules/cassandra/examples/YAML/stress-lwt-example.yaml b/doc/modules/cassandra/examples/YAML/stress-lwt-example.yaml
new file mode 100644
index 0000000..1f12c24
--- /dev/null
+++ b/doc/modules/cassandra/examples/YAML/stress-lwt-example.yaml
@@ -0,0 +1,88 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+# Keyspace Name
+keyspace: stresscql
+
+# The CQL for creating a keyspace (optional if it already exists)
+# Would almost always be network topology unless running something locally
+keyspace_definition: |
+  CREATE KEYSPACE stresscql WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
+
+# Table name
+table: blogposts
+
+# The CQL for creating a table you wish to stress (optional if it already exists)
+table_definition: |
+  CREATE TABLE blogposts (
+        domain text,
+        published_date timeuuid,
+        url text,
+        author text,
+        title text,
+        body text,
+        PRIMARY KEY(domain, published_date)
+  ) WITH CLUSTERING ORDER BY (published_date DESC) 
+    AND compaction = { 'class':'LeveledCompactionStrategy' } 
+    AND comment='A table to hold blog posts'
+
+### Column Distribution Specifications ###
+ 
+columnspec:
+  - name: domain
+    size: gaussian(5..100)       #domain names are relatively short
+    population: uniform(1..10M)  #10M possible domains to pick from
+
+  - name: published_date
+    cluster: fixed(1000)         #under each domain we will have max 1000 posts
+
+  - name: url
+    size: uniform(30..300)       
+
+  - name: title                  #titles shouldn't go beyond 200 chars
+    size: gaussian(10..200)
+
+  - name: author
+    size: uniform(5..20)         #author names should be short
+
+  - name: body
+    size: gaussian(100..5000)    #the body of the blog post can be long
+   
+### Batch Ratio Distribution Specifications ###
+
+insert:
+  partitions: fixed(1)            # Our partition key is the domain so only insert one per batch
+
+  select:    fixed(1)/1000        # We have 1000 posts per domain so 1/1000 will allow 1 post per batch
+
+  batchtype: UNLOGGED             # Unlogged batches
+
+
+#
+# A list of queries you wish to run against the schema
+#
+queries:
+   singlepost:
+      cql: select * from blogposts where domain = ? LIMIT 1
+      fields: samerow
+   regularupdate:
+      cql: update blogposts set author = ? where domain = ? and published_date = ?
+      fields: samerow
+   updatewithlwt:
+      cql: update blogposts set author = ? where domain = ? and published_date = ? IF body = ? AND url = ?
+      fields: samerow
diff --git a/doc/modules/cassandra/nav.adoc b/doc/modules/cassandra/nav.adoc
new file mode 100644
index 0000000..c0d30ed
--- /dev/null
+++ b/doc/modules/cassandra/nav.adoc
@@ -0,0 +1,97 @@
+* Cassandra
+** xref:getting_started/index.adoc[Getting Started]	
+*** xref:getting_started/installing.adoc[Installing Cassandra]
+*** xref:getting_started/configuring.adoc[Configuring Cassandra]
+*** xref:getting_started/querying.adoc[Inserting and querying]
+*** xref:getting_started/drivers.adoc[Client drivers]
+*** xref:getting_started/production.adoc[Production recommendations]
+
+** xref:architecture/index.adoc[Architecture]
+*** xref:architecture/overview.adoc[Overview]
+*** xref:architecture/dynamo.adoc[Dynamo]		
+*** xref:architecture/storage_engine.adoc[Storage engine]
+*** xref:architecture/guarantees.adoc[Guarantees]
+
+** xref:data_modeling/index.adoc[Data modeling]
+*** xref:data_modeling/intro.adoc[Introduction]
+*** xref:data_modeling/data_modeling_conceptual.adoc[Conceptual data modeling]
+*** xref:data_modeling/data_modeling_rdbms.adoc[RDBMS design]
+*** xref:data_modeling/data_modeling_queries.adoc[Defining application queries]
+*** xref:data_modeling/data_modeling_logical.adoc[Logical data modeling]
+*** xref:data_modeling/data_modeling_physical.adoc[Physical data modeling]
+*** xref:data_modeling/data_modeling_refining.adoc[Evaluating and refining data models]
+*** xref:data_modeling/data_modeling_schema.adoc[Defining database schema]
+*** xref:data_modeling/data_modeling_tools.adoc[Cassandra data modeling tools]
+
+** xref:cql/index.adoc[Cassandra Query Language (CQL)]
+*** xref:cql/definitions.adoc[Definitions]
+*** xref:cql/types.adoc[Data types]
+*** xref:cql/ddl.adoc[Data definition (DDL)]
+*** xref:cql/dml.adoc[Data manipulation (DML)]
+*** xref:cql/operators.adoc[Operators]
+*** xref:cql/indexes.adoc[Secondary indexes]
+*** xref:cql/mvs.adoc[Materialized views]
+*** xref:cql/functions.adoc[Functions]
+*** xref:cql/json.adoc[JSON]
+*** xref:cql/security.adoc[Security]
+*** xref:cql/triggers.adoc[Triggers]
+*** xref:cql/appendices.adoc[Appendices]
+*** xref:cql/changes.adoc[Changes]
+*** xref:cql/SASI.adoc[SASI]
+*** xref:cql/cql_singlefile.adoc[Single file of CQL information]
+
+** xref:configuration/index.adoc[Configuration]
+*** xref:configuration/cass_yaml_file.adoc[cassandra.yaml]
+*** xref:configuration/cass_rackdc_file.adoc[cassandra-rackdc.properties]
+*** xref:configuration/cass_env_sh_file.adoc[cassandra-env.sh]
+*** xref:configuration/cass_topo_file.adoc[cassandra-topology.properties]
+*** xref:configuration/cass_cl_archive_file.adoc[commitlog-archiving.properties]
+*** xref:configuration/cass_logback_xml_file.adoc[logback.xml]
+*** xref:configuration/cass_jvm_options_file.adoc[jvm-* files]
+
+** xref:operating/index.adoc[Operating]
+*** xref:operating/snitch.adoc[Snitches]
+*** xref:operating/topo_changes.adoc[Topology changes]
+*** xref:operating/repair.adoc[Repair]
+*** xref:operating/read_repair.adoc[Read repair]
+*** xref:operating/hints.adoc[Hints]
+*** xref:operating/bloom_filters.adoc[Bloom filters]
+*** xref:operating/compression.adoc[Compression]
+*** xref:operating/cdc.adoc[Change Data Capture (CDC)]
+*** xref:operating/backups.adoc[Backups]
+*** xref:operating/bulk_loading.adoc[Bulk loading]
+*** xref:operating/metrics.adoc[Metrics]
+*** xref:operating/security.adoc[Security]
+*** xref:operating/hardware.adoc[Hardware]
+*** xref:operating/audit_logging.adoc[Audit logging]
+*** xref:operating/compaction/index.adoc[Compaction]		
+
+** xref:tools/index.adoc[Tools]
+*** xref:tools/cqlsh.adoc[cqlsh: the CQL shell]
+*** xref:tools/nodetool/nodetool.adoc[nodetool]
+*** xref:tools/sstable/index.adoc[SSTable tools]
+*** xref:tools/cassandra_stress.adoc[cassandra-stress]
+
+** xref:troubleshooting/index.adoc[Troubleshooting]
+*** xref:troubleshooting/finding_nodes.adoc[Finding misbehaving nodes]
+*** xref:troubleshooting/reading_logs.adoc[Reading Cassandra logs]
+*** xref:troubleshooting/use_nodetool.adoc[Using nodetool]
+*** xref:troubleshooting/use_tools.adoc[Using external tools to deep-dive]
+
+** xref:master@_:ROOT:development/index.adoc[Development]
+*** xref:master@_:ROOT:development/gettingstarted.adoc[Getting started]
+*** xref:master@_:ROOT:development/ide.adoc[Building and IDE integration]
+*** xref:master@_:ROOT:development/testing.adoc[Testing]
+*** xref:master@_:ROOT:development/patches.adoc[Contributing code changes]
+*** xref:master@_:ROOT:development/code_style.adoc[Code style]
+*** xref:master@_:ROOT:development/how_to_review.adoc[Review checklist]
+*** xref:master@_:ROOT:development/how_to_commit.adoc[How to commit]
+*** xref:master@_:ROOT:development/documentation.adoc[Working on documentation]
+*** xref:master@_:ROOT:development/ci.adoc[Jenkins CI environment]
+*** xref:master@_:ROOT:development/dependencies.adoc[Dependency management]
+*** xref:master@_:ROOT:development/release_process.adoc[Release process]
+
+** xref:faq/index.adoc[FAQ]
+
+** xref:plugins/index.adoc[Plug-ins]
+
diff --git a/doc/modules/cassandra/pages/architecture/dynamo.adoc b/doc/modules/cassandra/pages/architecture/dynamo.adoc
new file mode 100644
index 0000000..e90390a
--- /dev/null
+++ b/doc/modules/cassandra/pages/architecture/dynamo.adoc
@@ -0,0 +1,531 @@
+= Dynamo
+
+Apache Cassandra relies on a number of techniques from Amazon's
+http://courses.cse.tamu.edu/caverlee/csce438/readings/dynamo-paper.pdf[Dynamo]
+distributed storage key-value system. Each node in the Dynamo system has
+three main components:
+
+* Request coordination over a partitioned dataset
+* Ring membership and failure detection
+* A local persistence (storage) engine
+
+Cassandra primarily draws from the first two clustering components,
+while using a storage engine based on a Log Structured Merge Tree
+(http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.44.2782&rep=rep1&type=pdf[LSM]).
+In particular, Cassandra relies on Dynamo style:
+
+* Dataset partitioning using consistent hashing
+* Multi-master replication using versioned data and tunable consistency
+* Distributed cluster membership and failure detection via a gossip
+protocol
+* Incremental scale-out on commodity hardware
+
+Cassandra was designed this way to meet large-scale (PiB+)
+business-critical storage requirements. In particular, as applications
+demanded full global replication of petabyte scale datasets along with
+always available low-latency reads and writes, it became imperative to
+design a new kind of database model as the relational database systems
+of the time struggled to meet the new requirements of global scale
+applications.
+
+== Dataset Partitioning: Consistent Hashing
+
+Cassandra achieves horizontal scalability by
+https://en.wikipedia.org/wiki/Partition_(database)[partitioning] all
+data stored in the system using a hash function. Each partition is
+replicated to multiple physical nodes, often across failure domains such
+as racks and even datacenters. As every replica can independently accept
+mutations to every key that it owns, every key must be versioned. Unlike
+in the original Dynamo paper where deterministic versions and vector
+clocks were used to reconcile concurrent updates to a key, Cassandra
+uses a simpler last write wins model where every mutation is timestamped
+(including deletes) and then the latest version of data is the "winning"
+value. Formally speaking, Cassandra uses a Last-Write-Wins Element-Set
+conflict-free replicated data type for each CQL row, or 
+https://en.wikipedia.org/wiki/Conflict-free_replicated_data_type#LWW-Element-Set_(Last-Write-Wins-Element-Set)[LWW-Element-Set
+CRDT], to resolve conflicting mutations on replica sets.
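The last-write-wins reconciliation described above can be sketched in a few lines. This is a simplified illustration only, not Cassandra's actual cell representation; the `(timestamp, value)` tuple and the value-comparison tie-break are assumptions made for the sketch, though they match the LWW-Element-Set idea of picking the mutation with the highest timestamp and breaking ties deterministically so all replicas converge:

```python
# Simplified last-write-wins reconciliation for a single cell.
# Each replica holds a (timestamp, value) pair for the cell; the
# mutation with the highest timestamp wins. Ties are broken here by
# comparing values, so every replica picks the same winner.

def reconcile(cell_a, cell_b):
    """Return the winning (timestamp, value) pair."""
    return max(cell_a, cell_b)  # compares timestamp first, then value

replica_1 = (1700000001, "alice@example.com")  # older write
replica_2 = (1700000005, "alice@newmail.com")  # newer write wins

print(reconcile(replica_1, replica_2))
```

Because deletes are also timestamped mutations, the same rule lets a tombstone "win" over older data during reconciliation.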
+
+=== Consistent Hashing using a Token Ring
+
+Cassandra partitions data over storage nodes using a special form of
+hashing called
+https://en.wikipedia.org/wiki/Consistent_hashing[consistent hashing]. In
+naive data hashing, you typically allocate keys to buckets by taking a
+hash of the key modulo the number of buckets. For example, if you want
+to distribute data to 100 nodes using naive hashing you might assign
+every node to a bucket between 0 and 100, hash the input key modulo 100,
+and store the data on the associated bucket. In this naive scheme,
+however, adding a single node might invalidate almost all of the
+mappings.
+
+Cassandra instead maps every node to one or more tokens on a continuous
+hash ring, and defines ownership by hashing a key onto the ring and then
+"walking" the ring in one direction, similar to the
+https://pdos.csail.mit.edu/papers/chord:sigcomm01/chord_sigcomm.pdf[Chord]
+algorithm. The main difference of consistent hashing to naive data
+hashing is that when the number of nodes (buckets) to hash into changes,
+consistent hashing only has to move a small fraction of the keys.
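The difference can be demonstrated numerically. The sketch below uses
made-up parameters and `md5` as a stand-in hash function; it simply
counts how many keys change buckets under naive modulo hashing when one
node is added:

```python
import hashlib

def bucket(key, n):
    # Stable hash of the key, reduced modulo the node (bucket) count.
    h = int(hashlib.md5(str(key).encode()).hexdigest(), 16)
    return h % n

def moved_fraction(num_keys, old_nodes, new_nodes):
    """Fraction of keys whose bucket changes when the node count
    changes from old_nodes to new_nodes under modulo hashing."""
    moved = sum(1 for k in range(num_keys)
                if bucket(k, old_nodes) != bucket(k, new_nodes))
    return moved / num_keys

# Going from 100 to 101 nodes remaps almost every key under naive
# modulo hashing; consistent hashing would move only ~1/101 of them.
print(round(moved_fraction(10_000, 100, 101), 2))
```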
+
+For example, if we have an eight node cluster with evenly spaced tokens,
+and a replication factor (RF) of 3, then to find the owning nodes for a
+key we first hash that key to generate a token (which is just the hash
+of the key), and then we "walk" the ring in a clockwise fashion until we
+encounter three distinct nodes, at which point we have found all the
+replicas of that key. This example of an eight node cluster with
+`RF=3` can be visualized as follows:
+
+image::ring.svg[image]
+
+You can see that in a Dynamo like system, ranges of keys, also known as
+*token ranges*, map to the same physical set of nodes. In this example,
+all keys that fall in the token range excluding token 1 and including
+token 2 (the range `(t1, t2]`) are stored on nodes 2, 3 and 4.
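The ring walk described above can be sketched as follows. This is a
simplified model with one token per node; the node names are invented,
and `md5` stands in for the Murmur3 hash a real cluster uses:

```python
import bisect
import hashlib

# Eight evenly spaced tokens; node i+1 owns token i (one token per node).
TOKENS = sorted((i * 2**64 // 8, f"node{i + 1}") for i in range(8))

def token_for(key):
    # Hash the partition key onto the ring.
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % 2**64

def replicas_for(key, rf=3):
    """Walk the ring clockwise from the key's token until rf distinct
    nodes have been found."""
    token_values = [t for t, _ in TOKENS]
    start = bisect.bisect_right(token_values, token_for(key))
    replicas = []
    for i in range(len(TOKENS)):
        node = TOKENS[(start + i) % len(TOKENS)][1]
        if node not in replicas:
            replicas.append(node)
        if len(replicas) == rf:
            break
    return replicas

# Any key maps to three consecutive distinct nodes on the ring.
print(replicas_for("some-partition-key"))
```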
+
+=== Multiple Tokens per Physical Node (vnodes)
+
+Simple single token consistent hashing works well if you have many
+physical nodes to spread data over, but with evenly spaced tokens and a
+small number of physical nodes, incremental scaling (adding just a few
+nodes of capacity) is difficult because there are no token selections
+for new nodes that can leave the ring balanced. Cassandra seeks to avoid
+token imbalance because uneven token ranges lead to uneven request load.
+In the eight node example above, there is no way to add a ninth
+token without causing imbalance; instead we would have to insert eight
+tokens at the midpoints of the existing ranges.
+
+The Dynamo paper advocates for the use of "virtual nodes" to solve this
+imbalance problem. Virtual nodes solve the problem by assigning multiple
+tokens in the token ring to each physical node. By allowing a single
+physical node to take multiple positions in the ring, we can make small
+clusters look larger and therefore even with a single physical node
+addition we can make it look like we added many more nodes, effectively
+taking many smaller pieces of data from more ring neighbors when we add
+even a single node.
+
+Cassandra introduces some nomenclature to handle these concepts:
+
+* *Token*: A single position on the Dynamo-style hash ring.
+* *Endpoint*: A single physical IP and port on the network.
+* *Host ID*: A unique identifier for a single "physical" node, usually
+present at one *Endpoint* and containing one or more *Tokens*.
+* *Virtual Node* (or *vnode*): A *Token* on the hash ring owned by the
+same physical node, one with the same *Host ID*.
+
+The mapping of *Tokens* to *Endpoints* gives rise to the *Token Map*
+where Cassandra keeps track of what ring positions map to which physical
+endpoints. For example, in the following figure we can represent an
+eight node cluster using only four physical nodes by assigning two
+tokens to every node:
+
+image::vnodes.svg[image]
+
+Multiple tokens per physical node provide the following benefits:
+
+[arabic]
+. When a new node is added it accepts approximately equal amounts of
+data from other nodes in the ring, resulting in equal distribution of
+data across the cluster.
+. When a node is decommissioned, it loses data roughly equally to other
+members of the ring, again keeping equal distribution of data across the
+cluster.
+. If a node becomes unavailable, query load (especially token aware
+query load), is evenly distributed across many other nodes.
+
+Multiple tokens, however, can also have disadvantages:
+
+[arabic]
+. Every token introduces up to `2 * (RF - 1)` additional neighbors on
+the token ring, which means that there are more combinations of node
+failures where we lose availability for a portion of the token ring. The
+more tokens you have,
+https://jolynch.github.io/pdf/cassandra-availability-virtual.pdf[the
+higher the probability of an outage].
+. Cluster-wide maintenance operations are often slowed. For example, as
+the number of tokens per node is increased, the number of discrete
+repair operations the cluster must do also increases.
+. Performance of operations that span token ranges could be affected.
+
+Note that in Cassandra `2.x`, the only token allocation algorithm
+available was picking random tokens, which meant that to keep balance
+the default number of tokens per node had to be quite high, at `256`.
+This had the effect of coupling many physical endpoints together,
+increasing the risk of unavailability. That is why in `3.x+` the new
+deterministic token allocator was added which intelligently picks tokens
+such that the ring is optimally balanced while requiring a much lower
+number of tokens per physical node.
+
+== Multi-master Replication: Versioned Data and Tunable Consistency
+
+Cassandra replicates every partition of data to many nodes across the
+cluster to maintain high availability and durability. When a mutation
+occurs, the coordinator hashes the partition key to determine the token
+range the data belongs to and then replicates the mutation to the
+replicas of that data according to the
+`Replication Strategy`.
+
+All replication strategies have the notion of a *replication factor*
+(`RF`), which indicates to Cassandra how many copies of the partition
+should exist. For example with a `RF=3` keyspace, the data will be
+written to three distinct *replicas*. Replicas are always chosen such
+that they are distinct physical nodes which is achieved by skipping
+virtual nodes if needed. Replication strategies may also choose to skip
+nodes present in the same failure domain such as racks or datacenters so
+that Cassandra clusters can tolerate failures of whole racks and even
+datacenters of nodes.
+
+=== Replication Strategy
+
+Cassandra supports pluggable *replication strategies*, which determine
+which physical nodes act as replicas for a given token range. Every
+keyspace of data has its own replication strategy. All production
+deployments should use the `NetworkTopologyStrategy` while the
+`SimpleStrategy` replication strategy is useful only for testing
+clusters where you do not yet know the datacenter layout of the cluster.
+
+[[network-topology-strategy]]
+==== `NetworkTopologyStrategy`
+
+`NetworkTopologyStrategy` requires a specified replication factor 
+for each datacenter in the cluster. Even if your cluster only uses a
+single datacenter, `NetworkTopologyStrategy` is recommended over
+`SimpleStrategy` to make it easier to add new physical or virtual
+datacenters to the cluster later, if required.
+
+In addition to allowing the replication factor to be specified
+individually by datacenter, `NetworkTopologyStrategy` also attempts to
+choose replicas within a datacenter from different racks as specified by
+the `Snitch`. If the number of racks is greater than or equal
+to the replication factor for the datacenter, each replica is guaranteed
+to be chosen from a different rack. Otherwise, each rack will hold at
+least one replica, but some racks may hold more than one. Note that this
+rack-aware behavior has some potentially
+https://issues.apache.org/jira/browse/CASSANDRA-3810[surprising
+implications]. For example, if there are not an even number of nodes in
+each rack, the data load on the smallest rack may be much higher.
+Similarly, if a single node is bootstrapped into a brand new rack, it
+will be considered a replica for the entire ring. For this reason, many
+operators choose to configure all nodes in a single availability zone or
+similar failure domain as a single "rack".
+
+[[simple-strategy]]
+==== `SimpleStrategy`
+
+`SimpleStrategy` allows a single integer `replication_factor` to be
+defined. This determines the number of nodes that should contain a copy
+of each row. For example, if `replication_factor` is 3, then three
+different nodes should store a copy of each row.
+
+`SimpleStrategy` treats all nodes identically, ignoring any configured
+datacenters or racks. To determine the replicas for a token range,
+Cassandra iterates through the tokens in the ring, starting with the
+token range of interest. For each token, it checks whether the owning
+node has been added to the set of replicas, and if it has not, it is
+added to the set. This process continues until `replication_factor`
+distinct nodes have been added to the set of replicas.
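The selection loop described above can be sketched as follows. The
token assignments are made up for illustration, with two tokens per
node as in a vnode configuration, to show how already-chosen nodes are
skipped:

```python
# Sketch of SimpleStrategy replica selection: walk the sorted token
# ring from the range of interest and collect owning nodes, skipping
# nodes already chosen, until replication_factor distinct nodes are
# found.

# (token, owning node) pairs; each node owns two tokens (vnodes).
RING = sorted([
    (10, "A"), (55, "C"), (20, "B"), (70, "A"),
    (35, "D"), (85, "B"), (45, "C"), (95, "D"),
])

def simple_strategy_replicas(start_index, replication_factor):
    replicas = []
    for i in range(len(RING)):
        node = RING[(start_index + i) % len(RING)][1]
        if node not in replicas:
            replicas.append(node)
        if len(replicas) == replication_factor:
            break
    return replicas

# Starting from the entry for token 45 (owned by node C); the second
# token owned by C (55) is skipped during the walk.
print(simple_strategy_replicas(start_index=3, replication_factor=3))
# -> ['C', 'A', 'B']
```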
+
+==== Transient Replication
+
+Transient replication is an experimental feature in Cassandra {40_version} not
+present in the original Dynamo paper. This feature allows configuration of a
+subset of replicas to replicate only data that hasn't been incrementally
+repaired. This configuration decouples data redundancy from availability.
+For instance, if you have a keyspace replicated at RF=3, and alter it to
+RF=5 with two transient replicas, you go from tolerating one
+failed replica to tolerating two, without corresponding
+increase in storage usage. Now, three nodes will replicate all
+the data for a given token range, and the other two will only replicate
+data that hasn't been incrementally repaired.
+
+To use transient replication, first enable the option in
+`cassandra.yaml`. Once enabled, both `SimpleStrategy` and
+`NetworkTopologyStrategy` can be configured to transiently replicate
+data. Configure it by specifying the replication factor as
+`<total_replicas>/<transient_replicas>`.
+
+Transiently replicated keyspaces only support tables created with
+`read_repair` set to `NONE`; monotonic reads are not currently
+supported. You also can't use `LWT`, logged batches, or counters in {40_version}.
+You will possibly never be able to use materialized views with
+transiently replicated keyspaces and probably never be able to use
+secondary indices with them.
+
+Transient replication is an experimental feature that is not ready
+for production use. The expected audience is experienced users of
+Cassandra capable of fully validating a deployment of their particular
+application. That means being able to check that operations like reads,
+writes, decommission, remove, rebuild, repair, and replace all work with
+your queries, data, configuration, operational practices, and
+availability requirements.
+
+Anticipated additional features in `4.next` are support for monotonic reads with
+transient replication, as well as LWT, logged batches, and counters.
+
+=== Data Versioning
+
+Cassandra uses mutation timestamp versioning to guarantee eventual
+consistency of data. Specifically all mutations that enter the system do
+so with a timestamp provided either from a client clock or, absent a
+client provided timestamp, from the coordinator node's clock. Updates
+resolve according to the conflict resolution rule of last write wins.
+Cassandra's correctness does depend on these clocks, so make sure a
+proper time synchronization process is running such as NTP.
+
+Cassandra applies separate mutation timestamps to every column of every
+row within a CQL partition. Rows are guaranteed to be unique by primary
+key, and each column in a row resolves concurrent mutations according to
+last-write-wins conflict resolution. This means that updates to
+different primary keys within a partition can actually resolve without
+conflict! Furthermore the CQL collection types such as maps and sets use
+this same conflict free mechanism, meaning that concurrent updates to
+maps and sets are guaranteed to resolve as well.
+
+==== Replica Synchronization
+
+As replicas in Cassandra can accept mutations independently, it is
+possible for some replicas to have newer data than others. Cassandra has
+many best-effort techniques to drive convergence of replicas, including
+replica read repair in the read path and hinted handoff in the write
+path.
+
+These techniques are only best-effort, however, and to guarantee
+eventual consistency Cassandra implements anti-entropy repair, where
+replicas calculate hierarchical hash-trees over
+their datasets called https://en.wikipedia.org/wiki/Merkle_tree[Merkle
+trees] that can then be compared across replicas to identify mismatched
+data. Like the original Dynamo paper Cassandra supports full repairs
+where replicas hash their entire dataset, create Merkle trees, send them
+to each other and sync any ranges that don't match.
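A toy illustration of the Merkle tree comparison follows. Real repair
trees are far larger and built per token range; the row data and
`merkle_root` helper here are invented for the sketch:

```python
import hashlib

def h(data):
    return hashlib.sha256(data).hexdigest()

def merkle_root(leaves):
    """Build a binary hash tree over the leaf values (count must be a
    power of two for this toy version); return the root hash."""
    level = [h(leaf.encode()) for leaf in leaves]
    while len(level) > 1:
        level = [h((level[i] + level[i + 1]).encode())
                 for i in range(0, len(level), 2)]
    return level[0]

replica_a = ["row1=v1", "row2=v2", "row3=v3", "row4=v4"]
replica_b = ["row1=v1", "row2=STALE", "row3=v3", "row4=v4"]

# Comparing just the roots cheaply detects that the replicas diverge;
# descending into mismatched subtrees then narrows down which ranges
# need to be synced.
print(merkle_root(replica_a) == merkle_root(replica_b))  # -> False
```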
+
+Unlike the original Dynamo paper, Cassandra also implements sub-range
+repair and incremental repair. Sub-range repair allows Cassandra to
+increase the resolution of the hash trees (potentially down to the
+single partition level) by creating a larger number of trees that span
+only a portion of the data range. Incremental repair allows Cassandra to
+only repair the partitions that have changed since the last repair.
+
+=== Tunable Consistency
+
+Cassandra supports a per-operation tradeoff between consistency and
+availability through *Consistency Levels*. Cassandra's consistency
+levels are a version of Dynamo's `R + W > N` consistency mechanism where
+operators could configure the number of nodes that must participate in
+reads (`R`) and writes (`W`) to be larger than the replication factor
+(`N`). In Cassandra, you instead choose from a menu of common
+consistency levels which allow the operator to pick `R` and `W` behavior
+without knowing the replication factor. Generally writes will be visible
+to subsequent reads when the read consistency level contains enough
+nodes to guarantee a quorum intersection with the write consistency
+level.
+
+The following consistency levels are available:
+
+`ONE`::
+  Only a single replica must respond.
+`TWO`::
+  Two replicas must respond.
+`THREE`::
+  Three replicas must respond.
+`QUORUM`::
+  A majority (n/2 + 1) of the replicas must respond.
+`ALL`::
+  All of the replicas must respond.
+`LOCAL_QUORUM`::
+  A majority of the replicas in the local datacenter (whichever
+  datacenter the coordinator is in) must respond.
+`EACH_QUORUM`::
+  A majority of the replicas in each datacenter must respond.
+`LOCAL_ONE`::
+  Only a single replica must respond. In a multi-datacenter cluster,
+  this also guarantees that read requests are not sent to replicas in a
+  remote datacenter.
+`ANY`::
+  A single replica may respond, or the coordinator may store a hint. If
+  a hint is stored, the coordinator will later attempt to replay the
+  hint and deliver the mutation to the replicas. This consistency level
+  is only accepted for write operations.
+
+Write operations *are always sent to all replicas*, regardless of
+consistency level. The consistency level simply controls how many
+responses the coordinator waits for before responding to the client.
+
+For read operations, the coordinator generally only issues read commands
+to enough replicas to satisfy the consistency level. The one exception
+is speculative retry, which may issue a redundant read request to an
+extra replica if the original replicas have not responded within a
+specified time window.
+
+==== Picking Consistency Levels
+
+It is common to pick read and write consistency levels such that the
+replica sets overlap, resulting in all acknowledged writes being visible
+to subsequent reads. This is typically expressed in the same terms
+Dynamo does, in that `W + R > RF`, where `W` is the write consistency
+level, `R` is the read consistency level, and `RF` is the replication
+factor. For example, if `RF = 3`, a `QUORUM` request will require
+responses from at least `2/3` replicas. If `QUORUM` is used for both
+writes and reads, at least one of the replicas is guaranteed to
+participate in _both_ the write and the read request, which in turn
+guarantees that the quorums will overlap and the write will be visible
+to the read.
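The quorum-overlap rule can be checked numerically. The helper below is
a sketch assuming a single datacenter; the function names and the
level-to-response-count mapping are written for illustration:

```python
# Sketch: number of replicas that must respond for common consistency
# levels (single-datacenter case), and a check of the W + R > RF rule
# that guarantees read-your-writes overlap.

def required_responses(level, rf):
    levels = {"ONE": 1, "TWO": 2, "THREE": 3,
              "QUORUM": rf // 2 + 1, "ALL": rf}
    return levels[level]

def overlaps(write_level, read_level, rf):
    w = required_responses(write_level, rf)
    r = required_responses(read_level, rf)
    return w + r > rf

print(overlaps("QUORUM", "QUORUM", rf=3))  # 2 + 2 > 3 -> True
print(overlaps("ONE", "ONE", rf=3))        # 1 + 1 > 3 -> False
print(overlaps("ONE", "ALL", rf=3))        # 1 + 3 > 3 -> True
```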
+
+In a multi-datacenter environment, `LOCAL_QUORUM` can be used to provide
+a weaker but still useful guarantee: reads are guaranteed to see the
+latest write from within the same datacenter. This is often sufficient
+as clients homed to a single datacenter will read their own writes.
+
+If this type of strong consistency isn't required, lower consistency
+levels like `LOCAL_ONE` or `ONE` may be used to improve throughput,
+latency, and availability. With replication spanning multiple
+datacenters, `LOCAL_ONE` is typically less available than `ONE` but is
+faster as a rule. Indeed `ONE` will succeed if a single replica is
+available in any datacenter.
+
+== Distributed Cluster Membership and Failure Detection
+
+The replication protocols and dataset partitioning rely on knowing which
+nodes are alive and dead in the cluster so that write and read
+operations can be optimally routed. In Cassandra liveness information is
+shared in a distributed fashion through a failure detection mechanism
+based on a gossip protocol.
+
+=== Gossip
+
+Gossip is how Cassandra propagates basic cluster bootstrapping
+information such as endpoint membership and internode network protocol
+versions. In Cassandra's gossip system, nodes exchange state information
+not only about themselves but also about other nodes they know about.
+This information is versioned with a vector clock of
+`(generation, version)` tuples, where the generation is a monotonic
+timestamp and version is a logical clock that increments roughly every
+second. These logical clocks allow Cassandra gossip to ignore old
+versions of cluster state just by inspecting the logical clocks
+presented with gossip messages.
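Comparing states by their `(generation, version)` tuples can be
sketched as follows (an illustrative model; the state tuples and the
`newer` helper are invented for the sketch):

```python
# Sketch: gossip state is versioned with (generation, version) tuples.
# Generation is a timestamp set when the node starts; version is a
# logical clock. Ordinary tuple comparison picks the newer state, so a
# restarted node's higher generation beats any version number from its
# previous life.

def newer(state_a, state_b):
    """Return the newer of two (generation, version, payload) states."""
    return max(state_a, state_b, key=lambda s: (s[0], s[1]))

old_life = (1650000000, 941, "status=NORMAL")  # high version, old generation
new_life = (1650099999, 3, "status=STARTING")  # restarted: new generation

print(newer(old_life, new_life)[2])  # -> status=STARTING
```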
+
+Every node in the Cassandra cluster runs the gossip task independently
+and periodically. Every second, every node in the cluster:
+
+[arabic]
+. Updates the local node's heartbeat state (the version) and constructs
+the node's local view of the cluster gossip endpoint state.
+. Picks a random other node in the cluster to exchange gossip endpoint
+state with.
+. Probabilistically attempts to gossip with any unreachable nodes (if
+any exist).
+. Gossips with a seed node if that didn't happen in step 2.
+
+When an operator first bootstraps a Cassandra cluster they designate
+certain nodes as seed nodes. Any node can be a seed node and the only
+difference between seed and non-seed nodes is seed nodes are allowed to
+bootstrap into the ring without seeing any other seed nodes.
+Furthermore, once a cluster is bootstrapped, seed nodes become
+hotspots for gossip due to step 4 above.
+
+As non-seed nodes must be able to contact at least one seed node in
+order to bootstrap into the cluster, it is common to include multiple
+seed nodes, often one for each rack or datacenter. Seed nodes are often
+chosen using existing off-the-shelf service discovery mechanisms.
+
+[NOTE]
+.Note
+====
+Nodes do not have to agree on the seed nodes, and indeed once a cluster
+is bootstrapped, newly launched nodes can be configured to use any
+existing nodes as seeds. The only advantage to picking the same nodes
+as seeds is that it increases their usefulness as gossip hotspots.
+====
+
+Currently, gossip also propagates token metadata and schema
+_version_ information. This information forms the control plane for
+scheduling data movements and schema pulls. For example, if a node sees
+a mismatch in schema version in gossip state, it will schedule a schema
+sync task with the other nodes. As token information propagates via
+gossip it is also the control plane for teaching nodes which endpoints
+own what data.
+
+=== Ring Membership and Failure Detection
+
+Gossip forms the basis of ring membership, but the *failure detector*
+ultimately makes decisions about whether nodes are `UP` or `DOWN`. Every node
+in Cassandra runs a variant of the
+https://www.computer.org/csdl/proceedings-article/srds/2004/22390066/12OmNvT2phv[Phi
+Accrual Failure Detector], in which every node constantly makes an
+independent decision about whether its peer nodes are available. This
+decision is primarily based on received heartbeat state. For example, if
+a node does not see an increasing heartbeat from a node for a certain
+amount of time, the failure detector "convicts" that node, at which
+point Cassandra will stop routing reads to it (writes will typically be
+written to hints). If/when the node starts heartbeating again, Cassandra
+will try to reach out and connect, and if it can open communication
+channels it will mark that node as available.
+
+[NOTE]
+.Note
+====
+`UP` and `DOWN` state are local node decisions and are not propagated with
+gossip. Heartbeat state is propagated with gossip, but nodes will not
+consider each other as `UP` until they can successfully message each
+other over an actual network channel.
+====
+
+Cassandra will never remove a node from gossip state without
+explicit instruction from an operator via a decommission operation or a
+new node bootstrapping with a `replace_address_first_boot` option. This
+choice is intentional to allow Cassandra nodes to temporarily fail
+without causing data to needlessly re-balance. This also helps to
+prevent simultaneous range movements, where multiple replicas of a token
+range are moving at the same time, which can violate monotonic
+consistency and can even cause data loss.
+
+== Incremental Scale-out on Commodity Hardware
+
+Cassandra scales-out to meet the requirements of growth in data size and
+request rates. Scaling-out means adding additional nodes to the ring,
+and every additional node brings linear improvements in compute and
+storage. In contrast, scaling-up implies adding more capacity to the
+existing database nodes. Cassandra is also capable of scale-up, and in
+certain environments it may be preferable depending on the deployment.
+Cassandra gives operators the flexibility to choose either scale-out or
+scale-up.
+
+One key aspect of Dynamo that Cassandra follows is to attempt to run on
+commodity hardware, and many engineering choices are made under this
+assumption. For example, Cassandra assumes nodes can fail at any time,
+auto-tunes to make the best use of CPU and memory resources available
+and makes heavy use of advanced compression and caching techniques to
+get the most storage out of limited memory and storage capabilities.
+
+=== Simple Query Model
+
+Cassandra, like Dynamo, chooses not to provide cross-partition
+transactions that are common in SQL Relational Database Management
+Systems (RDBMS). This both gives the programmer a simpler read and write
+API, and allows Cassandra to more easily scale horizontally since
+multi-partition transactions spanning multiple nodes are notoriously
+difficult to implement and typically very latent.
+
+Instead, Cassandra chooses to offer fast, consistent latency at any
+scale for single partition operations, allowing retrieval of entire
+partitions or only subsets of partitions based on primary key filters.
+Furthermore, Cassandra does support single partition compare and swap
+functionality via the lightweight transaction CQL API.
+
+=== Simple Interface for Storing Records
+
+Cassandra, in a slight departure from Dynamo, chooses a storage
+interface that is more sophisticated than "simple key value" stores but
+significantly less complex than SQL relational data models. Cassandra
+presents a wide-column store interface, where partitions of data contain
+multiple rows, each of which contains a flexible set of individually
+typed columns. Every row is uniquely identified by the partition key and
+one or more clustering keys, and every row can have as many columns as
+needed.
+
+This allows users to flexibly add new columns to existing datasets as
+new requirements surface. Schema changes involve only metadata changes
+and run fully concurrently with live workloads. Therefore, users can
+safely add columns to existing Cassandra databases while remaining
+confident that query performance will not degrade.
diff --git a/doc/modules/cassandra/pages/architecture/guarantees.adoc b/doc/modules/cassandra/pages/architecture/guarantees.adoc
new file mode 100644
index 0000000..3313a11
--- /dev/null
+++ b/doc/modules/cassandra/pages/architecture/guarantees.adoc
@@ -0,0 +1,108 @@
+= Guarantees
+
+Apache Cassandra is a highly scalable and reliable database. Cassandra
+is used in web-based applications that serve a large number of clients
+and process web-scale (petabyte) quantities of data. Cassandra makes
+some guarantees about its scalability, availability and reliability. To
+fully understand the inherent limitations of a storage system in an
+environment where a certain level of network partition failure is to be
+expected and must be accounted for in the system design, it is important
+to first briefly introduce the CAP theorem.
+
+== What is CAP?
+
+According to the CAP theorem it is not possible for a distributed data
+store to provide more than two of the following guarantees
+simultaneously.
+
+* Consistency: Consistency implies that every read receives the most
+recent write or errors out
+* Availability: Availability implies that every request receives a
+response. It is not guaranteed that the response contains the most
+recent write or data.
+* Partition tolerance: Partition tolerance refers to the tolerance of a
+storage system to failure of a network partition. Even if some of the
+messages are dropped or delayed the system continues to operate.
+
+The CAP theorem implies that in the presence of a network partition,
+with its inherent risk of partition failure, one has to choose between
+consistency and availability, as both cannot be guaranteed at the same
+time. The CAP theorem is illustrated in Figure 1.
+
+image::Figure_1_guarantees.jpg[image]
+
+Figure 1. CAP Theorem
+
+High availability is a priority in web based applications and to this
+objective Cassandra chooses Availability and Partition Tolerance from
+the CAP guarantees, compromising on data Consistency to some extent.
+
+Cassandra makes the following guarantees.
+
+* High Scalability
+* High Availability
+* Durability
+* Eventual Consistency of writes to a single table
+* Lightweight transactions with linearizable consistency
+* Batched writes across multiple tables are guaranteed to succeed
+completely or not at all
+* Secondary indexes are guaranteed to be consistent with their local
+replicas' data
+
+== High Scalability
+
+Cassandra is a highly scalable storage system in which nodes may be
+added/removed as needed. Using a gossip-based protocol, a unified and
+consistent membership list is kept at each node.
+
+== High Availability
+
+Cassandra guarantees high availability of data by implementing a
+fault-tolerant storage system. Node failure is detected using a
+gossip-based protocol.
+
+== Durability
+
+Cassandra guarantees data durability by using replicas. Replicas are
+multiple copies of the data stored on different nodes in a cluster. In a
+multi-datacenter environment the replicas may be stored on different
+datacenters. If one replica is lost due to unrecoverable node/datacenter
+failure the data is not completely lost as replicas are still available.
+
+== Eventual Consistency
+
+To meet the requirements of performance, reliability, scalability and
+high availability in production, Cassandra is an eventually consistent
+storage system. Eventual consistency implies that all updates reach all
+replicas eventually. Divergent versions of the same data may exist
+temporarily but they are eventually reconciled to a consistent state.
+Eventual consistency is a tradeoff to achieve high availability and it
+involves some read and write latencies.
+
+== Lightweight transactions with linearizable consistency
+
+Data must be read and written in a sequential order. The Paxos consensus
+protocol is used to implement lightweight transactions that are able to
+handle concurrent operations with linearizable
+consistency. Linearizable consistency is
+sequential consistency with real-time constraints and it ensures
+transaction isolation with compare and set (CAS) transaction. With CAS
+replica data is compared and data that is found to be out of date is set
+to the most consistent value. Reads with linearizable consistency allow
+reading the current state of the data, which may possibly be
+uncommitted, without making a new addition or update.
+
+== Batched Writes
+
+The guarantee for batched writes across multiple tables is that they
+will all eventually succeed, or none will. Batch data is first written
+to the batchlog system, and when the batch data has been successfully
+stored in the cluster the batchlog data is removed. The batch is
+replicated to another node to ensure the full batch completes in the
+event the coordinator node fails.
+
+== Secondary Indexes
+
+A secondary index is an index on a column, used to query a table on a
+column that is not otherwise queryable. When built, secondary indexes
+are guaranteed to be consistent with their local replicas.
diff --git a/doc/modules/cassandra/pages/architecture/images/ring.svg b/doc/modules/cassandra/pages/architecture/images/ring.svg
new file mode 100644
index 0000000..d0db8c5
--- /dev/null
+++ b/doc/modules/cassandra/pages/architecture/images/ring.svg
@@ -0,0 +1,11 @@
+<svg xmlns="http://www.w3.org/2000/svg" width="651" height="709.4583740234375" style="
+        width:651px;
+        height:709.4583740234375px;
+        background: transparent;
+        fill: none;
+">
+        
+        
+        <svg xmlns="http://www.w3.org/2000/svg" class="role-diagram-draw-area"><g class="shapes-region" style="stroke: black; fill: none;"><g class="composite-shape"><path class="real" d=" M223.5,655 C223.5,634.84 239.84,618.5 260,618.5 C280.16,618.5 296.5,634.84 296.5,655 C296.5,675.16 280.16,691.5 260,691.5 C239.84,691.5 223.5,675.16 223.5,655 Z" style="stroke-width: 1; stroke: rgb(103, 148, 135); fill: rgb(103, 148, 135);"/></g><g class="composite-shape"><path class="real" d=" M229.26 [...]
+        <svg xmlns="http://www.w3.org/2000/svg" width="649" height="707.4583740234375" style="width:649px;height:707.4583740234375px;font-family:Asana-Math, Asana;background:transparent;"><g><g><g><g><g><g style="transform:matrix(1,0,0,1,12.171875,40.31333587646485);"><path d="M175 386L316 386L316 444L175 444L175 571L106 571L106 444L19 444L19 386L103 386L103 119C103 59 117 -11 186 -11C256 -11 307 14 332 27L316 86C290 65 258 53 226 53C189 53 175 83 175 136ZM829 220C829 354 729 461 610 461 [...]
+</svg>
diff --git a/doc/modules/cassandra/pages/architecture/images/vnodes.svg b/doc/modules/cassandra/pages/architecture/images/vnodes.svg
new file mode 100644
index 0000000..71b4fa2
--- /dev/null
+++ b/doc/modules/cassandra/pages/architecture/images/vnodes.svg
@@ -0,0 +1,11 @@
+<svg xmlns="http://www.w3.org/2000/svg" width="651" height="384.66668701171875" style="
+        width:651px;
+        height:384.66668701171875px;
+        background: transparent;
+        fill: none;
+">
+        
+        
+        <svg xmlns="http://www.w3.org/2000/svg" class="role-diagram-draw-area"><g class="shapes-region" style="stroke: black; fill: none;"><g class="composite-shape"><path class="real" d=" M40.4,190 C40.4,107.38 107.38,40.4 190,40.4 C272.62,40.4 339.6,107.38 339.6,190 C339.6,272.62 272.62,339.6 190,339.6 C107.38,339.6 40.4,272.62 40.4,190 Z" style="stroke-width: 1; stroke: rgba(0, 0, 0, 0.52); fill: none; stroke-dasharray: 1.125, 3.35;"/></g><g class="composite-shape"><path class="real"  [...]
+        <svg xmlns="http://www.w3.org/2000/svg" width="649" height="382.66668701171875" style="width:649px;height:382.66668701171875px;font-family:Asana-Math, Asana;background:transparent;"><g><g><g><g><g><g style="transform:matrix(1,0,0,1,178.65625,348.9985620117188);"><path d="M125 390L69 107C68 99 56 61 56 31C56 6 67 -9 86 -9C121 -9 156 11 234 74L265 99L255 117L210 86C181 66 161 56 150 56C141 56 136 64 136 76C136 102 150 183 179 328L192 390L299 390L310 440C272 436 238 434 200 434C216  [...]
+</svg>
diff --git a/doc/modules/cassandra/pages/architecture/index.adoc b/doc/modules/cassandra/pages/architecture/index.adoc
new file mode 100644
index 0000000..c4bef05
--- /dev/null
+++ b/doc/modules/cassandra/pages/architecture/index.adoc
@@ -0,0 +1,9 @@
+= Architecture
+
+This section describes the general architecture of Apache Cassandra.
+
+* xref:architecture/overview.adoc[Overview]
+* xref:architecture/dynamo.adoc[Dynamo]
+* xref:architecture/storage_engine.adoc[Storage Engine]
+* xref:architecture/guarantees.adoc[Guarantees]
+* xref:architecture/snitch.adoc[Snitches]
diff --git a/doc/modules/cassandra/pages/architecture/overview.adoc b/doc/modules/cassandra/pages/architecture/overview.adoc
new file mode 100644
index 0000000..605e347
--- /dev/null
+++ b/doc/modules/cassandra/pages/architecture/overview.adoc
@@ -0,0 +1,101 @@
+= Overview
+:exper: experimental
+
+Apache Cassandra is an open source, distributed, NoSQL database. It
+presents a partitioned wide column storage model with eventually
+consistent semantics.
+
+Apache Cassandra was initially designed at
+https://www.cs.cornell.edu/projects/ladis2009/papers/lakshman-ladis2009.pdf[Facebook]
+using a staged event-driven architecture
+(http://www.sosp.org/2001/papers/welsh.pdf[SEDA]) to implement a
+combination of Amazon’s
+http://courses.cse.tamu.edu/caverlee/csce438/readings/dynamo-paper.pdf[Dynamo]
+distributed storage and replication techniques and Google's
+https://static.googleusercontent.com/media/research.google.com/en//archive/bigtable-osdi06.pdf[Bigtable]
+data and storage engine model. Dynamo and Bigtable were both developed
+to meet emerging requirements for scalable, reliable and highly
+available storage systems, but each had areas that could be improved.
+
+Cassandra was designed as a best-in-class combination of both systems to
+meet emerging large-scale storage requirements, both in data footprint
+and query volume. As applications began to require full global
+replication and always available low-latency reads and writes, it became
+imperative to design a new kind of database model as the relational
+database systems of the time struggled to meet the new requirements of
+global scale applications.
+
+Systems like Cassandra are designed for these challenges and seek the
+following design objectives:
+
+* Full multi-master database replication
+* Global availability at low latency
+* Scaling out on commodity hardware
+* Linear throughput increase with each additional processor
+* Online load balancing and cluster growth
+* Partitioned key-oriented queries
+* Flexible schema
+
+== Features
+
+Cassandra provides the Cassandra Query Language (xref:cql/ddl.adoc[CQL]), an SQL-like
+language, to create and update database schema and access data. CQL
+allows users to organize data within a cluster of Cassandra nodes using:
+
+* *Keyspace*: Defines how a dataset is replicated, per datacenter. 
+Replication is the number of copies saved per cluster.
+Keyspaces contain tables.
+* *Table*: Defines the typed schema for a collection of partitions.
+Tables contain partitions, which contain rows, which contain columns.
+New columns can be added to Cassandra tables with zero downtime.
+* *Partition*: Defines the mandatory part of the primary key all rows in
+Cassandra must have to identify the node in a cluster where the row is stored. 
+All performant queries supply the partition key in the query.
+* *Row*: Contains a collection of columns identified by a unique primary
+key made up of the partition key and optionally additional clustering
+keys.
+* *Column*: A single datum with a type which belongs to a row.
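+
+These concepts map directly onto CQL DDL. A minimal sketch (the
+keyspace, table, and column names are illustrative):
+
+[source,cql]
+----
+CREATE KEYSPACE cycling
+  WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3};
+
+CREATE TABLE cycling.rank_by_year_and_name (
+  race_year int,     -- partition key component: locates the storing node
+  race_name text,    -- partition key component
+  rank int,          -- clustering key: orders rows within a partition
+  cyclist_name text, -- regular column
+  PRIMARY KEY ((race_year, race_name), rank)
+);
+----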
+
+CQL supports numerous advanced features over a partitioned dataset such
+as:
+
+* Single partition lightweight transactions with atomic compare and set
+semantics.
+* User-defined types, functions and aggregates
+* Collection types including sets, maps, and lists.
+* Local secondary indices
+* (Experimental) materialized views
+
+Cassandra explicitly chooses not to implement operations that require
+cross-partition coordination, as they are typically slow and make it
+hard to provide highly available global semantics. For example,
+Cassandra does not support:
+
+* Cross partition transactions
+* Distributed joins
+* Foreign keys or referential integrity.
+
+== Operating
+
+Apache Cassandra configuration settings are configured in the
+`cassandra.yaml` file that can be edited by hand or with the aid of
+configuration management tools. Some settings can be manipulated live
+using an online interface, but others require a restart of the database
+to take effect.
+
+Cassandra provides tools for managing a cluster. The `nodetool` command
+interacts with Cassandra's live control interface, allowing runtime
+manipulation of many settings from `cassandra.yaml`. The
+`auditlogviewer` is used to view the audit logs. The `fqltool` is used
+to view, replay and compare full query logs. The `auditlogviewer` and
+`fqltool` are new tools in Apache Cassandra {40_version}.
+
+In addition, Cassandra supports out of the box atomic snapshot
+functionality, which presents a point in time snapshot of Cassandra's
+data for easy integration with many backup tools. Cassandra also
+supports incremental backups where data can be backed up as it is
+written.
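+
+For example, a point-in-time snapshot of a keyspace can be taken and
+later cleared with `nodetool` (the tag and keyspace names here are
+illustrative):
+
+[source,bash]
+----
+# Take a named snapshot of one keyspace
+nodetool snapshot --tag catalog-2021 cycling
+
+# List existing snapshots and the space they occupy
+nodetool listsnapshots
+
+# Remove the snapshot once it has been copied to backup storage
+nodetool clearsnapshot --tag catalog-2021 cycling
+----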
+
+Apache Cassandra {40_version} has added several new features including virtual
+tables, transient replication ({exper}), audit logging, full query logging, and
+support for Java 11 ({exper}). 
diff --git a/doc/modules/cassandra/pages/architecture/snitch.adoc b/doc/modules/cassandra/pages/architecture/snitch.adoc
new file mode 100644
index 0000000..90b32fb
--- /dev/null
+++ b/doc/modules/cassandra/pages/architecture/snitch.adoc
@@ -0,0 +1,74 @@
+= Snitch
+
+In Cassandra, the snitch has two functions:
+
+* it teaches Cassandra enough about your network topology to route
+requests efficiently.
+* it allows Cassandra to spread replicas around your cluster to avoid
+correlated failures. It does this by grouping machines into
+"datacenters" and "racks." Cassandra will do its best not to have more
+than one replica on the same "rack" (which may not actually be a
+physical location).
+
+== Dynamic snitching
+
+The dynamic snitch monitors read latencies to avoid reading from hosts
+that have slowed down. The dynamic snitch is configured with the
+following properties in `cassandra.yaml`:
+
+* `dynamic_snitch`: whether the dynamic snitch should be enabled or
+disabled.
+* `dynamic_snitch_update_interval_in_ms`: controls how often to perform
+the more expensive part of host score calculation.
+* `dynamic_snitch_reset_interval_in_ms`: if set greater than zero, this
+will allow 'pinning' of replicas to hosts in order to increase cache
+capacity.
+* `dynamic_snitch_badness_threshold`: The badness threshold will
+control how much worse the pinned host has to be before the dynamic
+snitch will prefer other replicas over it. This is expressed as a double
+which represents a percentage. Thus, a value of 0.2 means Cassandra
+would continue to prefer the static snitch values until the pinned host
+was 20% worse than the fastest.
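+
+Taken together, these settings might appear in `cassandra.yaml` as
+follows (the values shown are the shipped defaults, for illustration):
+
+[source, yaml]
+----
+dynamic_snitch: true
+dynamic_snitch_update_interval_in_ms: 100
+dynamic_snitch_reset_interval_in_ms: 600000
+dynamic_snitch_badness_threshold: 0.1
+----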
+
+== Snitch classes
+
+The `endpoint_snitch` parameter in `cassandra.yaml` should be set to the
+class that implements `IEndPointSnitch` which will be wrapped by the
+dynamic snitch and decide if two endpoints are in the same data center
+or on the same rack. Out of the box, Cassandra provides the snitch
+implementations:
+
+GossipingPropertyFileSnitch::
+  This should be your go-to snitch for production use. The rack and
+  datacenter for the local node are defined in
+  cassandra-rackdc.properties and propagated to other nodes via gossip.
+  If `cassandra-topology.properties` exists, it is used as a fallback,
+  allowing migration from the PropertyFileSnitch.
+SimpleSnitch::
+  Treats Strategy order as proximity. This can improve cache locality
+  when disabling read repair. Only appropriate for single-datacenter
+  deployments.
+PropertyFileSnitch::
+  Proximity is determined by rack and data center, which are explicitly
+  configured in `cassandra-topology.properties`.
+Ec2Snitch::
+  Appropriate for EC2 deployments in a single Region, or in multiple
+  regions with inter-region VPC enabled (available since the end of
+  2017, see
+  https://aws.amazon.com/about-aws/whats-new/2017/11/announcing-support-for-inter-region-vpc-peering/[AWS
+  announcement]). Loads Region and Availability Zone information from
+  the EC2 API. The Region is treated as the datacenter, and the
+  Availability Zone as the rack. Only private IPs are used, so this will
+  work across multiple regions only if inter-region VPC is enabled.
+Ec2MultiRegionSnitch::
+  Uses public IPs as broadcast_address to allow cross-region
+  connectivity (thus, you should set seed addresses to the public IP as
+  well). You will need to open the `storage_port` or `ssl_storage_port`
+  on the public IP firewall (For intra-Region traffic, Cassandra will
+  switch to the private IP after establishing a connection).
+RackInferringSnitch::
+  Proximity is determined by rack and data center, which are assumed to
+  correspond to the 3rd and 2nd octet of each node's IP address,
+  respectively. Unless this happens to match your deployment
+  conventions, this is best used as an example of writing a custom
+  Snitch class and is provided in that spirit.
diff --git a/doc/modules/cassandra/pages/architecture/storage_engine.adoc b/doc/modules/cassandra/pages/architecture/storage_engine.adoc
new file mode 100644
index 0000000..77c52e5
--- /dev/null
+++ b/doc/modules/cassandra/pages/architecture/storage_engine.adoc
@@ -0,0 +1,225 @@
+= Storage Engine
+
+[[commit-log]]
+== CommitLog
+
+Commitlogs are an append only log of all mutations local to a Cassandra
+node. Any data written to Cassandra will first be written to a commit
+log before being written to a memtable. This provides durability in the
+case of unexpected shutdown. On startup, any mutations in the commit log
+will be applied to memtables.
+
+All mutations are write-optimized by being stored in commitlog
+segments, reducing the number of seeks needed to write to disk.
+Commitlog segments are limited by the `commitlog_segment_size_in_mb`
+option; once the size is reached, a new commitlog segment is created.
+Commitlog segments can be archived, deleted, or recycled once all of
+their data has been flushed to
+data older than a certain point to the SSTables. Running "nodetool
+drain" before stopping Cassandra will write everything in the memtables
+to SSTables and remove the need to sync with the commitlogs on startup.
+
+* `commitlog_segment_size_in_mb`: The default size is 32, which is
+almost always fine, but if you are archiving commitlog segments (see
+commitlog_archiving.properties), then you probably want a finer
+granularity of archiving; 8 or 16 MB is reasonable. Max mutation size is
+also configurable via `max_mutation_size_in_kb` setting in `cassandra.yaml`.
+The default is half of `commitlog_segment_size_in_mb * 1024`.
+
+**NOTE: If `max_mutation_size_in_kb` is set explicitly then
+`commitlog_segment_size_in_mb` must be set to at least twice the size of
+`max_mutation_size_in_kb / 1024`**.
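+
+For example, a `cassandra.yaml` fragment that satisfies this constraint
+(the values are chosen for illustration):
+
+[source, yaml]
+----
+commitlog_segment_size_in_mb: 32
+# at most half of 32 MB, i.e. 16384 KB
+max_mutation_size_in_kb: 16384
+----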
+
+Commitlogs are an append only log of all mutations local to a Cassandra
+node. Any data written to Cassandra will first be written to a commit
+log before being written to a memtable. This provides durability in the
+case of unexpected shutdown. On startup, any mutations in the commit log
+will be applied.
+
+* `commitlog_sync`: may be either _periodic_ or _batch_.
+** `batch`: In batch mode, Cassandra won’t ack writes until the commit
+log has been fsynced to disk. It will wait
+"commitlog_sync_batch_window_in_ms" milliseconds between fsyncs. This
+window should be kept short because the writer threads will be unable to
+do extra work while waiting. You may need to increase concurrent_writes
+for the same reason.
++
+- `commitlog_sync_batch_window_in_ms`: Time to wait between "batch"
+fsyncs _Default Value:_ 2
+** `periodic`: In periodic mode, writes are immediately ack'ed, and the
+CommitLog is simply synced every "commitlog_sync_period_in_ms"
+milliseconds.
++
+- `commitlog_sync_period_in_ms`: Time to wait between "periodic" fsyncs
+_Default Value:_ 10000
+
+_Default Value:_ periodic
+
+*NOTE: In the event of an unexpected shutdown, Cassandra can lose up
+to the sync period, or more if the sync is delayed. If using "batch"
+mode, it is recommended to store commitlogs on a separate, dedicated
+device.*
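+
+For example, the two sync modes might be configured in `cassandra.yaml`
+as follows (only one mode is active at a time; the values match the
+descriptions above):
+
+[source, yaml]
+----
+# batch mode: ack writes only after fsync
+# commitlog_sync: batch
+# commitlog_sync_batch_window_in_ms: 2
+
+# periodic mode: ack immediately, fsync every period
+commitlog_sync: periodic
+commitlog_sync_period_in_ms: 10000
+----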
+
+* `commitlog_directory`: This option is commented out by default. When
+running on magnetic HDD, this should be a separate spindle from the
+data directories. If not set, the default directory is
+$CASSANDRA_HOME/data/commitlog.
+
+_Default Value:_ /var/lib/cassandra/commitlog
+
+* `commitlog_compression`: Compression to apply to the commitlog. If
+omitted, the commit log will be written uncompressed. LZ4, Snappy,
+Deflate and Zstd compressors are supported.
+
+_Default Value:_ (complex option):
+
+[source, yaml]
+----
+#   - class_name: LZ4Compressor
+#     parameters:
+----
+
+* `commitlog_total_space_in_mb`: Total space to use for commit logs on
+disk.
+
+If space gets above this value, Cassandra will flush every dirty CF in
+the oldest segment and remove it. So a small total commitlog space will
+tend to cause more flush activity on less-active columnfamilies.
+
+The default value is the smaller of 8192, and 1/4 of the total space of
+the commitlog volume.
+
+_Default Value:_ 8192
+
+== Memtables
+
+Memtables are in-memory structures where Cassandra buffers writes. In
+general, there is one active memtable per table. Eventually, memtables
+are flushed onto disk and become immutable link:#sstables[SSTables].
+This can be triggered in several ways:
+
+* The memory usage of the memtables exceeds the configured threshold
+(see `memtable_cleanup_threshold`)
+* The `commit-log` approaches its maximum size, and forces memtable
+flushes in order to allow commitlog segments to be freed
+
+Memtables may be stored entirely on-heap or partially off-heap,
+depending on `memtable_allocation_type`.
+
+== SSTables
+
+SSTables are the immutable data files that Cassandra uses for persisting
+data on disk.
+
+As SSTables are flushed to disk from `memtables` or are streamed from
+other nodes, Cassandra triggers compactions which combine multiple
+SSTables into one. Once the new SSTable has been written, the old
+SSTables can be removed.
+
+Each SSTable is comprised of multiple components stored in separate
+files:
+
+`Data.db`::
+  The actual data, i.e. the contents of rows.
+`Index.db`::
+  An index from partition keys to positions in the `Data.db` file. For
+  wide partitions, this may also include an index to rows within a
+  partition.
+`Summary.db`::
+  A sampling of (by default) every 128th entry in the `Index.db` file.
+`Filter.db`::
+  A Bloom Filter of the partition keys in the SSTable.
+`CompressionInfo.db`::
+  Metadata about the offsets and lengths of compression chunks in the
+  `Data.db` file.
+`Statistics.db`::
+  Stores metadata about the SSTable, including information about
+  timestamps, tombstones, clustering keys, compaction, repair,
+  compression, TTLs, and more.
+`Digest.crc32`::
+  A CRC-32 digest of the `Data.db` file.
+`TOC.txt`::
+  A plain text list of the component files for the SSTable.
+
+Within the `Data.db` file, rows are organized by partition. These
+partitions are sorted in token order (i.e. by a hash of the partition
+key when the default partitioner, `Murmur3Partitioner`, is used). Within a
+partition, rows are stored in the order of their clustering keys.
+
+SSTables can be optionally compressed using block-based compression.
+
+== SSTable Versions
+
+This section was created using the following
+https://gist.github.com/shyamsalimkumar/49a61e5bc6f403d20c55[gist] which
+utilized this original
+http://www.bajb.net/2013/03/cassandra-sstable-format-version-numbers/[source].
+
+The version numbers, to date, are:
+
+=== Version 0
+
+* b (0.7.0): added version to sstable filenames
+* c (0.7.0): bloom filter component computes hashes over raw key bytes
+instead of strings
+* d (0.7.0): row size in data component becomes a long instead of int
+* e (0.7.0): stores undecorated keys in data and index components
+* f (0.7.0): switched bloom filter implementations in data component
+* g (0.8): tracks flushed-at context in metadata component
+
+=== Version 1
+
+* h (1.0): tracks max client timestamp in metadata component
+* hb (1.0.3): records compression ratio in metadata component
+* hc (1.0.4): records partitioner in metadata component
+* hd (1.0.10): includes row tombstones in maxtimestamp
+* he (1.1.3): includes ancestors generation in metadata component
+* hf (1.1.6): marker that replay position corresponds to 1.1.5+
+millis-based id (see CASSANDRA-4782)
+* ia (1.2.0):
+** column indexes are promoted to the index file
+** records estimated histogram of deletion times in tombstones
+** bloom filter (keys and columns) upgraded to Murmur3
+* ib (1.2.1): tracks min client timestamp in metadata component
+* ic (1.2.5): omits per-row bloom filter of column names
+
+=== Version 2
+
+* ja (2.0.0):
+** super columns are serialized as composites (note that there is no
+real format change, this is mostly a marker to know if we should expect
+super columns or not. We do need a major version bump however, because
+we should not allow streaming of super columns into this new format)
+** tracks max local deletiontime in sstable metadata
+** records bloom_filter_fp_chance in metadata component
+** remove data size and column count from data file (CASSANDRA-4180)
+** tracks max/min column values (according to comparator)
+* jb (2.0.1):
+** switch from crc32 to adler32 for compression checksums
+** checksum the compressed data
+* ka (2.1.0):
+** new Statistics.db file format
+** index summaries can be downsampled and the sampling level is
+persisted
+** switch uncompressed checksums to adler32
+** tracks presence of legacy (local and remote) counter shards
+* la (2.2.0): new file name format
+* lb (2.2.7): commit log lower bound included
+
+=== Version 3
+
+* ma (3.0.0):
+** swap bf hash order
+** store rows natively
+* mb (3.0.7, 3.7): commit log lower bound included
+* mc (3.0.8, 3.9): commit log intervals included
+
+=== Example Code
+
+The following example is useful for finding all SSTables that do not
+match the "ib" SSTable version:
+
+[source,bash]
+----
+include::example$find_sstables.sh[]
+----
diff --git a/doc/modules/cassandra/pages/configuration/cass_cl_archive_file.adoc b/doc/modules/cassandra/pages/configuration/cass_cl_archive_file.adoc
new file mode 100644
index 0000000..f7b0788
--- /dev/null
+++ b/doc/modules/cassandra/pages/configuration/cass_cl_archive_file.adoc
@@ -0,0 +1,48 @@
+[[cassandra-cl-archive]]
+= commitlog-archiving.properties file
+
+The `commitlog-archiving.properties` configuration file can optionally
+set commands that are executed when archiving or restoring a commitlog
+segment.
+
+== Options
+
+`archive_command=<command>`::
+One command can be inserted with %path and %name arguments. %path is
+the fully qualified path of the commitlog segment to archive. %name is
+the filename of the commitlog. STDOUT, STDIN, or multiple commands
+cannot be executed. If multiple commands are required, add a pointer to
+a script in this option.
+
+*Example:* archive_command=/bin/ln %path /backup/%name
+
+*Default value:* blank
+
+`restore_command=<command>`::
+One command can be inserted with %from and %to arguments. %from is the
+fully qualified path to an archived commitlog segment using the
+specified restore directories. %to defines the live commitlog
+directory.
+
+*Example:* restore_command=/bin/cp -f %from %to
+
+*Default value:* blank
+
+`restore_directories=<directory>`::
+Defines the directory to scan for recovery files.
+
+*Default value:* blank
+
+`restore_point_in_time=<timestamp>`::
+Restore mutations created up to and including this timestamp, in GMT,
+in the format `yyyy:MM:dd HH:mm:ss`. Recovery will continue through the
+segment when the first client-supplied timestamp greater than this time
+is encountered, but only mutations less than or equal to this timestamp
+will be applied.
+
+*Example:* 2020:04:30 20:43:12
+
+*Default value:* blank
+
+`precision=<timestamp_precision>`::
+Precision of the timestamp used in the inserts. The choice is generally
+MILLISECONDS or MICROSECONDS.
+
+*Default value:* MICROSECONDS
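+
+Putting these options together, a `commitlog-archiving.properties` file
+that archives segments with hard links and restores them from a staging
+directory might look like this (the paths and timestamp are
+illustrative):
+
+[source,properties]
+----
+archive_command=/bin/ln %path /backup/%name
+restore_command=/bin/cp -f %from %to
+restore_directories=/backup/commitlog
+restore_point_in_time=2020:04:30 20:43:12
+precision=MICROSECONDS
+----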
diff --git a/doc/modules/cassandra/pages/configuration/cass_env_sh_file.adoc b/doc/modules/cassandra/pages/configuration/cass_env_sh_file.adoc
new file mode 100644
index 0000000..d895186
--- /dev/null
+++ b/doc/modules/cassandra/pages/configuration/cass_env_sh_file.adoc
@@ -0,0 +1,162 @@
+= cassandra-env.sh file
+
+The `cassandra-env.sh` bash script file can be used to pass additional
+options to the Java virtual machine (JVM), such as maximum and minimum
+heap size, rather than setting them in the environment. If the JVM
+settings are static and do not need to be computed from the node
+characteristics, the `cassandra-jvm-options` files should be used
+instead. For example, commonly computed values are the heap sizes, using
+the system values.
+
+For example, add
+`JVM_OPTS="$JVM_OPTS -Dcassandra.load_ring_state=false"` to the
+`cassandra-env.sh` file and run the command-line `cassandra` to start.
+The option is set from the `cassandra-env.sh` file, and is equivalent to
+starting Cassandra with the command-line option
+`cassandra -Dcassandra.load_ring_state=false`.
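+
+The line described above can be sketched directly in `cassandra-env.sh`:
+
+[source,bash]
+----
+# Appended to cassandra-env.sh; equivalent to starting with
+# `cassandra -Dcassandra.load_ring_state=false`
+JVM_OPTS="$JVM_OPTS -Dcassandra.load_ring_state=false"
+----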
+
+The `-D` option specifies the start-up parameters in both the command
+line and `cassandra-env.sh` file. The following options are available:
+
+== `cassandra.auto_bootstrap=false`
+
+Facilitates setting auto_bootstrap to false on initial set-up of the
+cluster. The next time you start the cluster, you do not need to change
+the `cassandra.yaml` file on each node to revert to true, the default
+value.
+
+== `cassandra.available_processors=<number_of_processors>`
+
+In a multi-instance deployment, multiple Cassandra instances will
+independently assume that all CPU processors are available to them.
+This setting allows you to specify a smaller set of processors.
+
+== `cassandra.boot_without_jna=true`
+
+If JNA fails to initialize, Cassandra fails to boot. Use this command to
+boot Cassandra without JNA.
+
+== `cassandra.config=<directory>`
+
+The directory location of the `cassandra.yaml` file. The default
+location depends on the type of installation.
+
+== `cassandra.ignore_dynamic_snitch_severity=true|false`
+
+Setting this property to true causes the dynamic snitch to ignore the
+severity indicator from gossip when scoring nodes. Explore failure
+detection and recovery and dynamic snitching for more information.
+
+*Default:* false
+
+== `cassandra.initial_token=<token>`
+
+Use when virtual nodes (vnodes) are not used. Sets the initial
+partitioner token for a node the first time the node is started. Note:
+Vnodes are highly recommended as they automatically select tokens.
+
+*Default:* disabled
+
+== `cassandra.join_ring=true|false`
+
+Set to false to start Cassandra on a node but not have the node join the
+cluster. You can use `nodetool join` and a JMX call to join the ring
+afterwards.
+
+*Default:* true
+
+== `cassandra.load_ring_state=true|false`
+
+Set to false to clear all gossip state for the node on restart.
+
+*Default:* true
+
+== `cassandra.metricsReporterConfigFile=<filename>`
+
+Enable pluggable metrics reporter. Explore pluggable metrics reporting
+for more information.
+
+== `cassandra.partitioner=<partitioner>`
+
+Set the partitioner.
+
+*Default:* org.apache.cassandra.dht.Murmur3Partitioner
+
+== `cassandra.prepared_statements_cache_size_in_bytes=<cache_size>`
+
+Set the cache size for prepared statements.
+
+== `cassandra.replace_address=<listen_address of dead node>|<broadcast_address of dead node>`
+
+To replace a node that has died, restart a new node in its place
+specifying the `listen_address` or `broadcast_address` that the new node
+is assuming. The new node must not have any data in its data directory,
+the same state as before bootstrapping. Note: The `broadcast_address`
+defaults to the `listen_address` except when using the
+`Ec2MultiRegionSnitch`.
+
+== `cassandra.replayList=<table>`
+
+Allow restoring specific tables from an archived commit log.
+
+== `cassandra.ring_delay_ms=<number_of_ms>`
+
+Defines the amount of time a node waits to hear from other nodes before
+formally joining the ring.
+
+*Default:* 1000ms
+
+== `cassandra.native_transport_port=<port>`
+
+Set the port on which the CQL native transport listens for clients.
+
+*Default:* 9042
+
+== `cassandra.rpc_port=<port>`
+
+Set the port for the Thrift RPC service, which is used for client
+connections.
+
+*Default:* 9160
+
+== `cassandra.storage_port=<port>`
+
+Set the port for inter-node communication.
+
+*Default:* 7000
+
+== `cassandra.ssl_storage_port=<port>`
+
+Set the SSL port for encrypted communication.
+
+*Default:* 7001
+
+== `cassandra.start_native_transport=true|false`
+
+Enable or disable the native transport server. See
+`start_native_transport` in `cassandra.yaml`.
+
+*Default:* true
+
+== `cassandra.start_rpc=true|false`
+
+Enable or disable the Thrift RPC server.
+
+*Default:* true
+
+== `cassandra.triggers_dir=<directory>`
+
+Set the default location for the trigger JARs.
+
+*Default:* conf/triggers
+
+== `cassandra.write_survey=true`
+
+For testing new compaction and compression strategies. It allows you to
+experiment with different strategies and benchmark write performance
+differences without affecting the production workload.
+
+== `consistent.rangemovement=true|false`
+
+Setting this to true makes Cassandra perform bootstrap safely without
+violating consistency; setting it to false disables this safeguard.
diff --git a/doc/modules/cassandra/pages/configuration/cass_jvm_options_file.adoc b/doc/modules/cassandra/pages/configuration/cass_jvm_options_file.adoc
new file mode 100644
index 0000000..b9a312c
--- /dev/null
+++ b/doc/modules/cassandra/pages/configuration/cass_jvm_options_file.adoc
@@ -0,0 +1,22 @@
+= jvm-* files
+
+Several files for JVM configuration are included in Cassandra. The
+`jvm-server.options` file, and corresponding files `jvm8-server.options`
+and `jvm11-server.options` are the main file for settings that affect
+the operation of the Cassandra JVM on cluster nodes. The file includes
+startup parameters, general JVM settings such as garbage collection, and
+heap settings. The `jvm-clients.options` and corresponding
+`jvm8-clients.options` and `jvm11-clients.options` files can be used to
+configure JVM settings for clients like `nodetool` and the `sstable`
+tools.
+
+See each file for examples of settings.
+
+[NOTE]
+.Note
+====
+The `jvm-*` files replace the `cassandra-env.sh` file used in Cassandra
+versions prior to Cassandra 3.0. The `cassandra-env.sh` bash script file
+is still useful if JVM settings must be dynamically calculated based on
+system settings. The `jvm-*` files only store static JVM settings.
+====
diff --git a/doc/modules/cassandra/pages/configuration/cass_logback_xml_file.adoc b/doc/modules/cassandra/pages/configuration/cass_logback_xml_file.adoc
new file mode 100644
index 0000000..e673622
--- /dev/null
+++ b/doc/modules/cassandra/pages/configuration/cass_logback_xml_file.adoc
@@ -0,0 +1,166 @@
+= logback.xml file
+
+The `logback.xml` configuration file can optionally set logging levels
+for the logs written to `system.log` and `debug.log`. The logging levels
+can also be set using `nodetool setlogginglevel`.
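+
+For example, logging levels can be inspected and changed at runtime
+without editing `logback.xml` (the logger name here is illustrative):
+
+[source,bash]
+----
+# Show the current logging configuration
+nodetool getlogginglevels
+
+# Raise org.apache.cassandra.gms to TRACE until the next restart
+nodetool setlogginglevel org.apache.cassandra.gms TRACE
+----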
+
+== Options
+
+=== `appender name="<appender_choice>"...</appender>` 
+
+Specify log type and settings. Possible appender names are: `SYSTEMLOG`,
+`DEBUGLOG`, `ASYNCDEBUGLOG`, and `STDOUT`. `SYSTEMLOG` ensures that WARN
+and ERROR message are written synchronously to the specified file.
+`DEBUGLOG` and `ASYNCDEBUGLOG` ensure that DEBUG messages are written
+either synchronously or asynchronously, respectively, to the specified
+file. `STDOUT` writes all messages to the console in a human-readable
+format.
+
+*Example:* <appender name="SYSTEMLOG"
+class="ch.qos.logback.core.rolling.RollingFileAppender">
+
+=== `<file> <filename> </file>` 
+
+Specify the filename for a log.
+
+*Example:* <file>$\{cassandra.logdir}/system.log</file>
+
+=== `<level> <log_level> </level>`
+
+Specify the level for a log. Part of the filter. Levels are: `ALL`,
+`TRACE`, `DEBUG`, `INFO`, `WARN`, `ERROR`, `OFF`. `TRACE` creates the
+most verbose log, `ERROR` the least.
+
+[NOTE]
+.Note
+====
+Increasing logging levels can generate heavy logging output on
+a moderately trafficked cluster. You can use the
+`nodetool getlogginglevels` command to see the current logging
+configuration.
+====
+
+*Default:* INFO
+
+*Example:* <level>INFO</level>
+
+=== `<rollingPolicy class="<rolling_policy_choice>" <fileNamePattern><pattern_info></fileNamePattern> ... </rollingPolicy>`
+
+Specify the policy for rolling logs over to an archive.
+
+*Example:* <rollingPolicy
+class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
+
+=== `<fileNamePattern> <pattern_info> </fileNamePattern>`
+
+Specify the pattern information for rolling over the log to archive.
+Part of the rolling policy.
+
+*Example:*
+<fileNamePattern>$\{cassandra.logdir}/system.log.%d\{yyyy-MM-dd}.%i.zip</fileNamePattern>
+
+=== `<maxFileSize> <size> </maxFileSize>`
+
+Specify the maximum file size to trigger rolling a log. Part of the
+rolling policy.
+
+*Example:* <maxFileSize>50MB</maxFileSize>
+
+=== `<maxHistory> <number_of_days> </maxHistory>`
+
+Specify the maximum history in days to trigger rolling a log. Part of
+the rolling policy.
+
+*Example:* <maxHistory>7</maxHistory>
+
+=== `<encoder> <pattern>...</pattern> </encoder>`
+
+Specify the format of the message. Part of the rolling policy.
+
+*Example:* <encoder>
+<pattern>%-5level [%thread] %date\{ISO8601} %F:%L - %msg%n</pattern>
+</encoder>
+
+=== Contents of default `logback.xml`
+
+[source,XML]
+----
+<configuration scan="true" scanPeriod="60 seconds">
+  <jmxConfigurator />
+
+  <!-- No shutdown hook; we run it ourselves in StorageService after shutdown -->
+
+  <!-- SYSTEMLOG rolling file appender to system.log (INFO level) -->
+
+  <appender name="SYSTEMLOG" class="ch.qos.logback.core.rolling.RollingFileAppender">
+    <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
+  <level>INFO</level>
+    </filter>
+    <file>${cassandra.logdir}/system.log</file>
+    <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
+      <!-- rollover daily -->
+      <fileNamePattern>${cassandra.logdir}/system.log.%d{yyyy-MM-dd}.%i.zip</fileNamePattern>
+      <!-- each file should be at most 50MB, keep 7 days worth of history, but at most 5GB -->
+      <maxFileSize>50MB</maxFileSize>
+      <maxHistory>7</maxHistory>
+      <totalSizeCap>5GB</totalSizeCap>
+    </rollingPolicy>
+    <encoder>
+      <pattern>%-5level [%thread] %date{ISO8601} %F:%L - %msg%n</pattern>
+    </encoder>
+  </appender>
+
+  <!-- DEBUGLOG rolling file appender to debug.log (all levels) -->
+
+  <appender name="DEBUGLOG" class="ch.qos.logback.core.rolling.RollingFileAppender">
+    <file>${cassandra.logdir}/debug.log</file>
+    <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
+      <!-- rollover daily -->
+      <fileNamePattern>${cassandra.logdir}/debug.log.%d{yyyy-MM-dd}.%i.zip</fileNamePattern>
+      <!-- each file should be at most 50MB, keep 7 days worth of history, but at most 5GB -->
+      <maxFileSize>50MB</maxFileSize>
+      <maxHistory>7</maxHistory>
+      <totalSizeCap>5GB</totalSizeCap>
+    </rollingPolicy>
+    <encoder>
+      <pattern>%-5level [%thread] %date{ISO8601} %F:%L - %msg%n</pattern>
+    </encoder>
+  </appender>
+
+  <!-- ASYNCLOG assynchronous appender to debug.log (all levels) -->
+
+  <appender name="ASYNCDEBUGLOG" class="ch.qos.logback.classic.AsyncAppender">
+    <queueSize>1024</queueSize>
+    <discardingThreshold>0</discardingThreshold>
+    <includeCallerData>true</includeCallerData>
+    <appender-ref ref="DEBUGLOG" />
+  </appender>
+
+  <!-- STDOUT console appender to stdout (INFO level) -->
+
+  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
+    <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
+      <level>INFO</level>
+    </filter>
+    <encoder>
+      <pattern>%-5level [%thread] %date{ISO8601} %F:%L - %msg%n</pattern>
+    </encoder>
+  </appender>
+
+  <!-- Uncomment bellow and corresponding appender-ref to activate logback metrics
+  <appender name="LogbackMetrics" class="com.codahale.metrics.logback.InstrumentedAppender" />
+   -->
+
+  <root level="INFO">
+    <appender-ref ref="SYSTEMLOG" />
+    <appender-ref ref="STDOUT" />
+    <appender-ref ref="ASYNCDEBUGLOG" /> <!-- Comment this line to disable debug.log -->
+    <!--
+    <appender-ref ref="LogbackMetrics" />
+    -->
+  </root>
+
+  <logger name="org.apache.cassandra" level="DEBUG"/>
+  <logger name="com.thinkaurelius.thrift" level="ERROR"/>
+</configuration>
+----
diff --git a/doc/modules/cassandra/pages/configuration/cass_rackdc_file.adoc b/doc/modules/cassandra/pages/configuration/cass_rackdc_file.adoc
new file mode 100644
index 0000000..0b370c9
--- /dev/null
+++ b/doc/modules/cassandra/pages/configuration/cass_rackdc_file.adoc
@@ -0,0 +1,79 @@
+= cassandra-rackdc.properties file
+
+Several `snitch` options use the `cassandra-rackdc.properties`
+configuration file to determine which datacenters and racks cluster
+nodes belong to. Information about the network topology allows requests
+to be routed efficiently and replicas to be distributed evenly. The
+following snitches can be configured here:
+
+* GossipingPropertyFileSnitch
+* AWS EC2 single-region snitch
+* AWS EC2 multi-region snitch
+
+The GossipingPropertyFileSnitch is recommended for production. This
+snitch uses the datacenter and rack information configured in a local
+node's `cassandra-rackdc.properties` file and propagates the information
+to other nodes using `gossip`. It is the default snitch and the settings
+in this properties file are enabled.
+
+The AWS EC2 snitches are configured for clusters in AWS. These snitches
+use the `cassandra-rackdc.properties` options to designate one of two
+AWS EC2 datacenter and rack naming conventions:
+
+* legacy: The datacenter name is the part of the availability zone name
+preceding the last "-" when the region number is 1, and includes the
+number otherwise. The rack name is the portion of the availability zone
+name following the last "-".
++
+____
+Examples: us-west-1a => dc: us-west, rack: 1a; us-west-2b => dc:
+us-west-2, rack: 2b;
+____
+* standard: Datacenter name is the standard AWS region name, including
+the number. Rack name is the region plus the availability zone letter.
++
+____
+Examples: us-west-1a => dc: us-west-1, rack: us-west-1a; us-west-2b =>
+dc: us-west-2, rack: us-west-2b;
+____
+
+Either snitch can be set to use the local or internal IP address when
+communication is not across different datacenters.
+
+== GossipingPropertyFileSnitch
+
+=== `dc`
+
+Name of the datacenter. The value is case-sensitive.
+
+*Default value:* DC1
+
+=== `rack`
+
+Rack designation. The value is case-sensitive.
+
+*Default value:* RAC1
+
+== AWS EC2 snitch
+
+=== `ec2_naming_scheme`
+
+Datacenter and rack naming convention. Options are `legacy` or
+`standard` (default). *This option is commented out by default.*
+
+*Default value:* standard
+
+[NOTE]
+====
+You must use the `legacy` value if you are upgrading a pre-4.0 cluster.
+====
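+
+For example, when upgrading a pre-4.0 cluster, the naming convention can
+be pinned explicitly (illustrative; this option is commented out by
+default):
+
+[source,properties]
+----
+ec2_naming_scheme=legacy
+----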
+
+== Either snitch
+
+=== `prefer_local`
+
+Option to use the local or internal IP address when communication is not
+across different datacenters. *This option is commented out by default.*
+
+*Default value:* true
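+
+== Example
+
+Putting the options together, a minimal `cassandra-rackdc.properties`
+for a node using the GossipingPropertyFileSnitch might look like the
+following (the datacenter and rack names are illustrative):
+
+[source,properties]
+----
+dc=DC1
+rack=RAC1
+# prefer_local=true
+----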
diff --git a/doc/modules/cassandra/pages/configuration/cass_topo_file.adoc b/doc/modules/cassandra/pages/configuration/cass_topo_file.adoc
new file mode 100644
index 0000000..5ca8221
--- /dev/null
+++ b/doc/modules/cassandra/pages/configuration/cass_topo_file.adoc
@@ -0,0 +1,53 @@
+[[cassandra-topology]]
+= cassandra-topology.properties file
+
+The `PropertyFileSnitch` `snitch` option uses the
+`cassandra-topology.properties` configuration file to determine which
+datacenters and racks cluster nodes belong to. If other snitches are
+used, the xref:configuration/cass_rackdc_file.adoc[cassandra-rackdc.properties]
+file must be used instead. The snitch determines the network topology
+(proximity by rack and datacenter) so that requests are routed
+efficiently and the database can distribute replicas evenly.
+
+Include every node in the cluster in the properties file, defining your
+datacenter names as in the keyspace definition. The datacenter and rack
+names are case-sensitive.
+
+The `cassandra-topology.properties` file must be identical on every
+node in the cluster.
+
+== Example
+
+This example uses three datacenters:
+
+[source,properties]
+----
+# datacenter One
+
+175.56.12.105=DC1:RAC1
+175.50.13.200=DC1:RAC1
+175.54.35.197=DC1:RAC1
+
+120.53.24.101=DC1:RAC2
+120.55.16.200=DC1:RAC2
+120.57.102.103=DC1:RAC2
+
+# datacenter Two
+
+110.56.12.120=DC2:RAC1
+110.50.13.201=DC2:RAC1
+110.54.35.184=DC2:RAC1
+
+50.33.23.120=DC2:RAC2
+50.45.14.220=DC2:RAC2
+50.17.10.203=DC2:RAC2
+
+# datacenter Three
+
+172.106.12.120=DC3:RAC1
+172.106.12.121=DC3:RAC1
+172.106.12.122=DC3:RAC1
+
+# default for unknown nodes
+default=DC3:RAC1
+----
diff --git a/doc/modules/cassandra/pages/configuration/index.adoc b/doc/modules/cassandra/pages/configuration/index.adoc
new file mode 100644
index 0000000..7c8ee36
--- /dev/null
+++ b/doc/modules/cassandra/pages/configuration/index.adoc
@@ -0,0 +1,11 @@
+= Configuring Cassandra
+
+This section describes how to configure Apache Cassandra.
+
+* xref:configuration/cass_yaml_file.adoc[cassandra.yaml]
+* xref:configuration/cass_rackdc_file.adoc[cassandra-rackdc.properties]
+* xref:configuration/cass_env_sh_file.adoc[cassandra-env.sh]
+* xref:configuration/cass_topo_file.adoc[cassandra-topology.properties]
+* xref:configuration/cass_cl_archive_file.adoc[commitlog-archiving.properties]
+* xref:configuration/cass_cl_logback_xml_file.adoc[logback.xml]
+* xref:configuration/cass_jvm_options_file.adoc[jvm-* files]
diff --git a/doc/modules/cassandra/pages/cql/SASI.adoc b/doc/modules/cassandra/pages/cql/SASI.adoc
new file mode 100644
index 0000000..c24009a
--- /dev/null
+++ b/doc/modules/cassandra/pages/cql/SASI.adoc
@@ -0,0 +1,809 @@
+== SASIIndex
+
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/SASIIndex.java[`SASIIndex`],
+or "SASI" for short, is an implementation of Cassandra's `Index`
+interface that can be used as an alternative to the existing
+implementations. SASI's indexing and querying improves on existing
+implementations by tailoring it specifically to Cassandra’s needs. SASI
+has superior performance in cases where queries would previously require
+filtering. In achieving this performance, SASI aims to be significantly
+less resource intensive than existing implementations, in memory, disk,
+and CPU usage. In addition, SASI supports prefix and contains queries on
+strings (similar to SQL’s `LIKE = "foo*"` or `LIKE = "*foo*"`).
+
+The following sections describe how to get up and running with SASI,
+demonstrate usage with examples, and provide some details on its
+implementation.
+
+=== Using SASI
+
+The examples below walk through creating a table and indexes on its
+columns, and performing queries on some inserted data.
+
+The examples below assume the `demo` keyspace has been created and is in
+use.
+
+....
+cqlsh> CREATE KEYSPACE demo WITH replication = {
+   ... 'class': 'SimpleStrategy',
+   ... 'replication_factor': '1'
+   ... };
+cqlsh> USE demo;
+....
+
+All examples are performed on the `sasi` table:
+
+....
+cqlsh:demo> CREATE TABLE sasi (id uuid, first_name text, last_name text,
+        ... age int, height int, created_at bigint, primary key (id));
+....
+
+==== Creating Indexes
+
+To create SASI indexes, use CQL's `CREATE CUSTOM INDEX` statement:
+
+....
+cqlsh:demo> CREATE CUSTOM INDEX ON sasi (first_name) USING 'org.apache.cassandra.index.sasi.SASIIndex'
+        ... WITH OPTIONS = {
+        ... 'analyzer_class':
+        ...   'org.apache.cassandra.index.sasi.analyzer.NonTokenizingAnalyzer',
+        ... 'case_sensitive': 'false'
+        ... };
+
+cqlsh:demo> CREATE CUSTOM INDEX ON sasi (last_name) USING 'org.apache.cassandra.index.sasi.SASIIndex'
+        ... WITH OPTIONS = {'mode': 'CONTAINS'};
+
+cqlsh:demo> CREATE CUSTOM INDEX ON sasi (age) USING 'org.apache.cassandra.index.sasi.SASIIndex';
+
+cqlsh:demo> CREATE CUSTOM INDEX ON sasi (created_at) USING 'org.apache.cassandra.index.sasi.SASIIndex'
+        ...  WITH OPTIONS = {'mode': 'SPARSE'};
+....
+
+The indexes created have options specified that customize their
+behaviour and, potentially, their performance. The index on `first_name` is
+case-insensitive. The analyzers are discussed more in a subsequent
+example. The `NonTokenizingAnalyzer` performs no analysis on the text.
+Each index has a mode: `PREFIX`, `CONTAINS`, or `SPARSE`, the first
+being the default. The `last_name` index is created with the mode
+`CONTAINS` which matches terms on suffixes instead of prefix only.
+Examples of this are available below and more detail can be found in the
+section on link:#ondiskindexbuilder[OnDiskIndex]. The index on the
+`created_at` column is created with `SPARSE` mode, which is meant to improve
+performance of querying large, dense number ranges like timestamps for
+data inserted every millisecond. Details of the `SPARSE` implementation
+can also be found in the section on the
+link:#ondiskindexbuilder[OnDiskIndex]. The `age` index is created with
+the default `PREFIX` mode and no case-sensitivity or text analysis
+options are specified since the field is numeric.
+
+After inserting the following data and performing a `nodetool flush`,
+SASI’s index flushes to disk can be seen in Cassandra’s logs –
+although the direct call to flush is not required (see
+link:#indexmemtable[IndexMemtable] for more details).
+
+....
+cqlsh:demo> INSERT INTO sasi (id, first_name, last_name, age, height, created_at)
+        ... VALUES (556ebd54-cbe5-4b75-9aae-bf2a31a24500, 'Pavel', 'Yaskevich', 27, 181, 1442959315018);
+
+cqlsh:demo> INSERT INTO sasi (id, first_name, last_name, age, height, created_at)
+        ... VALUES (5770382a-c56f-4f3f-b755-450e24d55217, 'Jordan', 'West', 26, 173, 1442959315019);
+
+cqlsh:demo> INSERT INTO sasi (id, first_name, last_name, age, height, created_at)
+        ... VALUES (96053844-45c3-4f15-b1b7-b02c441d3ee1, 'Mikhail', 'Stepura', 36, 173, 1442959315020);
+
+cqlsh:demo> INSERT INTO sasi (id, first_name, last_name, age, height, created_at)
+        ... VALUES (f5dfcabe-de96-4148-9b80-a1c41ed276b4, 'Michael', 'Kjellman', 26, 180, 1442959315021);
+
+cqlsh:demo> INSERT INTO sasi (id, first_name, last_name, age, height, created_at)
+        ... VALUES (2970da43-e070-41a8-8bcb-35df7a0e608a, 'Johnny', 'Zhang', 32, 175, 1442959315022);
+
+cqlsh:demo> INSERT INTO sasi (id, first_name, last_name, age, height, created_at)
+        ... VALUES (6b757016-631d-4fdb-ac62-40b127ccfbc7, 'Jason', 'Brown', 40, 182, 1442959315023);
+
+cqlsh:demo> INSERT INTO sasi (id, first_name, last_name, age, height, created_at)
+        ... VALUES (8f909e8a-008e-49dd-8d43-1b0df348ed44, 'Vijay', 'Parthasarathy', 34, 183, 1442959315024);
+
+cqlsh:demo> SELECT first_name, last_name, age, height, created_at FROM sasi;
+
+ first_name | last_name     | age | height | created_at
+------------+---------------+-----+--------+---------------
+    Michael |      Kjellman |  26 |    180 | 1442959315021
+    Mikhail |       Stepura |  36 |    173 | 1442959315020
+      Jason |         Brown |  40 |    182 | 1442959315023
+      Pavel |     Yaskevich |  27 |    181 | 1442959315018
+      Vijay | Parthasarathy |  34 |    183 | 1442959315024
+     Jordan |          West |  26 |    173 | 1442959315019
+     Johnny |         Zhang |  32 |    175 | 1442959315022
+
+(7 rows)
+....
+
+==== Equality & Prefix Queries
+
+SASI supports all queries already supported by CQL, including the `LIKE`
+statement for `PREFIX`, `CONTAINS`, and `SUFFIX` searches.
+
+....
+cqlsh:demo> SELECT first_name, last_name, age, height, created_at FROM sasi
+        ... WHERE first_name = 'Pavel';
+
+  first_name | last_name | age | height | created_at
+-------------+-----------+-----+--------+---------------
+       Pavel | Yaskevich |  27 |    181 | 1442959315018
+
+(1 rows)
+....
+
+....
+cqlsh:demo> SELECT first_name, last_name, age, height, created_at FROM sasi
+       ... WHERE first_name = 'pavel';
+
+  first_name | last_name | age | height | created_at
+-------------+-----------+-----+--------+---------------
+       Pavel | Yaskevich |  27 |    181 | 1442959315018
+
+(1 rows)
+....
+
+....
+cqlsh:demo> SELECT first_name, last_name, age, height, created_at FROM sasi
+        ... WHERE first_name LIKE 'M%';
+
+ first_name | last_name | age | height | created_at
+------------+-----------+-----+--------+---------------
+    Michael |  Kjellman |  26 |    180 | 1442959315021
+    Mikhail |   Stepura |  36 |    173 | 1442959315020
+
+(2 rows)
+....
+
+Of course, the case of the query does not matter for the `first_name`
+column because of the options provided at index creation time.
+
+....
+cqlsh:demo> SELECT first_name, last_name, age, height, created_at FROM sasi
+        ... WHERE first_name LIKE 'm%';
+
+ first_name | last_name | age | height | created_at
+------------+-----------+-----+--------+---------------
+    Michael |  Kjellman |  26 |    180 | 1442959315021
+    Mikhail |   Stepura |  36 |    173 | 1442959315020
+
+(2 rows)
+....
+
+==== Compound Queries
+
+SASI supports queries with multiple predicates; however, due to the
+nature of the default indexing implementation, CQL requires the user to
+specify `ALLOW FILTERING` to opt-in to the potential performance
+pitfalls of such a query. With SASI, while the requirement to include
+`ALLOW FILTERING` remains, to reduce modifications to the grammar, the
+performance pitfalls do not exist because filtering is not performed.
+Details on how SASI joins data from multiple predicates are available
+below in the link:#implementation-details[Implementation Details]
+section.
+
+....
+cqlsh:demo> SELECT first_name, last_name, age, height, created_at FROM sasi
+        ... WHERE first_name LIKE 'M%' and age < 30 ALLOW FILTERING;
+
+ first_name | last_name | age | height | created_at
+------------+-----------+-----+--------+---------------
+    Michael |  Kjellman |  26 |    180 | 1442959315021
+
+(1 rows)
+....
+
+==== Suffix Queries
+
+The next example demonstrates `CONTAINS` mode on the `last_name` column.
+By using this mode, predicates can search for any strings containing the
+search string as a sub-string. In this case, the strings containing "a"
+or "an".
+
+....
+cqlsh:demo> SELECT * FROM sasi WHERE last_name LIKE '%a%';
+
+ id                                   | age | created_at    | first_name | height | last_name
+--------------------------------------+-----+---------------+------------+--------+---------------
+ f5dfcabe-de96-4148-9b80-a1c41ed276b4 |  26 | 1442959315021 |    Michael |    180 |      Kjellman
+ 96053844-45c3-4f15-b1b7-b02c441d3ee1 |  36 | 1442959315020 |    Mikhail |    173 |       Stepura
+ 556ebd54-cbe5-4b75-9aae-bf2a31a24500 |  27 | 1442959315018 |      Pavel |    181 |     Yaskevich
+ 8f909e8a-008e-49dd-8d43-1b0df348ed44 |  34 | 1442959315024 |      Vijay |    183 | Parthasarathy
+ 2970da43-e070-41a8-8bcb-35df7a0e608a |  32 | 1442959315022 |     Johnny |    175 |         Zhang
+
+(5 rows)
+
+cqlsh:demo> SELECT * FROM sasi WHERE last_name LIKE '%an%';
+
+ id                                   | age | created_at    | first_name | height | last_name
+--------------------------------------+-----+---------------+------------+--------+-----------
+ f5dfcabe-de96-4148-9b80-a1c41ed276b4 |  26 | 1442959315021 |    Michael |    180 |  Kjellman
+ 2970da43-e070-41a8-8bcb-35df7a0e608a |  32 | 1442959315022 |     Johnny |    175 |     Zhang
+
+(2 rows)
+....
+
+==== Expressions on Non-Indexed Columns
+
+SASI also supports filtering on non-indexed columns like `height`. The
+expression can only narrow down an existing query using `AND`.
+
+....
+cqlsh:demo> SELECT * FROM sasi WHERE last_name LIKE '%a%' AND height >= 175 ALLOW FILTERING;
+
+ id                                   | age | created_at    | first_name | height | last_name
+--------------------------------------+-----+---------------+------------+--------+---------------
+ f5dfcabe-de96-4148-9b80-a1c41ed276b4 |  26 | 1442959315021 |    Michael |    180 |      Kjellman
+ 556ebd54-cbe5-4b75-9aae-bf2a31a24500 |  27 | 1442959315018 |      Pavel |    181 |     Yaskevich
+ 8f909e8a-008e-49dd-8d43-1b0df348ed44 |  34 | 1442959315024 |      Vijay |    183 | Parthasarathy
+ 2970da43-e070-41a8-8bcb-35df7a0e608a |  32 | 1442959315022 |     Johnny |    175 |         Zhang
+
+(4 rows)
+....
+
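+==== Range Queries on a SPARSE Index
+
+The index on `created_at` was created with `SPARSE` mode, which targets
+large, dense numeric ranges such as timestamps. A range query is the
+natural fit for it; the bounds below are illustrative and the rows
+returned depend on the data inserted earlier:
+
+....
+cqlsh:demo> SELECT first_name, created_at FROM sasi
+        ... WHERE created_at >= 1442959315018 AND created_at < 1442959315022;
+....
+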
+==== Delimiter based Tokenization Analysis
+
+A simple text analysis provided is delimiter-based tokenization. This
+provides an alternative to indexing collections, as delimiter-separated
+text can be indexed without the overhead of `CONTAINS` mode or the use
+of `PREFIX` or `SUFFIX` queries.
+
+....
+cqlsh:demo> ALTER TABLE sasi ADD aliases text;
+cqlsh:demo> CREATE CUSTOM INDEX on sasi (aliases) USING 'org.apache.cassandra.index.sasi.SASIIndex'
+        ... WITH OPTIONS = {
+        ... 'analyzer_class': 'org.apache.cassandra.index.sasi.analyzer.DelimiterAnalyzer',
+        ... 'delimiter': ',',
+        ... 'mode': 'prefix',
+        ... 'analyzed': 'true'};
+cqlsh:demo> UPDATE sasi SET aliases = 'Mike,Mick,Mikey,Mickey' WHERE id = f5dfcabe-de96-4148-9b80-a1c41ed276b4;
+cqlsh:demo> SELECT * FROM sasi WHERE aliases LIKE 'Mikey' ALLOW FILTERING;
+
+ id                                   | age | aliases                | created_at    | first_name | height | last_name
+--------------------------------------+-----+------------------------+---------------+------------+--------+-----------
+ f5dfcabe-de96-4148-9b80-a1c41ed276b4 |  26 | Mike,Mick,Mikey,Mickey | 1442959315021 |    Michael |    180 |  Kjellman
+....
+
+==== Text Analysis (Tokenization and Stemming)
+
+Lastly, to demonstrate text analysis an additional column is needed on
+the table. Its definition, index, and statements to update rows are
+shown below.
+
+....
+cqlsh:demo> ALTER TABLE sasi ADD bio text;
+cqlsh:demo> CREATE CUSTOM INDEX ON sasi (bio) USING 'org.apache.cassandra.index.sasi.SASIIndex'
+        ... WITH OPTIONS = {
+        ... 'analyzer_class': 'org.apache.cassandra.index.sasi.analyzer.StandardAnalyzer',
+        ... 'tokenization_enable_stemming': 'true',
+        ... 'analyzed': 'true',
+        ... 'tokenization_normalize_lowercase': 'true',
+        ... 'tokenization_locale': 'en'
+        ... };
+cqlsh:demo> UPDATE sasi SET bio = 'Software Engineer, who likes distributed systems, doesnt like to argue.' WHERE id = 5770382a-c56f-4f3f-b755-450e24d55217;
+cqlsh:demo> UPDATE sasi SET bio = 'Software Engineer, works on the freight distribution at nights and likes arguing' WHERE id = 556ebd54-cbe5-4b75-9aae-bf2a31a24500;
+cqlsh:demo> SELECT * FROM sasi;
+
+ id                                   | age | bio                                                                              | created_at    | first_name | height | last_name
+--------------------------------------+-----+----------------------------------------------------------------------------------+---------------+------------+--------+---------------
+ f5dfcabe-de96-4148-9b80-a1c41ed276b4 |  26 |                                                                             null | 1442959315021 |    Michael |    180 |      Kjellman
+ 96053844-45c3-4f15-b1b7-b02c441d3ee1 |  36 |                                                                             null | 1442959315020 |    Mikhail |    173 |       Stepura
+ 6b757016-631d-4fdb-ac62-40b127ccfbc7 |  40 |                                                                             null | 1442959315023 |      Jason |    182 |         Brown
+ 556ebd54-cbe5-4b75-9aae-bf2a31a24500 |  27 | Software Engineer, works on the freight distribution at nights and likes arguing | 1442959315018 |      Pavel |    181 |     Yaskevich
+ 8f909e8a-008e-49dd-8d43-1b0df348ed44 |  34 |                                                                             null | 1442959315024 |      Vijay |    183 | Parthasarathy
+ 5770382a-c56f-4f3f-b755-450e24d55217 |  26 |          Software Engineer, who likes distributed systems, doesnt like to argue. | 1442959315019 |     Jordan |    173 |          West
+ 2970da43-e070-41a8-8bcb-35df7a0e608a |  32 |                                                                             null | 1442959315022 |     Johnny |    175 |         Zhang
+
+(7 rows)
+....
+
+Index terms and query search strings are stemmed for the `bio` column
+because it was configured to use the
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/analyzer/StandardAnalyzer.java[`StandardAnalyzer`]
+and `analyzed` is set to `true`. The `tokenization_normalize_lowercase`
+is similar to the `case_sensitive` property but for the
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/analyzer/StandardAnalyzer.java[`StandardAnalyzer`].
+These queries demonstrate the stemming applied by
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/analyzer/StandardAnalyzer.java[`StandardAnalyzer`].
+
+....
+cqlsh:demo> SELECT * FROM sasi WHERE bio LIKE 'distributing';
+
+ id                                   | age | bio                                                                              | created_at    | first_name | height | last_name
+--------------------------------------+-----+----------------------------------------------------------------------------------+---------------+------------+--------+-----------
+ 556ebd54-cbe5-4b75-9aae-bf2a31a24500 |  27 | Software Engineer, works on the freight distribution at nights and likes arguing | 1442959315018 |      Pavel |    181 | Yaskevich
+ 5770382a-c56f-4f3f-b755-450e24d55217 |  26 |          Software Engineer, who likes distributed systems, doesnt like to argue. | 1442959315019 |     Jordan |    173 |      West
+
+(2 rows)
+
+cqlsh:demo> SELECT * FROM sasi WHERE bio LIKE 'they argued';
+
+ id                                   | age | bio                                                                              | created_at    | first_name | height | last_name
+--------------------------------------+-----+----------------------------------------------------------------------------------+---------------+------------+--------+-----------
+ 556ebd54-cbe5-4b75-9aae-bf2a31a24500 |  27 | Software Engineer, works on the freight distribution at nights and likes arguing | 1442959315018 |      Pavel |    181 | Yaskevich
+ 5770382a-c56f-4f3f-b755-450e24d55217 |  26 |          Software Engineer, who likes distributed systems, doesnt like to argue. | 1442959315019 |     Jordan |    173 |      West
+
+(2 rows)
+
+cqlsh:demo> SELECT * FROM sasi WHERE bio LIKE 'working at the company';
+
+ id                                   | age | bio                                                                              | created_at    | first_name | height | last_name
+--------------------------------------+-----+----------------------------------------------------------------------------------+---------------+------------+--------+-----------
+ 556ebd54-cbe5-4b75-9aae-bf2a31a24500 |  27 | Software Engineer, works on the freight distribution at nights and likes arguing | 1442959315018 |      Pavel |    181 | Yaskevich
+
+(1 rows)
+
+cqlsh:demo> SELECT * FROM sasi WHERE bio LIKE 'soft eng';
+
+ id                                   | age | bio                                                                              | created_at    | first_name | height | last_name
+--------------------------------------+-----+----------------------------------------------------------------------------------+---------------+------------+--------+-----------
+ 556ebd54-cbe5-4b75-9aae-bf2a31a24500 |  27 | Software Engineer, works on the freight distribution at nights and likes arguing | 1442959315018 |      Pavel |    181 | Yaskevich
+ 5770382a-c56f-4f3f-b755-450e24d55217 |  26 |          Software Engineer, who likes distributed systems, doesnt like to argue. | 1442959315019 |     Jordan |    173 |      West
+
+(2 rows)
+....
+
+=== Implementation Details
+
+While SASI, at the surface, is simply an implementation of the `Index`
+interface, at its core there are several data structures and algorithms
+used to satisfy it. These are described here. Additionally, the changes
+internal to Cassandra to support SASI’s integration are described.
+
+The `Index` interface divides responsibility of the implementer into two
+parts: Indexing and Querying. Further, Cassandra makes it possible to
+divide those responsibilities into the memory and disk components. SASI
+takes advantage of Cassandra’s write-once, immutable, ordered data model
+to build indexes along with the flushing of the memtable to disk – this
+is the origin of the name "SSTable Attached Secondary Index".
+
+The SASI index data structures are built in memory as the SSTable is
+being written and they are flushed to disk before the writing of the
+SSTable completes. The writing of each index file only requires
+sequential writes to disk. In some cases, partial flushes are performed,
+and later stitched back together, to reduce memory usage. These data
+structures are optimized for this use case.
+
+Taking advantage of Cassandra’s ordered data model, at query time,
+candidate indexes are narrowed down for searching, minimizing the amount
+of work done. Searching is then performed using an efficient method that
+streams data off disk as needed.
+
+==== Indexing
+
+Per SSTable, SASI writes an index file for each indexed column. The data
+for these files is built in memory using the
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/disk/OnDiskIndexBuilder.java[`OnDiskIndexBuilder`].
+Once flushed to disk, the data is read using the
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/disk/OnDiskIndex.java[`OnDiskIndex`]
+class. These are composed of bytes representing indexed terms, organized
+for efficient writing or searching respectively. The keys and values
+they hold represent tokens and positions in an SSTable and these are
+stored per-indexed term in
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/disk/TokenTreeBuilder.java[`TokenTreeBuilder`]s
+for writing, and
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/disk/TokenTree.java[`TokenTree`]s
+for querying. These index files are memory mapped after being written to
+disk, for quicker access. For indexing data in the memtable, SASI uses
+its
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/memory/IndexMemtable.java[`IndexMemtable`]
+class.
+
+===== OnDiskIndex(Builder)
+
+Each
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/disk/OnDiskIndex.java[`OnDiskIndex`]
+is an instance of a modified
+https://en.wikipedia.org/wiki/Suffix_array[Suffix Array] data structure.
+The
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/disk/OnDiskIndex.java[`OnDiskIndex`]
+is comprised of page-size blocks of sorted terms and pointers to the
+terms’ associated data, as well as the data itself, stored also in one
+or more page-sized blocks. The
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/disk/OnDiskIndex.java[`OnDiskIndex`]
+is structured as a tree of arrays, where each level describes the terms
+in the level below, the final level being the terms themselves. The
+`PointerLevel`s and their `PointerBlock`s contain terms and pointers to
+other blocks that _end_ with those terms. The `DataLevel`, the final
+level, and its `DataBlock`s contain terms and point to the data itself,
+contained in
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/disk/TokenTree.java[`TokenTree`]s.
+
+The terms written to the
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/disk/OnDiskIndex.java[`OnDiskIndex`]
+vary depending on its "mode": either `PREFIX`, `CONTAINS`, or
+`SPARSE`. In the `PREFIX` and `SPARSE` cases, terms’ exact values are
+written exactly once per `OnDiskIndex`. For example, when using a
+`PREFIX` index with terms `Jason`, `Jordan`, `Pavel`, all three will be
+included in the index. A `CONTAINS` index writes additional terms for
+each suffix of each term recursively. Continuing with the example, a
+`CONTAINS` index storing the previous terms would also store `ason`,
+`ordan`, `avel`, `son`, `rdan`, `vel`, etc. This allows for queries on
+the suffix of strings. The `SPARSE` mode differs from `PREFIX` in that
+for every 64 blocks of terms a
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/disk/TokenTree.java[`TokenTree`]
+is built merging all the
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/disk/TokenTree.java[`TokenTree`]s
+for each term into a single one. This copy of the data is used for
+efficient iteration of large ranges of e.g. timestamps. The index
+"mode" is configurable per column at index creation time.
+
+===== TokenTree(Builder)
+
+The
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/disk/TokenTree.java[`TokenTree`]
+is an implementation of the well-known
+https://en.wikipedia.org/wiki/B%2B_tree[B+-tree] that has been modified
+to optimize for its use-case. In particular, it has been optimized to
+associate tokens (longs) with a set of positions in an SSTable (also
+longs). Allowing the set of long values accommodates the possibility of a
+hash collision in the token, but the data structure is optimized for the
+unlikely possibility of such a collision.
+
+To optimize for its write-once environment the
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/disk/TokenTreeBuilder.java[`TokenTreeBuilder`]
+completely loads its interior nodes as the tree is built and it uses the
+well-known algorithm optimized for bulk-loading the data structure.
+
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/disk/TokenTree.java[`TokenTree`]s
+provide the means to iterate over tokens, and file positions, that match
+a given term, and to skip forward in that iteration, an operation used
+heavily at query time.
+
+===== IndexMemtable
+
+The
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/memory/IndexMemtable.java[`IndexMemtable`]
+handles indexing the in-memory data held in the memtable. The
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/memory/IndexMemtable.java[`IndexMemtable`]
+in turn manages either a
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/memory/TrieMemIndex.java[`TrieMemIndex`]
+or a
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/memory/SkipListMemIndex.java[`SkipListMemIndex`]
+per-column. The choice of which index type is used is data dependent.
+The
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/memory/TrieMemIndex.java[`TrieMemIndex`]
+is used for literal types. `AsciiType` and `UTF8Type` are literal types
+by default but any column can be configured as a literal type using the
+`is_literal` option at index creation time. For non-literal types the
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/memory/SkipListMemIndex.java[`SkipListMemIndex`]
+is used. The
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/memory/TrieMemIndex.java[`TrieMemIndex`]
+is an implementation that can efficiently support prefix queries on
+character-like data. The
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/memory/SkipListMemIndex.java[`SkipListMemIndex`],
+conversely, is better suited for other Cassandra data types like
+numbers.
+
+The
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/memory/TrieMemIndex.java[`TrieMemIndex`]
+is built using either the `ConcurrentRadixTree` or
+`ConcurrentSuffixTree` from the `com.googlecode.concurrenttrees`
+package. The choice between the two is made based on the indexing mode,
+`PREFIX` or other modes, and `CONTAINS` mode, respectively.
+
+The
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/memory/SkipListMemIndex.java[`SkipListMemIndex`]
+is built on top of `java.util.concurrent.ConcurrentSkipListSet`.
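Both in-memory index types keep terms in sorted order. As a rough illustration (not SASI code), even a plain sorted list supports the prefix queries that the trie specializes in, because the matching terms form one contiguous range:

```python
import bisect

def prefix_search(sorted_terms, prefix):
    """Return all terms starting with `prefix` from an already-sorted
    list. A trie answers this natively; on a sorted list two binary
    searches bound the contiguous range of matches."""
    lo = bisect.bisect_left(sorted_terms, prefix)
    hi = bisect.bisect_left(sorted_terms, prefix + "\uffff")
    return sorted_terms[lo:hi]

terms = sorted(["pa", "pam", "pat", "paul", "peter", "bob"])
```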
+
+==== Querying
+
+Responsible for converting the internal `IndexExpression` representation
+into SASI’s
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/Operation.java[`Operation`]
+and
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/Expression.java[`Expression`]
+trees, optimizing the trees to reduce the amount of work done, and
+driving the query itself, the
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/QueryPlan.java[`QueryPlan`]
+is the workhorse of SASI’s querying implementation. To efficiently
+perform union and intersection operations, SASI provides several
+iterators similar to Cassandra’s `MergeIterator`, but tailored
+specifically for SASI’s use while including more features. The
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/utils/RangeUnionIterator.java[`RangeUnionIterator`],
+like its name suggests, performs set unions over sets of tokens/keys
+matching the query, only reading as much data as it needs from each set
+to satisfy the query. The
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/utils/RangeIntersectionIterator.java[`RangeIntersectionIterator`],
+similar to its counterpart, performs set intersections over its data.
+
+===== QueryPlan
+
+The
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/QueryPlan.java[`QueryPlan`]
+instantiated per search query is at the core of SASI’s querying
+implementation. Its work can be divided in two stages: analysis and
+execution.
+
+During the analysis phase,
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/QueryPlan.java[`QueryPlan`]
+converts Cassandra’s internal representation of `IndexExpression`s,
+which has also been modified to support encoding queries that contain
+ORs and groupings of expressions using parentheses (see the
+link:#cassandra-internal-changes[Cassandra Internal Changes] section
+below for more details). This process produces a tree of
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/Operation.java[`Operation`]s,
+which in turn may contain
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/Expression.java[`Expression`]s,
+all of which provide an alternative, more efficient, representation of
+the query.
+
+During execution, the
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/QueryPlan.java[`QueryPlan`]
+uses the `DecoratedKey`-generating iterator created from the
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/Operation.java[`Operation`]
+tree. These keys are read from disk and a final check to ensure they
+satisfy the query is made, once again using the
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/Operation.java[`Operation`]
+tree. At the point the desired amount of matching data has been found,
+or there is no more matching data, the result set is returned to the
+coordinator through the existing internal components.
+
+The number of queries (total/failed/timed-out), and their latencies, are
+maintained per-table/column family.
+
+SASI also supports concurrently iterating terms for the same index
+across SSTables. The concurrency factor is controlled by the
+`cassandra.search_concurrency_factor` system property. The default is
+`1`.
+
+====== QueryController
+
+Each
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/QueryPlan.java[`QueryPlan`]
+references a
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/QueryController.java[`QueryController`]
+used throughout the execution phase. The
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/QueryController.java[`QueryController`]
+has two responsibilities: to manage and ensure the proper cleanup of
+resources (indexes), and to strictly enforce the time bound per query,
+specified by the user via the range slice timeout. All indexes are
+accessed via the
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/QueryController.java[`QueryController`]
+so that they can be safely released by it later. The
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/QueryController.java[`QueryController`]’s
+`checkpoint` function is called in specific places in the execution path
+to ensure the time-bound is enforced.
+
+====== QueryPlan Optimizations
+
+While in the analysis phase, the
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/QueryPlan.java[`QueryPlan`]
+performs several potential optimizations to the query. The goal of these
+optimizations is to reduce the amount of work performed during the
+execution phase.
+
+The simplest optimization performed is compacting multiple expressions
+joined by logical intersections (`AND`) into a single
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/Operation.java[`Operation`]
+with three or more
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/Expression.java[`Expression`]s.
+For example, the query
+`WHERE age < 100 AND fname = 'p*' AND fname != 'pa*' AND age > 21`
+would, without modification, have the following tree:
+
+....
+                      ┌───────┐
+             ┌────────│  AND  │──────┐
+             │        └───────┘      │
+             ▼                       ▼
+          ┌───────┐             ┌──────────┐
+    ┌─────│  AND  │─────┐       │age < 100 │
+    │     └───────┘     │       └──────────┘
+    ▼                   ▼
+┌──────────┐          ┌───────┐
+│ fname=p* │        ┌─│  AND  │───┐
+└──────────┘        │ └───────┘   │
+                    ▼             ▼
+                ┌──────────┐  ┌──────────┐
+                │fname!=pa*│  │ age > 21 │
+                └──────────┘  └──────────┘
+....
+
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/QueryPlan.java[`QueryPlan`]
+will remove the redundant right branch whose root is the final `AND` and
+has leaves `fname != pa*` and `age > 21`. These
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/Expression.java[`Expression`]s
+will be compacted into the parent `AND`, a safe operation due to `AND`
+being associative and commutative. The resulting tree looks like the
+following:
+
+....
+                              ┌───────┐
+                     ┌────────│  AND  │──────┐
+                     │        └───────┘      │
+                     ▼                       ▼
+                  ┌───────┐             ┌──────────┐
+      ┌───────────│  AND  │────────┐    │age < 100 │
+      │           └───────┘        │    └──────────┘
+      ▼               │            ▼
+┌──────────┐          │      ┌──────────┐
+│ fname=p* │          ▼      │ age > 21 │
+└──────────┘    ┌──────────┐ └──────────┘
+                │fname!=pa*│
+                └──────────┘
+....
+
+When excluding results from the result set, using `!=`, the
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/QueryPlan.java[`QueryPlan`]
+determines the best method for handling it. For range queries, for
+example, it may be optimal to divide the range into multiple parts with
+a hole for the exclusion. For string queries, such as this one, it is
+better, however, to simply note which data to skip, or exclude,
+while scanning the index. Following this optimization the tree looks
+like this:
+
+....
+                               ┌───────┐
+                      ┌────────│  AND  │──────┐
+                      │        └───────┘      │
+                      ▼                       ▼
+                   ┌───────┐             ┌──────────┐
+           ┌───────│  AND  │────────┐    │age < 100 │
+           │       └───────┘        │    └──────────┘
+           ▼                        ▼
+    ┌──────────────────┐         ┌──────────┐
+    │     fname=p*     │         │ age > 21 │
+    │ exclusions=[pa*] │         └──────────┘
+    └──────────────────┘
+....
+
+The last type of optimization applied, for this query, is to merge range
+expressions across branches of the tree – without modifying the meaning
+of the query, of course. In this case, because the query contains all
+`AND`s the `age` expressions can be collapsed. Along with this
+optimization, the initial collapsing of unneeded `AND`s can also be
+applied once more, resulting in this final tree used to execute the
+query:
+
+....
+                        ┌───────┐
+                 ┌──────│  AND  │───────┐
+                 │      └───────┘       │
+                 ▼                      ▼
+       ┌──────────────────┐    ┌────────────────┐
+       │     fname=p*     │    │ 21 < age < 100 │
+       │ exclusions=[pa*] │    └────────────────┘
+       └──────────────────┘
+....
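The range-merging step above can be sketched in a few lines (a simplified model that handles only strict `<`/`>` bounds; `merge_bounds` is a hypothetical name, not a SASI method):

```python
def merge_bounds(expressions):
    """Collapse several AND-ed comparisons on one column into a single
    (lower, upper) range, mirroring how QueryPlan merges `age > 21`
    and `age < 100` into `21 < age < 100`."""
    lower, upper = float("-inf"), float("inf")
    for op, value in expressions:
        if op == ">":
            lower = max(lower, value)   # tightest lower bound wins
        elif op == "<":
            upper = min(upper, value)   # tightest upper bound wins
    return lower, upper

bounds = merge_bounds([("<", 100), (">", 21)])
```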
+
+===== Operations and Expressions
+
+As discussed, the
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/QueryPlan.java[`QueryPlan`]
+optimizes a tree represented by
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/Operation.java[`Operation`]s
+as interior nodes, and
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/Expression.java[`Expression`]s
+as leaves. The
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/Operation.java[`Operation`]
+class, more specifically, can have zero, one, or two
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/Operation.java[`Operation`]s
+as children and an unlimited number of expressions. The iterators used
+to perform the queries, discussed below in the
+``Range(Union|Intersection)Iterator'' section, implement the necessary
+logic to merge results transparently regardless of the
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/Operation.java[`Operation`]’s
+children.
+
+Besides participating in the optimizations performed by the
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/QueryPlan.java[`QueryPlan`],
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/Operation.java[`Operation`]
+is also responsible for taking a row that has been returned by the query
+and performing a final validation that it in fact does match. This
+`satisfiesBy` operation is performed recursively from the root of the
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/Operation.java[`Operation`]
+tree for a given query. These checks are performed directly on the data
+in a given row. For more details on how `satisfiesBy` works, see the
+documentation
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/Operation.java#L87-L123[in
+the code].
+
+===== Range(Union|Intersection)Iterator
+
+The abstract `RangeIterator` class provides a unified interface over the
+two main operations performed by SASI at various layers in the execution
+path: set intersection and union. These operations are performed in an
+iterated, or ``streaming'', fashion to prevent unneeded reads of
+elements from either set. In both the intersection and union cases the
+algorithms take advantage of the data being pre-sorted using the same
+sort order, e.g. term or token order.
+
+The
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/utils/RangeUnionIterator.java[`RangeUnionIterator`]
+performs the ``Merge-Join'' portion of the
+https://en.wikipedia.org/wiki/Sort-merge_join[Sort-Merge-Join]
+algorithm, with the properties of an outer-join, or union. It is
+implemented with several optimizations to improve its performance over a
+large number of iterators – sets to union. Specifically, the iterator
+exploits the likely case of the data having many sub-groups of
+overlapping ranges and the unlikely case that all ranges will overlap
+each other. For more details see the
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/utils/RangeUnionIterator.java#L9-L21[javadoc].
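The streaming-union idea — read each sorted stream only as far as needed and emit each token once — can be sketched like this (illustrative only; the real iterator adds the sub-group optimizations described above):

```python
import heapq

def range_union(*sorted_streams):
    """Lazily union sorted token streams, consuming each stream only as
    far as needed and emitting a token that appears in several streams
    just once (the RangeUnionIterator idea, minus its grouping tricks)."""
    last = object()  # sentinel that compares unequal to any token
    for token in heapq.merge(*sorted_streams):
        if token != last:
            last = token
            yield token

merged = list(range_union(iter([1, 3, 5]), iter([2, 3, 6])))
```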
+
+The
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/utils/RangeIntersectionIterator.java[`RangeIntersectionIterator`]
+itself is not a subclass of `RangeIterator`. It is a container for
+several classes, one of which, `AbstractIntersectionIterator`,
+sub-classes `RangeIterator`. SASI supports two methods of performing the
+intersection operation, and the ability to be adaptive in choosing
+between them based on some properties of the data.
+
+`BounceIntersectionIterator`, and the `BOUNCE` strategy, works like the
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/utils/RangeUnionIterator.java[`RangeUnionIterator`]
+in that it performs a ``Merge-Join''; however, its nature is similar to
+an inner-join, where like values are merged by a data-specific merge
+function (e.g. merging two tokens in a list to lookup in a SSTable
+later). See the
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/utils/RangeIntersectionIterator.java#L88-L101[javadoc]
+for more details on its implementation.
+
+`LookupIntersectionIterator`, and the `LOOKUP` strategy, performs a
+different operation, more similar to a lookup in an associative data
+structure, or ``hash lookup'' in database terminology. Once again,
+details on the implementation can be found in the
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/utils/RangeIntersectionIterator.java#L199-L208[javadoc].
+
+The choice between the two iterators, or the `ADAPTIVE` strategy, is
+based upon the ratio of data set sizes of the minimum and maximum range
+of the sets being intersected. If the number of elements in the minimum
+range divided by the number of elements in the maximum range is less
+than or equal to `0.01`, then the `ADAPTIVE` strategy chooses the
+`LookupIntersectionIterator`, otherwise the `BounceIntersectionIterator`
+is chosen.
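That selection rule fits in a few lines (illustrative; the `0.01` ratio matches the description above, and the function name is ours, not SASI's):

```python
def choose_intersection_strategy(range_sizes):
    """Pick an intersection iterator the way the ADAPTIVE strategy does:
    when the smallest range is at most 1% of the largest, point lookups
    into the large range beat a pairwise merge-join."""
    ratio = min(range_sizes) / max(range_sizes)
    return "LOOKUP" if ratio <= 0.01 else "BOUNCE"
```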
+
+==== The SASIIndex Class
+
+The above components are glued together by the
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/SASIIndex.java[`SASIIndex`]
+class which implements `Index`, and is instantiated per-table containing
+SASI indexes. It manages all indexes for a table via the
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/conf/DataTracker.java[`sasi.conf.DataTracker`]
+and
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/conf/view/View.java[`sasi.conf.view.View`]
+components, controls writing of all indexes for an SSTable via its
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/disk/PerSSTableIndexWriter.java[`PerSSTableIndexWriter`],
+and initiates searches with `Searcher`. These classes glue the
+previously mentioned indexing components together with Cassandra’s
+SSTable life-cycle, ensuring indexes are not only written when Memtables
+flush, but also as SSTables are compacted. For querying, the `Searcher`
+does little but defer to
+https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/sasi/plan/QueryPlan.java[`QueryPlan`]
+and update e.g. latency metrics exposed by SASI.
+
+==== Cassandra Internal Changes
+
+To support the above changes and integrate them into Cassandra a few
+minor internal changes were made to Cassandra itself. These are
+described here.
+
+===== SSTable Write Life-cycle Notifications
+
+The `SSTableFlushObserver` is an observer pattern-like interface, whose
+sub-classes can register to be notified about events in the life-cycle
+of writing out an SSTable. Sub-classes can be notified when a flush
+begins and ends, as well as before each row, and each column, is
+written. SASI’s `PerSSTableIndexWriter`, discussed above,
+is the only current subclass.
+
+==== Limitations and Caveats
+
+The following are items that can be addressed in future updates but are
+not available in this repository or are not currently implemented.
+
+* The cluster must be configured to use a partitioner that produces
+`LongToken`s, e.g. `Murmur3Partitioner`. Other existing partitioners
+that don’t produce `LongToken`s, e.g. `ByteOrderedPartitioner` and
+`RandomPartitioner`, will not work with SASI.
+* Not Equals and OR support have been removed in this release while
+changes are made to Cassandra itself to support them.
+
+==== Contributors
+
+* https://github.com/xedin[Pavel Yaskevich]
+* https://github.com/jrwest[Jordan West]
+* https://github.com/mkjellman[Michael Kjellman]
+* https://github.com/jasobrown[Jason Brown]
+* https://github.com/mishail[Mikhail Stepura]
diff --git a/doc/modules/cassandra/pages/cql/appendices.adoc b/doc/modules/cassandra/pages/cql/appendices.adoc
new file mode 100644
index 0000000..7e17266
--- /dev/null
+++ b/doc/modules/cassandra/pages/cql/appendices.adoc
@@ -0,0 +1,179 @@
+= Appendices
+
+[[appendix-A]]
+== Appendix A: CQL Keywords
+
+CQL distinguishes between _reserved_ and _non-reserved_ keywords.
+Reserved keywords cannot be used as identifiers; they are truly reserved
+for the language (though one can enclose a reserved keyword in
+double-quotes to use it as an identifier). Non-reserved keywords,
+however, only have a specific meaning in certain contexts but can be
+used as identifiers otherwise. The only _raison d’être_ of these
+non-reserved keywords is convenience: a keyword is made non-reserved
+when it is always easy for the parser to decide whether it is being used
+as a keyword or not.
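For example, a reserved keyword only becomes usable as an identifier when double-quoted. A small helper sketching that rule (the reserved set below is a subset of the full table for illustration; quote-doubling follows the usual SQL-style escaping):

```python
RESERVED = {"add", "allow", "alter", "and", "select", "from", "where", "set"}

def maybe_quote(identifier):
    """Double-quote an identifier when it collides with a reserved CQL
    keyword; embedded double quotes are doubled."""
    if identifier.lower() in RESERVED:
        return '"%s"' % identifier.replace('"', '""')
    return identifier
```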
+
+[width="48%",cols="60%,40%",options="header",]
+|===
+|Keyword |Reserved?
+|`ADD` |yes
+|`AGGREGATE` |no
+|`ALL` |no
+|`ALLOW` |yes
+|`ALTER` |yes
+|`AND` |yes
+|`APPLY` |yes
+|`AS` |no
+|`ASC` |yes
+|`ASCII` |no
+|`AUTHORIZE` |yes
+|`BATCH` |yes
+|`BEGIN` |yes
+|`BIGINT` |no
+|`BLOB` |no
+|`BOOLEAN` |no
+|`BY` |yes
+|`CALLED` |no
+|`CLUSTERING` |no
+|`COLUMNFAMILY` |yes
+|`COMPACT` |no
+|`CONTAINS` |no
+|`COUNT` |no
+|`COUNTER` |no
+|`CREATE` |yes
+|`CUSTOM` |no
+|`DATE` |no
+|`DECIMAL` |no
+|`DELETE` |yes
+|`DESC` |yes
+|`DESCRIBE` |yes
+|`DISTINCT` |no
+|`DOUBLE` |no
+|`DROP` |yes
+|`ENTRIES` |yes
+|`EXECUTE` |yes
+|`EXISTS` |no
+|`FILTERING` |no
+|`FINALFUNC` |no
+|`FLOAT` |no
+|`FROM` |yes
+|`FROZEN` |no
+|`FULL` |yes
+|`FUNCTION` |no
+|`FUNCTIONS` |no
+|`GRANT` |yes
+|`IF` |yes
+|`IN` |yes
+|`INDEX` |yes
+|`INET` |no
+|`INFINITY` |yes
+|`INITCOND` |no
+|`INPUT` |no
+|`INSERT` |yes
+|`INT` |no
+|`INTO` |yes
+|`JSON` |no
+|`KEY` |no
+|`KEYS` |no
+|`KEYSPACE` |yes
+|`KEYSPACES` |no
+|`LANGUAGE` |no
+|`LIMIT` |yes
+|`LIST` |no
+|`LOGIN` |no
+|`MAP` |no
+|`MODIFY` |yes
+|`NAN` |yes
+|`NOLOGIN` |no
+|`NORECURSIVE` |yes
+|`NOSUPERUSER` |no
+|`NOT` |yes
+|`NULL` |yes
+|`OF` |yes
+|`ON` |yes
+|`OPTIONS` |no
+|`OR` |yes
+|`ORDER` |yes
+|`PASSWORD` |no
+|`PERMISSION` |no
+|`PERMISSIONS` |no
+|`PRIMARY` |yes
+|`RENAME` |yes
+|`REPLACE` |yes
+|`RETURNS` |no
+|`REVOKE` |yes
+|`ROLE` |no
+|`ROLES` |no
+|`SCHEMA` |yes
+|`SELECT` |yes
+|`SET` |yes
+|`SFUNC` |no
+|`SMALLINT` |no
+|`STATIC` |no
+|`STORAGE` |no
+|`STYPE` |no
+|`SUPERUSER` |no
+|`TABLE` |yes
+|`TEXT` |no
+|`TIME` |no
+|`TIMESTAMP` |no
+|`TIMEUUID` |no
+|`TINYINT` |no
+|`TO` |yes
+|`TOKEN` |yes
+|`TRIGGER` |no
+|`TRUNCATE` |yes
+|`TTL` |no
+|`TUPLE` |no
+|`TYPE` |no
+|`UNLOGGED` |yes
+|`UPDATE` |yes
+|`USE` |yes
+|`USER` |no
+|`USERS` |no
+|`USING` |yes
+|`UUID` |no
+|`VALUES` |no
+|`VARCHAR` |no
+|`VARINT` |no
+|`WHERE` |yes
+|`WITH` |yes
+|`WRITETIME` |no
+|===
+
+== Appendix B: CQL Reserved Types
+
+The following type names are not currently used by CQL, but are reserved
+for potential future use. User-defined types may not use reserved type
+names as their name.
+
+[width="25%",cols="100%",options="header",]
+|===
+|type
+|`bitstring`
+|`byte`
+|`complex`
+|`enum`
+|`interval`
+|`macaddr`
+|===
+
+== Appendix C: Dropping Compact Storage
+
+Starting with version 4.0, Thrift and COMPACT STORAGE are no longer supported.
+
+`ALTER ... DROP COMPACT STORAGE` statement makes Compact Tables
+CQL-compatible, exposing internal structure of Thrift/Compact Tables:
+
+* CQL-created Compact Tables that have no clustering columns, will
+expose an additional clustering column `column1` with `UTF8Type`.
+* CQL-created Compact Tables that had no regular columns, will expose a
+regular column `value` with `BytesType`.
+* For CQL-created Compact Tables, all columns originally defined as
+`regular` will become `static`.
+* CQL-created Compact Tables that have clustering but no regular
+columns will have an empty value column (of `EmptyType`).
+* SuperColumn Tables (can only be created through Thrift) will expose a
+compact value map with an empty name.
+* Thrift-created Compact Tables will have types corresponding to their
+Thrift definition.
diff --git a/doc/modules/cassandra/pages/cql/changes.adoc b/doc/modules/cassandra/pages/cql/changes.adoc
new file mode 100644
index 0000000..1f89469
--- /dev/null
+++ b/doc/modules/cassandra/pages/cql/changes.adoc
@@ -0,0 +1,215 @@
+= Changes
+
+The following describes the changes in each version of CQL.
+
+== 3.4.5
+
+* Adds support for arithmetic operators (`11935`)
+* Adds support for `+` and `-` operations on dates (`11936`)
+* Adds `currentTimestamp`, `currentDate`, `currentTime` and
+`currentTimeUUID` functions (`13132`)
+
+== 3.4.4
+
+* `ALTER TABLE` `ALTER` has been removed; a column's type may not be
+changed after creation (`12443`).
+* `ALTER TYPE` `ALTER` has been removed; a field's type may not be
+changed after creation (`12443`).
+
+== 3.4.3
+
+* Adds a new `duration` data type (`11873`).
+* Support for `GROUP BY` (`10707`).
+* Adds a `DEFAULT UNSET` option for `INSERT JSON` to ignore omitted
+columns (`11424`).
+* Allows `null` as a legal value for TTL on insert and update. It will
+be treated as equivalent to inserting a 0 (`12216`).
+
+== 3.4.2
+
+* If a table has a non-zero `default_time_to_live`, then explicitly
+specifying a TTL of 0 in an `INSERT` or `UPDATE` statement will result
+in the new writes not having any expiration (that is, an explicit TTL of
+0 cancels the `default_time_to_live`). This wasn't the case before and
+the `default_time_to_live` was applied even though a TTL had been
+explicitly set.
+* `ALTER TABLE` `ADD` and `DROP` now allow multiple columns to be
+added/removed.
+* New `PER PARTITION LIMIT` option for `SELECT` statements (see
+https://issues.apache.org/jira/browse/CASSANDRA-7017[CASSANDRA-7017]).
+* `User-defined functions <cql-functions>` can now instantiate
+`UDTValue` and `TupleValue` instances via the new `UDFContext` interface
+(see
+https://issues.apache.org/jira/browse/CASSANDRA-10818[CASSANDRA-10818]).
+* `User-defined types <udts>` may now be stored in a non-frozen form,
+allowing individual fields to be updated and deleted in `UPDATE`
+statements and `DELETE` statements, respectively.
+(https://issues.apache.org/jira/browse/CASSANDRA-7423[CASSANDRA-7423]).
+
+== 3.4.1
+
+* Adds `CAST` functions.
+
+== 3.4.0
+
+* Support for `materialized views <materialized-views>`.
+* `DELETE` support for inequality expressions and `IN` restrictions on
+any primary key columns.
+* `UPDATE` support for `IN` restrictions on any primary key columns.
+
+== 3.3.1
+
+* The syntax `TRUNCATE TABLE X` is now accepted as an alias for
+`TRUNCATE X`.
+
+== 3.3.0
+
+* `User-defined functions and aggregates <cql-functions>` are now
+supported.
+* Allows double-dollar enclosed strings literals as an alternative to
+single-quote enclosed strings.
+* Introduces Roles to supersede user based authentication and access
+control
+* New `date`, `time`, `tinyint` and `smallint` `data types <data-types>`
+have been added.
+* `JSON support <cql-json>` has been added
+* Adds new time conversion functions and deprecate `dateOf` and
+`unixTimestampOf`.
+
+== 3.2.0
+
+* `User-defined types <udts>` supported.
+* `CREATE INDEX` now supports indexing collection columns, including
+indexing the keys of map collections through the `keys()` function
+* Indexes on collections may be queried using the new `CONTAINS` and
+`CONTAINS KEY` operators
+* `Tuple types <tuples>` were added to hold fixed-length sets of typed
+positional fields.
+* `DROP INDEX` now supports optionally specifying a keyspace.
+
+== 3.1.7
+
+* `SELECT` statements now support selecting multiple rows in a single
+partition using an `IN` clause on combinations of clustering columns.
+* `IF NOT EXISTS` and `IF EXISTS` syntax is now supported by
+`CREATE USER` and `DROP USER` statements, respectively.
+
+== 3.1.6
+
+* A new `uuid()` method has been added.
+* Support for `DELETE ... IF EXISTS` syntax.
+
+== 3.1.5
+
+* It is now possible to group clustering columns in a relation, see
+`WHERE <where-clause>` clauses.
+* Added support for `static columns <static-columns>`.
+
+== 3.1.4
+
+* `CREATE INDEX` now allows specifying options when creating CUSTOM
+indexes.
+
+== 3.1.3
+
+* Millisecond precision formats have been added to the
+`timestamp <timestamps>` parser.
+
+== 3.1.2
+
+* `NaN` and `Infinity` have been added as valid float constants. They
+are now reserved keywords. In the unlikely case you were using them as a
+column identifier (or a keyspace/table one), you will now need to
+double-quote them.
+
+== 3.1.1
+
+* `SELECT` statement now allows listing the partition keys (using the
+`DISTINCT` modifier). See
+https://issues.apache.org/jira/browse/CASSANDRA-4536[CASSANDRA-4536].
+* The syntax `c IN ?` is now supported in `WHERE` clauses. In that case,
+the value expected for the bind variable will be a list of whatever type
+`c` is.
+* It is now possible to use named bind variables (using `:name` instead
+of `?`).
+
+== 3.1.0
+
+* `ALTER TABLE` `DROP` option added.
+* `SELECT` statement now supports aliases in select clause. Aliases in
+WHERE and ORDER BY clauses are not supported.
+* `CREATE` statements for `KEYSPACE`, `TABLE` and `INDEX` now supports
+an `IF NOT EXISTS` condition. Similarly, `DROP` statements support a
+`IF EXISTS` condition.
+* `INSERT` statements optionally supports a `IF NOT EXISTS` condition
+and `UPDATE` supports `IF` conditions.
+
+== 3.0.5
+
+* `SELECT`, `UPDATE`, and `DELETE` statements now allow empty `IN`
+relations (see
+https://issues.apache.org/jira/browse/CASSANDRA-5626[CASSANDRA-5626]).
+
+== 3.0.4
+
+* Updated the syntax for custom `secondary indexes <secondary-indexes>`.
+Non-equal conditions on the partition key are now never supported, even
+for ordering partitioners, as this was not correct (the order was *not*
+that of the type of the partition key). Instead, the `token` method
+should always be used for range queries on the partition key (see
+`WHERE clauses <where-clause>`).
+
+== 3.0.3
+
+* Support for custom `secondary indexes <secondary-indexes>` has been
+added.
+
+== 3.0.2
+
+* Type validation for the `constants <constants>` has been fixed. For
+instance, the implementation used to allow `'2'` as a valid value for an
+`int` column (interpreting it as the equivalent of `2`), or `42` as a
+valid `blob` value (in which case `42` was interpreted as a hexadecimal
+representation of the blob). This is no longer the case, type validation
+of constants is now more strict. See the `data types <data-types>`
+section for details on which constant is allowed for which type.
+* The type validation fix of the previous point has led to the
+introduction of blob constants to allow the input of blobs. Do note
+that while the input of blobs as string constants is still supported by
+this version (to allow a smoother transition to blob constants), it is
+now deprecated and will be removed in a future version. If you were
+using strings as blobs, you should thus update your client code ASAP to
+switch to blob constants.
+* A number of functions to convert native types to blobs have also been
+introduced. Furthermore the token function is now also allowed in select
+clauses. See the `section on functions <cql-functions>` for details.
+
+== 3.0.1
+
+* Date strings (and timestamps) are no longer accepted as valid
+`timeuuid` values. Doing so was a bug in the sense that date strings are
+not valid `timeuuid` values, and it was thus resulting in
+https://issues.apache.org/jira/browse/CASSANDRA-4936[confusing
+behaviors]. However, the following new methods have been added to help
+working with `timeuuid`: `now`, `minTimeuuid`, `maxTimeuuid` , `dateOf`
+and `unixTimestampOf`.
+* Float constants now support the exponent notation. In other words,
+`4.2E10` is now a valid floating point value.
+
+== Versioning
+
+Versioning of the CQL language adheres to the http://semver.org[Semantic
+Versioning] guidelines. Versions take the form X.Y.Z where X, Y, and Z
+are integer values representing major, minor, and patch level
+respectively. There is no correlation between Cassandra release versions
+and the CQL language version.
+
+[cols=",",options="header",]
+|===
+|version |description
+| Major | The major version _must_ be bumped when backward incompatible changes
+are introduced. This should rarely occur.
+| Minor | Minor version increments occur when new, but backward compatible,
+functionality is introduced.
+| Patch | The patch version is incremented when bugs are fixed.
+|===
diff --git a/doc/modules/cassandra/pages/cql/cql_singlefile.adoc b/doc/modules/cassandra/pages/cql/cql_singlefile.adoc
new file mode 100644
index 0000000..e2fea00
--- /dev/null
+++ b/doc/modules/cassandra/pages/cql/cql_singlefile.adoc
@@ -0,0 +1,3904 @@
+== Cassandra Query Language (CQL) v3.4.3
+
+=== CQL Syntax
+
+==== Preamble
+
+This document describes the Cassandra Query Language (CQL) version 3.
+CQL v3 is not backward compatible with CQL v2 and differs from it in
+numerous ways. Note that this document describes the latest version of the
+language. However, the link:#changes[changes] section provides the diff
+between the different versions of CQL v3.
+
+CQL v3 offers a model very close to SQL in the sense that data is put in
+_tables_ containing _rows_ of _columns_. For that reason, when used in
+this document, these terms (tables, rows and columns) have the same
+definition as they have in SQL. But please note that, as such, they do
+*not* refer to the concept of rows and columns found in the internal
+implementation of Cassandra and in the Thrift and CQL v2 APIs.
+
+==== Conventions
+
+To aid in specifying the CQL syntax, we will use the following
+conventions in this document:
+
+* Language rules will be given in a
+http://en.wikipedia.org/wiki/Backus%E2%80%93Naur_Form[BNF] -like
+notation:
+
+bc(syntax). <non-terminal> ::= TERMINAL
+
+* Nonterminal symbols will have `<angle brackets>`.
+* As additional shortcut notations to BNF, we’ll use traditional
+regular-expression symbols (`?`, `+` and `*`) to signify that a given symbol
+is optional and/or can be repeated. We’ll also allow parentheses to
+group symbols and the `[<characters>]` notation to represent any one of
+`<characters>`.
+* The grammar is provided for documentation purposes and leaves some
+minor details out. For instance, the last column definition in a
+`CREATE TABLE` statement is optional but supported if present, even
+though the grammar provided in this document suggests it is not
+supported.
+* Sample code will be provided in a code block:
+
+bc(sample). SELECT sample_usage FROM cql;
+
+* References to keywords or pieces of CQL code in running text will be
+shown in a `fixed-width font`.
+
+[[identifiers]]
+==== Identifiers and keywords
+
+The CQL language uses _identifiers_ (or _names_) to identify tables,
+columns and other objects. An identifier is a token matching the regular
+expression `[a-zA-Z][a-zA-Z0-9_]*`.
+
+A number of such identifiers, like `SELECT` or `WITH`, are _keywords_.
+They have a fixed meaning for the language and most are reserved. The
+list of those keywords can be found in link:#appendixA[Appendix A].
+
+Identifiers and (unquoted) keywords are case insensitive. Thus `SELECT`
+is the same as `select` or `sElEcT`, and `myId` is the same as
+`myid` or `MYID`, for instance. A convention often used (in particular by
+the samples of this documentation) is to use upper case for keywords and
+lower case for other identifiers.
+
+There is a second kind of identifier called a _quoted identifier_,
+defined by enclosing an arbitrary sequence of characters in
+double-quotes (`"`). Quoted identifiers are never keywords. Thus
+`"select"` is not a reserved keyword and can be used to refer to a
+column, while `select` would raise a parse error. Also, contrary to
+unquoted identifiers and keywords, quoted identifiers are case sensitive
+(`"My Quoted Id"` is _different_ from `"my quoted id"`). A fully
+lowercase quoted identifier that matches `[a-zA-Z][a-zA-Z0-9_]*` is
+equivalent to the unquoted identifier obtained by removing the
+double-quotes (so `"myid"` is equivalent to `myid` and to `myId` but
+different from `"myId"`). Inside a quoted identifier, the double-quote
+character can be repeated to escape it, so `"foo "" bar"` is a valid
+identifier.
+
+*Warning*: _quoted identifiers_ allow declaring columns with arbitrary
+names, and those can sometimes clash with specific names used by the
+server. For instance, when using conditional update, the server will
+respond with a result-set containing a special result named
+`"[applied]"`. If you’ve declared a column with such a name, this could
+potentially confuse some tools and should be avoided. In general,
+unquoted identifiers should be preferred, but if you use quoted
+identifiers, it is strongly advised to avoid any name enclosed in
+square brackets (like `"[applied]"`) and any name that looks like a
+function call (like `"f(x)"`).
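+
+As an illustrative sketch (the table and column names here are
+hypothetical), the case rules above play out as follows:
+
+[source,cql]
+----
+CREATE TABLE users (
+    userid int PRIMARY KEY,
+    "Name" text  -- quoted identifier: case sensitive
+);
+
+-- These refer to the same column, since unquoted identifiers
+-- are case insensitive:
+SELECT userid FROM users;
+SELECT USERID FROM users;
+
+-- The quoted column must be referenced with exact case:
+SELECT "Name" FROM users;
+----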
+
+==== Constants
+
+CQL defines the following kinds of _constants_: strings, integers,
+floats, booleans, uuids and blobs:
+
+* A string constant is an arbitrary sequence of characters
+enclosed by single-quotes (`'`). One can include a single-quote in a
+string by repeating it, e.g. `'It''s raining today'`. Those are not to
+be confused with quoted identifiers, which use double-quotes.
+* An integer constant is defined by `'-'?[0-9]+`.
+* A float constant is defined by
+`'-'?[0-9]+('.'[0-9]*)?([eE][+-]?[0-9+])?`. On top of that, `NaN` and
+`Infinity` are also float constants.
+* A boolean constant is either `true` or `false` up to
+case-insensitivity (i.e. `True` is a valid boolean constant).
+* A http://en.wikipedia.org/wiki/Universally_unique_identifier[UUID]
+constant is defined by `hex{8}-hex{4}-hex{4}-hex{4}-hex{12}` where `hex`
+is a hexadecimal character, e.g. `[0-9a-fA-F]`, and `{4}` is the number
+of such characters.
+* A blob constant is a hexadecimal number defined by `0[xX](hex)+`
+where `hex` is a hexadecimal character, e.g. `[0-9a-fA-F]`.
+
+For how these constants are typed, see the link:#types[data types
+section].
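+
+As a sketch (the table and columns are hypothetical), the constant
+kinds above look as follows inside an actual statement:
+
+[source,cql]
+----
+INSERT INTO readings (id, label, value, active, raw)
+VALUES (123e4567-e89b-12d3-a456-426655440000, -- uuid constant
+        'It''s sensor #1',                    -- string, escaped quote
+        -4.2E10,                              -- float, exponent notation
+        true,                                 -- boolean
+        0xCAFEBABE);                          -- blob constant
+----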
+
+==== Comments
+
+A comment in CQL is a line beginning with either double dashes (`--`) or
+double slashes (`//`).
+
+Multi-line comments are also supported through enclosure within `/*` and
+`*/` (but nesting is not supported).
+
+bc(sample). +
+-- This is a comment +
+// This is a comment too +
+/* This is +
+a multi-line comment */
+
+==== Statements
+
+CQL consists of statements. As in SQL, these statements can be divided
+into three categories:
+
+* Data definition statements, which set and change the way data
+is stored.
+* Data manipulation statements, which change data.
+* Queries, which look up data.
+
+All statements end with a semicolon (`;`) but that semicolon can be
+omitted when dealing with a single statement. The supported statements
+are described in the following sections. When describing the grammar of
+said statements, we will reuse the non-terminal symbols defined below:
+
+bc(syntax).. +
+<identifier> ::= any quoted or unquoted identifier, excluding reserved keywords +
+<tablename> ::= (<identifier> `.')? <identifier>
+
+<string> ::= a string constant +
+<integer> ::= an integer constant +
+<float> ::= a float constant +
+<number> ::= <integer> | <float> +
+<uuid> ::= a uuid constant +
+<boolean> ::= a boolean constant +
+<hex> ::= a blob constant
+
+<constant> ::= <string> +
+| <number> +
+| <uuid> +
+| <boolean> +
+| <hex> +
+<variable> ::= `?' +
+| `:' <identifier> +
+<term> ::= <constant> +
+| <collection-literal> +
+| <variable> +
+| <function> `(' (<term> (`,' <term>)*)? `)'
+
+<collection-literal> ::= <map-literal> +
+| <set-literal> +
+| <list-literal> +
+<map-literal> ::= `\{' ( <term> `:' <term> ( `,' <term> `:' <term> )* )? `}' +
+<set-literal> ::= `\{' ( <term> ( `,' <term> )* )? `}' +
+<list-literal> ::= `[' ( <term> ( `,' <term> )* )? `]'
+
+<function> ::= <identifier>
+
+<properties> ::= <property> (AND <property>)* +
+<property> ::= <identifier> `=' ( <identifier> | <constant> | <map-literal> ) +
+p. +
+Please note that not every possible production of the grammar above
+is valid in practice. Most notably, `<variable>` and nested
+`<collection-literal>` are currently not allowed inside
+`<collection-literal>`.
+
+A `<variable>` can be either anonymous (a question mark (`?`)) or named
+(an identifier preceded by `:`). Both declare bind variables for
+link:#preparedStatement[prepared statements]. The only difference
+between an anonymous and a named variable is that a named one will be
+easier to refer to (how exactly depends on the client driver used).
+
+The `<properties>` production is used by statements that create and alter
+keyspaces and tables. Each `<property>` is either a _simple_ one, in
+which case it just has a value, or a _map_ one, in which case its value
+is a map grouping sub-options. The following will refer to one or the
+other as the _kind_ (_simple_ or _map_) of the property.
+
+A `<tablename>` will be used to identify a table. This is an identifier
+representing the table name that can be preceded by a keyspace name. The
+keyspace name, if provided, allows identifying a table in a
+keyspace other than the currently active one (the currently active keyspace is
+set through the `USE` statement).
+
+For supported `<function>`, see the section on
+link:#functions[functions].
+
+Strings can be enclosed either in single quotes or in two dollar
+characters (`$$`). The second syntax has been introduced to allow strings that
+contain single quotes. Typical candidates for such strings are source
+code fragments for user-defined functions.
+
+_Sample:_
+
+bc(sample).. +
+'some string value'
+
+$$double-dollar string can contain single ' quotes$$ +
+p.
+
+[[preparedStatement]]
+==== Prepared Statement
+
+CQL supports _prepared statements_. A prepared statement is an
+optimization that allows parsing a query only once but executing it
+multiple times with different concrete values.
+
+In a statement, each time a column value is expected (in the data
+manipulation and query statements), a `<variable>` (see above) can be
+used instead. A statement with bind variables must then be _prepared_.
+Once it has been prepared, it can be executed by providing concrete values
+for the bind variables. The exact procedure to prepare a statement and
+execute a prepared statement depends on the CQL driver used and is
+beyond the scope of this document.
+
+In addition to providing column values, bind markers may be used to
+provide values for `LIMIT`, `TIMESTAMP`, and `TTL` clauses. If anonymous
+bind markers are used, the names for the query parameters will be
+`[limit]`, `[timestamp]`, and `[ttl]`, respectively.
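+
+A sketch of both marker styles (the table and columns are
+hypothetical); preparing and executing the statements is done through
+the client driver:
+
+[source,cql]
+----
+-- Anonymous bind markers:
+SELECT * FROM users WHERE userid = ? LIMIT ?;
+
+-- Named bind markers, including one for the TTL clause:
+INSERT INTO users (userid, name) VALUES (:id, :name) USING TTL :ttl;
+----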
+
+[[dataDefinition]]
+=== Data Definition
+
+[[createKeyspaceStmt]]
+==== CREATE KEYSPACE
+
+_Syntax:_
+
+bc(syntax).. +
+<create-keyspace-stmt> ::= CREATE KEYSPACE (IF NOT EXISTS)? <identifier> WITH <properties> +
+p. +
+_Sample:_
+
+bc(sample).. +
+CREATE KEYSPACE Excelsior +
+WITH replication = \{'class': 'SimpleStrategy', 'replication_factor' :
+3};
+
+CREATE KEYSPACE Excalibur +
+WITH replication = \{'class': 'NetworkTopologyStrategy', 'DC1' : 1,
+'DC2' : 3} +
+AND durable_writes = false; +
+p. +
+The `CREATE KEYSPACE` statement creates a new top-level _keyspace_. A
+keyspace is a namespace that defines a replication strategy and some
+options for a set of tables. Valid keyspace names are identifiers
+composed exclusively of alphanumerical characters and whose length is
+less than or equal to 32. Note that, as identifiers, keyspace names are case
+insensitive: use a quoted identifier for case-sensitive keyspace names.
+
+The supported `<properties>` for `CREATE KEYSPACE` are:
+
+[cols=",,,,",options="header",]
+|===
+|name |kind |mandatory |default |description
+|`replication` |_map_ |yes | |The replication strategy and options to
+use for the keyspace.
+
+|`durable_writes` |_simple_ |no |true |Whether to use the commit log for
+updates on this keyspace (disable this option at your own risk!).
+|===
+
+The `replication` `<property>` is mandatory. It must at least contain
+the `'class'` sub-option, which defines the replication strategy class to
+use. The rest of the sub-options depend on that replication strategy
+class. By default, Cassandra supports the following `'class'` values:
+
+* `'SimpleStrategy'`: A simple strategy that defines a simple
+replication factor for the whole cluster. The only supported sub-option
+is `'replication_factor'`, which defines that replication factor and is
+mandatory.
+* `'NetworkTopologyStrategy'`: A replication strategy that allows setting
+the replication factor independently for each data-center. The rest of
+the sub-options are key-value pairs where each key is the name
+of a data-center and the value the replication factor for that
+data-center.
+
+Attempting to create an already existing keyspace will return an error
+unless the `IF NOT EXISTS` option is used. If it is used, the statement
+will be a no-op if the keyspace already exists.
+
+[[useStmt]]
+==== USE
+
+_Syntax:_
+
+bc(syntax). <use-stmt> ::= USE <identifier>
+
+_Sample:_
+
+bc(sample). USE myApp;
+
+The `USE` statement takes an existing keyspace name as argument and sets
+it as the per-connection current working keyspace. All subsequent
+keyspace-specific actions will be performed in the context of the
+selected keyspace, unless link:#statements[otherwise specified], until
+another USE statement is issued or the connection terminates.
+
+[[alterKeyspaceStmt]]
+==== ALTER KEYSPACE
+
+_Syntax:_
+
+bc(syntax).. +
+<alter-keyspace-stmt> ::= ALTER KEYSPACE <identifier> WITH <properties> +
+p. +
+_Sample:_
+
+bc(sample).. +
+ALTER KEYSPACE Excelsior +
+WITH replication = \{'class': 'SimpleStrategy', 'replication_factor' : 4};
+
+The `ALTER KEYSPACE` statement alters the properties of an existing
+keyspace. The supported `<properties>` are the same as for the
+link:#createKeyspaceStmt[`CREATE KEYSPACE`] statement.
+
+[[dropKeyspaceStmt]]
+==== DROP KEYSPACE
+
+_Syntax:_
+
+bc(syntax). <drop-keyspace-stmt> ::= DROP KEYSPACE ( IF EXISTS )? <identifier>
+
+_Sample:_
+
+bc(sample). DROP KEYSPACE myApp;
+
+A `DROP KEYSPACE` statement results in the immediate, irreversible
+removal of an existing keyspace, including all column families in it,
+and all data contained in those column families.
+
+If the keyspace does not exist, the statement will return an error,
+unless `IF EXISTS` is used, in which case the operation is a no-op.
+
+[[createTableStmt]]
+==== CREATE TABLE
+
+_Syntax:_
+
+bc(syntax).. +
+<create-table-stmt> ::= CREATE ( TABLE | COLUMNFAMILY ) ( IF NOT EXISTS )? <tablename> +
+`(' <column-definition> ( `,' <column-definition> )* `)' +
+( WITH <option> ( AND <option> )* )?
+
+<column-definition> ::= <identifier> <type> ( STATIC )? ( PRIMARY KEY )? +
+| PRIMARY KEY `(' <partition-key> ( `,' <identifier> )* `)'
+
+<partition-key> ::= <identifier> +
+| `(' <identifier> (`,' <identifier> )* `)'
+
+<option> ::= <property> +
+| COMPACT STORAGE +
+| CLUSTERING ORDER +
+p. +
+_Sample:_
+
+bc(sample).. +
+CREATE TABLE monkeySpecies ( +
+species text PRIMARY KEY, +
+common_name text, +
+population varint, +
+average_size int +
+) WITH comment='Important biological records';
+
+CREATE TABLE timeline ( +
+userid uuid, +
+posted_month int, +
+posted_time uuid, +
+body text, +
+posted_by text, +
+PRIMARY KEY (userid, posted_month, posted_time) +
+) WITH compaction = \{ 'class' : 'LeveledCompactionStrategy' }; +
+p. +
+The `CREATE TABLE` statement creates a new table. Each such table is a
+set of _rows_ (usually representing related entities) for which it
+defines a number of properties. A table is defined by a
+link:#createTableName[name], it defines the columns composing its rows,
+and it has a number of link:#createTableOptions[options]. Note
+that the `CREATE COLUMNFAMILY` syntax is supported as an alias for
+`CREATE TABLE` (for historical reasons).
+
+Attempting to create an already existing table will return an error
+unless the `IF NOT EXISTS` option is used. If it is used, the statement
+will be a no-op if the table already exists.
+
+[[createTableName]]
+===== `<tablename>`
+
+Valid table names are the same as valid
+link:#createKeyspaceStmt[keyspace names] (up to 32 characters long
+alphanumerical identifiers). If the table name is provided alone, the
+table is created within the current keyspace (see `USE`), but if it is
+prefixed by an existing keyspace name (see
+link:#statements[`<tablename>`] grammar), it is created in the specified
+keyspace (but does *not* change the current keyspace).
+
+[[createTableColumn]]
+===== `<column-definition>`
+
+A `CREATE TABLE` statement defines the columns that rows of the table
+can have. A _column_ is defined by its name (an identifier) and its type
+(see the link:#types[data types] section for more details on allowed
+types and their properties).
+
+Within a table, a row is uniquely identified by its `PRIMARY KEY` (or
+more simply the key), and hence all table definitions *must* define a
+PRIMARY KEY (and only one). A `PRIMARY KEY` is composed of one or more
+of the columns defined in the table. If the `PRIMARY KEY` is only one
+column, this can be specified directly after the column definition.
+Otherwise, it must be specified by following `PRIMARY KEY` with the
+comma-separated list of column names composing the key within
+parentheses. Note that:
+
+bc(sample). +
+CREATE TABLE t ( +
+k int PRIMARY KEY, +
+other text +
+)
+
+is equivalent to
+
+bc(sample). +
+CREATE TABLE t ( +
+k int, +
+other text, +
+PRIMARY KEY (k) +
+)
+
+[[createTablepartitionClustering]]
+===== Partition key and clustering columns
+
+In CQL, the order in which columns are defined for the `PRIMARY KEY`
+matters. The first column of the key is called the _partition key_. It
+has the property that all the rows sharing the same partition key (even
+across tables, in fact) are stored on the same physical node. Also,
+insertions/updates/deletions on rows sharing the same partition key for a
+given table are performed _atomically_ and in _isolation_. Note that it
+is possible to have a composite partition key, i.e. a partition key
+formed of multiple columns, using an extra set of parentheses to define
+which columns form the partition key.
+
+The remaining columns of the `PRIMARY KEY` definition, if any, are
+called _clustering columns_. On a given physical node, rows for a given
+partition key are stored in the order induced by the clustering columns,
+making the retrieval of rows in that clustering order particularly
+efficient (see `SELECT`).
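+
+As a sketch (hypothetical table), a composite partition key is declared
+with an extra set of parentheses, and the remaining key columns become
+clustering columns:
+
+[source,cql]
+----
+CREATE TABLE sensor_readings (
+    sensor_id uuid,
+    day date,
+    reported_at timestamp,
+    value double,
+    -- (sensor_id, day) is the composite partition key;
+    -- reported_at is the clustering column.
+    PRIMARY KEY ((sensor_id, day), reported_at)
+);
+----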
+
+[[createTableStatic]]
+===== `STATIC` columns
+
+Some columns can be declared as `STATIC` in a table definition. A column
+that is static will be "shared" by all the rows belonging to the same
+partition (having the same partition key). For instance, in:
+
+bc(sample). +
+CREATE TABLE test ( +
+pk int, +
+t int, +
+v text, +
+s text static, +
+PRIMARY KEY (pk, t) +
+); +
+INSERT INTO test(pk, t, v, s) VALUES (0, 0, 'val0', 'static0'); +
+INSERT INTO test(pk, t, v, s) VALUES (0, 1, 'val1', 'static1'); +
+SELECT * FROM test WHERE pk=0 AND t=0;
+
+the last query will return `'static1'` as the value for `s`, since `s` is
+static and thus the 2nd insertion modified this "shared" value. Note
+however that static columns are only static within a given partition,
+and if in the example above both rows were from different partitions
+(i.e. if they had different values for `pk`), then the 2nd insertion
+would not have modified the value of `s` for the first row.
+
+A few restrictions apply to when static columns are allowed:
+
+* tables with the `COMPACT STORAGE` option (see below) cannot have them
+* a table without clustering columns cannot have static columns (in a
+table without clustering columns, every partition has only one row, and
+so every column is inherently static).
+* only non `PRIMARY KEY` columns can be static
+
+[[createTableOptions]]
+===== `<option>`
+
+The `CREATE TABLE` statement supports a number of options that control
+the configuration of a new table. These options can be specified after
+the `WITH` keyword.
+
+The first of these options is `COMPACT STORAGE`. This option is mainly
+targeted towards backward compatibility for definitions created before
+CQL3 (see
+http://www.datastax.com/dev/blog/thrift-to-cql3[www.datastax.com/dev/blog/thrift-to-cql3]
+for more details). The option also provides a slightly more compact
+layout of data on disk, but at the price of diminished flexibility and
+extensibility for the table. Most notably, `COMPACT STORAGE` tables
+cannot have collections nor static columns, and a `COMPACT STORAGE` table
+with at least one clustering column supports exactly one (as in not 0
+nor more than 1) column not part of the `PRIMARY KEY` definition (which
+implies in particular that you cannot add nor remove columns after
+creation). For those reasons, `COMPACT STORAGE` is not recommended
+outside of the backward compatibility reason evoked above.
+
+Another option is `CLUSTERING ORDER`. It allows defining the ordering
+of rows on disk. It takes the list of the clustering column names with,
+for each of them, the on-disk order (ascending or descending). Note that
+this option affects link:#selectOrderBy[which `ORDER BY` clauses are allowed
+during `SELECT`].
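+
+A sketch (hypothetical table) of `CLUSTERING ORDER` reversing the
+on-disk order of a clustering column:
+
+[source,cql]
+----
+CREATE TABLE timeline (
+    userid uuid,
+    posted_time timeuuid,
+    body text,
+    PRIMARY KEY (userid, posted_time)
+) WITH CLUSTERING ORDER BY (posted_time DESC);
+----
+
+With this declaration, reading a partition newest-first matches the
+on-disk order, so `ORDER BY posted_time DESC` is the cheap direction.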
+
+Table creation supports the following other `<property>`:
+
+[cols=",,,",options="header",]
+|===
+|option |kind |default |description
+|`comment` |_simple_ |none |A free-form, human-readable comment.
+
+|`gc_grace_seconds` |_simple_ |864000 |Time to wait before garbage
+collecting tombstones (deletion markers).
+
+|`bloom_filter_fp_chance` |_simple_ |0.00075 |The target probability of
+false positives for the sstable bloom filters. Said bloom filters will be
+sized to provide the provided probability (thus lowering this value
+impacts the size of bloom filters in-memory and on-disk).
+
+|`default_time_to_live` |_simple_ |0 |The default expiration time
+(``TTL'') in seconds for a table.
+
+|`compaction` |_map_ |_see below_ |Compaction options, see
+link:#compactionOptions[below].
+
+|`compression` |_map_ |_see below_ |Compression options, see
+link:#compressionOptions[below].
+
+|`caching` |_map_ |_see below_ |Caching options, see
+link:#cachingOptions[below].
+|===
+
+[[compactionOptions]]
+===== Compaction options
+
+The `compaction` property must at least define the `'class'` sub-option,
+which defines the compaction strategy class to use. The supported default
+classes are `'SizeTieredCompactionStrategy'`,
+`'LeveledCompactionStrategy'`, `'DateTieredCompactionStrategy'` and
+`'TimeWindowCompactionStrategy'`. A custom strategy can be provided by
+specifying the full class name as a link:#constants[string constant].
+The rest of the sub-options depend on the chosen class. The sub-options
+supported by the default classes are:
+
+[cols=",,,",options="header",]
+|===
+|option |supported compaction strategy |default |description
+|`enabled` |_all_ |true |A boolean denoting whether compaction should be
+enabled or not.
+
+|`tombstone_threshold` |_all_ |0.2 |A ratio such that if an sstable has
+more than this ratio of gcable tombstones over all contained columns,
+the sstable will be compacted (with no other sstables) for the purpose
+of purging those tombstones.
+
+|`tombstone_compaction_interval` |_all_ |1 day |The minimum time to wait
+after an sstable creation time before considering it for ``tombstone
+compaction'', where ``tombstone compaction'' is the compaction triggered
+if the sstable has more gcable tombstones than `tombstone_threshold`.
+
+|`unchecked_tombstone_compaction` |_all_ |false |Setting this to true
+enables more aggressive tombstone compactions - single sstable tombstone
+compactions will run without checking how likely it is that they will be
+successful.
+
+|`min_sstable_size` |SizeTieredCompactionStrategy |50MB |The size-tiered
+strategy groups SSTables to compact in buckets. A bucket groups SSTables
+that differ by less than 50% in size. However, for small sizes, this
+would result in a bucketing that is too fine grained. `min_sstable_size`
+defines a size threshold (in bytes) below which all SSTables belong to
+one unique bucket.
+
+|`min_threshold` |SizeTieredCompactionStrategy |4 |Minimum number of
+SSTables needed to start a minor compaction.
+
+|`max_threshold` |SizeTieredCompactionStrategy |32 |Maximum number of
+SSTables processed by one minor compaction.
+
+|`bucket_low` |SizeTieredCompactionStrategy |0.5 |Size-tiered compaction
+considers sstables to be within the same bucket if their size is within
+[average_size * `bucket_low`, average_size * `bucket_high`] (i.e. the
+default groups sstables whose sizes diverge by at most 50%).
+
+|`bucket_high` |SizeTieredCompactionStrategy |1.5 |Size-tiered compaction
+considers sstables to be within the same bucket if their size is within
+[average_size * `bucket_low`, average_size * `bucket_high`] (i.e. the
+default groups sstables whose sizes diverge by at most 50%).
+
+|`sstable_size_in_mb` |LeveledCompactionStrategy |5MB |The target size
+(in MB) for sstables in the leveled strategy. Note that while sstable
+sizes should stay less than or equal to `sstable_size_in_mb`, it is possible
+to exceptionally have a larger sstable, as during compaction data for a
+given partition key is never split into 2 sstables.
+
+|`timestamp_resolution` |DateTieredCompactionStrategy |MICROSECONDS |The
+timestamp resolution used when inserting data, could be MILLISECONDS,
+MICROSECONDS etc (should be understandable by Java TimeUnit) - don’t
+change this unless you do mutations with USING TIMESTAMP (or equivalent
+directly in the client)
+
+|`base_time_seconds` |DateTieredCompactionStrategy |60 |The base size of
+the time windows.
+
+|`max_sstable_age_days` |DateTieredCompactionStrategy |365 |SSTables
+only containing data that is older than this will never be compacted.
+
+|`timestamp_resolution` |TimeWindowCompactionStrategy |MICROSECONDS |The
+timestamp resolution used when inserting data, could be MILLISECONDS,
+MICROSECONDS etc (should be understandable by Java TimeUnit) - don’t
+change this unless you do mutations with USING TIMESTAMP (or equivalent
+directly in the client)
+
+|`compaction_window_unit` |TimeWindowCompactionStrategy |DAYS |The Java
+TimeUnit used for the window size, set in conjunction with
+`compaction_window_size`. Must be one of DAYS, HOURS, MINUTES
+
+|`compaction_window_size` |TimeWindowCompactionStrategy |1 |The number
+of `compaction_window_unit` units that make up a time window.
+
+|`unsafe_aggressive_sstable_expiration` |TimeWindowCompactionStrategy
+|false |Expired sstables will be dropped without checking whether their
+data shadows other sstables. This is a potentially risky option that can
+lead to data loss or deleted data re-appearing, going beyond what
+`unchecked_tombstone_compaction` does for single-sstable compaction. Due
+to the risk, the JVM must also be started with
+`-Dcassandra.unsafe_aggressive_sstable_expiration=true`.
+|===
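+
+The sub-options above are supplied through the `compaction` map. A
+sketch (hypothetical table) selecting `TimeWindowCompactionStrategy`
+with six-hour windows:
+
+[source,cql]
+----
+CREATE TABLE events (
+    key text,
+    ts timestamp,
+    payload blob,
+    PRIMARY KEY (key, ts)
+) WITH compaction = { 'class' : 'TimeWindowCompactionStrategy',
+                      'compaction_window_unit' : 'HOURS',
+                      'compaction_window_size' : 6 };
+----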
+
+[[compressionOptions]]
+===== Compression options
+
+For the `compression` property, the following sub-options are available:
+
+[cols=",,",options="header",]
+|===
+|option |default |description
+|`class` |LZ4Compressor |The compression algorithm to use. Default
+compressors are: LZ4Compressor, SnappyCompressor and DeflateCompressor.
+Use `'enabled' : false` to disable compression. A custom compressor can
+be provided by specifying the full class name as a
+link:#constants[string constant].
+
+|`enabled` |true |By default compression is enabled. To disable it, set
+`enabled` to `false`.
+
+|`chunk_length_in_kb` |64KB |On disk, SSTables are compressed by block
+(to allow random reads). This defines the size (in KB) of said block.
+Bigger values may improve the compression rate, but increase the minimum
+size of data to be read from disk for a read.
+
+|`crc_check_chance` |1.0 |When compression is enabled, each compressed
+block includes a checksum of that block for the purpose of detecting
+disk bitrot and avoiding the propagation of corruption to other replicas.
+This option defines the probability with which those checksums are
+checked during read. By default they are always checked. Set to 0 to
+disable checksum checking, or to 0.5, for instance, to check them every
+other read.
+|===
+
+[[cachingOptions]]
+===== Caching options
+
+For the `caching` property, the following sub-options are available:
+
+[cols=",,",options="header",]
+|===
+|option |default |description
+|`keys` |ALL |Whether to cache keys (``key cache'') for this table.
+Valid values are: `ALL` and `NONE`.
+
+|`rows_per_partition` |NONE |The amount of rows to cache per partition
+(``row cache''). If an integer `n` is specified, the first `n` queried
+rows of a partition will be cached. Other possible options are `ALL`, to
+cache all rows of a queried partition, or `NONE` to disable row caching.
+|===
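+
+A sketch (hypothetical table) combining the caching sub-options above
+with compression options:
+
+[source,cql]
+----
+ALTER TABLE users
+WITH caching = { 'keys' : 'ALL', 'rows_per_partition' : 10 }
+AND compression = { 'class' : 'LZ4Compressor', 'chunk_length_in_kb' : 32 };
+----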
+
+===== Other considerations:
+
+* When link:#insertStmt[inserting] / link:#updateStmt[updating] a given
+row, not all columns need to be defined (except for those that are part
+of the key), and missing columns occupy no space on disk. Furthermore,
+adding new columns (see `ALTER TABLE`) is a constant-time operation.
+There is thus no need to try to anticipate future usage (or to cry when
+you haven’t) when creating a table.
+
+[[alterTableStmt]]
+==== ALTER TABLE
+
+_Syntax:_
+
+bc(syntax).. +
+<alter-table-stmt> ::= ALTER (TABLE | COLUMNFAMILY) <tablename> <instruction>
+
+<instruction> ::= ADD <identifier> <type> +
+| ADD ( <identifier> <type> ( , <identifier> <type> )* ) +
+| DROP <identifier> +
+| DROP ( <identifier> ( , <identifier> )* ) +
+| WITH <option> ( AND <option> )* +
+p. +
+_Sample:_
+
+bc(sample).. +
+ALTER TABLE addamsFamily +
+ADD gravesite varchar;
+
+ALTER TABLE addamsFamily +
+WITH comment = 'A most excellent and useful column family'; +
+p. +
+The `ALTER` statement is used to manipulate table definitions. It allows
+for adding new columns, dropping existing ones, or updating the table
+options. As with table creation, `ALTER COLUMNFAMILY` is allowed as an
+alias for `ALTER TABLE`.
+
+The `<tablename>` is the table name optionally preceded by the keyspace
+name. The `<instruction>` defines the alteration to perform:
+
+* `ADD`: Adds a new column to the table. The `<identifier>` for the new
+column must not conflict with an existing column. Moreover, columns
+cannot be added to tables defined with the `COMPACT STORAGE` option.
+* `DROP`: Removes a column from the table. Dropped columns will
+immediately become unavailable in queries and will not be included
+in compacted sstables in the future. If a column is re-added, queries
+won’t return values written before the column was last dropped. It is
+assumed that timestamps represent actual time, so if this is not your
+case, you should NOT re-add previously dropped columns. Columns can’t be
+dropped from tables defined with the `COMPACT STORAGE` option.
+* `WITH`: Allows updating the options of the table. The
+link:#createTableOptions[supported `<option>`s] (and syntax) are the same
+as for the `CREATE TABLE` statement, except that `COMPACT STORAGE` is not
+supported. Note that setting any `compaction` sub-option has the effect
+of erasing all previous `compaction` options, so you need to re-specify
+all the sub-options if you want to keep them. The same note applies to
+the set of `compression` sub-options.
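+
+Because setting `compaction` replaces the whole map, an `ALTER TABLE`
+must re-list every sub-option it wants to keep. A sketch (hypothetical
+table):
+
+[source,cql]
+----
+-- Any previously set compaction sub-options are erased; exactly this
+-- map is installed:
+ALTER TABLE events
+WITH compaction = { 'class' : 'SizeTieredCompactionStrategy',
+                    'min_threshold' : 6,
+                    'max_threshold' : 32 };
+----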
+
+===== CQL type compatibility:
+
+CQL data types may be converted only as the following table.
+
+[cols=",",options="header",]
+|===
+|Data type may be altered to: |Data type
+|timestamp |bigint
+
+|ascii, bigint, boolean, date, decimal, double, float, inet, int,
+smallint, text, time, timestamp, timeuuid, tinyint, uuid, varchar,
+varint |blob
+
+|int |date
+
+|ascii, varchar |text
+
+|bigint |time
+
+|bigint |timestamp
+
+|timeuuid |uuid
+
+|ascii, text |varchar
+
+|bigint, int, timestamp |varint
+|===
+
+Clustering columns have stricter requirements; only the conversions
+below are allowed.
+
+[cols=",",options="header",]
+|===
+|Data type may be altered to: |Data type
+|ascii, text, varchar |blob
+|ascii, varchar |text
+|ascii, text |varchar
+|===
+
+[[dropTableStmt]]
+==== DROP TABLE
+
+_Syntax:_
+
+bc(syntax). <drop-table-stmt> ::= DROP TABLE ( IF EXISTS )? <tablename>
+
+_Sample:_
+
+bc(sample). DROP TABLE worldSeriesAttendees;
+
+The `DROP TABLE` statement results in the immediate, irreversible
+removal of a table, including all data contained in it. As for table
+creation, `DROP COLUMNFAMILY` is allowed as an alias for `DROP TABLE`.
+
+If the table does not exist, the statement will return an error, unless
+`IF EXISTS` is used, in which case the operation is a no-op.
+
+[[truncateStmt]]
+==== TRUNCATE
+
+_Syntax:_
+
+bc(syntax). <truncate-stmt> ::= TRUNCATE ( TABLE | COLUMNFAMILY )? <tablename>
+
+_Sample:_
+
+bc(sample). TRUNCATE superImportantData;
+
+The `TRUNCATE` statement permanently removes all data from a table.
+
+[[createIndexStmt]]
+==== CREATE INDEX
+
+_Syntax:_
+
+bc(syntax).. +
+<create-index-stmt> ::= CREATE ( CUSTOM )? INDEX ( IF NOT EXISTS )? ( <indexname> )? +
+ON <tablename> `(' <index-identifier> `)' +
+( USING <string> ( WITH OPTIONS = <map-literal> )? )?
+
+<index-identifier> ::= <identifier> +
+| keys( <identifier> ) +
+p. +
+_Sample:_
+
+bc(sample). +
+CREATE INDEX userIndex ON NerdMovies (user); +
+CREATE INDEX ON Mutants (abilityId); +
+CREATE INDEX ON users (keys(favs)); +
+CREATE CUSTOM INDEX ON users (email) USING 'path.to.the.IndexClass'; +
+CREATE CUSTOM INDEX ON users (email) USING 'path.to.the.IndexClass' WITH
+OPTIONS = \{'storage': '/mnt/ssd/indexes/'};
+
+The `CREATE INDEX` statement is used to create a new (automatic)
+secondary index for a given (existing) column in a given table. A name
+for the index itself can be specified before the `ON` keyword, if
+desired. If data already exists for the column, it will be indexed
+asynchronously. After the index is created, new data for the column is
+indexed automatically at insertion time.
+
+Attempting to create an already existing index will return an error
+unless the `IF NOT EXISTS` option is used. If it is used, the statement
+will be a no-op if the index already exists.
+
+[[keysIndex]]
+===== Indexes on Map Keys
+
+When creating an index on a link:#map[map column], you may index either
+the keys or the values. If the column identifier is placed within the
+`keys()` function, the index will be on the map keys, allowing you to
+use `CONTAINS KEY` in `WHERE` clauses. Otherwise, the index will be on
+the map values.
+
+[[dropIndexStmt]]
+==== DROP INDEX
+
+_Syntax:_
+
+bc(syntax). <drop-index-stmt> ::= DROP INDEX ( IF EXISTS )? ( <keyspace> '.' )? <index-name>
+
+_Sample:_
+
+bc(sample).. +
+DROP INDEX userIndex;
+
+DROP INDEX userkeyspace.address_index; +
+p. +
+The `DROP INDEX` statement is used to drop an existing secondary index.
+The argument of the statement is the index name, which may optionally
+specify the keyspace of the index.
+
+If the index does not exist, the statement will return an error, unless
+`IF EXISTS` is used in which case the operation is a no-op.
+
+[[createMVStmt]]
+==== CREATE MATERIALIZED VIEW
+
+_Syntax:_
+
+bc(syntax).. +
+<create-materialized-view-stmt> ::= CREATE MATERIALIZED VIEW ( IF NOT EXISTS )? <viewname> AS +
+SELECT ( '(' <identifier> ( ',' <identifier> )* ')' | '*' ) +
+FROM <tablename> +
+( WHERE <where-clause> )? +
+PRIMARY KEY '(' <identifier> ( ',' <identifier> )* ')' +
+( WITH <option> ( AND <option> )* )? +
+p. +
+_Sample:_
+
+bc(sample).. +
+CREATE MATERIALIZED VIEW monkeySpecies_by_population AS +
+SELECT * +
+FROM monkeySpecies +
+WHERE population IS NOT NULL AND species IS NOT NULL +
+PRIMARY KEY (population, species) +
+WITH comment='Allow query by population instead of species'; +
+p. +
+The `CREATE MATERIALIZED VIEW` statement creates a new materialized
+view. Each such view is a set of _rows_ that corresponds to the rows
+present in the underlying, or base, table specified in the `SELECT`
+statement. A materialized view cannot be directly updated, but updates
+to the base table will cause corresponding updates in the view.
+
+Attempting to create an already existing materialized view will return
+an error unless the `IF NOT EXISTS` option is used. If it is used, the
+statement will be a no-op if the materialized view already exists.
+
+[[createMVWhere]]
+===== `WHERE` Clause
+
+The `<where-clause>` is similar to the link:#selectWhere[where clause of
+a `SELECT` statement], with a few differences. First, the where clause
+must contain an expression that disallows `NULL` values in columns in
+the view’s primary key. If no other restriction is desired, this can be
+accomplished with an `IS NOT NULL` expression. Second, only columns
+which are in the base table’s primary key may be restricted with
+expressions other than `IS NOT NULL`. (Note that this second restriction
+may be lifted in the future.)
+
+[[alterMVStmt]]
+==== ALTER MATERIALIZED VIEW
+
+_Syntax:_
+
+bc(syntax). <alter-materialized-view-stmt> ::= ALTER MATERIALIZED VIEW <viewname> +
+WITH <option> ( AND <option> )*
+
+The `ALTER MATERIALIZED VIEW` statement allows options to be updated;
+these options are the same as `CREATE TABLE`'s options.
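+
+For example, the following statement (the new comment text is
+illustrative) changes an option on the view created earlier:
+
+bc(sample). ALTER MATERIALIZED VIEW monkeySpecies_by_population WITH comment = 'Lookup by population';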
+
+[[dropMVStmt]]
+==== DROP MATERIALIZED VIEW
+
+_Syntax:_
+
+bc(syntax). <drop-materialized-view-stmt> ::= DROP MATERIALIZED VIEW ( IF EXISTS )? <viewname>
+
+_Sample:_
+
+bc(sample). DROP MATERIALIZED VIEW monkeySpecies_by_population;
+
+The `DROP MATERIALIZED VIEW` statement is used to drop an existing
+materialized view.
+
+If the materialized view does not exist, the statement will return an
+error, unless `IF EXISTS` is used in which case the operation is a
+no-op.
+
+[[createTypeStmt]]
+==== CREATE TYPE
+
+_Syntax:_
+
+bc(syntax).. +
+<create-type-stmt> ::= CREATE TYPE ( IF NOT EXISTS )? <typename> +
+'(' <field-definition> ( ',' <field-definition> )* ')'
+
+<typename> ::= ( <keyspace-name> '.' )? <identifier>
+
+<field-definition> ::= <identifier> <type>
+
+_Sample:_
+
+bc(sample).. +
+CREATE TYPE address ( +
+street_name text, +
+street_number int, +
+city text, +
+state text, +
+zip int +
+)
+
+CREATE TYPE work_and_home_addresses ( +
+home_address address, +
+work_address address +
+) +
+p. +
+The `CREATE TYPE` statement creates a new user-defined type. Each type
+is a set of named, typed fields. Field types may be any valid type,
+including collections and other existing user-defined types.
+
+Attempting to create an already existing type will result in an error
+unless the `IF NOT EXISTS` option is used. If it is used, the statement
+will be a no-op if the type already exists.
+
+[[createTypeName]]
+===== `<typename>`
+
+Valid type names are identifiers. The names of existing CQL types and
+link:#appendixB[reserved type names] may not be used.
+
+If the type name is provided alone, the type is created with the current
+keyspace (see `USE`). If it is prefixed by an existing keyspace name,
+the type is created within the specified keyspace instead of the current
+keyspace.
+
+[[alterTypeStmt]]
+==== ALTER TYPE
+
+_Syntax:_
+
+bc(syntax).. +
+<alter-type-stmt> ::= ALTER TYPE <typename> <instruction>
+
+<instruction> ::= ADD <field-definition> +
+| RENAME <identifier> TO <identifier> ( AND <identifier> TO <identifier> )* +
+p. +
+_Sample:_
+
+bc(sample).. +
+ALTER TYPE address ADD country text
+
+ALTER TYPE address RENAME zip TO zipcode AND street_name TO street +
+p. +
+The `ALTER TYPE` statement is used to manipulate type definitions. It
+allows for adding new fields, renaming existing fields, or changing the
+type of existing fields.
+
+[[dropTypeStmt]]
+==== DROP TYPE
+
+_Syntax:_
+
+bc(syntax).. +
+<drop-type-stmt> ::= DROP TYPE ( IF EXISTS )? <typename> +
+p. +
+The `DROP TYPE` statement results in the immediate, irreversible removal
+of a type. Attempting to drop a type that is still in use by another
+type or a table will result in an error.
+
+If the type does not exist, an error will be returned unless `IF EXISTS`
+is used, in which case the operation is a no-op.
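+
+For example, the following statement removes the
+`work_and_home_addresses` type created earlier (and is a no-op if that
+type has already been dropped):
+
+bc(sample). DROP TYPE IF EXISTS work_and_home_addresses;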
+
+[[createTriggerStmt]]
+==== CREATE TRIGGER
+
+_Syntax:_
+
+bc(syntax).. +
+<create-trigger-stmt> ::= CREATE TRIGGER ( IF NOT EXISTS )? ( <triggername> )? +
+ON <tablename> +
+USING <string>
+
+_Sample:_
+
+bc(sample). +
+CREATE TRIGGER myTrigger ON myTable USING
+'org.apache.cassandra.triggers.InvertedIndex';
+
+The actual logic that makes up the trigger can be written in any Java
+(JVM) language and exists outside the database. Place the trigger code
+in a `lib/triggers` subdirectory of the Cassandra installation
+directory on every node that participates in the cluster; it is loaded
+at cluster startup. A trigger defined on a table fires before a
+requested DML statement occurs, which ensures the atomicity of the
+transaction.
+
+[[dropTriggerStmt]]
+==== DROP TRIGGER
+
+_Syntax:_
+
+bc(syntax).. +
+<drop-trigger-stmt> ::= DROP TRIGGER ( IF EXISTS )? ( <triggername> )? +
+ON <tablename> +
+p. +
+_Sample:_
+
+bc(sample). +
+DROP TRIGGER myTrigger ON myTable;
+
+The `DROP TRIGGER` statement removes the registration of a trigger
+created using `CREATE TRIGGER`.
+
+[[createFunctionStmt]]
+==== CREATE FUNCTION
+
+_Syntax:_
+
+bc(syntax).. +
+<create-function-stmt> ::= CREATE ( OR REPLACE )? +
+FUNCTION ( IF NOT EXISTS )? +
+( <keyspace> '.' )? <function-name> +
+'(' <arg-name> <arg-type> ( ',' <arg-name> <arg-type> )* ')' +
+( CALLED | RETURNS NULL ) ON NULL INPUT +
+RETURNS <type> +
+LANGUAGE <language> +
+AS <body>
+
+_Sample:_
+
+bc(sample). +
+CREATE OR REPLACE FUNCTION somefunction +
+( somearg int, anotherarg text, complexarg frozen<someUDT>, listarg list<bigint> ) +
+RETURNS NULL ON NULL INPUT +
+RETURNS text +
+LANGUAGE java +
+AS $$ +
+// some Java code +
+$$; +
+CREATE FUNCTION akeyspace.fname IF NOT EXISTS +
+( someArg int ) +
+CALLED ON NULL INPUT +
+RETURNS text +
+LANGUAGE java +
+AS $$ +
+// some Java code +
+$$;
+
+`CREATE FUNCTION` creates or replaces a user-defined function.
+
+[[functionSignature]]
+===== Function Signature
+
+Signatures are used to distinguish individual functions. The signature
+consists of:
+
+. The fully qualified function name - i.e _keyspace_ plus
+_function-name_
+. The concatenated list of all argument types
+
+Note that keyspace names, function names and argument types are subject
+to the default naming conventions and case-sensitivity rules.
+
+`CREATE FUNCTION` with the optional `OR REPLACE` keywords either creates
+a function or replaces an existing one with the same signature. A
+`CREATE FUNCTION` without `OR REPLACE` fails if a function with the same
+signature already exists.
+
+Behavior on invocation with `null` values must be defined for each
+function. There are two options:
+
+. `RETURNS NULL ON NULL INPUT` declares that the function will always
+return `null` if any of the input arguments is `null`.
+. `CALLED ON NULL INPUT` declares that the function will always be
+executed.
+
+If the optional `IF NOT EXISTS` keywords are used, the function will
+only be created if another function with the same signature does not
+exist.
+
+`OR REPLACE` and `IF NOT EXISTS` cannot be used together.
+
+Functions belong to a keyspace. If no keyspace is specified in
+`<function-name>`, the current keyspace is used (i.e. the keyspace
+specified using the link:#useStmt[`USE`] statement). It is not possible
+to create a user-defined function in one of the system keyspaces.
+
+See the section on link:#udfs[user-defined functions] for more
+information.
+
+[[dropFunctionStmt]]
+==== DROP FUNCTION
+
+_Syntax:_
+
+bc(syntax).. +
+<drop-function-stmt> ::= DROP FUNCTION ( IF EXISTS )? +
+( <keyspace> '.' )? <function-name> +
+( '(' <arg-type> ( ',' <arg-type> )* ')' )?
+
+_Sample:_
+
+bc(sample). +
+DROP FUNCTION myfunction; +
+DROP FUNCTION mykeyspace.afunction; +
+DROP FUNCTION afunction ( int ); +
+DROP FUNCTION afunction ( text );
+
+The `DROP FUNCTION` statement removes a function created using
+`CREATE FUNCTION`. +
+You must specify the argument types (the
+link:#functionSignature[signature]) of the function to drop if there
+are multiple functions with the same name but a different signature
+(overloaded functions).
+
+`DROP FUNCTION` with the optional `IF EXISTS` keywords drops a function
+if it exists.
+
+[[createAggregateStmt]]
+==== CREATE AGGREGATE
+
+_Syntax:_
+
+bc(syntax).. +
+<create-aggregate-stmt> ::= CREATE ( OR REPLACE )? +
+AGGREGATE ( IF NOT EXISTS )? +
+( <keyspace> '.' )? <aggregate-name> +
+'(' <arg-type> ( ',' <arg-type> )* ')' +
+SFUNC <state-functionname> +
+STYPE <state-type> +
+( FINALFUNC <final-functionname> )? +
+( INITCOND <init-cond> )? +
+p. +
+_Sample:_
+
+bc(sample). +
+CREATE AGGREGATE myaggregate ( val text ) +
+SFUNC myaggregate_state +
+STYPE text +
+FINALFUNC myaggregate_final +
+INITCOND 'foo';
+
+See the section on link:#udas[user-defined aggregates] for a complete
+example.
+
+`CREATE AGGREGATE` creates or replaces a user-defined aggregate.
+
+`CREATE AGGREGATE` with the optional `OR REPLACE` keywords either
+creates an aggregate or replaces an existing one with the same
+signature. A `CREATE AGGREGATE` without `OR REPLACE` fails if an
+aggregate with the same signature already exists.
+
+`CREATE AGGREGATE` with the optional `IF NOT EXISTS` keywords creates
+an aggregate if it does not already exist.
+
+`OR REPLACE` and `IF NOT EXISTS` cannot be used together.
+
+Aggregates belong to a keyspace. If no keyspace is specified in
+`<aggregate-name>`, the current keyspace is used (i.e. the keyspace
+specified using the link:#useStmt[`USE`] statement). It is not possible
+to create a user-defined aggregate in one of the system keyspaces.
+
+Signatures for user-defined aggregates follow the
+link:#functionSignature[same rules] as for user-defined functions.
+
+`STYPE` defines the type of the state value and must be specified.
+
+The optional `INITCOND` defines the initial state value for the
+aggregate. It defaults to `null`. A non-`null` `INITCOND` must be
+specified for state functions that are declared with
+`RETURNS NULL ON NULL INPUT`.
+
+`SFUNC` references an existing function to be used as the state
+modifying function. The type of first argument of the state function
+must match `STYPE`. The remaining argument types of the state function
+must match the argument types of the aggregate function. State is not
+updated for state functions declared with `RETURNS NULL ON NULL INPUT`
+and called with `null`.
+
+The optional `FINALFUNC` is called just before the aggregate result is
+returned. It must take only one argument with type `STYPE`. The return
+type of the `FINALFUNC` may be a different type. A final function
+declared with `RETURNS NULL ON NULL INPUT` means that the aggregate’s
+return value will be `null`, if the last state is `null`.
+
+If no `FINALFUNC` is defined, the overall return type of the aggregate
+function is `STYPE`. If a `FINALFUNC` is defined, it is the return type
+of that function.
+
+See the section on link:#udas[user-defined aggregates] for more
+information.
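+
+As a sketch of how these options fit together, an `average` aggregate
+could be declared as follows (the `averageState` and `averageFinal`
+state and final functions are assumed to have been created beforehand;
+see the section on user-defined aggregates for their full definitions):
+
+bc(sample). +
+CREATE OR REPLACE AGGREGATE average ( int ) +
+SFUNC averageState +
+STYPE tuple<int,bigint> +
+FINALFUNC averageFinal +
+INITCOND (0, 0);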
+
+[[dropAggregateStmt]]
+==== DROP AGGREGATE
+
+_Syntax:_
+
+bc(syntax).. +
+<drop-aggregate-stmt> ::= DROP AGGREGATE ( IF EXISTS )? +
+( <keyspace> '.' )? <aggregate-name> +
+( '(' <arg-type> ( ',' <arg-type> )* ')' )? +
+p.
+
+_Sample:_
+
+bc(sample). +
+DROP AGGREGATE myAggregate; +
+DROP AGGREGATE myKeyspace.anAggregate; +
+DROP AGGREGATE someAggregate ( int ); +
+DROP AGGREGATE someAggregate ( text );
+
+The `DROP AGGREGATE` statement removes an aggregate created using
+`CREATE AGGREGATE`. You must specify the argument types of the aggregate
+to drop if there are multiple aggregates with the same name but a
+different signature (overloaded aggregates).
+
+`DROP AGGREGATE` with the optional `IF EXISTS` keywords drops an
+aggregate if it exists, and does nothing if a function with the
+signature does not exist.
+
+Signatures for user-defined aggregates follow the
+link:#functionSignature[same rules] as for user-defined functions.
+
+[[dataManipulation]]
+=== Data Manipulation
+
+[[insertStmt]]
+==== INSERT
+
+_Syntax:_
+
+bc(syntax).. +
+<insert-stmt> ::= INSERT INTO <tablename> +
+( ( <name-list> VALUES <value-list> ) +
+| ( JSON <string> )) +
+( IF NOT EXISTS )? +
+( USING <option> ( AND <option> )* )?
+
+<name-list> ::= '(' <identifier> ( ',' <identifier> )* ')'
+
+<value-list> ::= '(' <term-or-literal> ( ',' <term-or-literal> )* ')'
+
+<option> ::= TIMESTAMP <integer> +
+| TTL <integer> +
+p. +
+_Sample:_
+
+bc(sample).. +
+INSERT INTO NerdMovies (movie, director, main_actor, year) +
+VALUES ('Serenity', 'Joss Whedon', 'Nathan Fillion', 2005) +
+USING TTL 86400;
+
+INSERT INTO NerdMovies JSON '{"movie": "Serenity", "director":
+"Joss Whedon", "year": 2005}' +
+p. +
+The `INSERT` statement writes one or more columns for a given row in a
+table. Note that since a row is identified by its `PRIMARY KEY`, at
+least the columns composing it must be specified. The list of columns to
+insert to must be supplied when using the `VALUES` syntax. When using
+the `JSON` syntax, they are optional. See the section on
+link:#insertJson[`INSERT JSON`] for more details.
+
+Note that unlike in SQL, `INSERT` does not check the prior existence of
+the row by default: the row is created if none existed before, and
+updated otherwise. Furthermore, there is no way to know whether a
+creation or an update occurred.
+
+It is however possible to use the `IF NOT EXISTS` condition to only
+insert if the row does not exist prior to the insertion. But please note
+that using `IF NOT EXISTS` will incur a non-negligible performance cost
+(internally, Paxos will be used) so this should be used sparingly.
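+
+For instance, reusing the table from the sample above, the following
+statement only creates the row if no row with that `PRIMARY KEY`
+already exists:
+
+bc(sample). +
+INSERT INTO NerdMovies (movie, director, main_actor, year) +
+VALUES ('Serenity', 'Joss Whedon', 'Nathan Fillion', 2005) +
+IF NOT EXISTS;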
+
+All updates for an `INSERT` are applied atomically and in isolation.
+
+Please refer to the link:#updateOptions[`UPDATE`] section for
+information on the `<option>` available and to the
+link:#collections[collections] section for use of
+`<collection-literal>`. Also note that `INSERT` does not support
+counters, while `UPDATE` does.
+
+[[updateStmt]]
+==== UPDATE
+
+_Syntax:_
+
+bc(syntax).. +
+<update-stmt> ::= UPDATE <tablename> +
+( USING <option> ( AND <option> )* )? +
+SET <assignment> ( ',' <assignment> )* +
+WHERE <where-clause> +
+( IF <condition> ( AND <condition> )* )?
+
+<assignment> ::= <identifier> '=' <term> +
+| <identifier> '=' <identifier> ('+' | '-') ( <int-term> | <set-literal> | <list-literal> ) +
+| <identifier> '=' <identifier> '+' <map-literal> +
+| <identifier> '[' <term> ']' '=' <term> +
+| <identifier> '.' <field> '=' <term>
+
+<condition> ::= <identifier> <op> <term> +
+| <identifier> IN <in-values> +
+| <identifier> '[' <term> ']' <op> <term> +
+| <identifier> '[' <term> ']' IN <term> +
+| <identifier> '.' <field> <op> <term> +
+| <identifier> '.' <field> IN <term>
+
+<op> ::= '<' | '<=' | '=' | '!=' | '>=' | '>' +
+<in-values> ::= ( <variable> | '(' ( <term> ( ',' <term> )* )? ')' )
+
+<where-clause> ::= <relation> ( AND <relation> )*
+
+<relation> ::= <identifier> '=' <term> +
+| '(' <identifier> ( ',' <identifier> )* ')' '=' <term-tuple> +
+| <identifier> IN '(' ( <term> ( ',' <term> )* )? ')' +
+| <identifier> IN <variable> +
+| '(' <identifier> ( ',' <identifier> )* ')' IN '(' ( <term-tuple> ( ',' <term-tuple> )* )? ')' +
+| '(' <identifier> ( ',' <identifier> )* ')' IN <variable>
+
+<option> ::= TIMESTAMP <integer> +
+| TTL <integer> +
+p. +
+_Sample:_
+
+bc(sample).. +
+UPDATE NerdMovies USING TTL 400 +
+SET director = 'Joss Whedon', +
+main_actor = 'Nathan Fillion', +
+year = 2005 +
+WHERE movie = 'Serenity';
+
+UPDATE UserActions SET total = total + 2 WHERE user =
+B70DE1D0-9908-4AE3-BE34-5573E5B09F14 AND action = 'click'; +
+p. +
+The `UPDATE` statement writes one or more columns for a given row in a
+table. The `<where-clause>` is used to select the row to update and must
+include all columns composing the `PRIMARY KEY`. Other columns values
+are specified through `<assignment>` after the `SET` keyword.
+
+Note that unlike in SQL, `UPDATE` does not check the prior existence of
+the row by default (except through the use of `<condition>`, see below):
+the row is created if none existed before, and updated otherwise.
+Furthermore, there are no means to know whether a creation or update
+occurred.
+
+It is however possible to use the conditions on some columns through
+`IF`, in which case the row will not be updated unless the conditions
+are met. But, please note that using `IF` conditions will incur a
+non-negligible performance cost (internally, Paxos will be used) so this
+should be used sparingly.
+
+In an `UPDATE` statement, all updates within the same partition key are
+applied atomically and in isolation.
+
+The `c = c + 3` form of `<assignment>` is used to increment/decrement
+counters. The identifier after the '=' sign *must* be the same as the
+one before the '=' sign (only increment/decrement is supported on
+counters, not the assignment of a specific value).
+
+The `id = id + <collection-literal>` and `id[value1] = value2` forms of
+`<assignment>` are for collections. Please refer to the
+link:#collections[relevant section] for more details.
+
+The `id.field = <term>` form of `<assignment>` is for setting the value
+of a single field of a non-frozen user-defined type.
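+
+For instance, assuming a `users` table with a map column `favs` and a
+non-frozen user-defined type column `address` (both illustrative),
+these forms look like:
+
+bc(sample). +
+UPDATE users SET favs['author'] = 'Ed Poe' WHERE userid = 'jsmith'; +
+UPDATE users SET favs = favs + { 'movie': 'Cassablanca' } WHERE userid = 'jsmith'; +
+UPDATE users SET address.zip = 94040 WHERE userid = 'jsmith';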
+
+[[updateOptions]]
+===== `<options>`
+
+The `UPDATE` and `INSERT` statements support the following options:
+
+* `TIMESTAMP`: sets the timestamp for the operation. If not specified,
+the coordinator will use the current time (in microseconds) at the start
+of statement execution as the timestamp. This is usually a suitable
+default.
+* `TTL`: specifies an optional Time To Live (in seconds) for the
+inserted values. If set, the inserted values are automatically removed
+from the database after the specified time. Note that the TTL concerns
+the inserted values, not the columns themselves. This means that any
+subsequent update of the column will also reset the TTL (to whatever TTL
+is specified in that update). By default, values never expire. A TTL of
+0 is equivalent to no TTL. If the table has a default_time_to_live, a
+TTL of 0 will remove the TTL for the inserted or updated values.
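+
+For instance, the first (illustrative) statement below writes values
+that expire after one day, while the second removes any TTL from the
+values it writes:
+
+bc(sample). +
+INSERT INTO NerdMovies (movie, director) VALUES ('Serenity', 'Joss Whedon') USING TTL 86400; +
+UPDATE NerdMovies USING TTL 0 SET director = 'Joss Whedon' WHERE movie = 'Serenity';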
+
+[[deleteStmt]]
+==== DELETE
+
+_Syntax:_
+
+bc(syntax).. +
+<delete-stmt> ::= DELETE ( <selection> ( ',' <selection> )* )? +
+FROM <tablename> +
+( USING TIMESTAMP <integer> )? +
+WHERE <where-clause> +
+( IF ( EXISTS | ( <condition> ( AND <condition> )* ) ) )?
+
+<selection> ::= <identifier> +
+| <identifier> '[' <term> ']' +
+| <identifier> '.' <field>
+
+<where-clause> ::= <relation> ( AND <relation> )*
+
+<relation> ::= <identifier> <op> <term> +
+| '(' <identifier> ( ',' <identifier> )* ')' <op> <term-tuple> +
+| <identifier> IN '(' ( <term> ( ',' <term> )* )? ')' +
+| <identifier> IN <variable> +
+| '(' <identifier> ( ',' <identifier> )* ')' IN '(' ( <term-tuple> ( ',' <term-tuple> )* )? ')' +
+| '(' <identifier> ( ',' <identifier> )* ')' IN <variable>
+
+<op> ::= '=' | '<' | '>' | '<=' | '>=' +
+<in-values> ::= ( <variable> | '(' ( <term> ( ',' <term> )* )? ')' )
+
+<condition> ::= <identifier> ( <op> | '!=' ) <term> +
+| <identifier> IN <in-values> +
+| <identifier> '[' <term> ']' ( <op> | '!=' ) <term> +
+| <identifier> '[' <term> ']' IN <term> +
+| <identifier> '.' <field> ( <op> | '!=' ) <term> +
+| <identifier> '.' <field> IN <term>
+
+_Sample:_
+
+bc(sample).. +
+DELETE FROM NerdMovies USING TIMESTAMP 1240003134 WHERE movie =
+'Serenity';
+
+DELETE phone FROM Users WHERE userid IN
+(C73DE1D3-AF08-40F3-B124-3FF3E5109F22,
+B70DE1D0-9908-4AE3-BE34-5573E5B09F14); +
+p. +
+The `DELETE` statement deletes columns and rows. If column names are
+provided directly after the `DELETE` keyword, only those columns are
+deleted from the row indicated by the `<where-clause>`. The `id[value]`
+syntax in `<selection>` is for non-frozen collections (please refer to
+the link:#collections[collection section] for more details). The
+`id.field` syntax is for the deletion of non-frozen user-defined types.
+Otherwise, whole rows are removed. The `<where-clause>` specifies which
+rows are to be deleted. Multiple rows may be deleted with one statement
+by using an `IN` clause. A range of rows may be deleted using an
+inequality operator (such as `>=`).
+
+`DELETE` supports the `TIMESTAMP` option with the same semantics as the
+link:#updateStmt[`UPDATE`] statement.
+
+In a `DELETE` statement, all deletions within the same partition key are
+applied atomically and in isolation.
+
+A `DELETE` operation can be conditional through the use of an `IF`
+clause, similar to `UPDATE` and `INSERT` statements. However, as with
+`INSERT` and `UPDATE` statements, this will incur a non-negligible
+performance cost (internally, Paxos will be used) and so should be used
+sparingly.
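+
+For instance, the following (illustrative) statement deletes the row
+only if it actually exists:
+
+bc(sample). DELETE FROM NerdMovies WHERE movie = 'Serenity' IF EXISTS;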
... 28227 lines suppressed ...

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@cassandra.apache.org
For additional commands, e-mail: commits-help@cassandra.apache.org