Posted to commits@cassandra.apache.org by mc...@apache.org on 2021/04/25 10:47:11 UTC

[cassandra-website] branch trunk updated: Update "Beta" terminology to "RC" for 4.0-rc1 Also add src/doc/4.0-beta4

This is an automated email from the ASF dual-hosted git repository.

mck pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/cassandra-website.git


The following commit(s) were added to refs/heads/trunk by this push:
     new 637ee64  Update "Beta" terminology to "RC" for 4.0-rc1  Also add src/doc/4.0-beta4
637ee64 is described below

commit 637ee64f1a3dfcc93111bdf403838652338cb808
Author: mck <mc...@apache.org>
AuthorDate: Sun Apr 25 12:46:25 2021 +0200

    Update "Beta" terminology to "RC" for 4.0-rc1
     Also add src/doc/4.0-beta4
---
 src/doc/4.0-beta4/.buildinfo                       |     4 +
 .../stress-lwt-example.yaml                        |    70 +
 .../stress-example.yaml                            |    44 +
 src/doc/4.0-beta4/_images/Figure_1_backups.jpg     |   Bin 0 -> 38551 bytes
 src/doc/4.0-beta4/_images/Figure_1_data_model.jpg  |   Bin 0 -> 17469 bytes
 src/doc/4.0-beta4/_images/Figure_1_guarantees.jpg  |   Bin 0 -> 17993 bytes
 src/doc/4.0-beta4/_images/Figure_1_read_repair.jpg |   Bin 0 -> 36919 bytes
 src/doc/4.0-beta4/_images/Figure_2_data_model.jpg  |   Bin 0 -> 20925 bytes
 src/doc/4.0-beta4/_images/Figure_2_read_repair.jpg |   Bin 0 -> 45595 bytes
 src/doc/4.0-beta4/_images/Figure_3_read_repair.jpg |   Bin 0 -> 43021 bytes
 src/doc/4.0-beta4/_images/Figure_4_read_repair.jpg |   Bin 0 -> 43021 bytes
 src/doc/4.0-beta4/_images/Figure_5_read_repair.jpg |   Bin 0 -> 42560 bytes
 src/doc/4.0-beta4/_images/Figure_6_read_repair.jpg |   Bin 0 -> 57489 bytes
 .../_images/data_modeling_chebotko_logical.png     |   Bin 0 -> 87366 bytes
 .../_images/data_modeling_chebotko_physical.png    |   Bin 0 -> 4553809 bytes
 .../_images/data_modeling_hotel_bucketing.png      |   Bin 0 -> 22009 bytes
 .../4.0-beta4/_images/data_modeling_hotel_erd.png  |   Bin 0 -> 233309 bytes
 .../_images/data_modeling_hotel_logical.png        |   Bin 0 -> 116998 bytes
 .../_images/data_modeling_hotel_physical.png       |   Bin 0 -> 119795 bytes
 .../_images/data_modeling_hotel_queries.png        |   Bin 0 -> 103940 bytes
 .../_images/data_modeling_hotel_relational.png     |   Bin 0 -> 102656 bytes
 .../_images/data_modeling_reservation_logical.png  |   Bin 0 -> 121750 bytes
 .../_images/data_modeling_reservation_physical.png |   Bin 0 -> 142416 bytes
 src/doc/4.0-beta4/_images/docs_commit.png          |   Bin 0 -> 104667 bytes
 src/doc/4.0-beta4/_images/docs_create_branch.png   |   Bin 0 -> 181860 bytes
 src/doc/4.0-beta4/_images/docs_create_file.png     |   Bin 0 -> 209110 bytes
 src/doc/4.0-beta4/_images/docs_editor.png          |   Bin 0 -> 106175 bytes
 src/doc/4.0-beta4/_images/docs_fork.png            |   Bin 0 -> 76159 bytes
 src/doc/4.0-beta4/_images/docs_pr.png              |   Bin 0 -> 156081 bytes
 src/doc/4.0-beta4/_images/docs_preview.png         |   Bin 0 -> 123826 bytes
 src/doc/4.0-beta4/_images/eclipse_debug0.png       |   Bin 0 -> 48174 bytes
 src/doc/4.0-beta4/_images/eclipse_debug1.png       |   Bin 0 -> 34446 bytes
 src/doc/4.0-beta4/_images/eclipse_debug2.png       |   Bin 0 -> 57032 bytes
 src/doc/4.0-beta4/_images/eclipse_debug3.png       |   Bin 0 -> 58677 bytes
 src/doc/4.0-beta4/_images/eclipse_debug4.png       |   Bin 0 -> 24793 bytes
 src/doc/4.0-beta4/_images/eclipse_debug5.png       |   Bin 0 -> 66632 bytes
 src/doc/4.0-beta4/_images/eclipse_debug6.png       |   Bin 0 -> 87568 bytes
 src/doc/4.0-beta4/_images/example-stress-graph.png |   Bin 0 -> 359103 bytes
 src/doc/4.0-beta4/_images/hints.svg                |     9 +
 src/doc/4.0-beta4/_images/ring.svg                 |    11 +
 src/doc/4.0-beta4/_images/vnodes.svg               |    11 +
 .../4.0-beta4/_sources/architecture/dynamo.rst.txt |   537 +
 .../_sources/architecture/guarantees.rst.txt       |    76 +
 .../4.0-beta4/_sources/architecture/index.rst.txt  |    29 +
 .../_sources/architecture/overview.rst.txt         |   114 +
 .../_sources/architecture/storage_engine.rst.txt   |   208 +
 src/doc/4.0-beta4/_sources/bugs.rst.txt            |    30 +
 .../configuration/cass_cl_archive_file.rst.txt     |    46 +
 .../configuration/cass_env_sh_file.rst.txt         |   132 +
 .../configuration/cass_jvm_options_file.rst.txt    |    10 +
 .../configuration/cass_logback_xml_file.rst.txt    |   157 +
 .../configuration/cass_rackdc_file.rst.txt         |    67 +
 .../_sources/configuration/cass_topo_file.rst.txt  |    48 +
 .../_sources/configuration/cass_yaml_file.rst.txt  |  2074 ++++
 .../configuration/cassandra_config_file.rst.txt    |  2103 ++++
 .../4.0-beta4/_sources/configuration/index.rst.txt |    31 +
 src/doc/4.0-beta4/_sources/contactus.rst.txt       |    50 +
 src/doc/4.0-beta4/_sources/cql/appendices.rst.txt  |   330 +
 src/doc/4.0-beta4/_sources/cql/changes.rst.txt     |   211 +
 src/doc/4.0-beta4/_sources/cql/ddl.rst.txt         |  1151 +++
 src/doc/4.0-beta4/_sources/cql/definitions.rst.txt |   234 +
 src/doc/4.0-beta4/_sources/cql/dml.rst.txt         |   522 +
 src/doc/4.0-beta4/_sources/cql/functions.rst.txt   |   581 ++
 src/doc/4.0-beta4/_sources/cql/index.rst.txt       |    47 +
 src/doc/4.0-beta4/_sources/cql/indexes.rst.txt     |    83 +
 src/doc/4.0-beta4/_sources/cql/json.rst.txt        |   115 +
 src/doc/4.0-beta4/_sources/cql/mvs.rst.txt         |   179 +
 src/doc/4.0-beta4/_sources/cql/operators.rst.txt   |    74 +
 src/doc/4.0-beta4/_sources/cql/security.rst.txt    |   538 +
 src/doc/4.0-beta4/_sources/cql/triggers.rst.txt    |    63 +
 src/doc/4.0-beta4/_sources/cql/types.rst.txt       |   562 +
 .../data_modeling/data_modeling_conceptual.rst.txt |    63 +
 .../data_modeling/data_modeling_logical.rst.txt    |   219 +
 .../data_modeling/data_modeling_physical.rst.txt   |   117 +
 .../data_modeling/data_modeling_queries.rst.txt    |    85 +
 .../data_modeling/data_modeling_rdbms.rst.txt      |   171 +
 .../data_modeling/data_modeling_refining.rst.txt   |   218 +
 .../data_modeling/data_modeling_schema.rst.txt     |   144 +
 .../data_modeling/data_modeling_tools.rst.txt      |    64 +
 .../4.0-beta4/_sources/data_modeling/index.rst.txt |    36 +
 .../4.0-beta4/_sources/data_modeling/intro.rst.txt |   146 +
 src/doc/4.0-beta4/_sources/development/ci.rst.txt  |    81 +
 .../_sources/development/code_style.rst.txt        |    94 +
 .../_sources/development/dependencies.rst.txt      |    53 +
 .../_sources/development/documentation.rst.txt     |   104 +
 .../_sources/development/gettingstarted.rst.txt    |    60 +
 .../_sources/development/how_to_commit.rst.txt     |    98 +
 .../_sources/development/how_to_review.rst.txt     |    73 +
 src/doc/4.0-beta4/_sources/development/ide.rst.txt |   185 +
 .../4.0-beta4/_sources/development/index.rst.txt   |    33 +
 .../4.0-beta4/_sources/development/patches.rst.txt |   141 +
 .../_sources/development/release_process.rst.txt   |   251 +
 .../4.0-beta4/_sources/development/testing.rst.txt |    98 +
 src/doc/4.0-beta4/_sources/faq/index.rst.txt       |   299 +
 .../_sources/getting_started/configuring.rst.txt   |    80 +
 .../_sources/getting_started/drivers.rst.txt       |   123 +
 .../_sources/getting_started/index.rst.txt         |    34 +
 .../_sources/getting_started/installing.rst.txt    |   324 +
 .../_sources/getting_started/production.rst.txt    |   156 +
 .../_sources/getting_started/querying.rst.txt      |    52 +
 src/doc/4.0-beta4/_sources/glossary.rst.txt        |    35 +
 src/doc/4.0-beta4/_sources/index.rst.txt           |    43 +
 .../4.0-beta4/_sources/new/auditlogging.rst.txt    |   461 +
 src/doc/4.0-beta4/_sources/new/fqllogging.rst.txt  |   595 ++
 src/doc/4.0-beta4/_sources/new/index.rst.txt       |    32 +
 src/doc/4.0-beta4/_sources/new/java11.rst.txt      |   274 +
 src/doc/4.0-beta4/_sources/new/messaging.rst.txt   |   257 +
 src/doc/4.0-beta4/_sources/new/streaming.rst.txt   |   162 +
 .../_sources/new/transientreplication.rst.txt      |   155 +
 .../4.0-beta4/_sources/new/virtualtables.rst.txt   |   342 +
 .../_sources/operating/audit_logging.rst.txt       |   236 +
 .../4.0-beta4/_sources/operating/backups.rst.txt   |   660 ++
 .../_sources/operating/bloom_filters.rst.txt       |    65 +
 .../_sources/operating/bulk_loading.rst.txt        |   660 ++
 src/doc/4.0-beta4/_sources/operating/cdc.rst.txt   |    96 +
 .../_sources/operating/compaction/index.rst.txt    |   301 +
 .../_sources/operating/compaction/lcs.rst.txt      |    90 +
 .../_sources/operating/compaction/stcs.rst.txt     |    58 +
 .../_sources/operating/compaction/twcs.rst.txt     |    76 +
 .../_sources/operating/compression.rst.txt         |   164 +
 .../4.0-beta4/_sources/operating/hardware.rst.txt  |    85 +
 src/doc/4.0-beta4/_sources/operating/hints.rst.txt |   279 +
 src/doc/4.0-beta4/_sources/operating/index.rst.txt |    39 +
 .../4.0-beta4/_sources/operating/metrics.rst.txt   |   773 ++
 .../_sources/operating/read_repair.rst.txt         |   169 +
 .../4.0-beta4/_sources/operating/repair.rst.txt    |   208 +
 .../4.0-beta4/_sources/operating/security.rst.txt  |   441 +
 .../4.0-beta4/_sources/operating/snitch.rst.txt    |    82 +
 .../_sources/operating/topo_changes.rst.txt        |   129 +
 src/doc/4.0-beta4/_sources/plugins/index.rst.txt   |    35 +
 .../_sources/tools/cassandra_stress.rst.txt        |   273 +
 src/doc/4.0-beta4/_sources/tools/cqlsh.rst.txt     |   458 +
 src/doc/4.0-beta4/_sources/tools/index.rst.txt     |    28 +
 .../_sources/tools/nodetool/assassinate.rst.txt    |    11 +
 .../_sources/tools/nodetool/bootstrap.rst.txt      |    11 +
 .../_sources/tools/nodetool/cleanup.rst.txt        |    11 +
 .../_sources/tools/nodetool/clearsnapshot.rst.txt  |    11 +
 .../_sources/tools/nodetool/clientstats.rst.txt    |    11 +
 .../_sources/tools/nodetool/compact.rst.txt        |    11 +
 .../tools/nodetool/compactionhistory.rst.txt       |    11 +
 .../tools/nodetool/compactionstats.rst.txt         |    11 +
 .../_sources/tools/nodetool/decommission.rst.txt   |    11 +
 .../tools/nodetool/describecluster.rst.txt         |    11 +
 .../_sources/tools/nodetool/describering.rst.txt   |    11 +
 .../tools/nodetool/disableauditlog.rst.txt         |    11 +
 .../tools/nodetool/disableautocompaction.rst.txt   |    11 +
 .../_sources/tools/nodetool/disablebackup.rst.txt  |    11 +
 .../_sources/tools/nodetool/disablebinary.rst.txt  |    11 +
 .../tools/nodetool/disablefullquerylog.rst.txt     |    11 +
 .../_sources/tools/nodetool/disablegossip.rst.txt  |    11 +
 .../_sources/tools/nodetool/disablehandoff.rst.txt |    11 +
 .../tools/nodetool/disablehintsfordc.rst.txt       |    11 +
 .../nodetool/disableoldprotocolversions.rst.txt    |    11 +
 .../_sources/tools/nodetool/drain.rst.txt          |    11 +
 .../_sources/tools/nodetool/enableauditlog.rst.txt |    11 +
 .../tools/nodetool/enableautocompaction.rst.txt    |    11 +
 .../_sources/tools/nodetool/enablebackup.rst.txt   |    11 +
 .../_sources/tools/nodetool/enablebinary.rst.txt   |    11 +
 .../tools/nodetool/enablefullquerylog.rst.txt      |    11 +
 .../_sources/tools/nodetool/enablegossip.rst.txt   |    11 +
 .../_sources/tools/nodetool/enablehandoff.rst.txt  |    11 +
 .../tools/nodetool/enablehintsfordc.rst.txt        |    11 +
 .../nodetool/enableoldprotocolversions.rst.txt     |    11 +
 .../tools/nodetool/failuredetector.rst.txt         |    11 +
 .../_sources/tools/nodetool/flush.rst.txt          |    11 +
 .../_sources/tools/nodetool/garbagecollect.rst.txt |    11 +
 .../_sources/tools/nodetool/gcstats.rst.txt        |    11 +
 .../nodetool/getbatchlogreplaythrottle.rst.txt     |    11 +
 .../tools/nodetool/getcompactionthreshold.rst.txt  |    11 +
 .../tools/nodetool/getcompactionthroughput.rst.txt |    11 +
 .../_sources/tools/nodetool/getconcurrency.rst.txt |    11 +
 .../tools/nodetool/getconcurrentcompactors.rst.txt |    11 +
 .../nodetool/getconcurrentviewbuilders.rst.txt     |    11 +
 .../_sources/tools/nodetool/getendpoints.rst.txt   |    11 +
 .../nodetool/getinterdcstreamthroughput.rst.txt    |    11 +
 .../tools/nodetool/getlogginglevels.rst.txt        |    11 +
 .../tools/nodetool/getmaxhintwindow.rst.txt        |    11 +
 .../_sources/tools/nodetool/getseeds.rst.txt       |    11 +
 .../_sources/tools/nodetool/getsstables.rst.txt    |    11 +
 .../tools/nodetool/getstreamthroughput.rst.txt     |    11 +
 .../_sources/tools/nodetool/gettimeout.rst.txt     |    11 +
 .../tools/nodetool/gettraceprobability.rst.txt     |    11 +
 .../_sources/tools/nodetool/gossipinfo.rst.txt     |    11 +
 .../4.0-beta4/_sources/tools/nodetool/help.rst.txt |    11 +
 .../_sources/tools/nodetool/import.rst.txt         |    11 +
 .../4.0-beta4/_sources/tools/nodetool/info.rst.txt |    11 +
 .../tools/nodetool/invalidatecountercache.rst.txt  |    11 +
 .../tools/nodetool/invalidatekeycache.rst.txt      |    11 +
 .../tools/nodetool/invalidaterowcache.rst.txt      |    11 +
 .../4.0-beta4/_sources/tools/nodetool/join.rst.txt |    11 +
 .../_sources/tools/nodetool/listsnapshots.rst.txt  |    11 +
 .../4.0-beta4/_sources/tools/nodetool/move.rst.txt |    11 +
 .../_sources/tools/nodetool/netstats.rst.txt       |    11 +
 .../_sources/tools/nodetool/nodetool.rst.txt       |   252 +
 .../_sources/tools/nodetool/pausehandoff.rst.txt   |    11 +
 .../_sources/tools/nodetool/profileload.rst.txt    |    11 +
 .../tools/nodetool/proxyhistograms.rst.txt         |    11 +
 .../_sources/tools/nodetool/rangekeysample.rst.txt |    11 +
 .../_sources/tools/nodetool/rebuild.rst.txt        |    11 +
 .../_sources/tools/nodetool/rebuild_index.rst.txt  |    11 +
 .../_sources/tools/nodetool/refresh.rst.txt        |    11 +
 .../tools/nodetool/refreshsizeestimates.rst.txt    |    11 +
 .../tools/nodetool/reloadlocalschema.rst.txt       |    11 +
 .../_sources/tools/nodetool/reloadseeds.rst.txt    |    11 +
 .../_sources/tools/nodetool/reloadssl.rst.txt      |    11 +
 .../_sources/tools/nodetool/reloadtriggers.rst.txt |    11 +
 .../tools/nodetool/relocatesstables.rst.txt        |    11 +
 .../_sources/tools/nodetool/removenode.rst.txt     |    11 +
 .../_sources/tools/nodetool/repair.rst.txt         |    11 +
 .../_sources/tools/nodetool/repair_admin.rst.txt   |    11 +
 .../_sources/tools/nodetool/replaybatchlog.rst.txt |    11 +
 .../tools/nodetool/resetfullquerylog.rst.txt       |    11 +
 .../tools/nodetool/resetlocalschema.rst.txt        |    11 +
 .../_sources/tools/nodetool/resumehandoff.rst.txt  |    11 +
 .../4.0-beta4/_sources/tools/nodetool/ring.rst.txt |    11 +
 .../_sources/tools/nodetool/scrub.rst.txt          |    11 +
 .../nodetool/setbatchlogreplaythrottle.rst.txt     |    11 +
 .../tools/nodetool/setcachecapacity.rst.txt        |    11 +
 .../tools/nodetool/setcachekeystosave.rst.txt      |    11 +
 .../tools/nodetool/setcompactionthreshold.rst.txt  |    11 +
 .../tools/nodetool/setcompactionthroughput.rst.txt |    11 +
 .../_sources/tools/nodetool/setconcurrency.rst.txt |    11 +
 .../tools/nodetool/setconcurrentcompactors.rst.txt |    11 +
 .../nodetool/setconcurrentviewbuilders.rst.txt     |    11 +
 .../nodetool/sethintedhandoffthrottlekb.rst.txt    |    11 +
 .../nodetool/setinterdcstreamthroughput.rst.txt    |    11 +
 .../tools/nodetool/setlogginglevel.rst.txt         |    11 +
 .../tools/nodetool/setmaxhintwindow.rst.txt        |    11 +
 .../tools/nodetool/setstreamthroughput.rst.txt     |    11 +
 .../_sources/tools/nodetool/settimeout.rst.txt     |    11 +
 .../tools/nodetool/settraceprobability.rst.txt     |    11 +
 .../4.0-beta4/_sources/tools/nodetool/sjk.rst.txt  |    11 +
 .../_sources/tools/nodetool/snapshot.rst.txt       |    11 +
 .../_sources/tools/nodetool/status.rst.txt         |    11 +
 .../tools/nodetool/statusautocompaction.rst.txt    |    11 +
 .../_sources/tools/nodetool/statusbackup.rst.txt   |    11 +
 .../_sources/tools/nodetool/statusbinary.rst.txt   |    11 +
 .../_sources/tools/nodetool/statusgossip.rst.txt   |    11 +
 .../_sources/tools/nodetool/statushandoff.rst.txt  |    11 +
 .../4.0-beta4/_sources/tools/nodetool/stop.rst.txt |    11 +
 .../_sources/tools/nodetool/stopdaemon.rst.txt     |    11 +
 .../tools/nodetool/tablehistograms.rst.txt         |    11 +
 .../_sources/tools/nodetool/tablestats.rst.txt     |    11 +
 .../_sources/tools/nodetool/toppartitions.rst.txt  |    11 +
 .../_sources/tools/nodetool/tpstats.rst.txt        |    11 +
 .../_sources/tools/nodetool/truncatehints.rst.txt  |    11 +
 .../tools/nodetool/upgradesstables.rst.txt         |    11 +
 .../_sources/tools/nodetool/verify.rst.txt         |    11 +
 .../_sources/tools/nodetool/version.rst.txt        |    11 +
 .../tools/nodetool/viewbuildstatus.rst.txt         |    11 +
 .../4.0-beta4/_sources/tools/sstable/index.rst.txt |    39 +
 .../_sources/tools/sstable/sstabledump.rst.txt     |   294 +
 .../tools/sstable/sstableexpiredblockers.rst.txt   |    48 +
 .../tools/sstable/sstablelevelreset.rst.txt        |    82 +
 .../_sources/tools/sstable/sstableloader.rst.txt   |   273 +
 .../_sources/tools/sstable/sstablemetadata.rst.txt |   300 +
 .../tools/sstable/sstableofflinerelevel.rst.txt    |    95 +
 .../tools/sstable/sstablerepairedset.rst.txt       |    79 +
 .../_sources/tools/sstable/sstablescrub.rst.txt    |    93 +
 .../_sources/tools/sstable/sstablesplit.rst.txt    |    93 +
 .../_sources/tools/sstable/sstableupgrade.rst.txt  |   137 +
 .../_sources/tools/sstable/sstableutil.rst.txt     |    91 +
 .../_sources/tools/sstable/sstableverify.rst.txt   |    91 +
 .../_sources/troubleshooting/finding_nodes.rst.txt |   149 +
 .../_sources/troubleshooting/index.rst.txt         |    39 +
 .../_sources/troubleshooting/reading_logs.rst.txt  |   267 +
 .../_sources/troubleshooting/use_nodetool.rst.txt  |   245 +
 .../_sources/troubleshooting/use_tools.rst.txt     |   542 +
 src/doc/4.0-beta4/_static/ajax-loader.gif          |   Bin 0 -> 673 bytes
 src/doc/4.0-beta4/_static/basic.css                |   676 ++
 src/doc/4.0-beta4/_static/comment-bright.png       |   Bin 0 -> 756 bytes
 src/doc/4.0-beta4/_static/comment-close.png        |   Bin 0 -> 829 bytes
 src/doc/4.0-beta4/_static/comment.png              |   Bin 0 -> 641 bytes
 src/doc/4.0-beta4/_static/doctools.js              |   315 +
 src/doc/4.0-beta4/_static/documentation_options.js |    10 +
 src/doc/4.0-beta4/_static/down-pressed.png         |   Bin 0 -> 222 bytes
 src/doc/4.0-beta4/_static/down.png                 |   Bin 0 -> 202 bytes
 src/doc/4.0-beta4/_static/extra.css                |    76 +
 src/doc/4.0-beta4/_static/file.png                 |   Bin 0 -> 286 bytes
 src/doc/4.0-beta4/_static/jquery-3.2.1.js          | 10253 +++++++++++++++++++
 src/doc/4.0-beta4/_static/jquery.js                |     4 +
 src/doc/4.0-beta4/_static/language_data.js         |   297 +
 src/doc/4.0-beta4/_static/minus.png                |   Bin 0 -> 90 bytes
 src/doc/4.0-beta4/_static/plus.png                 |   Bin 0 -> 90 bytes
 src/doc/4.0-beta4/_static/pygments.css             |    69 +
 src/doc/4.0-beta4/_static/searchtools.js           |   481 +
 src/doc/4.0-beta4/_static/underscore-1.3.1.js      |   999 ++
 src/doc/4.0-beta4/_static/underscore.js            |    31 +
 src/doc/4.0-beta4/_static/up-pressed.png           |   Bin 0 -> 214 bytes
 src/doc/4.0-beta4/_static/up.png                   |   Bin 0 -> 203 bytes
 src/doc/4.0-beta4/_static/websupport.js            |   808 ++
 src/doc/4.0-beta4/architecture/dynamo.html         |   563 +
 src/doc/4.0-beta4/architecture/guarantees.html     |   175 +
 src/doc/4.0-beta4/architecture/index.html          |   143 +
 src/doc/4.0-beta4/architecture/overview.html       |   198 +
 src/doc/4.0-beta4/architecture/storage_engine.html |   294 +
 src/doc/4.0-beta4/bugs.html                        |   110 +
 .../configuration/cass_cl_archive_file.html        |   149 +
 .../4.0-beta4/configuration/cass_env_sh_file.html  |   251 +
 .../configuration/cass_jvm_options_file.html       |   120 +
 .../configuration/cass_logback_xml_file.html       |   248 +
 .../4.0-beta4/configuration/cass_rackdc_file.html  |   175 +
 .../4.0-beta4/configuration/cass_topo_file.html    |   155 +
 .../4.0-beta4/configuration/cass_yaml_file.html    |  1978 ++++
 .../configuration/cassandra_config_file.html       |  1806 ++++
 src/doc/4.0-beta4/configuration/index.html         |   123 +
 src/doc/4.0-beta4/contactus.html                   |   127 +
 src/doc/4.0-beta4/cql/appendices.html              |   568 +
 src/doc/4.0-beta4/cql/changes.html                 |   364 +
 src/doc/4.0-beta4/cql/ddl.html                     |  1485 +++
 src/doc/4.0-beta4/cql/definitions.html             |   317 +
 src/doc/4.0-beta4/cql/dml.html                     |   561 +
 src/doc/4.0-beta4/cql/functions.html               |   706 ++
 src/doc/4.0-beta4/cql/index.html                   |   248 +
 src/doc/4.0-beta4/cql/indexes.html                 |   171 +
 src/doc/4.0-beta4/cql/json.html                    |   318 +
 src/doc/4.0-beta4/cql/mvs.html                     |   261 +
 src/doc/4.0-beta4/cql/operators.html               |   301 +
 src/doc/4.0-beta4/cql/security.html                |   743 ++
 src/doc/4.0-beta4/cql/triggers.html                |   156 +
 src/doc/4.0-beta4/cql/types.html                   |   702 ++
 .../data_modeling/data_modeling_conceptual.html    |   151 +
 .../data_modeling/data_modeling_logical.html       |   285 +
 .../data_modeling/data_modeling_physical.html      |   200 +
 .../data_modeling/data_modeling_queries.html       |   171 +
 .../data_modeling/data_modeling_rdbms.html         |   252 +
 .../data_modeling/data_modeling_refining.html      |   288 +
 .../data_modeling/data_modeling_schema.html        |   236 +
 .../data_modeling/data_modeling_tools.html         |   157 +
 src/doc/4.0-beta4/data_modeling/index.html         |   154 +
 src/doc/4.0-beta4/data_modeling/intro.html         |   230 +
 src/doc/4.0-beta4/development/ci.html              |   167 +
 src/doc/4.0-beta4/development/code_style.html      |   215 +
 src/doc/4.0-beta4/development/dependencies.html    |   155 +
 src/doc/4.0-beta4/development/documentation.html   |   193 +
 src/doc/4.0-beta4/development/gettingstarted.html  |   161 +
 src/doc/4.0-beta4/development/how_to_commit.html   |   219 +
 src/doc/4.0-beta4/development/how_to_review.html   |   179 +
 src/doc/4.0-beta4/development/ide.html             |   268 +
 src/doc/4.0-beta4/development/index.html           |   185 +
 src/doc/4.0-beta4/development/patches.html         |   274 +
 src/doc/4.0-beta4/development/release_process.html |   362 +
 src/doc/4.0-beta4/development/testing.html         |   185 +
 src/doc/4.0-beta4/faq/index.html                   |   318 +
 src/doc/4.0-beta4/genindex.html                    |    95 +
 src/doc/4.0-beta4/getting_started/configuring.html |   176 +
 src/doc/4.0-beta4/getting_started/drivers.html     |   248 +
 src/doc/4.0-beta4/getting_started/index.html       |   165 +
 src/doc/4.0-beta4/getting_started/installing.html  |   388 +
 src/doc/4.0-beta4/getting_started/production.html  |   246 +
 src/doc/4.0-beta4/getting_started/querying.html    |   147 +
 src/doc/4.0-beta4/glossary.html                    |   119 +
 src/doc/4.0-beta4/index.html                       |    86 +
 src/doc/4.0-beta4/new/auditlogging.html            |   567 +
 src/doc/4.0-beta4/new/fqllogging.html              |   665 ++
 src/doc/4.0-beta4/new/index.html                   |   190 +
 src/doc/4.0-beta4/new/java11.html                  |   354 +
 src/doc/4.0-beta4/new/messaging.html               |   344 +
 src/doc/4.0-beta4/new/streaming.html               |   260 +
 src/doc/4.0-beta4/new/transientreplication.html    |   228 +
 src/doc/4.0-beta4/new/virtualtables.html           |   427 +
 src/doc/4.0-beta4/objects.inv                      |   Bin 0 -> 9791 bytes
 src/doc/4.0-beta4/operating/audit_logging.html     |   281 +
 src/doc/4.0-beta4/operating/backups.html           |   666 ++
 src/doc/4.0-beta4/operating/bloom_filters.html     |   162 +
 src/doc/4.0-beta4/operating/bulk_loading.html      |   680 ++
 src/doc/4.0-beta4/operating/cdc.html               |   194 +
 src/doc/4.0-beta4/operating/compaction/index.html  |   387 +
 src/doc/4.0-beta4/operating/compaction/lcs.html    |   147 +
 src/doc/4.0-beta4/operating/compaction/stcs.html   |   122 +
 src/doc/4.0-beta4/operating/compaction/twcs.html   |   140 +
 src/doc/4.0-beta4/operating/compression.html       |   294 +
 src/doc/4.0-beta4/operating/hardware.html          |   189 +
 src/doc/4.0-beta4/operating/hints.html             |   402 +
 src/doc/4.0-beta4/operating/index.html             |   251 +
 src/doc/4.0-beta4/operating/metrics.html           |  1486 +++
 src/doc/4.0-beta4/operating/read_repair.html       |   267 +
 src/doc/4.0-beta4/operating/repair.html            |   278 +
 src/doc/4.0-beta4/operating/security.html          |   474 +
 src/doc/4.0-beta4/operating/snitch.html            |   180 +
 src/doc/4.0-beta4/operating/topo_changes.html      |   222 +
 src/doc/4.0-beta4/plugins/index.html               |   117 +
 src/doc/4.0-beta4/search.html                      |   105 +
 src/doc/4.0-beta4/searchindex.js                   |     1 +
 src/doc/4.0-beta4/tools/cassandra_stress.html      |   355 +
 src/doc/4.0-beta4/tools/cqlsh.html                 |   488 +
 src/doc/4.0-beta4/tools/index.html                 |   258 +
 src/doc/4.0-beta4/tools/nodetool/assassinate.html  |   134 +
 src/doc/4.0-beta4/tools/nodetool/bootstrap.html    |   133 +
 src/doc/4.0-beta4/tools/nodetool/cleanup.html      |   139 +
 .../4.0-beta4/tools/nodetool/clearsnapshot.html    |   142 +
 src/doc/4.0-beta4/tools/nodetool/clientstats.html  |   135 +
 src/doc/4.0-beta4/tools/nodetool/compact.html      |   151 +
 .../tools/nodetool/compactionhistory.html          |   129 +
 .../4.0-beta4/tools/nodetool/compactionstats.html  |   129 +
 src/doc/4.0-beta4/tools/nodetool/decommission.html |   129 +
 .../4.0-beta4/tools/nodetool/describecluster.html  |   126 +
 src/doc/4.0-beta4/tools/nodetool/describering.html |   133 +
 .../4.0-beta4/tools/nodetool/disableauditlog.html  |   125 +
 .../tools/nodetool/disableautocompaction.html      |   135 +
 .../4.0-beta4/tools/nodetool/disablebackup.html    |   125 +
 .../4.0-beta4/tools/nodetool/disablebinary.html    |   125 +
 .../tools/nodetool/disablefullquerylog.html        |   125 +
 .../4.0-beta4/tools/nodetool/disablegossip.html    |   126 +
 .../4.0-beta4/tools/nodetool/disablehandoff.html   |   125 +
 .../tools/nodetool/disablehintsfordc.html          |   134 +
 .../tools/nodetool/disableoldprotocolversions.html |   125 +
 src/doc/4.0-beta4/tools/nodetool/drain.html        |   126 +
 .../4.0-beta4/tools/nodetool/enableauditlog.html   |   159 +
 .../tools/nodetool/enableautocompaction.html       |   135 +
 src/doc/4.0-beta4/tools/nodetool/enablebackup.html |   125 +
 src/doc/4.0-beta4/tools/nodetool/enablebinary.html |   125 +
 .../tools/nodetool/enablefullquerylog.html         |   157 +
 src/doc/4.0-beta4/tools/nodetool/enablegossip.html |   125 +
 .../4.0-beta4/tools/nodetool/enablehandoff.html    |   126 +
 .../4.0-beta4/tools/nodetool/enablehintsfordc.html |   135 +
 .../tools/nodetool/enableoldprotocolversions.html  |   125 +
 .../4.0-beta4/tools/nodetool/failuredetector.html  |   126 +
 src/doc/4.0-beta4/tools/nodetool/flush.html        |   134 +
 .../4.0-beta4/tools/nodetool/garbagecollect.html   |   144 +
 src/doc/4.0-beta4/tools/nodetool/gcstats.html      |   125 +
 .../tools/nodetool/getbatchlogreplaythrottle.html  |   127 +
 .../tools/nodetool/getcompactionthreshold.html     |   135 +
 .../tools/nodetool/getcompactionthroughput.html    |   126 +
 .../4.0-beta4/tools/nodetool/getconcurrency.html   |   134 +
 .../tools/nodetool/getconcurrentcompactors.html    |   126 +
 .../tools/nodetool/getconcurrentviewbuilders.html  |   126 +
 src/doc/4.0-beta4/tools/nodetool/getendpoints.html |   135 +
 .../tools/nodetool/getinterdcstreamthroughput.html |   126 +
 .../4.0-beta4/tools/nodetool/getlogginglevels.html |   125 +
 .../4.0-beta4/tools/nodetool/getmaxhintwindow.html |   125 +
 src/doc/4.0-beta4/tools/nodetool/getseeds.html     |   126 +
 src/doc/4.0-beta4/tools/nodetool/getsstables.html  |   137 +
 .../tools/nodetool/getstreamthroughput.html        |   126 +
 src/doc/4.0-beta4/tools/nodetool/gettimeout.html   |   135 +
 .../tools/nodetool/gettraceprobability.html        |   125 +
 src/doc/4.0-beta4/tools/nodetool/gossipinfo.html   |   125 +
 src/doc/4.0-beta4/tools/nodetool/help.html         |   112 +
 src/doc/4.0-beta4/tools/nodetool/import.html       |   160 +
 src/doc/4.0-beta4/tools/nodetool/info.html         |   128 +
 .../tools/nodetool/invalidatecountercache.html     |   125 +
 .../tools/nodetool/invalidatekeycache.html         |   125 +
 .../tools/nodetool/invalidaterowcache.html         |   125 +
 src/doc/4.0-beta4/tools/nodetool/join.html         |   125 +
 .../4.0-beta4/tools/nodetool/listsnapshots.html    |   128 +
 src/doc/4.0-beta4/tools/nodetool/move.html         |   133 +
 src/doc/4.0-beta4/tools/nodetool/netstats.html     |   130 +
 src/doc/4.0-beta4/tools/nodetool/nodetool.html     |   244 +
 src/doc/4.0-beta4/tools/nodetool/pausehandoff.html |   125 +
 src/doc/4.0-beta4/tools/nodetool/profileload.html  |   144 +
 .../4.0-beta4/tools/nodetool/proxyhistograms.html  |   126 +
 .../4.0-beta4/tools/nodetool/rangekeysample.html   |   126 +
 src/doc/4.0-beta4/tools/nodetool/rebuild.html      |   150 +
 .../4.0-beta4/tools/nodetool/rebuild_index.html    |   135 +
 src/doc/4.0-beta4/tools/nodetool/refresh.html      |   135 +
 .../tools/nodetool/refreshsizeestimates.html       |   125 +
 .../tools/nodetool/reloadlocalschema.html          |   125 +
 src/doc/4.0-beta4/tools/nodetool/reloadseeds.html  |   126 +
 src/doc/4.0-beta4/tools/nodetool/reloadssl.html    |   125 +
 .../4.0-beta4/tools/nodetool/reloadtriggers.html   |   125 +
 .../4.0-beta4/tools/nodetool/relocatesstables.html |   138 +
 src/doc/4.0-beta4/tools/nodetool/removenode.html   |   136 +
 src/doc/4.0-beta4/tools/nodetool/repair.html       |   199 +
 src/doc/4.0-beta4/tools/nodetool/repair_admin.html |   221 +
 .../4.0-beta4/tools/nodetool/replaybatchlog.html   |   125 +
 .../tools/nodetool/resetfullquerylog.html          |   127 +
 .../4.0-beta4/tools/nodetool/resetlocalschema.html |   125 +
 .../4.0-beta4/tools/nodetool/resumehandoff.html    |   125 +
 src/doc/4.0-beta4/tools/nodetool/ring.html         |   138 +
 src/doc/4.0-beta4/tools/nodetool/scrub.html        |   159 +
 .../tools/nodetool/setbatchlogreplaythrottle.html  |   136 +
 .../4.0-beta4/tools/nodetool/setcachecapacity.html |   135 +
 .../tools/nodetool/setcachekeystosave.html         |   137 +
 .../tools/nodetool/setcompactionthreshold.html     |   135 +
 .../tools/nodetool/setcompactionthroughput.html    |   135 +
 .../4.0-beta4/tools/nodetool/setconcurrency.html   |   136 +
 .../tools/nodetool/setconcurrentcompactors.html    |   135 +
 .../tools/nodetool/setconcurrentviewbuilders.html  |   135 +
 .../tools/nodetool/sethintedhandoffthrottlekb.html |   135 +
 .../tools/nodetool/setinterdcstreamthroughput.html |   135 +
 .../4.0-beta4/tools/nodetool/setlogginglevel.html  |   138 +
 .../4.0-beta4/tools/nodetool/setmaxhintwindow.html |   134 +
 .../tools/nodetool/setstreamthroughput.html        |   135 +
 src/doc/4.0-beta4/tools/nodetool/settimeout.html   |   138 +
 .../tools/nodetool/settraceprobability.html        |   136 +
 src/doc/4.0-beta4/tools/nodetool/sjk.html          |   134 +
 src/doc/4.0-beta4/tools/nodetool/snapshot.html     |   152 +
 src/doc/4.0-beta4/tools/nodetool/status.html       |   137 +
 .../tools/nodetool/statusautocompaction.html       |   138 +
 src/doc/4.0-beta4/tools/nodetool/statusbackup.html |   125 +
 src/doc/4.0-beta4/tools/nodetool/statusbinary.html |   125 +
 src/doc/4.0-beta4/tools/nodetool/statusgossip.html |   125 +
 .../4.0-beta4/tools/nodetool/statushandoff.html    |   126 +
 src/doc/4.0-beta4/tools/nodetool/stop.html         |   142 +
 src/doc/4.0-beta4/tools/nodetool/stopdaemon.html   |   125 +
 .../4.0-beta4/tools/nodetool/tablehistograms.html  |   134 +
 src/doc/4.0-beta4/tools/nodetool/tablestats.html   |   169 +
 .../4.0-beta4/tools/nodetool/toppartitions.html    |   143 +
 src/doc/4.0-beta4/tools/nodetool/tpstats.html      |   129 +
 .../4.0-beta4/tools/nodetool/truncatehints.html    |   136 +
 .../4.0-beta4/tools/nodetool/upgradesstables.html  |   145 +
 src/doc/4.0-beta4/tools/nodetool/verify.html       |   154 +
 src/doc/4.0-beta4/tools/nodetool/version.html      |   125 +
 .../4.0-beta4/tools/nodetool/viewbuildstatus.html  |   134 +
 src/doc/4.0-beta4/tools/sstable/index.html         |   229 +
 src/doc/4.0-beta4/tools/sstable/sstabledump.html   |   404 +
 .../tools/sstable/sstableexpiredblockers.html      |   149 +
 .../4.0-beta4/tools/sstable/sstablelevelreset.html |   175 +
 src/doc/4.0-beta4/tools/sstable/sstableloader.html |   409 +
 .../4.0-beta4/tools/sstable/sstablemetadata.html   |   473 +
 .../tools/sstable/sstableofflinerelevel.html       |   190 +
 .../tools/sstable/sstablerepairedset.html          |   193 +
 src/doc/4.0-beta4/tools/sstable/sstablescrub.html  |   211 +
 src/doc/4.0-beta4/tools/sstable/sstablesplit.html  |   202 +
 .../4.0-beta4/tools/sstable/sstableupgrade.html    |   249 +
 src/doc/4.0-beta4/tools/sstable/sstableutil.html   |   205 +
 src/doc/4.0-beta4/tools/sstable/sstableverify.html |   205 +
 .../4.0-beta4/troubleshooting/finding_nodes.html   |   241 +
 src/doc/4.0-beta4/troubleshooting/index.html       |   148 +
 .../4.0-beta4/troubleshooting/reading_logs.html    |   351 +
 .../4.0-beta4/troubleshooting/use_nodetool.html    |   321 +
 src/doc/4.0-beta4/troubleshooting/use_tools.html   |   609 ++
 src/download.md                                    |     4 +-
 523 files changed, 92535 insertions(+), 2 deletions(-)

diff --git a/src/doc/4.0-beta4/.buildinfo b/src/doc/4.0-beta4/.buildinfo
new file mode 100644
index 0000000..d532232
--- /dev/null
+++ b/src/doc/4.0-beta4/.buildinfo
@@ -0,0 +1,4 @@
+# Sphinx build info version 1
+# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
+config: ac2992d75630bb35b876dc91c1d5aa64
+tags: 645f666f9bcd5a90fca523b33c5a78b7
diff --git a/src/doc/4.0-beta4/_downloads/073727311784b6e183b3e78dbd702329/stress-lwt-example.yaml b/src/doc/4.0-beta4/_downloads/073727311784b6e183b3e78dbd702329/stress-lwt-example.yaml
new file mode 100644
index 0000000..fc5db08
--- /dev/null
+++ b/src/doc/4.0-beta4/_downloads/073727311784b6e183b3e78dbd702329/stress-lwt-example.yaml
@@ -0,0 +1,70 @@
+# Keyspace Name
+keyspace: stresscql
+
+# The CQL for creating a keyspace (optional if it already exists)
+# Would almost always be network topology unless running something locally
+keyspace_definition: |
+  CREATE KEYSPACE stresscql WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
+
+# Table name
+table: blogposts
+
+# The CQL for creating a table you wish to stress (optional if it already exists)
+table_definition: |
+  CREATE TABLE blogposts (
+        domain text,
+        published_date timeuuid,
+        url text,
+        author text,
+        title text,
+        body text,
+        PRIMARY KEY(domain, published_date)
+  ) WITH CLUSTERING ORDER BY (published_date DESC) 
+    AND compaction = { 'class':'LeveledCompactionStrategy' } 
+    AND comment='A table to hold blog posts'
+
+### Column Distribution Specifications ###
+ 
+columnspec:
+  - name: domain
+    size: gaussian(5..100)       #domain names are relatively short
+    population: uniform(1..10M)  #10M possible domains to pick from
+
+  - name: published_date
+    cluster: fixed(1000)         #under each domain we will have max 1000 posts
+
+  - name: url
+    size: uniform(30..300)       
+
+  - name: title                  #titles shouldn't go beyond 200 chars
+    size: gaussian(10..200)
+
+  - name: author
+    size: uniform(5..20)         #author names should be short
+
+  - name: body
+    size: gaussian(100..5000)    #the body of the blog post can be long
+   
+### Batch Ratio Distribution Specifications ###
+
+insert:
+  partitions: fixed(1)            # Our partition key is the domain so only insert one per batch
+
+  select:    fixed(1)/1000        # We have 1000 posts per domain so 1/1000 will allow 1 post per batch
+
+  batchtype: UNLOGGED             # Unlogged batches
+
+
+#
+# A list of queries you wish to run against the schema
+#
+queries:
+   singlepost:
+      cql: select * from blogposts where domain = ? LIMIT 1
+      fields: samerow
+   regularupdate:
+      cql: update blogposts set author = ? where domain = ? and published_date = ?
+      fields: samerow
+   updatewithlwt:
+      cql: update blogposts set author = ? where domain = ? and published_date = ? IF body = ? AND url = ?
+      fields: samerow
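+
+# Illustrative invocation (a sketch only; adjust the ops mix, counts and
+# rates to your cluster):
+#   cassandra-stress user profile=stress-lwt-example.yaml ops(insert=1,singlepost=1,updatewithlwt=1) n=10000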
diff --git a/src/doc/4.0-beta4/_downloads/0bad10109f737a1dc8fae9db51a00e36/stress-example.yaml b/src/doc/4.0-beta4/_downloads/0bad10109f737a1dc8fae9db51a00e36/stress-example.yaml
new file mode 100644
index 0000000..17161af
--- /dev/null
+++ b/src/doc/4.0-beta4/_downloads/0bad10109f737a1dc8fae9db51a00e36/stress-example.yaml
@@ -0,0 +1,44 @@
+specname: example # identifier for this spec if running with multiple yaml files
+keyspace: example
+
+# Would almost always be network topology unless running something locally
+keyspace_definition: |
+  CREATE KEYSPACE example WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};
+
+table: staff_activities
+
+# The table under test. Start with a partition per staff member
+# Is this a good idea?
+table_definition: |
+  CREATE TABLE staff_activities (
+        name text,
+        when timeuuid,
+        what text,
+        PRIMARY KEY(name, when)
+  ) 
+
+columnspec:
+  - name: name
+    size: uniform(5..10) # The names of the staff members are between 5-10 characters
+    population: uniform(1..10) # 10 possible staff members to pick from 
+  - name: when
+    cluster: uniform(20..500) # Staff members do between 20 and 500 events
+  - name: what
+    size: normal(10..100,50)
+
+insert:
+  # we only update a single partition in any given insert 
+  partitions: fixed(1) 
+  # we want to insert a single row per partition and we have between 20 and 500
+  # rows per partition
+  select: fixed(1)/500 
+  batchtype: UNLOGGED             # Single partition unlogged batches are essentially noops
+
+queries:
+   events:
+      cql: select *  from staff_activities where name = ?
+      fields: samerow
+   latest_event:
+      cql: select * from staff_activities where name = ?  LIMIT 1
+      fields: samerow
+
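+# Illustrative invocation (a sketch only; adjust to your cluster):
+#   cassandra-stress user profile=stress-example.yaml ops(insert=1,events=1,latest_event=1) n=100000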
diff --git a/src/doc/4.0-beta4/_images/Figure_1_backups.jpg b/src/doc/4.0-beta4/_images/Figure_1_backups.jpg
new file mode 100644
index 0000000..160013d
Binary files /dev/null and b/src/doc/4.0-beta4/_images/Figure_1_backups.jpg differ
diff --git a/src/doc/4.0-beta4/_images/Figure_1_data_model.jpg b/src/doc/4.0-beta4/_images/Figure_1_data_model.jpg
new file mode 100644
index 0000000..a3b330e
Binary files /dev/null and b/src/doc/4.0-beta4/_images/Figure_1_data_model.jpg differ
diff --git a/src/doc/4.0-beta4/_images/Figure_1_guarantees.jpg b/src/doc/4.0-beta4/_images/Figure_1_guarantees.jpg
new file mode 100644
index 0000000..859342d
Binary files /dev/null and b/src/doc/4.0-beta4/_images/Figure_1_guarantees.jpg differ
diff --git a/src/doc/4.0-beta4/_images/Figure_1_read_repair.jpg b/src/doc/4.0-beta4/_images/Figure_1_read_repair.jpg
new file mode 100644
index 0000000..d771550
Binary files /dev/null and b/src/doc/4.0-beta4/_images/Figure_1_read_repair.jpg differ
diff --git a/src/doc/4.0-beta4/_images/Figure_2_data_model.jpg b/src/doc/4.0-beta4/_images/Figure_2_data_model.jpg
new file mode 100644
index 0000000..7acdeac
Binary files /dev/null and b/src/doc/4.0-beta4/_images/Figure_2_data_model.jpg differ
diff --git a/src/doc/4.0-beta4/_images/Figure_2_read_repair.jpg b/src/doc/4.0-beta4/_images/Figure_2_read_repair.jpg
new file mode 100644
index 0000000..29a912b
Binary files /dev/null and b/src/doc/4.0-beta4/_images/Figure_2_read_repair.jpg differ
diff --git a/src/doc/4.0-beta4/_images/Figure_3_read_repair.jpg b/src/doc/4.0-beta4/_images/Figure_3_read_repair.jpg
new file mode 100644
index 0000000..f5cc189
Binary files /dev/null and b/src/doc/4.0-beta4/_images/Figure_3_read_repair.jpg differ
diff --git a/src/doc/4.0-beta4/_images/Figure_4_read_repair.jpg b/src/doc/4.0-beta4/_images/Figure_4_read_repair.jpg
new file mode 100644
index 0000000..25bdb34
Binary files /dev/null and b/src/doc/4.0-beta4/_images/Figure_4_read_repair.jpg differ
diff --git a/src/doc/4.0-beta4/_images/Figure_5_read_repair.jpg b/src/doc/4.0-beta4/_images/Figure_5_read_repair.jpg
new file mode 100644
index 0000000..d9c0485
Binary files /dev/null and b/src/doc/4.0-beta4/_images/Figure_5_read_repair.jpg differ
diff --git a/src/doc/4.0-beta4/_images/Figure_6_read_repair.jpg b/src/doc/4.0-beta4/_images/Figure_6_read_repair.jpg
new file mode 100644
index 0000000..6bb4d1e
Binary files /dev/null and b/src/doc/4.0-beta4/_images/Figure_6_read_repair.jpg differ
diff --git a/src/doc/4.0-beta4/_images/data_modeling_chebotko_logical.png b/src/doc/4.0-beta4/_images/data_modeling_chebotko_logical.png
new file mode 100644
index 0000000..e54b5f2
Binary files /dev/null and b/src/doc/4.0-beta4/_images/data_modeling_chebotko_logical.png differ
diff --git a/src/doc/4.0-beta4/_images/data_modeling_chebotko_physical.png b/src/doc/4.0-beta4/_images/data_modeling_chebotko_physical.png
new file mode 100644
index 0000000..bfdaec5
Binary files /dev/null and b/src/doc/4.0-beta4/_images/data_modeling_chebotko_physical.png differ
diff --git a/src/doc/4.0-beta4/_images/data_modeling_hotel_bucketing.png b/src/doc/4.0-beta4/_images/data_modeling_hotel_bucketing.png
new file mode 100644
index 0000000..8b53e38
Binary files /dev/null and b/src/doc/4.0-beta4/_images/data_modeling_hotel_bucketing.png differ
diff --git a/src/doc/4.0-beta4/_images/data_modeling_hotel_erd.png b/src/doc/4.0-beta4/_images/data_modeling_hotel_erd.png
new file mode 100644
index 0000000..e86fe68
Binary files /dev/null and b/src/doc/4.0-beta4/_images/data_modeling_hotel_erd.png differ
diff --git a/src/doc/4.0-beta4/_images/data_modeling_hotel_logical.png b/src/doc/4.0-beta4/_images/data_modeling_hotel_logical.png
new file mode 100644
index 0000000..e920f12
Binary files /dev/null and b/src/doc/4.0-beta4/_images/data_modeling_hotel_logical.png differ
diff --git a/src/doc/4.0-beta4/_images/data_modeling_hotel_physical.png b/src/doc/4.0-beta4/_images/data_modeling_hotel_physical.png
new file mode 100644
index 0000000..2d20a6d
Binary files /dev/null and b/src/doc/4.0-beta4/_images/data_modeling_hotel_physical.png differ
diff --git a/src/doc/4.0-beta4/_images/data_modeling_hotel_queries.png b/src/doc/4.0-beta4/_images/data_modeling_hotel_queries.png
new file mode 100644
index 0000000..2434db3
Binary files /dev/null and b/src/doc/4.0-beta4/_images/data_modeling_hotel_queries.png differ
diff --git a/src/doc/4.0-beta4/_images/data_modeling_hotel_relational.png b/src/doc/4.0-beta4/_images/data_modeling_hotel_relational.png
new file mode 100644
index 0000000..43e784e
Binary files /dev/null and b/src/doc/4.0-beta4/_images/data_modeling_hotel_relational.png differ
diff --git a/src/doc/4.0-beta4/_images/data_modeling_reservation_logical.png b/src/doc/4.0-beta4/_images/data_modeling_reservation_logical.png
new file mode 100644
index 0000000..0460633
Binary files /dev/null and b/src/doc/4.0-beta4/_images/data_modeling_reservation_logical.png differ
diff --git a/src/doc/4.0-beta4/_images/data_modeling_reservation_physical.png b/src/doc/4.0-beta4/_images/data_modeling_reservation_physical.png
new file mode 100644
index 0000000..1e6e76c
Binary files /dev/null and b/src/doc/4.0-beta4/_images/data_modeling_reservation_physical.png differ
diff --git a/src/doc/4.0-beta4/_images/docs_commit.png b/src/doc/4.0-beta4/_images/docs_commit.png
new file mode 100644
index 0000000..d90d96a
Binary files /dev/null and b/src/doc/4.0-beta4/_images/docs_commit.png differ
diff --git a/src/doc/4.0-beta4/_images/docs_create_branch.png b/src/doc/4.0-beta4/_images/docs_create_branch.png
new file mode 100644
index 0000000..a04cb54
Binary files /dev/null and b/src/doc/4.0-beta4/_images/docs_create_branch.png differ
diff --git a/src/doc/4.0-beta4/_images/docs_create_file.png b/src/doc/4.0-beta4/_images/docs_create_file.png
new file mode 100644
index 0000000..b51e370
Binary files /dev/null and b/src/doc/4.0-beta4/_images/docs_create_file.png differ
diff --git a/src/doc/4.0-beta4/_images/docs_editor.png b/src/doc/4.0-beta4/_images/docs_editor.png
new file mode 100644
index 0000000..5b9997b
Binary files /dev/null and b/src/doc/4.0-beta4/_images/docs_editor.png differ
diff --git a/src/doc/4.0-beta4/_images/docs_fork.png b/src/doc/4.0-beta4/_images/docs_fork.png
new file mode 100644
index 0000000..20a592a
Binary files /dev/null and b/src/doc/4.0-beta4/_images/docs_fork.png differ
diff --git a/src/doc/4.0-beta4/_images/docs_pr.png b/src/doc/4.0-beta4/_images/docs_pr.png
new file mode 100644
index 0000000..211eb25
Binary files /dev/null and b/src/doc/4.0-beta4/_images/docs_pr.png differ
diff --git a/src/doc/4.0-beta4/_images/docs_preview.png b/src/doc/4.0-beta4/_images/docs_preview.png
new file mode 100644
index 0000000..207f0ac
Binary files /dev/null and b/src/doc/4.0-beta4/_images/docs_preview.png differ
diff --git a/src/doc/4.0-beta4/_images/eclipse_debug0.png b/src/doc/4.0-beta4/_images/eclipse_debug0.png
new file mode 100644
index 0000000..79fc5fd
Binary files /dev/null and b/src/doc/4.0-beta4/_images/eclipse_debug0.png differ
diff --git a/src/doc/4.0-beta4/_images/eclipse_debug1.png b/src/doc/4.0-beta4/_images/eclipse_debug1.png
new file mode 100644
index 0000000..87b8756
Binary files /dev/null and b/src/doc/4.0-beta4/_images/eclipse_debug1.png differ
diff --git a/src/doc/4.0-beta4/_images/eclipse_debug2.png b/src/doc/4.0-beta4/_images/eclipse_debug2.png
new file mode 100644
index 0000000..df4eddb
Binary files /dev/null and b/src/doc/4.0-beta4/_images/eclipse_debug2.png differ
diff --git a/src/doc/4.0-beta4/_images/eclipse_debug3.png b/src/doc/4.0-beta4/_images/eclipse_debug3.png
new file mode 100644
index 0000000..2317814
Binary files /dev/null and b/src/doc/4.0-beta4/_images/eclipse_debug3.png differ
diff --git a/src/doc/4.0-beta4/_images/eclipse_debug4.png b/src/doc/4.0-beta4/_images/eclipse_debug4.png
new file mode 100644
index 0000000..5063d48
Binary files /dev/null and b/src/doc/4.0-beta4/_images/eclipse_debug4.png differ
diff --git a/src/doc/4.0-beta4/_images/eclipse_debug5.png b/src/doc/4.0-beta4/_images/eclipse_debug5.png
new file mode 100644
index 0000000..ab68e68
Binary files /dev/null and b/src/doc/4.0-beta4/_images/eclipse_debug5.png differ
diff --git a/src/doc/4.0-beta4/_images/eclipse_debug6.png b/src/doc/4.0-beta4/_images/eclipse_debug6.png
new file mode 100644
index 0000000..61ef30b
Binary files /dev/null and b/src/doc/4.0-beta4/_images/eclipse_debug6.png differ
diff --git a/src/doc/4.0-beta4/_images/example-stress-graph.png b/src/doc/4.0-beta4/_images/example-stress-graph.png
new file mode 100644
index 0000000..a65b08b
Binary files /dev/null and b/src/doc/4.0-beta4/_images/example-stress-graph.png differ
diff --git a/src/doc/4.0-beta4/_images/hints.svg b/src/doc/4.0-beta4/_images/hints.svg
new file mode 100644
index 0000000..5e952e7
--- /dev/null
+++ b/src/doc/4.0-beta4/_images/hints.svg
@@ -0,0 +1,9 @@
+<svg xmlns="http://www.w3.org/2000/svg" width="661.2000122070312" height="422.26666259765625" style="
+        width:661.2000122070312px;
+        height:422.26666259765625px;
+        background: transparent;
+        fill: none;
+">
+        <svg xmlns="http://www.w3.org/2000/svg" class="role-diagram-draw-area"><g class="shapes-region" style="stroke: black; fill: none;"><g class="composite-shape"><path class="real" d=" M40,60 C40,43.43 53.43,30 70,30 C86.57,30 100,43.43 100,60 C100,76.57 86.57,90 70,90 C53.43,90 40,76.57 40,60 Z" style="stroke-width: 1px; stroke: rgb(0, 0, 0); fill: none;"/></g><g class="arrow-line"><path class="connection real" stroke-dasharray="" d="  M70,300 L70,387" style="stroke: rgb(0, 0, 0); s [...]
+        <svg xmlns="http://www.w3.org/2000/svg" width="660" height="421.066650390625" style="width:660px;height:421.066650390625px;font-family:Asana-Math, Asana;background:transparent;"><g><g><g style="transform:matrix(1,0,0,1,47.266693115234375,65.81666564941406);"><path d="M342 330L365 330C373 395 380 432 389 458C365 473 330 482 293 482C248 483 175 463 118 400C64 352 25 241 25 136C25 40 67 -11 147 -11C201 -11 249 9 304 54L354 95L346 115L331 105C259 57 221 40 186 40C130 40 101 80 101 15 [...]
+</svg>
diff --git a/src/doc/4.0-beta4/_images/ring.svg b/src/doc/4.0-beta4/_images/ring.svg
new file mode 100644
index 0000000..d0db8c5
--- /dev/null
+++ b/src/doc/4.0-beta4/_images/ring.svg
@@ -0,0 +1,11 @@
+<svg xmlns="http://www.w3.org/2000/svg" width="651" height="709.4583740234375" style="
+        width:651px;
+        height:709.4583740234375px;
+        background: transparent;
+        fill: none;
+">
+        
+        
+        <svg xmlns="http://www.w3.org/2000/svg" class="role-diagram-draw-area"><g class="shapes-region" style="stroke: black; fill: none;"><g class="composite-shape"><path class="real" d=" M223.5,655 C223.5,634.84 239.84,618.5 260,618.5 C280.16,618.5 296.5,634.84 296.5,655 C296.5,675.16 280.16,691.5 260,691.5 C239.84,691.5 223.5,675.16 223.5,655 Z" style="stroke-width: 1; stroke: rgb(103, 148, 135); fill: rgb(103, 148, 135);"/></g><g class="composite-shape"><path class="real" d=" M229.26 [...]
+        <svg xmlns="http://www.w3.org/2000/svg" width="649" height="707.4583740234375" style="width:649px;height:707.4583740234375px;font-family:Asana-Math, Asana;background:transparent;"><g><g><g><g><g><g style="transform:matrix(1,0,0,1,12.171875,40.31333587646485);"><path d="M175 386L316 386L316 444L175 444L175 571L106 571L106 444L19 444L19 386L103 386L103 119C103 59 117 -11 186 -11C256 -11 307 14 332 27L316 86C290 65 258 53 226 53C189 53 175 83 175 136ZM829 220C829 354 729 461 610 461 [...]
+</svg>
diff --git a/src/doc/4.0-beta4/_images/vnodes.svg b/src/doc/4.0-beta4/_images/vnodes.svg
new file mode 100644
index 0000000..71b4fa2
--- /dev/null
+++ b/src/doc/4.0-beta4/_images/vnodes.svg
@@ -0,0 +1,11 @@
+<svg xmlns="http://www.w3.org/2000/svg" width="651" height="384.66668701171875" style="
+        width:651px;
+        height:384.66668701171875px;
+        background: transparent;
+        fill: none;
+">
+        
+        
+        <svg xmlns="http://www.w3.org/2000/svg" class="role-diagram-draw-area"><g class="shapes-region" style="stroke: black; fill: none;"><g class="composite-shape"><path class="real" d=" M40.4,190 C40.4,107.38 107.38,40.4 190,40.4 C272.62,40.4 339.6,107.38 339.6,190 C339.6,272.62 272.62,339.6 190,339.6 C107.38,339.6 40.4,272.62 40.4,190 Z" style="stroke-width: 1; stroke: rgba(0, 0, 0, 0.52); fill: none; stroke-dasharray: 1.125, 3.35;"/></g><g class="composite-shape"><path class="real"  [...]
+        <svg xmlns="http://www.w3.org/2000/svg" width="649" height="382.66668701171875" style="width:649px;height:382.66668701171875px;font-family:Asana-Math, Asana;background:transparent;"><g><g><g><g><g><g style="transform:matrix(1,0,0,1,178.65625,348.9985620117188);"><path d="M125 390L69 107C68 99 56 61 56 31C56 6 67 -9 86 -9C121 -9 156 11 234 74L265 99L255 117L210 86C181 66 161 56 150 56C141 56 136 64 136 76C136 102 150 183 179 328L192 390L299 390L310 440C272 436 238 434 200 434C216  [...]
+</svg>
diff --git a/src/doc/4.0-beta4/_sources/architecture/dynamo.rst.txt b/src/doc/4.0-beta4/_sources/architecture/dynamo.rst.txt
new file mode 100644
index 0000000..5b17d9a
--- /dev/null
+++ b/src/doc/4.0-beta4/_sources/architecture/dynamo.rst.txt
@@ -0,0 +1,537 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+..
+..     http://www.apache.org/licenses/LICENSE-2.0
+..
+.. Unless required by applicable law or agreed to in writing, software
+.. distributed under the License is distributed on an "AS IS" BASIS,
+.. WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+.. See the License for the specific language governing permissions and
+.. limitations under the License.
+
+Dynamo
+======
+
+Apache Cassandra relies on a number of techniques from Amazon's `Dynamo
+<http://courses.cse.tamu.edu/caverlee/csce438/readings/dynamo-paper.pdf>`_
+distributed key-value storage system. Each node in the Dynamo system has three
+main components:
+
+- Request coordination over a partitioned dataset
+- Ring membership and failure detection
+- A local persistence (storage) engine
+
+Cassandra primarily draws from the first two clustering components,
+while using a storage engine based on a Log Structured Merge Tree
+(`LSM <http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.44.2782&rep=rep1&type=pdf>`_).
+In particular, Cassandra relies on the following Dynamo-style techniques:
+
+- Dataset partitioning using consistent hashing
+- Multi-master replication using versioned data and tunable consistency
+- Distributed cluster membership and failure detection via a gossip protocol
+- Incremental scale-out on commodity hardware
+
+Cassandra was designed this way to meet large-scale (PiB+) business-critical
+storage requirements. In particular, as applications demanded full global
+replication of petabyte scale datasets along with always available low-latency
+reads and writes, it became imperative to design a new kind of database model
+as the relational database systems of the time struggled to meet the new
+requirements of global scale applications.
+
+Dataset Partitioning: Consistent Hashing
+----------------------------------------
+
+Cassandra achieves horizontal scalability by
+`partitioning <https://en.wikipedia.org/wiki/Partition_(database)>`_
+all data stored in the system using a hash function. Each partition is replicated
+to multiple physical nodes, often across failure domains such as racks and even
+datacenters. As every replica can independently accept mutations to every key
+that it owns, every key must be versioned. Unlike in the original Dynamo paper
+where deterministic versions and vector clocks were used to reconcile concurrent
+updates to a key, Cassandra uses a simpler last write wins model where every
+mutation is timestamped (including deletes) and then the latest version of data
+is the "winning" value. Formally speaking, Cassandra uses a Last-Write-Wins Element-Set
+conflict-free replicated data type for each CQL row (a.k.a `LWW-Element-Set CRDT
+<https://en.wikipedia.org/wiki/Conflict-free_replicated_data_type#LWW-Element-Set_(Last-Write-Wins-Element-Set)>`_)
+to resolve conflicting mutations on replica sets.
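+
+As a minimal sketch of the last-write-wins rule (an illustration only, not
+Cassandra's implementation), each replica can be modelled as holding a
+``(timestamp, value)`` pair per cell and merging versions by taking the
+highest timestamp::
+
+    def lww_merge(a, b):
+        """Toy merge of two (timestamp, value) cell versions; highest wins."""
+        return a if a[0] >= b[0] else b
+
+    replica_a = (1618000000001, "alice")  # earlier mutation
+    replica_b = (1618000000002, "bob")    # later mutation wins
+    assert lww_merge(replica_a, replica_b) == replica_b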
+
+.. _consistent-hashing-token-ring:
+
+Consistent Hashing using a Token Ring
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Cassandra partitions data over storage nodes using a special form of hashing
+called `consistent hashing <https://en.wikipedia.org/wiki/Consistent_hashing>`_.
+In naive data hashing, you typically allocate keys to buckets by taking a hash
+of the key modulo the number of buckets. For example, if you want to distribute
+data to 100 nodes using naive hashing you might assign every node to a bucket
+between 0 and 100, hash the input key modulo 100, and store the data on the
+associated bucket. In this naive scheme, however, adding a single node might
+invalidate almost all of the mappings.
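+
+A short sketch (illustrative only) makes the cost concrete: with modulo
+placement, growing from 100 to 101 buckets remaps nearly every key::
+
+    import hashlib
+
+    def bucket(key, n_buckets):
+        h = int(hashlib.md5(key.encode()).hexdigest(), 16)
+        return h % n_buckets
+
+    keys = [f"key-{i}" for i in range(10000)]
+    moved = sum(bucket(k, 100) != bucket(k, 101) for k in keys)
+    print(f"{moved / len(keys):.0%} of keys moved")  # roughly 99%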
+
+Cassandra instead maps every node to one or more tokens on a continuous hash
+ring, and defines ownership by hashing a key onto the ring and then "walking"
+the ring in one direction, similar to the `Chord
+<https://pdos.csail.mit.edu/papers/chord:sigcomm01/chord_sigcomm.pdf>`_
+algorithm. The main difference between consistent hashing and naive data
+hashing is that when the number of nodes (buckets) to hash into changes,
+consistent hashing has to move only a small fraction of the keys.
+
+For example, if we have an eight node cluster with evenly spaced tokens, and
+a replication factor (RF) of 3, then to find the owning nodes for a key we
+first hash that key to generate a token (which is just the hash of the key),
+and then we "walk" the ring in a clockwise fashion until we encounter three
+distinct nodes, at which point we have found all the replicas of that key.
+This example of an eight node cluster with `RF=3` can be visualized as follows:
+
+.. figure:: images/ring.svg
+   :scale: 75 %
+   :alt: Dynamo Ring
+
+You can see that in a Dynamo-like system, ranges of keys, also known as **token
+ranges**, map to the same physical set of nodes. In this example, all keys that
+fall in the token range excluding token 1 and including token 2 (`range(t1, t2]`)
+are stored on nodes 2, 3 and 4.
+
+Multiple Tokens per Physical Node (a.k.a. `vnodes`)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Simple single token consistent hashing works well if you have many physical
+nodes to spread data over, but with evenly spaced tokens and a small number of
+physical nodes, incremental scaling (adding just a few nodes of capacity) is
+difficult because there are no token selections for new nodes that can leave
+the ring balanced. Cassandra seeks to avoid token imbalance because uneven
+token ranges lead to uneven request load. Continuing the eight-node example
+above, there is no way to add a ninth token without causing imbalance; instead
+we would have to insert eight tokens at the midpoints of the existing ranges.
+
+The Dynamo paper advocates for the use of "virtual nodes" to solve this
+imbalance problem. Virtual nodes solve the problem by assigning multiple
+tokens in the token ring to each physical node. By allowing a single physical
+node to take multiple positions in the ring, we can make small clusters look
+larger. Adding even a single physical node then behaves as if many nodes were
+added, taking many smaller pieces of data from more ring neighbors.
+
+Cassandra introduces some nomenclature to handle these concepts:
+
+- **Token**: A single position on the Dynamo-style hash ring.
+- **Endpoint**: A single physical IP and port on the network.
+- **Host ID**: A unique identifier for a single "physical" node, usually
+  present at one `Endpoint` and containing one or more `Tokens`.
+- **Virtual Node** (or **vnode**): One of multiple `Tokens` on the hash ring
+  owned by the same physical node (the same `Host ID`).
+
+The mapping of **Tokens** to **Endpoints** gives rise to the **Token Map**
+where Cassandra keeps track of what ring positions map to which physical
+endpoints.  For example, in the following figure we can represent an eight node
+cluster using only four physical nodes by assigning two tokens to every node:
+
+.. figure:: images/vnodes.svg
+   :scale: 75 %
+   :alt: Virtual Tokens Ring
+
+
+Multiple tokens per physical node provide the following benefits:
+
+1. When a new node is added it accepts approximately equal amounts of data from
+   other nodes in the ring, resulting in equal distribution of data across the
+   cluster.
+2. When a node is decommissioned, it loses data roughly equally to other members
+   of the ring, again keeping equal distribution of data across the cluster.
+3. If a node becomes unavailable, query load (especially token aware query load),
+   is evenly distributed across many other nodes.
+
+Multiple tokens, however, can also have disadvantages:
+
+1. Every token introduces up to ``2 * (RF - 1)`` additional neighbors on the
+   token ring, which means that there are more combinations of node failures
+   where we lose availability for a portion of the token ring. The more tokens
+   you have, `the higher the probability of an outage
+   <https://jolynch.github.io/pdf/cassandra-availability-virtual.pdf>`_.
+2. Cluster-wide maintenance operations are often slowed. For example, as the
+   number of tokens per node is increased, the number of discrete repair
+   operations the cluster must do also increases.
+3. Performance of operations that span token ranges could be affected.
+
+Note that in Cassandra ``2.x``, the only token allocation algorithm available
+was picking random tokens, which meant that to keep balance the default number
+of tokens per node had to be quite high, at ``256``. This had the effect of
+coupling many physical endpoints together, increasing the risk of
+unavailability. That is why in ``3.x+`` a new deterministic token allocator
+was added, which intelligently picks tokens such that the ring is optimally
+balanced while requiring a much lower number of tokens per physical node.
+
+
+Multi-master Replication: Versioned Data and Tunable Consistency
+----------------------------------------------------------------
+
+Cassandra replicates every partition of data to many nodes across the cluster
+to maintain high availability and durability. When a mutation occurs, the
+coordinator hashes the partition key to determine the token range the data
+belongs to and then replicates the mutation to the replicas of that data
+according to the :ref:`Replication Strategy <replication-strategy>`.
+
+All replication strategies have the notion of a **replication factor** (``RF``),
+which indicates to Cassandra how many copies of the partition should exist.
+For example, with an ``RF=3`` keyspace, the data will be written to three
+distinct **replicas**. Replicas are always chosen such that they are distinct
+physical nodes, which is achieved by skipping virtual nodes if needed.
+Replication strategies may also choose to skip nodes present in the same failure
+domain such as racks or datacenters so that Cassandra clusters can tolerate
+failures of whole racks and even datacenters of nodes.
+
+.. _replication-strategy:
+
+Replication Strategy
+^^^^^^^^^^^^^^^^^^^^
+
+Cassandra supports pluggable **replication strategies**, which determine which
+physical nodes act as replicas for a given token range. Every keyspace of
+data has its own replication strategy. All production deployments should use
+the :ref:`network-topology-strategy` while the :ref:`simple-strategy` replication
+strategy is useful only for testing clusters where you do not yet know the
+datacenter layout of the cluster.
+
+.. _network-topology-strategy:
+
+``NetworkTopologyStrategy``
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``NetworkTopologyStrategy`` allows a replication factor to be specified for each
+datacenter in the cluster. Even if your cluster only uses a single datacenter,
+``NetworkTopologyStrategy`` should be preferred over ``SimpleStrategy`` to make it
+easier to add new physical or virtual datacenters to the cluster later.
+
+In addition to allowing the replication factor to be specified individually by
+datacenter, ``NetworkTopologyStrategy`` also attempts to choose replicas within a
+datacenter from different racks as specified by the :ref:`Snitch <snitch>`. If
+the number of racks is greater than or equal to the replication factor for the
+datacenter, each replica is guaranteed to be chosen from a different rack.
+Otherwise, each rack will hold at least one replica, but some racks may hold
+more than one. Note that this rack-aware behavior has some potentially
+`surprising implications
+<https://issues.apache.org/jira/browse/CASSANDRA-3810>`_.  For example, if
+each rack does not contain an equal number of nodes, the data load on the
+smallest rack may be much higher.  Similarly, if a single node is bootstrapped
+into a brand new rack, it will be considered a replica for the entire ring.
+For this reason, many operators choose to configure all nodes in a single
+availability zone or similar failure domain as a single "rack".
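+
+As a sketch, a keyspace that keeps three replicas in one datacenter and two in
+another could be created as follows (the keyspace and datacenter names are
+hypothetical, and the datacenter names must match those your snitch reports):
+
+.. code-block:: cql
+
+    CREATE KEYSPACE example_ks
+      WITH replication = {'class': 'NetworkTopologyStrategy',
+                          'DC1': 3, 'DC2': 2};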
+
+.. _simple-strategy:
+
+``SimpleStrategy``
+~~~~~~~~~~~~~~~~~~
+
+``SimpleStrategy`` allows a single integer ``replication_factor`` to be defined. This determines the number of nodes that
+should contain a copy of each row.  For example, if ``replication_factor`` is 3, then three different nodes should store
+a copy of each row.
+
+``SimpleStrategy`` treats all nodes identically, ignoring any configured datacenters or racks.  To determine the replicas
+for a token range, Cassandra iterates through the tokens in the ring, starting with the token range of interest.  For
+each token, it checks whether the owning node has been added to the set of replicas, and if it has not, it is added to
+the set.  This process continues until ``replication_factor`` distinct nodes have been added to the set of replicas.
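+
+For example, a test keyspace with three copies of each row might be defined as
+follows (the keyspace name is illustrative):
+
+.. code-block:: cql
+
+    -- For test clusters only; production should use NetworkTopologyStrategy.
+    CREATE KEYSPACE test_ks
+      WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};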
+
+.. _transient-replication:
+
+Transient Replication
+~~~~~~~~~~~~~~~~~~~~~
+
+Transient replication is an experimental feature in Cassandra 4.0 not present
+in the original Dynamo paper. It allows you to configure a subset of replicas
+to only replicate data that hasn't been incrementally repaired. This allows you
+to decouple data redundancy from availability. For instance, if you have a
+keyspace replicated at ``RF=3``, and alter it to ``RF=5`` with two transient
+replicas, you go from being able to tolerate one failed replica to being able
+to tolerate two, without a corresponding increase in storage usage. This is
+because three nodes will replicate all the data for a given token range, and
+the other two will only replicate data that hasn't been incrementally repaired.
+
+To use transient replication, you first need to enable it in
+``cassandra.yaml``. Once enabled, both ``SimpleStrategy`` and
+``NetworkTopologyStrategy`` can be configured to transiently replicate data.
+You configure it by specifying the replication factor as
+``<total_replicas>/<transient_replicas>``.
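+
+As an illustrative sketch (the keyspace name is hypothetical, and transient
+replication is assumed to be already enabled in ``cassandra.yaml``), the
+``RF=5`` with two transient replicas example above could be expressed as:
+
+.. code-block:: cql
+
+    -- 5 total replicas, of which 2 only hold not-yet-repaired data.
+    CREATE KEYSPACE transient_ks
+      WITH replication = {'class': 'SimpleStrategy',
+                          'replication_factor': '5/2'};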
+
+Transiently replicated keyspaces only support tables created with ``read_repair``
+set to ``NONE``, and monotonic reads are not currently supported.  You also
+can't use ``LWT``, logged batches, or counters in 4.0. You will possibly never be
+able to use materialized views with transiently replicated keyspaces and
+probably never be able to use secondary indices with them.
+
+Transient replication is an experimental feature that may not be ready for
+production use. The expected audience is experienced users of Cassandra
+capable of fully validating a deployment of their particular application. That
+means being able to check that operations like reads, writes, decommission,
+remove, rebuild, repair, and replace all work with your queries, data,
+configuration, operational practices, and availability requirements.
+
+It is anticipated that ``4.next`` will support monotonic reads with transient
+replication as well as LWT, logged batches, and counters.
+
+Data Versioning
+^^^^^^^^^^^^^^^
+
+Cassandra uses mutation timestamp versioning to guarantee eventual consistency of
+data. Specifically, all mutations that enter the system do so with a timestamp
+provided either from a client clock or, absent a client-provided timestamp,
+from the coordinator node's clock. Updates resolve according to the conflict
+resolution rule of last write wins. Cassandra's correctness does depend on
+these clocks, so make sure a proper time synchronization process, such as
+NTP, is running.
+
+Cassandra applies separate mutation timestamps to every column of every row
+within a CQL partition. Rows are guaranteed to be unique by primary key, and
+each column in a row resolves concurrent mutations according to last-write-wins
+conflict resolution. This means that updates to different primary keys within a
+partition can actually resolve without conflict! Furthermore, the CQL collection
+types such as maps and sets use this same conflict-free mechanism, meaning
+that concurrent updates to maps and sets are guaranteed to resolve as well.
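+
+As a brief illustration of last-write-wins resolution (the table and values
+are hypothetical), two mutations to the same column resolve in favor of the
+higher timestamp regardless of arrival order:
+
+.. code-block:: cql
+
+    UPDATE users USING TIMESTAMP 1000 SET name = 'alice' WHERE id = 1;
+    UPDATE users USING TIMESTAMP 1001 SET name = 'bob'   WHERE id = 1;
+    -- A subsequent read returns name = 'bob'.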
+
+Replica Synchronization
+~~~~~~~~~~~~~~~~~~~~~~~
+
+As replicas in Cassandra can accept mutations independently, it is possible
+for some replicas to have newer data than others. Cassandra has many best-effort
+techniques to drive convergence of replicas including
+`Replica read repair <read-repair>` in the read path and
+`Hinted handoff <hints>` in the write path.
+
+These techniques are only best-effort, however, and to guarantee eventual
+consistency Cassandra implements `anti-entropy repair <repair>` where replicas
+calculate hierarchical hash-trees over their datasets called `Merkle Trees
+<https://en.wikipedia.org/wiki/Merkle_tree>`_ that can then be compared across
+replicas to identify mismatched data. Like the original Dynamo paper Cassandra
+supports "full" repairs where replicas hash their entire dataset, create Merkle
+trees, send them to each other and sync any ranges that don't match.
+
+Unlike the original Dynamo paper, Cassandra also implements sub-range repair
+and incremental repair. Sub-range repair allows Cassandra to increase the
+resolution of the hash trees (potentially down to the single partition level)
+by creating a larger number of trees that span only a portion of the data
+range.  Incremental repair allows Cassandra to only repair the partitions that
+have changed since the last repair.
+
+Tunable Consistency
+^^^^^^^^^^^^^^^^^^^
+
+Cassandra supports a per-operation tradeoff between consistency and
+availability through **Consistency Levels**. Cassandra's consistency levels
+are a version of Dynamo's ``R + W > N`` consistency mechanism, where operators
+configure the number of nodes that must participate in reads (``R``) and
+writes (``W``) so that their sum is larger than the replication factor (``N``).
+In Cassandra, you instead choose from a menu of common consistency levels which
+allow the operator to pick ``R`` and ``W`` behavior without knowing the
+replication factor. Generally writes will be visible to subsequent reads when
+the read consistency level contains enough nodes to guarantee a quorum intersection
+with the write consistency level.
+
+The following consistency levels are available:
+
+``ONE``
+  Only a single replica must respond.
+
+``TWO``
+  Two replicas must respond.
+
+``THREE``
+  Three replicas must respond.
+
+``QUORUM``
+  A majority (n/2 + 1) of the replicas must respond.
+
+``ALL``
+  All of the replicas must respond.
+
+``LOCAL_QUORUM``
+  A majority of the replicas in the local datacenter (whichever datacenter the coordinator is in) must respond.
+
+``EACH_QUORUM``
+  A majority of the replicas in each datacenter must respond.
+
+``LOCAL_ONE``
+  Only a single replica must respond.  In a multi-datacenter cluster, this also guarantees that read requests are not
+  sent to replicas in a remote datacenter.
+
+``ANY``
+  A single replica may respond, or the coordinator may store a hint. If a hint is stored, the coordinator will later
+  attempt to replay the hint and deliver the mutation to the replicas.  This consistency level is only accepted for
+  write operations.
+
+Write operations **are always sent to all replicas**, regardless of consistency
+level. The consistency level simply controls how many responses the coordinator
+waits for before responding to the client.
+
+For read operations, the coordinator generally only issues read commands to
+enough replicas to satisfy the consistency level. The one exception is
+speculative retry, which may issue a redundant read request to an extra replica
+if the original replicas have not responded within a specified time window.
+
+Picking Consistency Levels
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+It is common to pick read and write consistency levels such that the replica
+sets overlap, resulting in all acknowledged writes being visible to subsequent
+reads. This is typically expressed in the same terms Dynamo does, in that ``W +
+R > RF``, where ``W`` is the write consistency level, ``R`` is the read
+consistency level, and ``RF`` is the replication factor.  For example, if ``RF
+= 3``, a ``QUORUM`` request will require responses from at least two of the
+three replicas.  If ``QUORUM`` is used for both writes and reads, at least one of the
+replicas is guaranteed to participate in *both* the write and the read request,
+which in turn guarantees that the quorums will overlap and the write will be
+visible to the read.
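+
+For instance, in ``cqlsh`` the session consistency level can be set with the
+``CONSISTENCY`` command before issuing reads and writes (a sketch; the table
+and data are hypothetical):
+
+.. code-block:: cql
+
+    -- With RF=3, QUORUM writes and QUORUM reads overlap in at least one replica.
+    CONSISTENCY QUORUM;
+    INSERT INTO store.shopping_cart (userid, item_count) VALUES ('alice', 3);
+    SELECT item_count FROM store.shopping_cart WHERE userid = 'alice';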
+
+In a multi-datacenter environment, ``LOCAL_QUORUM`` can be used to provide a
+weaker but still useful guarantee: reads are guaranteed to see the latest write
+from within the same datacenter. This is often sufficient as clients homed to
+a single datacenter will read their own writes.
+
+If this type of strong consistency isn't required, lower consistency levels
+like ``LOCAL_ONE`` or ``ONE`` may be used to improve throughput, latency, and
+availability. With replication spanning multiple datacenters, ``LOCAL_ONE`` is
+typically less available than ``ONE`` but is faster as a rule. Indeed, ``ONE``
+will succeed if a single replica is available in any datacenter.
+
+Distributed Cluster Membership and Failure Detection
+----------------------------------------------------
+
+The replication protocols and dataset partitioning rely on knowing which nodes
+are alive and dead in the cluster so that write and read operations can be
+optimally routed. In Cassandra, liveness information is shared in a distributed
+fashion through a failure detection mechanism based on a gossip protocol.
+
+.. _gossip:
+
+Gossip
+^^^^^^
+
+Gossip is how Cassandra propagates basic cluster bootstrapping information such
+as endpoint membership and internode network protocol versions. In Cassandra's
+gossip system, nodes exchange state information not only about themselves but
+also about other nodes they know about. This information is versioned with a
+vector clock of ``(generation, version)`` tuples, where the generation is a
+monotonic timestamp and the version is a logical clock that increments roughly every
+second. These logical clocks allow Cassandra gossip to ignore old versions of
+cluster state just by inspecting the logical clocks presented with gossip
+messages.
+
+Every node in the Cassandra cluster runs the gossip task independently and
+periodically. Every second, every node in the cluster:
+
+1. Updates the local node's heartbeat state (the version) and constructs the
+   node's local view of the cluster gossip endpoint state.
+2. Picks a random other node in the cluster to exchange gossip endpoint state
+   with.
+3. Probabilistically attempts to gossip with any unreachable nodes (if any exist).
+4. Gossips with a seed node if that didn't happen in step 2.
+
+When an operator first bootstraps a Cassandra cluster they designate certain
+nodes as "seed" nodes. Any node can be a seed node, and the only difference
+between seed and non-seed nodes is that seed nodes are allowed to bootstrap into the
+ring without seeing any other seed nodes. Furthermore, once a cluster is
+bootstrapped, seed nodes become "hotspots" for gossip due to step 4 above.
+
+As non-seed nodes must be able to contact at least one seed node in order to
+bootstrap into the cluster, it is common to include multiple seed nodes, often
+one for each rack or datacenter. Seed nodes are often chosen using existing
+off-the-shelf service discovery mechanisms.
+
+.. note::
+   Nodes do not have to agree on the seed nodes, and indeed once a cluster is
+   bootstrapped, newly launched nodes can be configured to use any existing
+   nodes as "seeds". The only advantage to picking the same nodes as seeds
+   is that it increases their usefulness as gossip hotspots.
+
+Currently, gossip also propagates token metadata and schema *version*
+information. This information forms the control plane for scheduling data
+movements and schema pulls. For example, if a node sees a mismatch in schema
+version in gossip state, it will schedule a schema sync task with the other
+nodes. As token information propagates via gossip it is also the control plane
+for teaching nodes which endpoints own what data.
+
+Ring Membership and Failure Detection
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Gossip forms the basis of ring membership, but the **failure detector**
+ultimately makes decisions about if nodes are ``UP`` or ``DOWN``. Every node in
+Cassandra runs a variant of the `Phi Accrual Failure Detector
+<https://www.computer.org/csdl/proceedings-article/srds/2004/22390066/12OmNvT2phv>`_,
+in which every node constantly makes an independent decision about whether its
+peer nodes are available. This decision is primarily based on received
+heartbeat state. For example, if a node does not see an increasing heartbeat
+from a peer for a certain amount of time, the failure detector "convicts" that
+node, at which point Cassandra will stop routing reads to it (writes will
+typically be written to hints). If/when the node starts heartbeating again,
+Cassandra will try to reach out and connect, and if it can open communication
+channels it will mark that node as available.
+
+.. note::
+   UP and DOWN state are local node decisions and are not propagated with
+   gossip. Heartbeat state is propagated with gossip, but nodes will not
+   consider each other as "UP" until they can successfully message each other
+   over an actual network channel.
+
+Cassandra will never remove a node from gossip state without explicit
+instruction from an operator via a decommission operation or a new node
+bootstrapping with a ``replace_address_first_boot`` option. This choice is
+intentional to allow Cassandra nodes to temporarily fail without causing data
+to needlessly re-balance. This also helps to prevent simultaneous range
+movements, where multiple replicas of a token range are moving at the same
+time, which can violate monotonic consistency and can even cause data loss.
+
+Incremental Scale-out on Commodity Hardware
+--------------------------------------------
+
+Cassandra scales-out to meet the requirements of growth in data size and
+request rates. Scaling-out means adding additional nodes to the ring, and
+every additional node brings linear improvements in compute and storage. In
+contrast, scaling-up implies adding more capacity to the existing database
+nodes. Cassandra is also capable of scale-up, and in certain environments it
+may be preferable depending on the deployment. Cassandra gives operators the
+flexibility to choose either scale-out or scale-up.
+
+One key aspect of Dynamo that Cassandra follows is to attempt to run on
+commodity hardware, and many engineering choices are made under this
+assumption. For example, Cassandra assumes nodes can fail at any time,
+auto-tunes to make the best use of the CPU and memory resources available, and makes
+heavy use of advanced compression and caching techniques to get the most
+storage out of limited memory and storage capabilities.
+
+Simple Query Model
+^^^^^^^^^^^^^^^^^^
+
+Cassandra, like Dynamo, chooses not to provide cross-partition transactions
+that are common in SQL Relational Database Management Systems (RDBMS). This
+both gives the programmer a simpler read and write API, and allows Cassandra to
+more easily scale horizontally, since multi-partition transactions spanning
+multiple nodes are notoriously difficult to implement and typically incur
+high latency.
+
+Instead, Cassandra chooses to offer fast, consistent latency at any scale for
+single partition operations, allowing retrieval of entire partitions or only
+subsets of partitions based on primary key filters. Furthermore, Cassandra does
+support single partition compare and swap functionality via the lightweight
+transaction CQL API.
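+
+A brief sketch of that compare-and-swap API (the table and column names are
+hypothetical):
+
+.. code-block:: cql
+
+    -- Insert only if no row with this primary key exists yet.
+    INSERT INTO accounts (id, balance) VALUES (1, 100) IF NOT EXISTS;
+
+    -- Conditionally update: applied only if the current balance is 100.
+    UPDATE accounts SET balance = 90 WHERE id = 1 IF balance = 100;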
+
+Simple Interface for Storing Records
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Cassandra, in a slight departure from Dynamo, chooses a storage interface that
+is more sophisticated than "simple key value" stores but significantly less
+complex than SQL relational data models.  Cassandra presents a wide-column
+store interface, where partitions of data contain multiple rows, each of which
+contains a flexible set of individually typed columns. Every row is uniquely
+identified by the partition key and one or more clustering keys, and every row
+can have as many columns as needed.
+
+This allows users to flexibly add new columns to existing datasets as new
+requirements surface. Schema changes involve only metadata changes and run
+fully concurrently with live workloads. Therefore, users can safely add columns
+to existing Cassandra databases while remaining confident that query
+performance will not degrade.
diff --git a/src/doc/4.0-beta4/_sources/architecture/guarantees.rst.txt b/src/doc/4.0-beta4/_sources/architecture/guarantees.rst.txt
new file mode 100644
index 0000000..3cff808
--- /dev/null
+++ b/src/doc/4.0-beta4/_sources/architecture/guarantees.rst.txt
@@ -0,0 +1,76 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+..
+..     http://www.apache.org/licenses/LICENSE-2.0
+..
+.. Unless required by applicable law or agreed to in writing, software
+.. distributed under the License is distributed on an "AS IS" BASIS,
+.. WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+.. See the License for the specific language governing permissions and
+.. limitations under the License.
+
+.. _guarantees:
+
+Guarantees
+==============
+Apache Cassandra is a highly scalable and reliable database. Cassandra is used in web-based applications that serve a large number of clients, and the quantity of data processed is web-scale (petabyte) large. Cassandra makes some guarantees about its scalability, availability and reliability. To fully understand the inherent limitations of a storage system in an environment in which a certain level of network partition failure is to be expected and taken into account when designing the [...]
+
+What is CAP?
+^^^^^^^^^^^^^
+According to the CAP theorem, it is not possible for a distributed data store to provide more than two of the following guarantees simultaneously:
+
+- Consistency: Consistency implies that every read receives the most recent write or errors out
+- Availability: Availability implies that every request receives a response. It is not guaranteed that the response contains the most recent write or data.
+- Partition tolerance: Partition tolerance refers to the ability of a storage system to continue operating despite network partitions. Even if some of the messages are dropped or delayed, the system continues to operate.
+
+The CAP theorem implies that in the presence of network partitions, with the inherent risk of partition failure, one has to choose between consistency and availability; both cannot be guaranteed at the same time. The CAP theorem is illustrated in Figure 1.
+
+.. figure:: Figure_1_guarantees.jpg
+
+Figure 1. CAP Theorem
+
+High availability is a priority in web-based applications, and to this end Cassandra chooses Availability and Partition Tolerance from the CAP guarantees, compromising on data Consistency to some extent.
+
+Cassandra makes the following guarantees:
+
+- High Scalability
+- High Availability
+- Durability
+- Eventual Consistency of writes to a single table
+- Lightweight transactions with linearizable consistency
+- Batched writes across multiple tables are guaranteed to succeed completely or not at all
+- Secondary indexes are guaranteed to be consistent with their local replicas data
+
+High Scalability
+^^^^^^^^^^^^^^^^^
+Cassandra is a highly scalable storage system in which nodes may be added or removed as needed. Using a gossip-based protocol, a unified and consistent membership list is kept at each node.
+
+High Availability
+^^^^^^^^^^^^^^^^^^^
+Cassandra guarantees high availability of data by implementing a fault-tolerant storage system. Node failure is detected using a gossip-based protocol.
+
+Durability
+^^^^^^^^^^^^
+Cassandra guarantees data durability by using replicas. Replicas are multiple copies of data stored on different nodes in a cluster. In a multi-datacenter environment the replicas may be stored in different datacenters. If one replica is lost due to unrecoverable node or datacenter failure, the data is not completely lost, as replicas are still available.
+
+Eventual Consistency
+^^^^^^^^^^^^^^^^^^^^^^
+To meet the requirements of performance, reliability, scalability and high availability in production, Cassandra is an eventually consistent storage system. Eventual consistency implies that all updates reach all replicas eventually. Divergent versions of the same data may exist temporarily, but they are eventually reconciled to a consistent state. Eventual consistency is a tradeoff to achieve high availability, and it involves some read and write latencies.
+
+Lightweight transactions with linearizable consistency
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Data must be read and written in a sequential order. The Paxos consensus protocol is used to implement lightweight transactions. The Paxos protocol implements lightweight transactions that are able to handle concurrent operations using linearizable consistency. Linearizable consistency is sequential consistency with real-time constraints, and it ensures transaction isolation with compare-and-set (CAS) transactions. With CAS, replica data is compared and data that is found to be out of date is set t [...]
+
+Batched Writes
+^^^^^^^^^^^^^^^
+
+The guarantee for batched writes across multiple tables is that they will eventually succeed, or none will. Batch data is first written to the batchlog system, and when the batch data has been successfully stored in the cluster, the batchlog data is removed. The batch is replicated to another node to ensure that the full batch completes in the event the coordinator node fails.
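+
+A sketch of a logged batch spanning two hypothetical tables:
+
+.. code-block:: cql
+
+    -- The batch is recorded in the batchlog first, so either both
+    -- mutations are eventually applied or neither is.
+    BEGIN BATCH
+      INSERT INTO user_by_email (email, userid) VALUES ('alice@example.com', 1);
+      INSERT INTO user_by_id (userid, email) VALUES (1, 'alice@example.com');
+    APPLY BATCH;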
+
+Secondary Indexes
+^^^^^^^^^^^^^^^^^^
+A secondary index is an index on a column that allows a table to be queried by that column, which is otherwise not queryable. Secondary indexes, when built, are guaranteed to be consistent with their local replicas.
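+
+For example (the table and column names are hypothetical), an index makes a
+non-primary-key column queryable:
+
+.. code-block:: cql
+
+    CREATE TABLE users (id int PRIMARY KEY, country text);
+    CREATE INDEX users_by_country ON users (country);
+    -- Without the index, this query would not be directly supported.
+    SELECT * FROM users WHERE country = 'FR';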
diff --git a/src/doc/4.0-beta4/_sources/architecture/index.rst.txt b/src/doc/4.0-beta4/_sources/architecture/index.rst.txt
new file mode 100644
index 0000000..58eda13
--- /dev/null
+++ b/src/doc/4.0-beta4/_sources/architecture/index.rst.txt
@@ -0,0 +1,29 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+..
+..     http://www.apache.org/licenses/LICENSE-2.0
+..
+.. Unless required by applicable law or agreed to in writing, software
+.. distributed under the License is distributed on an "AS IS" BASIS,
+.. WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+.. See the License for the specific language governing permissions and
+.. limitations under the License.
+
+Architecture
+============
+
+This section describes the general architecture of Apache Cassandra.
+
+.. toctree::
+   :maxdepth: 2
+
+   overview
+   dynamo
+   storage_engine
+   guarantees
+
diff --git a/src/doc/4.0-beta4/_sources/architecture/overview.rst.txt b/src/doc/4.0-beta4/_sources/architecture/overview.rst.txt
new file mode 100644
index 0000000..e5fcbe3
--- /dev/null
+++ b/src/doc/4.0-beta4/_sources/architecture/overview.rst.txt
@@ -0,0 +1,114 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+..
+..     http://www.apache.org/licenses/LICENSE-2.0
+..
+.. Unless required by applicable law or agreed to in writing, software
+.. distributed under the License is distributed on an "AS IS" BASIS,
+.. WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+.. See the License for the specific language governing permissions and
+.. limitations under the License.
+
+.. _overview:
+
+Overview
+========
+
+Apache Cassandra is an open source, distributed, NoSQL database. It presents
+a partitioned wide column storage model with eventually consistent semantics.
+
+Apache Cassandra was initially designed at `Facebook
+<https://www.cs.cornell.edu/projects/ladis2009/papers/lakshman-ladis2009.pdf>`_
+using a staged event-driven architecture (`SEDA
+<http://www.sosp.org/2001/papers/welsh.pdf>`_) to implement a combination of
+Amazon’s `Dynamo
+<http://courses.cse.tamu.edu/caverlee/csce438/readings/dynamo-paper.pdf>`_
+distributed storage and replication techniques combined with Google's `Bigtable
+<https://static.googleusercontent.com/media/research.google.com/en//archive/bigtable-osdi06.pdf>`_
+data and storage engine model. Dynamo and Bigtable were both developed to meet
+emerging requirements for scalable, reliable and highly available storage
+systems, but each had areas that could be improved.
+
+Cassandra was designed as a best in class combination of both systems to meet
+emerging large scale, both in data footprint and query volume, storage
+requirements. As applications began to require full global replication and
+always available low-latency reads and writes, it became imperative to design a
+new kind of database model as the relational database systems of the time
+struggled to meet the new requirements of global scale applications.
+
+Systems like Cassandra are designed for these challenges and seek the
+following design objectives:
+
+- Full multi-master database replication
+- Global availability at low latency
+- Scaling out on commodity hardware
+- Linear throughput increase with each additional processor
+- Online load balancing and cluster growth
+- Partitioned key-oriented queries
+- Flexible schema
+
+Features
+--------
+
+Cassandra provides the Cassandra Query Language (CQL), an SQL-like language,
+to create and update database schema and access data. CQL allows users to
+organize data within a cluster of Cassandra nodes using:
+
+- **Keyspace**: defines how a dataset is replicated, for example in which
+  datacenters and how many copies. Keyspaces contain tables.
+- **Table**: defines the typed schema for a collection of partitions. Cassandra
+  tables allow flexible addition of new columns with zero downtime.
+  Tables contain partitions, which contain rows, which contain columns.
+- **Partition**: defines the mandatory part of the primary key all rows in
+  Cassandra must have. All performant queries supply the partition key in
+  the query.
+- **Row**: contains a collection of columns identified by a unique primary key
+  made up of the partition key and optionally additional clustering keys.
+- **Column**: A single datum with a type which belongs to a row.
+
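+A minimal sketch tying these concepts together (all names are illustrative):
+
+.. code-block:: cql
+
+    CREATE KEYSPACE shopping
+      WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 3};
+
+    -- Rows are grouped into partitions by user_id and ordered within a
+    -- partition by the clustering key item_id.
+    CREATE TABLE shopping.carts (
+      user_id uuid,
+      item_id uuid,
+      quantity int,
+      PRIMARY KEY ((user_id), item_id)
+    );
+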
+CQL supports numerous advanced features over a partitioned dataset such as:
+
+- Single partition lightweight transactions with atomic compare and set
+  semantics.
+- User-defined types, functions and aggregates
+- Collection types including sets, maps, and lists.
+- Local secondary indices
+- (Experimental) materialized views
+
+Cassandra explicitly chooses not to implement operations that require cross
+partition coordination as they are typically slow and hard to provide highly
+available global semantics. For example Cassandra does not support:
+
+- Cross partition transactions
+- Distributed joins
+- Foreign keys or referential integrity.
+
+Operating
+---------
+
+Apache Cassandra configuration settings are configured in the ``cassandra.yaml``
+file that can be edited by hand or with the aid of configuration management tools.
+Some settings can be manipulated live using an online interface, but others
+require a restart of the database to take effect.
+
+Cassandra provides tools for managing a cluster. The ``nodetool`` command
+interacts with Cassandra's live control interface, allowing runtime manipulation
+of many settings from ``cassandra.yaml``. The ``auditlogviewer`` is used
+to view the audit logs. The  ``fqltool`` is used to view, replay and compare
+full query logs.  The ``auditlogviewer`` and ``fqltool`` are new tools in
+Apache Cassandra 4.0.
+
+In addition, Cassandra supports out of the box atomic snapshot functionality,
+which presents a point in time snapshot of Cassandra's data for easy
+integration with many backup tools. Cassandra also supports incremental backups
+where data can be backed up as it is written.
+
+Apache Cassandra 4.0 has added several new features including virtual tables,
+transient replication, audit logging, full query logging, and support for Java
+11. Two of these features are experimental: transient replication and Java 11
+support.
diff --git a/src/doc/4.0-beta4/_sources/architecture/storage_engine.rst.txt b/src/doc/4.0-beta4/_sources/architecture/storage_engine.rst.txt
new file mode 100644
index 0000000..23b738d
--- /dev/null
+++ b/src/doc/4.0-beta4/_sources/architecture/storage_engine.rst.txt
@@ -0,0 +1,208 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+..
+..     http://www.apache.org/licenses/LICENSE-2.0
+..
+.. Unless required by applicable law or agreed to in writing, software
+.. distributed under the License is distributed on an "AS IS" BASIS,
+.. WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+.. See the License for the specific language governing permissions and
+.. limitations under the License.
+
+Storage Engine
+--------------
+
+.. _commit-log:
+
+CommitLog
+^^^^^^^^^
+
+Commitlogs are an append only log of all mutations local to a Cassandra node. Any data written to Cassandra will first be written to a commit log before being written to a memtable. This provides durability in the case of unexpected shutdown. On startup, any mutations in the commit log will be applied to memtables.
+
+All mutations are write-optimized by being stored in commitlog segments, reducing the number of seeks needed to write to disk. Commitlog segments are limited by the ``commitlog_segment_size_in_mb`` option; once the size is reached, a new commitlog segment is created. Commitlog segments can be archived, deleted, or recycled once all their data has been flushed to SSTables.  Commitlog segments are truncated when Cassandra has written data older than a certain point to the SSTables. Running "nodetool dr [...]
+
+- ``commitlog_segment_size_in_mb``: The default size is 32, which is almost always fine, but if you are archiving commitlog segments (see commitlog_archiving.properties), then you probably want a finer granularity of archiving; 8 or 16 MB is reasonable. Max mutation size is also configurable via the ``max_mutation_size_in_kb`` setting in ``cassandra.yaml``. The default is half of ``commitlog_segment_size_in_mb * 1024``.
+
+**NOTE:** If ``max_mutation_size_in_kb`` is set explicitly, then ``commitlog_segment_size_in_mb`` must be set to at least twice the size of ``max_mutation_size_in_kb / 1024``.
+
+*Default Value:* 32
+
+- ``commitlog_sync``: may be either ``periodic`` or ``batch``.
+
+  - ``batch``: In batch mode, Cassandra won’t ack writes until the commit log has been fsynced to disk. It will wait "commitlog_sync_batch_window_in_ms" milliseconds between fsyncs. This window should be kept short because the writer threads will be unable to do extra work while waiting. You may need to increase concurrent_writes for the same reason.
+
+    - ``commitlog_sync_batch_window_in_ms``: Time to wait between "batch" fsyncs
+    *Default Value:* 2
+
+  - ``periodic``: In periodic mode, writes are immediately ack'ed, and the CommitLog is simply synced every "commitlog_sync_period_in_ms" milliseconds.
+
+    - ``commitlog_sync_period_in_ms``: Time to wait between "periodic" fsyncs
+    *Default Value:* 10000
+
+*Default Value:* periodic
+
+**NOTE:** In the event of an unexpected shutdown, Cassandra can lose up to the sync period, or more if the sync is delayed. If using "batch" mode, it is recommended to store commitlogs in a separate, dedicated device.
+
+
+- ``commitlog_directory``: This option is commented out by default. When running on magnetic HDD, this should be a separate spindle from the data directories. If not set, the default directory is $CASSANDRA_HOME/data/commitlog.
+
+*Default Value:* /var/lib/cassandra/commitlog
+
+- ``commitlog_compression``: Compression to apply to the commitlog. If omitted, the commit log will be written uncompressed. LZ4, Snappy, Deflate and Zstd compressors are supported.
+
+*Default Value:* (complex option)::
+
+    #   - class_name: LZ4Compressor
+    #     parameters:
+    #         -
+
+- ``commitlog_total_space_in_mb``: Total space to use for commit logs on disk.
+
+If space gets above this value, Cassandra will flush every dirty column family in the oldest segment and remove it. So a small total commitlog space will tend to cause more flush activity on less-active column families.
+
+The default value is the smaller of 8192 and 1/4 of the total space of the commitlog volume.
+
+*Default Value:* 8192
+
+.. _memtables:
+
+Memtables
+^^^^^^^^^
+
+Memtables are in-memory structures where Cassandra buffers writes.  In general, there is one active memtable per table.
+Eventually, memtables are flushed onto disk and become immutable `SSTables`_.  This can be triggered in several
+ways:
+
+- The memory usage of the memtables exceeds the configured threshold  (see ``memtable_cleanup_threshold``)
+- The :ref:`commit-log` approaches its maximum size, and forces memtable flushes in order to allow commitlog segments to
+  be freed
+
+Memtables may be stored entirely on-heap or partially off-heap, depending on ``memtable_allocation_type``.
+
+SSTables
+^^^^^^^^
+
+SSTables are the immutable data files that Cassandra uses for persisting data on disk.
+
+As SSTables are flushed to disk from :ref:`memtables` or are streamed from other nodes, Cassandra triggers compactions
+which combine multiple SSTables into one.  Once the new SSTable has been written, the old SSTables can be removed.
+
+Each SSTable is comprised of multiple components stored in separate files:
+
+``Data.db``
+  The actual data, i.e. the contents of rows.
+
+``Index.db``
+  An index from partition keys to positions in the ``Data.db`` file.  For wide partitions, this may also include an
+  index to rows within a partition.
+
+``Summary.db``
+  A sampling of (by default) every 128th entry in the ``Index.db`` file.
+
+``Filter.db``
+  A Bloom Filter of the partition keys in the SSTable.
+
+``CompressionInfo.db``
+  Metadata about the offsets and lengths of compression chunks in the ``Data.db`` file.
+
+``Statistics.db``
+  Stores metadata about the SSTable, including information about timestamps, tombstones, clustering keys, compaction,
+  repair, compression, TTLs, and more.
+
+``Digest.crc32``
+  A CRC-32 digest of the ``Data.db`` file.
+
+``TOC.txt``
+  A plain text list of the component files for the SSTable.
+
+Within the ``Data.db`` file, rows are organized by partition.  These partitions are sorted in token order (i.e. by a
+hash of the partition key when the default partitioner, ``Murmur3Partitioner``, is used).  Within a partition, rows are
+stored in the order of their clustering keys.
+
+SSTables can be optionally compressed using block-based compression.
+
+SSTable Versions
+^^^^^^^^^^^^^^^^
+
+This section was created using the following
+`gist <https://gist.github.com/shyamsalimkumar/49a61e5bc6f403d20c55>`_
+which utilized this original
+`source <http://www.bajb.net/2013/03/cassandra-sstable-format-version-numbers/>`_.
+
+The version numbers, to date, are:
+
+Version 0
+~~~~~~~~~
+
+* b (0.7.0): added version to sstable filenames
+* c (0.7.0): bloom filter component computes hashes over raw key bytes instead of strings
+* d (0.7.0): row size in data component becomes a long instead of int
+* e (0.7.0): stores undecorated keys in data and index components
+* f (0.7.0): switched bloom filter implementations in data component
+* g (0.8): tracks flushed-at context in metadata component
+
+Version 1
+~~~~~~~~~
+
+* h (1.0): tracks max client timestamp in metadata component
+* hb (1.0.3): records compression ratio in metadata component
+* hc (1.0.4): records partitioner in metadata component
+* hd (1.0.10): includes row tombstones in maxtimestamp
+* he (1.1.3): includes ancestors generation in metadata component
+* hf (1.1.6): marker that replay position corresponds to 1.1.5+ millis-based id (see CASSANDRA-4782)
+* ia (1.2.0):
+
+  * column indexes are promoted to the index file
+  * records estimated histogram of deletion times in tombstones
+  * bloom filter (keys and columns) upgraded to Murmur3
+* ib (1.2.1): tracks min client timestamp in metadata component
+* ic (1.2.5): omits per-row bloom filter of column names
+
+Version 2
+~~~~~~~~~
+
+* ja (2.0.0):
+
+  * super columns are serialized as composites (note that there is no real format change, this is mostly a marker to know if we should expect super columns or not. We do need a major version bump however, because we should not allow streaming of super columns into this new format)
+  * tracks max local deletiontime in sstable metadata
+  * records bloom_filter_fp_chance in metadata component
+  * remove data size and column count from data file (CASSANDRA-4180)
+  * tracks max/min column values (according to comparator)
+* jb (2.0.1):
+
+  * switch from crc32 to adler32 for compression checksums
+  * checksum the compressed data
+* ka (2.1.0):
+
+  * new Statistics.db file format
+  * index summaries can be downsampled and the sampling level is persisted
+  * switch uncompressed checksums to adler32
+  * tracks presence of legacy (local and remote) counter shards
+* la (2.2.0): new file name format
+* lb (2.2.7): commit log lower bound included
+
+Version 3
+~~~~~~~~~
+
+* ma (3.0.0):
+
+  * swap bf hash order
+  * store rows natively
+* mb (3.0.7, 3.7): commit log lower bound included
+* mc (3.0.8, 3.9): commit log intervals included
+
+Example Code
+~~~~~~~~~~~~
+
+The following example is useful for finding all sstables that do not match the "ib" SSTable version:
+
+.. code-block:: bash
+
+    find /var/lib/cassandra/data/ -type f | grep -v -- -ib- | grep -v "/snapshots"
diff --git a/src/doc/4.0-beta4/_sources/bugs.rst.txt b/src/doc/4.0-beta4/_sources/bugs.rst.txt
new file mode 100644
index 0000000..32d676f
--- /dev/null
+++ b/src/doc/4.0-beta4/_sources/bugs.rst.txt
@@ -0,0 +1,30 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+..
+..     http://www.apache.org/licenses/LICENSE-2.0
+..
+.. Unless required by applicable law or agreed to in writing, software
+.. distributed under the License is distributed on an "AS IS" BASIS,
+.. WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+.. See the License for the specific language governing permissions and
+.. limitations under the License.
+
+Reporting Bugs
+==============
+
+If you encounter a problem with Cassandra, the first places to ask for help are the :ref:`user mailing list
+<mailing-lists>` and the ``cassandra`` :ref:`Slack room <slack>`.
+
+If, after having asked for help, you suspect that you have found a bug in Cassandra, you should report it by opening a
+ticket through the `Apache Cassandra JIRA <https://issues.apache.org/jira/browse/CASSANDRA>`__. Please provide as much
+details as you can on your problem, and don't forget to indicate which version of Cassandra you are running and on which
+environment.
+
+Further details on how to contribute can be found at our :doc:`development/index` section. Please note that the source of
+this documentation is part of the Cassandra git repository and hence contributions to the documentation should follow the
+same path.
diff --git a/src/doc/4.0-beta4/_sources/configuration/cass_cl_archive_file.rst.txt b/src/doc/4.0-beta4/_sources/configuration/cass_cl_archive_file.rst.txt
new file mode 100644
index 0000000..fc14440
--- /dev/null
+++ b/src/doc/4.0-beta4/_sources/configuration/cass_cl_archive_file.rst.txt
@@ -0,0 +1,46 @@
+.. _cassandra-cl-archive:
+
+commitlog-archiving.properties file
+===================================
+
+The ``commitlog-archiving.properties`` configuration file can optionally set commands that are executed when archiving or restoring a commitlog segment. 
+
+===========================
+Options
+===========================
+
+``archive_command=<command>``
+-----------------------------
+One command can be inserted with %path and %name arguments. %path is the fully qualified path of the commitlog segment to archive. %name is the filename of the commitlog segment. STDOUT and STDIN or multiple commands cannot be executed. If multiple commands are required, add a pointer to a script in this option.
+
+**Example:** archive_command=/bin/ln %path /backup/%name
+
+**Default value:** blank
+
+``restore_command=<command>``
+-----------------------------
+One command can be inserted with %from and %to arguments. %from is the fully qualified path to an archived commitlog segment using the specified restore directories. %to defines the live commitlog directory location.
+
+**Example:** restore_command=/bin/cp -f %from %to
+
+**Default value:** blank
+
+``restore_directories=<directory>``
+-----------------------------------
+Defines the directory to scan for files to recover.
+
+**Default value:** blank
+
+``restore_point_in_time=<timestamp>``
+-------------------------------------
+Restore mutations created up to and including this timestamp in GMT in the format ``yyyy:MM:dd HH:mm:ss``.  Recovery will continue through the segment when the first client-supplied timestamp greater than this time is encountered, but only mutations less than or equal to this timestamp will be applied.
+
+**Example:** 2020:04:30 20:43:12
+
+**Default value:** blank
+
+``precision=<timestamp_precision>``
+-----------------------------------
+Precision of the timestamp used in the inserts. The choice is generally MILLISECONDS or MICROSECONDS.
+
+**Default value:** MICROSECONDS
diff --git a/src/doc/4.0-beta4/_sources/configuration/cass_env_sh_file.rst.txt b/src/doc/4.0-beta4/_sources/configuration/cass_env_sh_file.rst.txt
new file mode 100644
index 0000000..457f39f
--- /dev/null
+++ b/src/doc/4.0-beta4/_sources/configuration/cass_env_sh_file.rst.txt
@@ -0,0 +1,132 @@
+.. _cassandra-envsh:
+
+cassandra-env.sh file 
+=====================
+
+The ``cassandra-env.sh`` bash script file can be used to pass additional options to the Java virtual machine (JVM), such as maximum and minimum heap size, rather than setting them in the environment. If the JVM settings are static and do not need to be computed from the node characteristics, the :ref:`cassandra-jvm-options` files should be used instead. Commonly computed values include the heap sizes, which are derived from the system's resources.
+
+For example, add ``JVM_OPTS="$JVM_OPTS -Dcassandra.load_ring_state=false"`` to the ``cassandra-env.sh`` file
+and run the command-line ``cassandra`` to start. The option is set from the ``cassandra-env.sh`` file, and is equivalent to starting Cassandra with the command-line option ``cassandra -Dcassandra.load_ring_state=false``.
+
+The ``-D`` option specifies the start-up parameters in both the command line and ``cassandra-env.sh`` file. The following options are available:
+
+``cassandra.auto_bootstrap=false``
+----------------------------------
+Facilitates setting auto_bootstrap to false on initial set-up of the cluster. The next time you start the cluster, you do not need to change the ``cassandra.yaml`` file on each node to revert to true, the default value.
+
+``cassandra.available_processors=<number_of_processors>``
+---------------------------------------------------------
+In a multi-instance deployment, each Cassandra instance will independently assume that all CPU processors are available to it. This setting allows you to specify a smaller set of processors.
+
+``cassandra.boot_without_jna=true``
+-----------------------------------
+If JNA fails to initialize, Cassandra fails to boot. Use this command to boot Cassandra without JNA.
+
+``cassandra.config=<directory>``
+--------------------------------
+The directory location of the ``cassandra.yaml file``. The default location depends on the type of installation.
+
+``cassandra.ignore_dynamic_snitch_severity=true|false`` 
+-------------------------------------------------------
+Setting this property to true causes the dynamic snitch to ignore the severity indicator from gossip when scoring nodes. See failure detection and recovery, and dynamic snitching, for more information.
+
+**Default:** false
+
+``cassandra.initial_token=<token>``
+-----------------------------------
+Use when virtual nodes (vnodes) are not used. Sets the initial partitioner token for a node the first time the node is started. 
+Note: Vnodes are highly recommended as they automatically select tokens.
+
+**Default:** disabled
+
+``cassandra.join_ring=true|false``
+----------------------------------
+Set to false to start Cassandra on a node but not have the node join the cluster. 
+You can use ``nodetool join`` and a JMX call to join the ring afterwards.
+
+**Default:** true
+
+``cassandra.load_ring_state=true|false``
+----------------------------------------
+Set to false to clear all gossip state for the node on restart. 
+
+**Default:** true
+
+``cassandra.metricsReporterConfigFile=<filename>``
+--------------------------------------------------
+Enable a pluggable metrics reporter. See pluggable metrics reporting for more information.
+
+``cassandra.partitioner=<partitioner>``
+---------------------------------------
+Set the partitioner. 
+
+**Default:** org.apache.cassandra.dht.Murmur3Partitioner
+
+``cassandra.prepared_statements_cache_size_in_bytes=<cache_size>``
+------------------------------------------------------------------
+Set the cache size for prepared statements.
+
+``cassandra.replace_address=<listen_address of dead node>|<broadcast_address of dead node>``
+--------------------------------------------------------------------------------------------
+To replace a node that has died, restart a new node in its place specifying the ``listen_address`` or ``broadcast_address`` that the new node is assuming. The new node must not have any data in its data directory; that is, the same state as before bootstrapping.
+Note: The ``broadcast_address`` defaults to the ``listen_address`` except when using the ``Ec2MultiRegionSnitch``.
+
+``cassandra.replayList=<table>``
+--------------------------------
+Allow restoring specific tables from an archived commit log.
+
+``cassandra.ring_delay_ms=<number_of_ms>``
+------------------------------------------
+Defines the amount of time a node waits to hear from other nodes before formally joining the ring. 
+
+**Default:** 1000ms
+
+``cassandra.native_transport_port=<port>``
+------------------------------------------
+Set the port on which the CQL native transport listens for clients. 
+
+**Default:** 9042
+
+``cassandra.rpc_port=<port>``
+-----------------------------
+Set the port for the Thrift RPC service, which is used for client connections. 
+
+**Default:** 9160
+
+``cassandra.storage_port=<port>``
+---------------------------------
+Set the port for inter-node communication. 
+
+**Default:** 7000
+
+``cassandra.ssl_storage_port=<port>``
+-------------------------------------
+Set the SSL port for encrypted communication. 
+
+**Default:** 7001
+
+``cassandra.start_native_transport=true|false``
+-----------------------------------------------
+Enable or disable the native transport server. See ``start_native_transport`` in ``cassandra.yaml``. 
+
+**Default:** true
+
+``cassandra.start_rpc=true|false``
+----------------------------------
+Enable or disable the Thrift RPC server. 
+
+**Default:** true
+
+``cassandra.triggers_dir=<directory>``
+--------------------------------------
+Set the default location for the trigger JARs. 
+
+**Default:** conf/triggers
+
+``cassandra.write_survey=true``
+-------------------------------
+Used for testing new compaction and compression strategies. It allows you to experiment with different strategies and benchmark write performance differences without affecting the production workload.
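+
+For example, to bring a node up in write survey mode:
+
+.. code-block:: bash
+
+   bin/cassandra -Dcassandra.write_survey=true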
+
+``consistent.rangemovement=true|false``
+---------------------------------------
+When set to true, Cassandra performs bootstrap safely without violating consistency; setting it to false disables this check.
diff --git a/src/doc/4.0-beta4/_sources/configuration/cass_jvm_options_file.rst.txt b/src/doc/4.0-beta4/_sources/configuration/cass_jvm_options_file.rst.txt
new file mode 100644
index 0000000..f5a6326
--- /dev/null
+++ b/src/doc/4.0-beta4/_sources/configuration/cass_jvm_options_file.rst.txt
@@ -0,0 +1,10 @@
+.. _cassandra-jvm-options:
+
+jvm-* files 
+===========
+
+Several files for JVM configuration are included in Cassandra. The ``jvm-server.options`` file, and the corresponding ``jvm8-server.options`` and ``jvm11-server.options`` files, are the main files for settings that affect the operation of the Cassandra JVM on cluster nodes. The files include startup parameters, general JVM settings such as garbage collection, and heap settings. The ``jvm-clients.options`` and corresponding ``jvm8-clients.options`` and ``jvm11-clients.options`` files can be used to configure JVM settings for clients like ``nodetool`` and the ``sstable`` tools.
+
+See each file for examples of settings.
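+
+For illustration, the kind of static settings these files hold looks like the following (the values are illustrative, not recommendations):
+
+.. code-block:: bash
+
+   # Set min and max heap to the same value to avoid resize pauses
+   -Xms4G
+   -Xmx4G
+
+   # Produce a heap dump when the JVM runs out of memory
+   -XX:+HeapDumpOnOutOfMemoryError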
+
+.. note:: The ``jvm-*`` files replace the :ref:`cassandra-envsh` file used in Cassandra versions prior to Cassandra 3.0. The ``cassandra-env.sh`` bash script file is still useful if JVM settings must be dynamically calculated based on system settings. The ``jvm-*`` files only store static JVM settings.
diff --git a/src/doc/4.0-beta4/_sources/configuration/cass_logback_xml_file.rst.txt b/src/doc/4.0-beta4/_sources/configuration/cass_logback_xml_file.rst.txt
new file mode 100644
index 0000000..3de1c77
--- /dev/null
+++ b/src/doc/4.0-beta4/_sources/configuration/cass_logback_xml_file.rst.txt
@@ -0,0 +1,157 @@
+.. _cassandra-logback-xml:
+
+logback.xml file 
+================================
+
+The ``logback.xml`` configuration file can optionally set logging levels for the logs written to ``system.log`` and ``debug.log``. The logging levels can also be set using ``nodetool setlogginglevel``.
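+
+For example, to change a logging level at runtime and then inspect the current configuration:
+
+.. code-block:: bash
+
+   nodetool setlogginglevel org.apache.cassandra DEBUG
+   nodetool getlogginglevels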
+
+===========================
+Options
+===========================
+
+``appender name="<appender_choice>"...</appender>``
+------------------------------------------------------------
+
+Specify log type and settings. Possible appender names are: ``SYSTEMLOG``, ``DEBUGLOG``, ``ASYNCDEBUGLOG``, and ``STDOUT``. ``SYSTEMLOG`` ensures that WARN and ERROR messages are written synchronously to the specified file. ``DEBUGLOG`` and ``ASYNCDEBUGLOG`` write DEBUG messages to the specified file, synchronously and asynchronously, respectively. ``STDOUT`` writes all messages to the console in a human-readable format.
+
+**Example:** <appender name="SYSTEMLOG" class="ch.qos.logback.core.rolling.RollingFileAppender">
+
+``<file> <filename> </file>``
+----------------------------------------
+
+Specify the filename for a log.
+
+**Example:** <file>${cassandra.logdir}/system.log</file>
+
+``<level> <log_level> </level>``
+----------------------------------------
+
+Specify the level for a log. Part of the filter. Levels are: ``ALL``, ``TRACE``, ``DEBUG``, ``INFO``, ``WARN``, ``ERROR``, ``OFF``. ``TRACE`` creates the most verbose log, ``ERROR`` the least.
+
+.. note::
+   Increasing logging levels can generate heavy logging output on a moderately trafficked cluster.
+   You can use the ``nodetool getlogginglevels`` command to see the current logging configuration.
+
+**Default:** INFO
+
+**Example:** <level>INFO</level>
+
+``<rollingPolicy class="<rolling_policy_choice>" <fileNamePattern><pattern_info></fileNamePattern> ... </rollingPolicy>``
+----------------------------------------------------------------------------------------------------------------------------------
+
+Specify the policy for rolling logs over to an archive.
+
+**Example:** <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
+
+``<fileNamePattern> <pattern_info> </fileNamePattern>``
+------------------------------------------------------------
+
+Specify the pattern information for rolling over the log to archive. Part of the rolling policy.
+
+**Example:** <fileNamePattern>${cassandra.logdir}/system.log.%d{yyyy-MM-dd}.%i.zip</fileNamePattern>
+
+``<maxFileSize> <size> </maxFileSize>``
+---------------------------------------------
+
+Specify the maximum file size to trigger rolling a log. Part of the rolling policy.
+
+**Example:** <maxFileSize>50MB</maxFileSize>
+
+``<maxHistory> <number_of_days> </maxHistory>``
+-------------------------------------------------------
+
+Specify the maximum history in days to trigger rolling a log. Part of the rolling policy.
+
+**Example:** <maxHistory>7</maxHistory>
+
+``<encoder> <pattern>...</pattern> </encoder>``
+-------------------------------------------------------
+
+Specify the format of the log message. Part of the appender.
+
+**Example:** <encoder> <pattern>%-5level [%thread] %date{ISO8601} %F:%L - %msg%n</pattern> </encoder>
+
+Contents of default ``logback.xml``
+----------------------------------------
+
+.. code-block:: XML
+
+	<configuration scan="true" scanPeriod="60 seconds">
+	  <jmxConfigurator />
+
+	  <!-- No shutdown hook; we run it ourselves in StorageService after shutdown -->
+
+	  <!-- SYSTEMLOG rolling file appender to system.log (INFO level) -->
+
+	  <appender name="SYSTEMLOG" class="ch.qos.logback.core.rolling.RollingFileAppender">
+	    <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
+	      <level>INFO</level>
+	    </filter>
+	    <file>${cassandra.logdir}/system.log</file>
+	    <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
+	      <!-- rollover daily -->
+	      <fileNamePattern>${cassandra.logdir}/system.log.%d{yyyy-MM-dd}.%i.zip</fileNamePattern>
+	      <!-- each file should be at most 50MB, keep 7 days worth of history, but at most 5GB -->
+	      <maxFileSize>50MB</maxFileSize>
+	      <maxHistory>7</maxHistory>
+	      <totalSizeCap>5GB</totalSizeCap>
+	    </rollingPolicy>
+	    <encoder>
+	      <pattern>%-5level [%thread] %date{ISO8601} %F:%L - %msg%n</pattern>
+	    </encoder>
+	  </appender>
+
+	  <!-- DEBUGLOG rolling file appender to debug.log (all levels) -->
+
+	  <appender name="DEBUGLOG" class="ch.qos.logback.core.rolling.RollingFileAppender">
+	    <file>${cassandra.logdir}/debug.log</file>
+	    <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
+	      <!-- rollover daily -->
+	      <fileNamePattern>${cassandra.logdir}/debug.log.%d{yyyy-MM-dd}.%i.zip</fileNamePattern>
+	      <!-- each file should be at most 50MB, keep 7 days worth of history, but at most 5GB -->
+	      <maxFileSize>50MB</maxFileSize>
+	      <maxHistory>7</maxHistory>
+	      <totalSizeCap>5GB</totalSizeCap>
+	    </rollingPolicy>
+	    <encoder>
+	      <pattern>%-5level [%thread] %date{ISO8601} %F:%L - %msg%n</pattern>
+	    </encoder>
+	  </appender>
+
+	  <!-- ASYNCLOG asynchronous appender to debug.log (all levels) -->
+
+	  <appender name="ASYNCDEBUGLOG" class="ch.qos.logback.classic.AsyncAppender">
+	    <queueSize>1024</queueSize>
+	    <discardingThreshold>0</discardingThreshold>
+	    <includeCallerData>true</includeCallerData>
+	    <appender-ref ref="DEBUGLOG" />
+	  </appender>
+
+	  <!-- STDOUT console appender to stdout (INFO level) -->
+
+	  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
+	    <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
+	      <level>INFO</level>
+	    </filter>
+	    <encoder>
+	      <pattern>%-5level [%thread] %date{ISO8601} %F:%L - %msg%n</pattern>
+	    </encoder>
+	  </appender>
+
+	  <!-- Uncomment below and corresponding appender-ref to activate logback metrics
+	  <appender name="LogbackMetrics" class="com.codahale.metrics.logback.InstrumentedAppender" />
+	   -->
+
+	  <root level="INFO">
+	    <appender-ref ref="SYSTEMLOG" />
+	    <appender-ref ref="STDOUT" />
+	    <appender-ref ref="ASYNCDEBUGLOG" /> <!-- Comment this line to disable debug.log -->
+	    <!--
+	    <appender-ref ref="LogbackMetrics" />
+	    -->
+	  </root>
+
+	  <logger name="org.apache.cassandra" level="DEBUG"/>
+	  <logger name="com.thinkaurelius.thrift" level="ERROR"/>
+	</configuration>
diff --git a/src/doc/4.0-beta4/_sources/configuration/cass_rackdc_file.rst.txt b/src/doc/4.0-beta4/_sources/configuration/cass_rackdc_file.rst.txt
new file mode 100644
index 0000000..9921092
--- /dev/null
+++ b/src/doc/4.0-beta4/_sources/configuration/cass_rackdc_file.rst.txt
@@ -0,0 +1,67 @@
+.. _cassandra-rackdc:
+
+cassandra-rackdc.properties file 
+================================
+
+Several :term:`snitch` options use the ``cassandra-rackdc.properties`` configuration file to determine which :term:`datacenters` and racks cluster nodes belong to. Information about the
+network topology allows requests to be routed efficiently and replicas to be distributed evenly. The following snitches can be configured here:
+
+- GossipingPropertyFileSnitch
+- AWS EC2 single-region snitch
+- AWS EC2 multi-region snitch
+
+The GossipingPropertyFileSnitch is recommended for production. This snitch uses the datacenter and rack information configured in a local node's ``cassandra-rackdc.properties``
+file and propagates the information to other nodes using :term:`gossip`. With this snitch, the ``dc`` and ``rack`` settings in this properties file are enabled by default.
+
+The AWS EC2 snitches are configured for clusters in AWS. This snitch uses the ``cassandra-rackdc.properties`` options to designate one of two AWS EC2 datacenter and rack naming conventions:
+
+- legacy: Datacenter name is the portion of the availability zone name preceding the last "-" when the zone ends in -1, and includes the number when the zone does not end in -1. Rack name is the portion of the availability zone name following the last "-".
+
+          Examples: us-west-1a => dc: us-west, rack: 1a; us-west-2b => dc: us-west-2, rack: 2b;
+
+- standard: Datacenter name is the standard AWS region name, including the number. Rack name is the region plus the availability zone letter.
+
+          Examples: us-west-1a => dc: us-west-1, rack: us-west-1a; us-west-2b => dc: us-west-2, rack: us-west-2b;
+
+Either snitch can be set to use the local or internal IP address when communication is not across different datacenters.
+
+===========================
+GossipingPropertyFileSnitch
+===========================
+
+``dc``
+------
+Name of the datacenter. The value is case-sensitive.
+
+**Default value:** DC1
+
+``rack``
+--------
+Rack designation. The value is case-sensitive.
+
+**Default value:** RAC1 
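+
+For example, the snitch-related portion of ``cassandra-rackdc.properties`` for a node in datacenter DC1, rack RAC1 (the names are illustrative):
+
+.. code-block:: bash
+
+   dc=DC1
+   rack=RAC1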
+
+===========================
+AWS EC2 snitch
+===========================
+
+``ec2_naming_scheme``
+---------------------
+Datacenter and rack naming convention. Options are ``legacy`` or ``standard`` (default). **This option is commented out by default.** 
+
+**Default value:** standard
+
+
+.. NOTE::
+          You must use the ``legacy`` value if you are upgrading a pre-4.0 cluster.
+
+===========================
+Either snitch
+===========================
+
+``prefer_local``
+----------------
+Option to use the local or internal IP address when communication is not across different datacenters. **This option is commented out by default.**
+
+**Default value:** true
+
diff --git a/src/doc/4.0-beta4/_sources/configuration/cass_topo_file.rst.txt b/src/doc/4.0-beta4/_sources/configuration/cass_topo_file.rst.txt
new file mode 100644
index 0000000..264addc
--- /dev/null
+++ b/src/doc/4.0-beta4/_sources/configuration/cass_topo_file.rst.txt
@@ -0,0 +1,48 @@
+.. _cassandra-topology:
+
+cassandra-topology.properties file
+====================================
+
+The ``PropertyFileSnitch`` :term:`snitch` option uses the ``cassandra-topology.properties`` configuration file to determine which :term:`datacenters` and racks cluster nodes belong to. If other snitches are used, the
+:ref:`cassandra-rackdc` file must be used. The snitch determines network topology (proximity by rack and datacenter) so that requests are routed efficiently and replicas are distributed evenly.
+
+Include every node in the cluster in the properties file, defining your datacenter names as in the keyspace definition. The datacenter and rack names are case-sensitive.
+
+The ``cassandra-topology.properties`` file must be copied identically to every node in the cluster.
+
+
+===========================
+Example
+===========================
+This example uses three datacenters:
+
+.. code-block:: bash
+
+   # datacenter One
+
+   175.56.12.105=DC1:RAC1
+   175.50.13.200=DC1:RAC1
+   175.54.35.197=DC1:RAC1
+
+   120.53.24.101=DC1:RAC2
+   120.55.16.200=DC1:RAC2
+   120.57.102.103=DC1:RAC2
+
+   # datacenter Two
+
+   110.56.12.120=DC2:RAC1
+   110.50.13.201=DC2:RAC1
+   110.54.35.184=DC2:RAC1
+
+   50.33.23.120=DC2:RAC2
+   50.45.14.220=DC2:RAC2
+   50.17.10.203=DC2:RAC2
+
+   # datacenter Three
+
+   172.106.12.120=DC3:RAC1
+   172.106.12.121=DC3:RAC1
+   172.106.12.122=DC3:RAC1
+
+   # default for unknown nodes 
+   default=DC3:RAC1
diff --git a/src/doc/4.0-beta4/_sources/configuration/cass_yaml_file.rst.txt b/src/doc/4.0-beta4/_sources/configuration/cass_yaml_file.rst.txt
new file mode 100644
index 0000000..24e3be0
--- /dev/null
+++ b/src/doc/4.0-beta4/_sources/configuration/cass_yaml_file.rst.txt
@@ -0,0 +1,2074 @@
+.. _cassandra-yaml:
+
+cassandra.yaml file configuration 
+=================================
+
+``cluster_name``
+----------------
+The name of the cluster. This is mainly used to prevent machines in
+one logical cluster from joining another.
+
+*Default Value:* 'Test Cluster'
+
+``num_tokens``
+--------------
+
+This defines the number of tokens randomly assigned to this node on the ring.
+The more tokens, relative to other nodes, the larger the proportion of data
+that this node will store. We recommend that all nodes have the same number
+of tokens, assuming they have equal hardware capability.
+
+If you leave this unspecified, Cassandra will use the default of 1 token for legacy compatibility,
+and will use the initial_token as described below.
+
+Specifying initial_token will override this setting on the node's initial start;
+on subsequent starts, this setting will apply even if initial_token is set.
+
+We recommend setting ``allocate_tokens_for_local_replication_factor`` in conjunction with this setting to ensure even allocation.
+
+*Default Value:* 256
+
+``allocate_tokens_for_keyspace``
+--------------------------------
+*This option is commented out by default.*
+
+Triggers automatic allocation of num_tokens tokens for this node. The allocation
+algorithm attempts to choose tokens in a way that optimizes replicated load over
+the nodes in the datacenter for the replication factor.
+
+The load assigned to each node will be close to proportional to its number of
+vnodes.
+
+Only supported with the Murmur3Partitioner.
+
+The replication factor is determined via the replication strategy used by the
+specified keyspace.
+
+We recommend using the ``allocate_tokens_for_local_replication_factor`` setting instead for operational simplicity.
+
+*Default Value:* KEYSPACE
+
+``allocate_tokens_for_local_replication_factor``
+------------------------------------------------
+*This option is commented out by default.*
+
+Tokens will be allocated based on this replication factor, regardless of keyspace or datacenter.
+
+*Default Value:* 3
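+
+For example, a sketch of the recommended combination in ``cassandra.yaml`` (the values are illustrative)::
+
+    num_tokens: 16
+    allocate_tokens_for_local_replication_factor: 3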
+
+``initial_token``
+-----------------
+*This option is commented out by default.*
+
+initial_token allows you to specify tokens manually.  While you can use it with
+vnodes (num_tokens > 1, above) -- in which case you should provide a 
+comma-separated list -- it's primarily used when adding nodes to legacy clusters 
+that do not have vnodes enabled.
+
+``hinted_handoff_enabled``
+--------------------------
+
+May be either "true" or "false", to enable or disable hinted handoff globally.
+
+*Default Value:* true
+
+``hinted_handoff_disabled_datacenters``
+---------------------------------------
+*This option is commented out by default.*
+
+When hinted_handoff_enabled is true, this is a blacklist of datacenters that will
+not perform hinted handoff.
+
+*Default Value (complex option)*::
+
+    #    - DC1
+    #    - DC2
+
+``max_hint_window_in_ms``
+-------------------------
+This defines the maximum amount of time a dead host will have hints
+generated.  After it has been dead this long, new hints for it will not be
+created until it has been seen alive and gone down again.
+
+*Default Value:* 10800000 # 3 hours
+
+``hinted_handoff_throttle_in_kb``
+---------------------------------
+
+Maximum throttle in KBs per second, per delivery thread.  This will be
+reduced proportionally to the number of nodes in the cluster.  (If there
+are two nodes in the cluster, each delivery thread will use the maximum
+rate; if there are three, each will throttle to half of the maximum,
+since we expect two nodes to be delivering hints simultaneously.)
+
+*Default Value:* 1024
+
+``max_hints_delivery_threads``
+------------------------------
+
+Number of threads with which to deliver hints.
+Consider increasing this number when you have multi-dc deployments, since
+cross-dc handoff tends to be slower.
+
+*Default Value:* 2
+
+``hints_directory``
+-------------------
+*This option is commented out by default.*
+
+Directory where Cassandra should store hints.
+If not set, the default directory is $CASSANDRA_HOME/data/hints.
+
+*Default Value:*  /var/lib/cassandra/hints
+
+``hints_flush_period_in_ms``
+----------------------------
+
+How often hints should be flushed from the internal buffers to disk.
+Will *not* trigger fsync.
+
+*Default Value:* 10000
+
+``max_hints_file_size_in_mb``
+-----------------------------
+
+Maximum size for a single hints file, in megabytes.
+
+*Default Value:* 128
+
+``hints_compression``
+---------------------
+*This option is commented out by default.*
+
+Compression to apply to the hint files. If omitted, hints files
+will be written uncompressed. LZ4, Snappy, and Deflate compressors
+are supported.
+
+*Default Value (complex option)*::
+
+    #   - class_name: LZ4Compressor
+    #     parameters:
+    #         -
+
+``batchlog_replay_throttle_in_kb``
+----------------------------------
+Maximum throttle in KBs per second, total. This will be
+reduced proportionally to the number of nodes in the cluster.
+
+*Default Value:* 1024
+
+``authenticator``
+-----------------
+
+Authentication backend, implementing IAuthenticator; used to identify users
+Out of the box, Cassandra provides org.apache.cassandra.auth.{AllowAllAuthenticator,
+PasswordAuthenticator}.
+
+- AllowAllAuthenticator performs no checks - set it to disable authentication.
+- PasswordAuthenticator relies on username/password pairs to authenticate
+  users. It keeps usernames and hashed passwords in system_auth.roles table.
+  Please increase system_auth keyspace replication factor if you use this authenticator.
+  If using PasswordAuthenticator, CassandraRoleManager must also be used (see below)
+
+*Default Value:* AllowAllAuthenticator
+
+``authorizer``
+--------------
+
+Authorization backend, implementing IAuthorizer; used to limit access/provide permissions
+Out of the box, Cassandra provides org.apache.cassandra.auth.{AllowAllAuthorizer,
+CassandraAuthorizer}.
+
+- AllowAllAuthorizer allows any action to any user - set it to disable authorization.
+- CassandraAuthorizer stores permissions in system_auth.role_permissions table. Please
+  increase system_auth keyspace replication factor if you use this authorizer.
+
+*Default Value:* AllowAllAuthorizer
+
+``role_manager``
+----------------
+
+Part of the Authentication & Authorization backend, implementing IRoleManager; used
+to maintain grants and memberships between roles.
+Out of the box, Cassandra provides org.apache.cassandra.auth.CassandraRoleManager,
+which stores role information in the system_auth keyspace. Most functions of the
+IRoleManager require an authenticated login, so unless the configured IAuthenticator
+actually implements authentication, most of this functionality will be unavailable.
+
+- CassandraRoleManager stores role data in the system_auth keyspace. Please
+  increase system_auth keyspace replication factor if you use this role manager.
+
+*Default Value:* CassandraRoleManager
+
+``network_authorizer``
+----------------------
+
+Network authorization backend, implementing INetworkAuthorizer; used to restrict user
+access to certain DCs
+Out of the box, Cassandra provides org.apache.cassandra.auth.{AllowAllNetworkAuthorizer,
+CassandraNetworkAuthorizer}.
+
+- AllowAllNetworkAuthorizer allows access to any DC to any user - set it to disable authorization.
+- CassandraNetworkAuthorizer stores permissions in system_auth.network_permissions table. Please
+  increase system_auth keyspace replication factor if you use this authorizer.
+
+*Default Value:* AllowAllNetworkAuthorizer
+
+``roles_validity_in_ms``
+------------------------
+
+Validity period for roles cache (fetching granted roles can be an expensive
+operation depending on the role manager, CassandraRoleManager is one example)
+Granted roles are cached for authenticated sessions in AuthenticatedUser and
+after the period specified here, become eligible for (async) reload.
+Defaults to 2000, set to 0 to disable caching entirely.
+Will be disabled automatically for AllowAllAuthenticator.
+
+*Default Value:* 2000
+
+``roles_update_interval_in_ms``
+-------------------------------
+*This option is commented out by default.*
+
+Refresh interval for roles cache (if enabled).
+After this interval, cache entries become eligible for refresh. Upon next
+access, an async reload is scheduled and the old value returned until it
+completes. If roles_validity_in_ms is non-zero, then this must be non-zero as well.
+Defaults to the same value as roles_validity_in_ms.
+
+*Default Value:* 2000
+
+``permissions_validity_in_ms``
+------------------------------
+
+Validity period for permissions cache (fetching permissions can be an
+expensive operation depending on the authorizer, CassandraAuthorizer is
+one example). Defaults to 2000, set to 0 to disable.
+Will be disabled automatically for AllowAllAuthorizer.
+
+*Default Value:* 2000
+
+``permissions_update_interval_in_ms``
+-------------------------------------
+*This option is commented out by default.*
+
+Refresh interval for permissions cache (if enabled).
+After this interval, cache entries become eligible for refresh. Upon next
+access, an async reload is scheduled and the old value returned until it
+completes. If permissions_validity_in_ms is non-zero, then this must be non-zero as well.
+Defaults to the same value as permissions_validity_in_ms.
+
+*Default Value:* 2000
+
+``credentials_validity_in_ms``
+------------------------------
+
+Validity period for credentials cache. This cache is tightly coupled to
+the provided PasswordAuthenticator implementation of IAuthenticator. If
+another IAuthenticator implementation is configured, this cache will not
+be automatically used and so the following settings will have no effect.
+Please note, credentials are cached in their encrypted form, so while
+activating this cache may reduce the number of queries made to the
+underlying table, it may not  bring a significant reduction in the
+latency of individual authentication attempts.
+Defaults to 2000, set to 0 to disable credentials caching.
+
+*Default Value:* 2000
+
+``credentials_update_interval_in_ms``
+-------------------------------------
+*This option is commented out by default.*
+
+Refresh interval for credentials cache (if enabled).
+After this interval, cache entries become eligible for refresh. Upon next
+access, an async reload is scheduled and the old value returned until it
+completes. If credentials_validity_in_ms is non-zero, then this must be non-zero as well.
+Defaults to the same value as credentials_validity_in_ms.
+
+*Default Value:* 2000
+
+``partitioner``
+---------------
+
+The partitioner is responsible for distributing groups of rows (by
+partition key) across nodes in the cluster. The partitioner can NOT be
+changed without reloading all data.  If you are adding nodes or upgrading,
+you should set this to the same partitioner that you are currently using.
+
+The default partitioner is the Murmur3Partitioner. Older partitioners
+such as the RandomPartitioner, ByteOrderedPartitioner, and
+OrderPreservingPartitioner have been included for backward compatibility only.
+For new clusters, you should NOT change this value.
+
+
+*Default Value:* org.apache.cassandra.dht.Murmur3Partitioner
+
+``data_file_directories``
+-------------------------
+*This option is commented out by default.*
+
+Directories where Cassandra should store data on disk. If multiple
+directories are specified, Cassandra will spread data evenly across 
+them by partitioning the token ranges.
+If not set, the default directory is $CASSANDRA_HOME/data/data.
+
+*Default Value (complex option)*::
+
+    #     - /var/lib/cassandra/data
+
+``commitlog_directory``
+-----------------------
+*This option is commented out by default.*
+
+Directory where Cassandra should store the commit log. When running on
+magnetic HDD, this should be a separate spindle from the data directories.
+If not set, the default directory is $CASSANDRA_HOME/data/commitlog.
+
+*Default Value:*  /var/lib/cassandra/commitlog
+
+``cdc_enabled``
+---------------
+
+Enable / disable CDC functionality on a per-node basis. This modifies the logic used
+for write path allocation rejection (standard: never reject. cdc: reject Mutation
+containing a CDC-enabled table if at space limit in cdc_raw_directory).
+
+*Default Value:* false
+
+``cdc_raw_directory``
+---------------------
+*This option is commented out by default.*
+
+CommitLogSegments are moved to this directory on flush if cdc_enabled: true and the
+segment contains mutations for a CDC-enabled table. This should be placed on a
+separate spindle from the data directories. If not set, the default directory is
+$CASSANDRA_HOME/data/cdc_raw.
+
+*Default Value:*  /var/lib/cassandra/cdc_raw
+
+``disk_failure_policy``
+-----------------------
+
+Policy for data disk failures:
+
+die
+  shut down gossip and client transports and kill the JVM for any fs errors or
+  single-sstable errors, so the node can be replaced.
+
+stop_paranoid
+  shut down gossip and client transports even for single-sstable errors,
+  kill the JVM for errors during startup.
+
+stop
+  shut down gossip and client transports, leaving the node effectively dead, but
+  can still be inspected via JMX, kill the JVM for errors during startup.
+
+best_effort
+   stop using the failed disk and respond to requests based on
+   remaining available sstables.  This means you WILL see obsolete
+   data at CL.ONE!
+
+ignore
+   ignore fatal errors and let requests fail, as in pre-1.2 Cassandra
+
+*Default Value:* stop
+
+``commit_failure_policy``
+-------------------------
+
+Policy for commit disk failures:
+
+die
+  shut down the node and kill the JVM, so the node can be replaced.
+
+stop
+  shut down the node, leaving the node effectively dead, but
+  can still be inspected via JMX.
+
+stop_commit
+  shutdown the commit log, letting writes collect but
+  continuing to service reads, as in pre-2.0.5 Cassandra
+
+ignore
+  ignore fatal errors and let the batches fail
+
+*Default Value:* stop
+
+``prepared_statements_cache_size_mb``
+-------------------------------------
+
+Maximum size of the native protocol prepared statement cache
+
+Valid values are either "auto" (omitting the value) or a value greater than 0.
+
+Note that specifying too large a value will result in long-running GCs and possibly
+out-of-memory errors. Keep the value at a small fraction of the heap.
+
+If you constantly see "prepared statements discarded in the last minute because
+cache limit reached" messages, the first step is to investigate the root cause
+of these messages and check whether prepared statements are used correctly -
+i.e. use bind markers for variable parts.
+
+Only change the default value if you really have more prepared statements than
+fit in the cache. In most cases it is not necessary to change this value.
+Constantly re-preparing statements is a performance penalty.
+
+Default value ("auto") is 1/256th of the heap or 10MB, whichever is greater
+
+``key_cache_size_in_mb``
+------------------------
+
+Maximum size of the key cache in memory.
+
+Each key cache hit saves 1 seek and each row cache hit saves 2 seeks at the
+minimum, sometimes more. The key cache is fairly tiny for the amount of
+time it saves, so it's worthwhile to use it at large numbers.
+The row cache saves even more time, but must contain the entire row,
+so it is extremely space-intensive. It's best to only use the
+row cache if you have hot rows or static rows.
+
+NOTE: if you reduce the size, you may not get your hottest keys loaded on startup.
+
+Default value is empty to make it "auto" (min(5% of Heap (in MB), 100MB)). Set to 0 to disable key cache.
+
+``key_cache_save_period``
+-------------------------
+
+Duration in seconds after which Cassandra should
+save the key cache. Caches are saved to saved_caches_directory as
+specified in this configuration file.
+
+Saved caches greatly improve cold-start speeds and are relatively cheap in
+terms of I/O for the key cache. Row cache saving is much more expensive and
+has limited use.
+
+Default is 14400 or 4 hours.
+
+*Default Value:* 14400
+
+``key_cache_keys_to_save``
+--------------------------
+*This option is commented out by default.*
+
+Number of keys from the key cache to save.
+Disabled by default, meaning all keys are going to be saved.
+
+*Default Value:* 100
+
+``row_cache_class_name``
+------------------------
+*This option is commented out by default.*
+
+Row cache implementation class name. Available implementations:
+
+org.apache.cassandra.cache.OHCProvider
+  Fully off-heap row cache implementation (default).
+
+org.apache.cassandra.cache.SerializingCacheProvider
+  This is the row cache implementation available
+  in previous releases of Cassandra.
+
+*Default Value:* org.apache.cassandra.cache.OHCProvider
+
+``row_cache_size_in_mb``
+------------------------
+
+Maximum size of the row cache in memory.
+Please note that OHC cache implementation requires some additional off-heap memory to manage
+the map structures and some in-flight memory during operations before/after cache entries can be
+accounted against the cache capacity. This overhead is usually small compared to the whole capacity.
+Do not specify more memory than the system can afford in the worst usual situation, and leave some
+headroom for the OS block level cache. Never allow your system to swap.
+
+Default value is 0, to disable row caching.
+
+*Default Value:* 0
+
+``row_cache_save_period``
+-------------------------
+
+Duration in seconds after which Cassandra should save the row cache.
+Caches are saved to saved_caches_directory as specified in this configuration file.
+
+Saved caches greatly improve cold-start speeds and are relatively cheap in
+terms of I/O for the key cache. Row cache saving is much more expensive and
+has limited use.
+
+Default is 0 to disable saving the row cache.
+
+*Default Value:* 0
+
+``row_cache_keys_to_save``
+--------------------------
+*This option is commented out by default.*
+
+Number of keys from the row cache to save.
+Specify 0 (which is the default), meaning all keys are going to be saved
+
+*Default Value:* 100
+
+``counter_cache_size_in_mb``
+----------------------------
+
+Maximum size of the counter cache in memory.
+
+Counter cache helps to reduce counter locks' contention for hot counter cells.
+In case of RF = 1 a counter cache hit will cause Cassandra to skip the read before
+write entirely. With RF > 1 a counter cache hit will still help to reduce the duration
+of the lock hold, helping with hot counter cell updates, but will not allow skipping
+the read entirely. Only the local (clock, count) tuple of a counter cell is kept
+in memory, not the whole counter, so it's relatively cheap.
+
+NOTE: if you reduce the size, you may not get your hottest keys loaded on startup.
+
+Default value is empty to make it "auto" (min(2.5% of Heap (in MB), 50MB)). Set to 0 to disable counter cache.
+NOTE: if you perform counter deletes and rely on low gcgs, you should disable the counter cache.
+
+``counter_cache_save_period``
+-----------------------------
+
+Duration in seconds after which Cassandra should
+save the counter cache (keys only). Caches are saved to saved_caches_directory as
+specified in this configuration file.
+
+Default is 7200 or 2 hours.
+
+*Default Value:* 7200
+
+``counter_cache_keys_to_save``
+------------------------------
+*This option is commented out by default.*
+
+Number of keys from the counter cache to save.
+Disabled by default, meaning all keys are going to be saved.
+
+*Default Value:* 100
+
+``saved_caches_directory``
+--------------------------
+*This option is commented out by default.*
+
+Directory where Cassandra should store the saved caches.
+If not set, the default directory is $CASSANDRA_HOME/data/saved_caches.
+
+*Default Value:*  /var/lib/cassandra/saved_caches
+
+``commitlog_sync_batch_window_in_ms``
+-------------------------------------
+*This option is commented out by default.*
+
+commitlog_sync may be either "periodic", "group", or "batch." 
+
+When in batch mode, Cassandra won't ack writes until the commit log
+has been flushed to disk.  Each incoming write will trigger the flush task.
+commitlog_sync_batch_window_in_ms is a deprecated value. Previously it had
+almost no value, and is being removed.
+
+
+*Default Value:* 2
+
+``commitlog_sync_group_window_in_ms``
+-------------------------------------
+*This option is commented out by default.*
+
+group mode is similar to batch mode, where Cassandra will not ack writes
+until the commit log has been flushed to disk. The difference is group
+mode will wait up to commitlog_sync_group_window_in_ms between flushes.
+
+
+*Default Value:* 1000
+
+``commitlog_sync``
+------------------
+
+the default option is "periodic" where writes may be acked immediately
+and the CommitLog is simply synced every commitlog_sync_period_in_ms
+milliseconds.
+
+*Default Value:* periodic
+
+``commitlog_sync_period_in_ms``
+-------------------------------
+
+*Default Value:* 10000
+
+``periodic_commitlog_sync_lag_block_in_ms``
+-------------------------------------------
+*This option is commented out by default.*
+
+When in periodic commitlog mode, the number of milliseconds to block writes
+while waiting for a slow disk flush to complete.
+
+``commitlog_segment_size_in_mb``
+--------------------------------
+
+The size of the individual commitlog file segments.  A commitlog
+segment may be archived, deleted, or recycled once all the data
+in it (potentially from each columnfamily in the system) has been
+flushed to sstables.
+
+The default size is 32, which is almost always fine, but if you are
+archiving commitlog segments (see commitlog_archiving.properties),
+then you probably want a finer granularity of archiving; 8 or 16 MB
+is reasonable.
+Max mutation size is also configurable via the max_mutation_size_in_kb setting in
+cassandra.yaml. The default is half of commitlog_segment_size_in_mb * 1024.
+This should be positive and less than 2048.
+
+NOTE: If max_mutation_size_in_kb is set explicitly then commitlog_segment_size_in_mb must
+be set to at least twice the size of max_mutation_size_in_kb / 1024
+
+
+*Default Value:* 32
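+
+For example, with the default ``commitlog_segment_size_in_mb: 32`` the default ``max_mutation_size_in_kb`` works out to (32 * 1024) / 2 = 16384; conversely, explicitly setting ``max_mutation_size_in_kb: 16384`` requires a ``commitlog_segment_size_in_mb`` of at least 32.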
+
+``commitlog_compression``
+-------------------------
+*This option is commented out by default.*
+
+Compression to apply to the commit log. If omitted, the commit log
+will be written uncompressed.  LZ4, Snappy, and Deflate compressors
+are supported.
+
+*Default Value (complex option)*::
+
+    #   - class_name: LZ4Compressor
+    #     parameters:
+    #         -
+
+``flush_compression``
+---------------------
+*This option is commented out by default.*
+
+Compression to apply to SSTables as they flush for compressed tables.
+Note that tables without compression enabled do not respect this flag.
+
+As high ratio compressors like LZ4HC, Zstd, and Deflate can potentially
+block flushes for too long, the default is to flush with a known fast
+compressor in those cases. Options are:
+
+none : Flush without compressing blocks but while still doing checksums.
+fast : Flush with a fast compressor. If the table is already using a
+       fast compressor that compressor is used.
+table: Always flush with the same compressor that the table uses. This
+       was the pre 4.0 behavior.
+
+*Default Value:* fast
+
+``seed_provider``
+-----------------
+
+any class that implements the SeedProvider interface and has a
+constructor that takes a Map<String, String> of parameters will do.
+
+*Default Value (complex option)*::
+
+        # Addresses of hosts that are deemed contact points. 
+        # Cassandra nodes use this list of hosts to find each other and learn
+        # the topology of the ring.  You must change this if you are running
+        # multiple nodes!
+        - class_name: org.apache.cassandra.locator.SimpleSeedProvider
+          parameters:
+              # seeds is actually a comma-delimited list of addresses.
+              # Ex: "<ip1>,<ip2>,<ip3>"
+              - seeds: "127.0.0.1:7000"
+
+``concurrent_reads``
+--------------------
+For workloads with more data than can fit in memory, Cassandra's
+bottleneck will be reads that need to fetch data from
+disk. "concurrent_reads" should be set to (16 * number_of_drives) in
+order to allow the operations to enqueue low enough in the stack
+that the OS and drives can reorder them. Same applies to
+"concurrent_counter_writes", since counter writes read the current
+values before incrementing and writing them back.
+
+On the other hand, since writes are almost never IO bound, the ideal
+number of "concurrent_writes" is dependent on the number of cores in
+your system; (8 * number_of_cores) is a good rule of thumb.
+
+*Default Value:* 32
+
+``concurrent_writes``
+---------------------
+
+*Default Value:* 32
+
+``concurrent_counter_writes``
+-----------------------------
+
+*Default Value:* 32
+
+``concurrent_materialized_view_writes``
+---------------------------------------
+
+For materialized view writes, as there is a read involved, this should
+be limited by the lesser of concurrent reads or concurrent writes.
+
+*Default Value:* 32
+
+``file_cache_size_in_mb``
+-------------------------
+*This option is commented out by default.*
+
+Maximum memory to use for sstable chunk cache and buffer pooling.
+32MB of this are reserved for pooling buffers, the rest is used as a
+cache that holds uncompressed sstable chunks.
+Defaults to the smaller of 1/4 of heap or 512MB. This pool is allocated off-heap,
+so is in addition to the memory allocated for heap. The cache also has on-heap
+overhead which is roughly 128 bytes per chunk (i.e. 0.2% of the reserved size
+if the default 64k chunk size is used).
+Memory is only allocated when needed.
+
+*Default Value:* 512
+
+``buffer_pool_use_heap_if_exhausted``
+-------------------------------------
+*This option is commented out by default.*
+
+Flag indicating whether to allocate on or off heap when the sstable buffer
+pool is exhausted, that is when it has exceeded the maximum memory
+file_cache_size_in_mb, beyond which it will not cache buffers but allocate on request.
+
+
+*Default Value:* true
+
+``disk_optimization_strategy``
+------------------------------
+*This option is commented out by default.*
+
+The strategy for optimizing disk read
+Possible values are:
+ssd (for solid state disks, the default)
+spinning (for spinning disks)
+
+*Default Value:* ssd
+
+``memtable_heap_space_in_mb``
+-----------------------------
+*This option is commented out by default.*
+
+Total permitted memory to use for memtables. Cassandra will stop
+accepting writes when the limit is exceeded until a flush completes,
+and will trigger a flush based on memtable_cleanup_threshold
+If omitted, Cassandra will set both to 1/4 the size of the heap.
+
+*Default Value:* 2048
+
+``memtable_offheap_space_in_mb``
+--------------------------------
+*This option is commented out by default.*
+
+*Default Value:* 2048
+
+``memtable_cleanup_threshold``
+------------------------------
+*This option is commented out by default.*
+
+memtable_cleanup_threshold is deprecated. The default calculation
+is the only reasonable choice. See the comments on  memtable_flush_writers
+for more information.
+
+Ratio of occupied non-flushing memtable size to total permitted size
+that will trigger a flush of the largest memtable. Larger mct will
+mean larger flushes and hence less compaction, but also less concurrent
+flush activity which can make it difficult to keep your disks fed
+under heavy write load.
+
+memtable_cleanup_threshold defaults to 1 / (memtable_flush_writers + 1)
+
+*Default Value:* 0.11
+
+``memtable_allocation_type``
+----------------------------
+
+Specify the way Cassandra allocates and manages memtable memory.
+Options are:
+
+heap_buffers
+  on heap nio buffers
+
+offheap_buffers
+  off heap (direct) nio buffers
+
+offheap_objects
+   off heap objects
+
+*Default Value:* heap_buffers
+
+``repair_session_space_in_mb``
+------------------------------
+*This option is commented out by default.*
+
+Limit memory usage for Merkle tree calculations during repairs. The default
+is 1/16th of the available heap. The main tradeoff is that smaller trees
+have less resolution, which can lead to over-streaming data. If you see heap
+pressure during repairs, consider lowering this, but you cannot go below
+one megabyte. If you see lots of over-streaming, consider raising
+this or using subrange repair.
+
+For more details see https://issues.apache.org/jira/browse/CASSANDRA-14096.
+
+
+``commitlog_total_space_in_mb``
+-------------------------------
+*This option is commented out by default.*
+
+Total space to use for commit logs on disk.
+
+If space gets above this value, Cassandra will flush every dirty CF
+in the oldest segment and remove it.  So a small total commitlog space
+will tend to cause more flush activity on less-active columnfamilies.
+
+The default value is the smaller of 8192, and 1/4 of the total space
+of the commitlog volume.
+
+
+*Default Value:* 8192
+
+``memtable_flush_writers``
+--------------------------
+*This option is commented out by default.*
+
+This sets the number of memtable flush writer threads per disk
+as well as the total number of memtables that can be flushed concurrently.
+These are generally a combination of compute and IO bound.
+
+Memtable flushing is more CPU efficient than memtable ingest and a single thread
+can keep up with the ingest rate of a whole server on a single fast disk
+until it temporarily becomes IO bound under contention typically with compaction.
+At that point you need multiple flush threads. At some point in the future
+it may become CPU bound all the time.
+
+You can tell if flushing is falling behind using the MemtablePool.BlockedOnAllocation
+metric which should be 0, but will be non-zero if threads are blocked waiting on flushing
+to free memory.
+
+memtable_flush_writers defaults to two for a single data directory.
+This means that two  memtables can be flushed concurrently to the single data directory.
+If you have multiple data directories the default is one memtable flushing at a time
+but the flush will use a thread per data directory so you will get two or more writers.
+
+Two is generally enough to flush on a fast disk [array] mounted as a single data directory.
+Adding more flush writers will result in smaller, more frequent flushes that introduce more
+compaction overhead.
+
+There is a direct tradeoff between the number of memtables that can be flushed concurrently
+and flush size and frequency. More is not better; you just need enough flush writers
+to never stall waiting for flushing to free memory.
+
+
+*Default Value:* 2
+
+``cdc_total_space_in_mb``
+-------------------------
+*This option is commented out by default.*
+
+Total space to use for change-data-capture logs on disk.
+
+If space gets above this value, Cassandra will throw WriteTimeoutException
+on Mutations including tables with CDC enabled. A CDCCompactor is responsible
+for parsing the raw CDC logs and deleting them when parsing is completed.
+
+The default value is the min of 4096 mb and 1/8th of the total space
+of the drive where cdc_raw_directory resides.
+
+*Default Value:* 4096
+
+``cdc_free_space_check_interval_ms``
+------------------------------------
+*This option is commented out by default.*
+
+When we hit our cdc_raw limit and the CDCCompactor is either running behind
+or experiencing backpressure, we check at the following interval to see if any
+new space for cdc-tracked tables has been made available. Defaults to 250ms.
+
+*Default Value:* 250
+
+``index_summary_capacity_in_mb``
+--------------------------------
+
+A fixed memory pool size in MB for SSTable index summaries. If left
+empty, this will default to 5% of the heap size. If the memory usage of
+all index summaries exceeds this limit, SSTables with low read rates will
+shrink their index summaries in order to meet this limit.  However, this
+is a best-effort process. In extreme conditions Cassandra may need to use
+more than this amount of memory.
+
+``index_summary_resize_interval_in_minutes``
+--------------------------------------------
+
+How frequently index summaries should be resampled.  This is done
+periodically to redistribute memory from the fixed-size pool to sstables
+proportional to their recent read rates.  Setting to -1 will disable this
+process, leaving existing index summaries at their current sampling level.
+
+*Default Value:* 60
+
+``trickle_fsync``
+-----------------
+
+Whether to fsync() at intervals when doing sequential writing, in order
+to force the operating system to flush the dirty buffers. Enable this to
+avoid sudden dirty buffer flushing from impacting read latencies. Almost
+always a good idea on SSDs; not necessarily on platters.
+
+*Default Value:* false
+
+``trickle_fsync_interval_in_kb``
+--------------------------------
+
+*Default Value:* 10240
+
+``storage_port``
+----------------
+
+TCP port, for commands and data
+For security reasons, you should not expose this port to the internet.  Firewall it if needed.
+
+*Default Value:* 7000
+
+``ssl_storage_port``
+--------------------
+
+SSL port, for legacy encrypted communication. This property is unused unless enabled in
+server_encryption_options (see below). As of cassandra 4.0, this property is deprecated
+as a single port can be used for either/both secure and insecure connections.
+For security reasons, you should not expose this port to the internet. Firewall it if needed.
+
+*Default Value:* 7001
+
+``listen_address``
+------------------
+
+Address or interface to bind to and tell other Cassandra nodes to connect to.
+You _must_ change this if you want multiple nodes to be able to communicate!
+
+Set listen_address OR listen_interface, not both.
+
+Leaving it blank leaves it up to InetAddress.getLocalHost(). This
+will always do the Right Thing _if_ the node is properly configured
+(hostname, name resolution, etc), and the Right Thing is to use the
+address associated with the hostname (it might not be).
+
+Setting listen_address to 0.0.0.0 is always wrong.
+
+
+*Default Value:* localhost
+
+``listen_interface``
+--------------------
+*This option is commented out by default.*
+
+Set listen_address OR listen_interface, not both. Interfaces must correspond
+to a single address, IP aliasing is not supported.
+
+*Default Value:* eth0
+
+``listen_interface_prefer_ipv6``
+--------------------------------
+*This option is commented out by default.*
+
+If you choose to specify the interface by name and the interface has an ipv4 and an ipv6 address
+you can specify which should be chosen using listen_interface_prefer_ipv6. If false the first ipv4
+address will be used. If true the first ipv6 address will be used. Defaults to false preferring
+ipv4. If there is only one address it will be selected regardless of ipv4/ipv6.
+
+*Default Value:* false
+
+``broadcast_address``
+---------------------
+*This option is commented out by default.*
+
+Address to broadcast to other Cassandra nodes
+Leaving this blank will set it to the same value as listen_address
+
+*Default Value:* 1.2.3.4
+
+``listen_on_broadcast_address``
+-------------------------------
+*This option is commented out by default.*
+
+When using multiple physical network interfaces, set this
+to true to listen on broadcast_address in addition to
+the listen_address, allowing nodes to communicate on both
+interfaces.
+Ignore this property if the network configuration automatically
+routes between the public and private networks, such as on EC2.
+
+*Default Value:* false
+
+``internode_authenticator``
+---------------------------
+*This option is commented out by default.*
+
+Internode authentication backend, implementing IInternodeAuthenticator;
+used to allow/disallow connections from peer nodes.
+
+*Default Value:* org.apache.cassandra.auth.AllowAllInternodeAuthenticator
+
+``start_native_transport``
+--------------------------
+
+Whether to start the native transport server.
+The address on which the native transport is bound is defined by rpc_address.
+
+*Default Value:* true
+
+``native_transport_port``
+-------------------------
+port for the CQL native transport to listen for clients on
+For security reasons, you should not expose this port to the internet.  Firewall it if needed.
+
+*Default Value:* 9042
+
+``native_transport_port_ssl``
+-----------------------------
+*This option is commented out by default.*
+Enabling native transport encryption in client_encryption_options allows you to either use
+encryption for the standard port or to use a dedicated, additional port along with the unencrypted
+standard native_transport_port.
+Enabling client encryption and keeping native_transport_port_ssl disabled will use encryption
+for native_transport_port. Setting native_transport_port_ssl to a different value
+from native_transport_port will use encryption for native_transport_port_ssl while
+keeping native_transport_port unencrypted.
+
+*Default Value:* 9142
+
+``native_transport_max_threads``
+--------------------------------
+*This option is commented out by default.*
+The maximum number of threads for handling requests (note that idle threads are stopped
+after 30 seconds, so there is no corresponding minimum setting).
+
+*Default Value:* 128
+
+``native_transport_max_frame_size_in_mb``
+-----------------------------------------
+*This option is commented out by default.*
+
+The maximum allowed frame size. Frames (requests) larger than this will
+be rejected as invalid. The default is 256MB. If you're changing this parameter,
+you may want to adjust max_value_size_in_mb accordingly. This should be positive and less than 2048.
+
+*Default Value:* 256
+
+``native_transport_frame_block_size_in_kb``
+-------------------------------------------
+*This option is commented out by default.*
+
+If checksumming is enabled as a protocol option, denotes the size of the chunks
+into which frame bodies will be broken and checksummed.
+
+*Default Value:* 32
+
+``native_transport_max_concurrent_connections``
+-----------------------------------------------
+*This option is commented out by default.*
+
+The maximum number of concurrent client connections.
+The default is -1, which means unlimited.
+
+*Default Value:* -1
+
+``native_transport_max_concurrent_connections_per_ip``
+------------------------------------------------------
+*This option is commented out by default.*
+
+The maximum number of concurrent client connections per source ip.
+The default is -1, which means unlimited.
+
+*Default Value:* -1
+
+``native_transport_allow_older_protocols``
+------------------------------------------
+
+Controls whether Cassandra honors older, yet currently supported, protocol versions.
+The default is true, which means all supported protocols will be honored.
+
+*Default Value:* true
+
+``native_transport_idle_timeout_in_ms``
+---------------------------------------
+*This option is commented out by default.*
+
+Controls when idle client connections are closed. Idle connections are ones that had neither reads
+nor writes for a time period.
+
+Clients may implement heartbeats by sending OPTIONS native protocol message after a timeout, which
+will reset idle timeout timer on the server side. To close idle client connections, corresponding
+values for heartbeat intervals have to be set on the client side.
+
+Idle connection timeouts are disabled by default.
+
+*Default Value:* 60000
+
+``rpc_address``
+---------------
+
+The address or interface to bind the native transport server to.
+
+Set rpc_address OR rpc_interface, not both.
+
+Leaving rpc_address blank has the same effect as on listen_address
+(i.e. it will be based on the configured hostname of the node).
+
+Note that unlike listen_address, you can specify 0.0.0.0, but you must also
+set broadcast_rpc_address to a value other than 0.0.0.0.
+
+For security reasons, you should not expose this port to the internet.  Firewall it if needed.
+
+*Default Value:* localhost
+
+``rpc_interface``
+-----------------
+*This option is commented out by default.*
+
+Set rpc_address OR rpc_interface, not both. Interfaces must correspond
+to a single address, IP aliasing is not supported.
+
+*Default Value:* eth1
+
+``rpc_interface_prefer_ipv6``
+-----------------------------
+*This option is commented out by default.*
+
+If you choose to specify the interface by name and the interface has an ipv4 and an ipv6 address
+you can specify which should be chosen using rpc_interface_prefer_ipv6. If false the first ipv4
+address will be used. If true the first ipv6 address will be used. Defaults to false preferring
+ipv4. If there is only one address it will be selected regardless of ipv4/ipv6.
+
+*Default Value:* false
+
+``broadcast_rpc_address``
+-------------------------
+*This option is commented out by default.*
+
+RPC address to broadcast to drivers and other Cassandra nodes. This cannot
+be set to 0.0.0.0. If left blank, this will be set to the value of
+rpc_address. If rpc_address is set to 0.0.0.0, broadcast_rpc_address must
+be set.
+
+*Default Value:* 1.2.3.4
+
+``rpc_keepalive``
+-----------------
+
+enable or disable keepalive on rpc/native connections
+
+*Default Value:* true
+
+``internode_send_buff_size_in_bytes``
+-------------------------------------
+*This option is commented out by default.*
+
+Uncomment to set socket buffer size for internode communication
+Note that when setting this, the buffer size is limited by net.core.wmem_max,
+and when it is not set, it is defined by net.ipv4.tcp_wmem.
+See also:
+/proc/sys/net/core/wmem_max
+/proc/sys/net/core/rmem_max
+/proc/sys/net/ipv4/tcp_wmem
+/proc/sys/net/ipv4/tcp_rmem
+and 'man tcp'
+
+``internode_recv_buff_size_in_bytes``
+-------------------------------------
+*This option is commented out by default.*
+
+Uncomment to set socket buffer size for internode communication
+Note that when setting this, the buffer size is limited by net.core.rmem_max,
+and when it is not set, it is defined by net.ipv4.tcp_rmem.
+
+``incremental_backups``
+-----------------------
+
+Set to true to have Cassandra create a hard link to each sstable
+flushed or streamed locally in a backups/ subdirectory of the
+keyspace data.  Removing these links is the operator's
+responsibility.
+
+*Default Value:* false
+
+``snapshot_before_compaction``
+------------------------------
+
+Whether or not to take a snapshot before each compaction.  Be
+careful using this option, since Cassandra won't clean up the
+snapshots for you.  Mostly useful if you're paranoid when there
+is a data format change.
+
+*Default Value:* false
+
+``auto_snapshot``
+-----------------
+
+Whether or not a snapshot is taken of the data before keyspace truncation
+or dropping of column families. The STRONGLY advised default of true 
+should be used to provide data safety. If you set this flag to false, you will
+lose data on truncation or drop.
+
+*Default Value:* true
+
+``column_index_size_in_kb``
+---------------------------
+
+Granularity of the collation index of rows within a partition.
+Increase if your rows are large, or if you have a very large
+number of rows per partition.  The competing goals are these:
+
+- a smaller granularity means more index entries are generated
+  and looking up rows within the partition by collation column
+  is faster
+- but, Cassandra will keep the collation index in memory for hot
+  rows (as part of the key cache), so a larger granularity means
+  you can cache more hot rows
+
+*Default Value:* 64
+
+``column_index_cache_size_in_kb``
+---------------------------------
+
+Per sstable indexed key cache entries (the collation index in memory
+mentioned above) exceeding this size will not be held on heap.
+This means that only partition information is held on heap and the
+index entries are read from disk.
+
+Note that this size refers to the size of the
+serialized index information and not the size of the partition.
+
+*Default Value:* 2
+
+``concurrent_compactors``
+-------------------------
+*This option is commented out by default.*
+
+Number of simultaneous compactions to allow, NOT including
+validation "compactions" for anti-entropy repair.  Simultaneous
+compactions can help preserve read performance in a mixed read/write
+workload, by mitigating the tendency of small sstables to accumulate
+during a single long-running compaction. The default is usually
+fine and if you experience problems with compaction running too
+slowly or too fast, you should look at
+compaction_throughput_mb_per_sec first.
+
+concurrent_compactors defaults to the smaller of (number of disks,
+number of cores), with a minimum of 2 and a maximum of 8.
+
+If your data directories are backed by SSD, you should increase this
+to the number of cores.
+
+*Default Value:* 1
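+
+As a sketch of the guidance above (the hardware profile is hypothetical), a
+16-core node with SSD-backed data directories might set::
+
+    concurrent_compactors: 16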
+
+``concurrent_validations``
+--------------------------
+*This option is commented out by default.*
+
+Number of simultaneous repair validations to allow.
+Values less than one are interpreted as unbounded (the default).
+
+*Default Value:* 0
+
+``concurrent_materialized_view_builders``
+-----------------------------------------
+
+Number of simultaneous materialized view builder tasks to allow.
+
+*Default Value:* 1
+
+``compaction_throughput_mb_per_sec``
+------------------------------------
+
+Throttles compaction to the given total throughput across the entire
+system. The faster you insert data, the faster you need to compact in
+order to keep the sstable count down, but in general, setting this to
+16 to 32 times the rate you are inserting data is more than sufficient.
+Setting this to 0 disables throttling. Note that this accounts for all types
+of compaction, including validation compaction.
+
+*Default Value:* 16
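+
+For example (the write rate here is hypothetical), a node sustaining roughly
+2 MB/s of inserts would, by the 16-32x rule above, be served by something like::
+
+    compaction_throughput_mb_per_sec: 64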
+
+``sstable_preemptive_open_interval_in_mb``
+------------------------------------------
+
+When compacting, the replacement sstable(s) can be opened before they
+are completely written, and used in place of the prior sstables for
+any range that has been written. This helps to smoothly transfer reads
+between the sstables, reducing page cache churn and keeping hot rows hot.
+
+*Default Value:* 50
+
+``stream_entire_sstables``
+--------------------------
+*This option is commented out by default.*
+
+When enabled, permits Cassandra to zero-copy stream entire eligible
+SSTables between nodes, including every component.
+This speeds up the network transfer significantly subject to
+throttling specified by stream_throughput_outbound_megabits_per_sec.
+Enabling this will reduce the GC pressure on the sending and receiving nodes.
+When unset, the default is enabled. While this feature tries to keep the
+disks balanced, it cannot guarantee it. This feature will be automatically
+disabled if internode encryption is enabled. Currently this can be used with
+Leveled Compaction. Once CASSANDRA-14586 is fixed other compaction strategies
+will benefit as well when used in combination with CASSANDRA-6696.
+
+*Default Value:* true
+
+``stream_throughput_outbound_megabits_per_sec``
+-----------------------------------------------
+*This option is commented out by default.*
+
+Throttles all outbound streaming file transfers on this node to the
+given total throughput in Mbps. This is necessary because Cassandra does
+mostly sequential IO when streaming data during bootstrap or repair, which
+can lead to saturating the network connection and degrading rpc performance.
+When unset, the default is 200 Mbps or 25 MB/s.
+
+*Default Value:* 200
+
+``inter_dc_stream_throughput_outbound_megabits_per_sec``
+--------------------------------------------------------
+*This option is commented out by default.*
+
+Throttles all streaming file transfers between datacenters. This setting
+allows users to throttle inter-dc stream throughput in addition to
+throttling all network stream traffic as configured with
+stream_throughput_outbound_megabits_per_sec.
+When unset, the default is 200 Mbps or 25 MB/s.
+
+*Default Value:* 200
+
+``read_request_timeout_in_ms``
+------------------------------
+
+How long the coordinator should wait for read operations to complete.
+Lowest acceptable value is 10 ms.
+
+*Default Value:* 5000
+
+``range_request_timeout_in_ms``
+-------------------------------
+How long the coordinator should wait for seq or index scans to complete.
+Lowest acceptable value is 10 ms.
+
+*Default Value:* 10000
+
+``write_request_timeout_in_ms``
+-------------------------------
+How long the coordinator should wait for writes to complete.
+Lowest acceptable value is 10 ms.
+
+*Default Value:* 2000
+
+``counter_write_request_timeout_in_ms``
+---------------------------------------
+How long the coordinator should wait for counter writes to complete.
+Lowest acceptable value is 10 ms.
+
+*Default Value:* 5000
+
+``cas_contention_timeout_in_ms``
+--------------------------------
+How long a coordinator should continue to retry a CAS operation
+that contends with other proposals for the same row.
+Lowest acceptable value is 10 ms.
+
+*Default Value:* 1000
+
+``truncate_request_timeout_in_ms``
+----------------------------------
+How long the coordinator should wait for truncates to complete
+(This can be much longer, because unless auto_snapshot is disabled
+we need to flush first so we can snapshot before removing the data.)
+Lowest acceptable value is 10 ms.
+
+*Default Value:* 60000
+
+``request_timeout_in_ms``
+-------------------------
+The default timeout for other, miscellaneous operations.
+Lowest acceptable value is 10 ms.
+
+*Default Value:* 10000
+
+``internode_application_send_queue_capacity_in_bytes``
+------------------------------------------------------
+*This option is commented out by default.*
+
+Defensive settings for protecting Cassandra from true network partitions.
+See (CASSANDRA-14358) for details.
+
+The amount of time to wait for internode tcp connections to establish.
+internode_tcp_connect_timeout_in_ms = 2000
+
+The amount of time unacknowledged data is allowed on a connection before we throw out the connection.
+Note this is only supported on Linux + epoll, and it appears to behave oddly above a setting of 30000
+(it takes much longer than 30s) as of Linux 4.12. If you want something that high, set this to 0,
+which picks up the OS default, and configure the net.ipv4.tcp_retries2 sysctl to be ~8.
+internode_tcp_user_timeout_in_ms = 30000
+
+The maximum continuous period a connection may be unwritable in application space
+internode_application_timeout_in_ms = 30000
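+
+Collected as they would appear in cassandra.yaml, the three settings described
+above (shown with the defaults stated in this section)::
+
+    internode_tcp_connect_timeout_in_ms: 2000
+    internode_tcp_user_timeout_in_ms: 30000
+    internode_application_timeout_in_ms: 30000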
+
+Global, per-endpoint and per-connection limits imposed on messages queued for delivery to other nodes
+and waiting to be processed on arrival from other nodes in the cluster.  These limits are applied to the on-wire
+size of the message being sent or received.
+
+The basic per-link limit is consumed in isolation before any endpoint or global limit is imposed.
+Each node-pair has three links: urgent, small and large.  So any given node may have a maximum of
+N*3*(internode_application_send_queue_capacity_in_bytes+internode_application_receive_queue_capacity_in_bytes)
+messages queued without any coordination between them, although in practice, with token-aware routing, only RF*tokens
+nodes should need to communicate with significant bandwidth.
+
+The per-endpoint limit is imposed on all messages exceeding the per-link limit, simultaneously with the global limit,
+on all links to or from a single node in the cluster.
+The global limit is imposed on all messages exceeding the per-link limit, simultaneously with the per-endpoint limit,
+on all links to or from any node in the cluster.
+
+
+*Default Value:* 4194304  # 4 MiB
+
+``internode_application_send_queue_reserve_endpoint_capacity_in_bytes``
+-----------------------------------------------------------------------
+*This option is commented out by default.*
+
+*Default Value:* 134217728  # 128 MiB
+
+``internode_application_send_queue_reserve_global_capacity_in_bytes``
+---------------------------------------------------------------------
+*This option is commented out by default.*
+
+*Default Value:* 536870912  # 512 MiB
+
+``internode_application_receive_queue_capacity_in_bytes``
+---------------------------------------------------------
+*This option is commented out by default.*
+
+*Default Value:* 4194304  # 4 MiB
+
+``internode_application_receive_queue_reserve_endpoint_capacity_in_bytes``
+--------------------------------------------------------------------------
+*This option is commented out by default.*
+
+*Default Value:* 134217728  # 128 MiB
+
+``internode_application_receive_queue_reserve_global_capacity_in_bytes``
+------------------------------------------------------------------------
+*This option is commented out by default.*
+
+*Default Value:* 536870912  # 512 MiB
+
+``slow_query_log_timeout_in_ms``
+--------------------------------
+
+
+How long before a node logs slow queries. SELECT queries that take longer than
+this timeout to execute will generate an aggregated log message, so that slow queries
+can be identified. Set this value to zero to disable slow query logging.
+
+*Default Value:* 500
+
+``cross_node_timeout``
+----------------------
+*This option is commented out by default.*
+
+Enable operation timeout information exchange between nodes to accurately
+measure request timeouts.  If disabled, replicas will assume that requests
+were forwarded to them instantly by the coordinator, which means that
+under overload conditions we will waste that much extra time processing 
+already-timed-out requests.
+
+Warning: It is generally assumed that users have set up NTP on their clusters, and that clocks are modestly in sync,
+since this is a requirement for general correctness of last write wins.
+
+*Default Value:* true
+
+``streaming_keep_alive_period_in_secs``
+---------------------------------------
+*This option is commented out by default.*
+
+Set the keep-alive period for streaming.
+This node will send a keep-alive message periodically with this period.
+If the node does not receive a keep-alive message from the peer for
+2 keep-alive cycles, the stream session times out and fails.
+The default value is 300s (5 minutes), which means a stalled stream
+times out in 10 minutes by default.
+
+*Default Value:* 300
+
+``streaming_connections_per_host``
+----------------------------------
+*This option is commented out by default.*
+
+Limit the number of connections per host for streaming.
+Increase this when you notice that joins are CPU-bound rather than network
+bound (for example a few nodes with big files).
+
+*Default Value:* 1
+
+``phi_convict_threshold``
+-------------------------
+*This option is commented out by default.*
+
+
+Phi value that must be reached for a host to be marked down.
+Most users should never need to adjust this.
+
+*Default Value:* 8
+
+``endpoint_snitch``
+-------------------
+
+endpoint_snitch -- Set this to a class that implements
+IEndpointSnitch.  The snitch has two functions:
+
+- it teaches Cassandra enough about your network topology to route
+  requests efficiently
+- it allows Cassandra to spread replicas around your cluster to avoid
+  correlated failures. It does this by grouping machines into
+  "datacenters" and "racks."  Cassandra will do its best not to have
+  more than one replica on the same "rack" (which may not actually
+  be a physical location)
+
+CASSANDRA WILL NOT ALLOW YOU TO SWITCH TO AN INCOMPATIBLE SNITCH
+ONCE DATA IS INSERTED INTO THE CLUSTER.  This would cause data loss.
+This means that if you start with the default SimpleSnitch, which
+locates every node on "rack1" in "datacenter1", your only options
+if you need to add another datacenter are GossipingPropertyFileSnitch
+(and the older PFS).  From there, if you want to migrate to an
+incompatible snitch like Ec2Snitch you can do it by adding new nodes
+under Ec2Snitch (which will locate them in a new "datacenter") and
+decommissioning the old ones.
+
+Out of the box, Cassandra provides:
+
+SimpleSnitch:
+   Treats Strategy order as proximity. This can improve cache
+   locality when disabling read repair.  Only appropriate for
+   single-datacenter deployments.
+
+GossipingPropertyFileSnitch:
+   This should be your go-to snitch for production use.  The rack
+   and datacenter for the local node are defined in
+   cassandra-rackdc.properties and propagated to other nodes via
+   gossip.  If cassandra-topology.properties exists, it is used as a
+   fallback, allowing migration from the PropertyFileSnitch.
+
+PropertyFileSnitch:
+   Proximity is determined by rack and data center, which are
+   explicitly configured in cassandra-topology.properties.
+
+Ec2Snitch:
+   Appropriate for EC2 deployments in a single Region. Loads Region
+   and Availability Zone information from the EC2 API. The Region is
+   treated as the datacenter, and the Availability Zone as the rack.
+   Only private IPs are used, so this will not work across multiple
+   Regions.
+
+Ec2MultiRegionSnitch:
+   Uses public IPs as broadcast_address to allow cross-region
+   connectivity.  (Thus, you should set seed addresses to the public
+   IP as well.) You will need to open the storage_port or
+   ssl_storage_port on the public IP firewall.  (For intra-Region
+   traffic, Cassandra will switch to the private IP after
+   establishing a connection.)
+
+RackInferringSnitch:
+   Proximity is determined by rack and data center, which are
+   assumed to correspond to the 3rd and 2nd octet of each node's IP
+   address, respectively.  Unless this happens to match your
+   deployment conventions, this is best used as an example of
+   writing a custom Snitch class and is provided in that spirit.
+
+You can use a custom Snitch by setting this to the full class name
+of the snitch, which will be assumed to be on your classpath.
+
+*Default Value:* SimpleSnitch
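+
+For a production cluster this typically means the following, with the rack and
+datacenter for each node then defined in cassandra-rackdc.properties as
+described above::
+
+    endpoint_snitch: GossipingPropertyFileSnitch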
+
+``dynamic_snitch_update_interval_in_ms``
+----------------------------------------
+
+Controls how often to perform the more expensive part of host score
+calculation.
+
+*Default Value:* 100 
+
+``dynamic_snitch_reset_interval_in_ms``
+---------------------------------------
+Controls how often to reset all host scores, allowing a bad host to
+possibly recover.
+
+*Default Value:* 600000
+
+``dynamic_snitch_badness_threshold``
+------------------------------------
+If set greater than zero, this will allow
+'pinning' of replicas to hosts in order to increase cache capacity.
+The badness threshold will control how much worse the pinned host has to be
+before the dynamic snitch will prefer other replicas over it.  This is
+expressed as a double which represents a percentage.  Thus, a value of
+0.2 means Cassandra would continue to prefer the static snitch values
+until the pinned host was 20% worse than the fastest.
+
+*Default Value:* 0.1
+
+``server_encryption_options``
+-----------------------------
+
+Enable or disable inter-node encryption
+JVM and netty defaults for supported SSL socket protocols and cipher suites can
+be replaced using custom encryption options. This is not recommended
+unless you have policies in place that dictate certain settings, or
+need to disable vulnerable ciphers or protocols in case the JVM cannot
+be updated.
+FIPS compliant settings can be configured at JVM level and should not
+involve changing encryption settings here:
+https://docs.oracle.com/javase/8/docs/technotes/guides/security/jsse/FIPS.html
+
+*NOTE:* No custom encryption options are enabled at the moment.
+The available internode options are: all, none, dc, rack.
+If set to dc, Cassandra will encrypt the traffic between the DCs.
+If set to rack, Cassandra will encrypt the traffic between the racks.
+
+The passwords used in these options must match the passwords used when generating
+the keystore and truststore.  For instructions on generating these files, see:
+http://download.oracle.com/javase/8/docs/technotes/guides/security/jsse/JSSERefGuide.html#CreateKeystore
+
+
+*Default Value (complex option)*::
+
+        # set to true for allowing secure incoming connections
+        enabled: false
+        # If enabled and optional are both set to true, encrypted and unencrypted connections are handled on the storage_port
+        optional: false
+        # if enabled, will open up an encrypted listening socket on ssl_storage_port. Should be used
+        # during upgrade to 4.0; otherwise, set to false.
+        enable_legacy_ssl_storage_port: false
+        # on outbound connections, determine which type of peers to securely connect to. 'enabled' must be set to true.
+        internode_encryption: none
+        keystore: conf/.keystore
+        keystore_password: cassandra
+        truststore: conf/.truststore
+        truststore_password: cassandra
+        # More advanced defaults below:
+        # protocol: TLS
+        # store_type: JKS
+        # cipher_suites: [TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA]
+        # require_client_auth: false
+        # require_endpoint_verification: false
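+
+As a hedged sketch (values are illustrative), encrypting only
+cross-datacenter traffic with the default keystore locations shown above
+might look like::
+
+    server_encryption_options:
+        enabled: true
+        internode_encryption: dc
+        keystore: conf/.keystore
+        keystore_password: cassandra
+        truststore: conf/.truststore
+        truststore_password: cassandra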
+
+``client_encryption_options``
+-----------------------------
+Enable or disable client-to-server encryption.
+
+*Default Value (complex option)*::
+
+        enabled: false
+        # If enabled and optional is set to true encrypted and unencrypted connections are handled.
+        optional: false
+        keystore: conf/.keystore
+        keystore_password: cassandra
+        # require_client_auth: false
+        # Set trustore and truststore_password if require_client_auth is true
+        # truststore: conf/.truststore
+        # truststore_password: cassandra
+        # More advanced defaults below:
+        # protocol: TLS
+        # store_type: JKS
+        # cipher_suites: [TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA]
+
+``internode_compression``
+-------------------------
+internode_compression controls whether traffic between nodes is
+compressed.
+Can be:
+
+all
+  all traffic is compressed
+
+dc
+  traffic between different datacenters is compressed
+
+none
+  nothing is compressed.
+
+*Default Value:* dc
+
+``inter_dc_tcp_nodelay``
+------------------------
+
+Enable or disable tcp_nodelay for inter-dc communication.
+Disabling it will result in larger (but fewer) network packets being sent,
+reducing overhead from the TCP protocol itself, at the cost of increasing
+latency if you block for cross-datacenter responses.
+
+*Default Value:* false
+
+``tracetype_query_ttl``
+-----------------------
+
+TTL for different trace types used during logging of the repair process.
+
+*Default Value:* 86400
+
+``tracetype_repair_ttl``
+------------------------
+
+*Default Value:* 604800
+
+``enable_user_defined_functions``
+---------------------------------
+
+UDFs (user defined functions) are disabled by default.
+As of Cassandra 3.0 there is a sandbox in place that should prevent execution of evil code.
+
+*Default Value:* false
+
+``enable_scripted_user_defined_functions``
+------------------------------------------
+
+Enables scripted UDFs (JavaScript UDFs).
+Java UDFs are always enabled, if enable_user_defined_functions is true.
+Enable this option to be able to use UDFs with "language javascript" or any custom JSR-223 provider.
+This option has no effect if enable_user_defined_functions is false.
+
+*Default Value:* false
+
+``windows_timer_interval``
+--------------------------
+
+The default Windows kernel timer and scheduling resolution is 15.6ms for power conservation.
+Lowering this value on Windows can provide much tighter latency and better throughput, however
+some virtualized environments may see a negative performance impact from changing this setting
+below their system default. The sysinternals 'clockres' tool can confirm your system's default
+setting.
+
+*Default Value:* 1
+
+``transparent_data_encryption_options``
+---------------------------------------
+
+
+Enables encrypting data at-rest (on disk). Different key providers can be plugged in, but the default reads from
+a JCE-style keystore. A single keystore can hold multiple keys, but the one referenced by
+the "key_alias" is the only key that will be used for encrypt opertaions; previously used keys
+can still (and should!) be in the keystore and will be used on decrypt operations
+(to handle the case of key rotation).
+
+It is strongly recommended to download and install Java Cryptography Extension (JCE)
+Unlimited Strength Jurisdiction Policy Files for your version of the JDK.
+(current link: http://www.oracle.com/technetwork/java/javase/downloads/jce8-download-2133166.html)
+
+Currently, only the following file types are supported for transparent data encryption, although
+more are coming in future Cassandra releases: commitlog, hints.
+
+*Default Value (complex option)*::
+
+        enabled: false
+        chunk_length_kb: 64
+        cipher: AES/CBC/PKCS5Padding
+        key_alias: testing:1
+        # CBC IV length for AES needs to be 16 bytes (which is also the default size)
+        # iv_length: 16
+        key_provider:
+          - class_name: org.apache.cassandra.security.JKSKeyProvider
+            parameters:
+              - keystore: conf/.keystore
+                keystore_password: cassandra
+                store_type: JCEKS
+                key_password: cassandra
+
+``tombstone_warn_threshold``
+----------------------------
+
+SAFETY THRESHOLDS
+
+When executing a scan, within or across a partition, we need to keep the
+tombstones seen in memory so we can return them to the coordinator, which
+will use them to make sure other replicas also know about the deleted rows.
+With workloads that generate a lot of tombstones, this can cause performance
+problems and even exhaust the server heap.
+(http://www.datastax.com/dev/blog/cassandra-anti-patterns-queues-and-queue-like-datasets)
+Adjust the thresholds here if you understand the dangers and want to
+scan more tombstones anyway.  These thresholds may also be adjusted at runtime
+using the StorageService mbean.
+
+*Default Value:* 1000
+
+``tombstone_failure_threshold``
+-------------------------------
+
+*Default Value:* 100000
+
+``batch_size_warn_threshold_in_kb``
+-----------------------------------
+
+Log WARN on any multiple-partition batch size exceeding this value. 5kb per batch by default.
+Caution should be taken when increasing the size of this threshold as it can lead to node instability.
+
+*Default Value:* 5
+
+``batch_size_fail_threshold_in_kb``
+-----------------------------------
+
+Fail any multiple-partition batch exceeding this value. 50kb (10x warn threshold) by default.
+
+*Default Value:* 50
+
+``unlogged_batch_across_partitions_warn_threshold``
+---------------------------------------------------
+
+Log WARN on any batches not of type LOGGED that span across more partitions than this limit.
+
+*Default Value:* 10
+
+``compaction_large_partition_warning_threshold_mb``
+---------------------------------------------------
+
+Log a warning when compacting partitions larger than this value
+
+*Default Value:* 100
+
+``gc_log_threshold_in_ms``
+--------------------------
+*This option is commented out by default.*
+
+GC Pauses greater than 200 ms will be logged at INFO level
+This threshold can be adjusted to minimize logging if necessary
+
+*Default Value:* 200
+
+``gc_warn_threshold_in_ms``
+---------------------------
+*This option is commented out by default.*
+
+GC Pauses greater than gc_warn_threshold_in_ms will be logged at WARN level
+Adjust the threshold based on your application throughput requirement. Setting to 0
+will deactivate the feature.
+
+*Default Value:* 1000
+
+``max_value_size_in_mb``
+------------------------
+*This option is commented out by default.*
+
+Maximum size of any value in SSTables. Safety measure to detect SSTable corruption
+early. Any value size larger than this threshold will result into marking an SSTable
+as corrupted. This should be positive and less than 2048.
+
+*Default Value:* 256
+
+``back_pressure_enabled``
+-------------------------
+
+Back-pressure settings:
+If enabled, the coordinator will apply the back-pressure strategy specified below to each mutation
+sent to replicas, with the aim of reducing pressure on overloaded replicas.
+
+*Default Value:* false
+
+``back_pressure_strategy``
+--------------------------
+The back-pressure strategy applied.
+The default implementation, RateBasedBackPressure, takes three arguments:
+high ratio, factor, and flow type, and uses the ratio between incoming mutation responses and outgoing mutation requests.
+If below high ratio, outgoing mutations are rate limited according to the incoming rate decreased by the given factor;
+if above high ratio, the rate limiting is increased by the given factor.
+The factor is usually best configured between 1 and 10; use larger values for a faster recovery
+at the expense of potentially more dropped mutations.
+The rate limiting is applied according to the flow type: if FAST, it's rate limited at the speed of the fastest replica;
+if SLOW, at the speed of the slowest one.
+New strategies can be added. Implementors need to implement org.apache.cassandra.net.BackpressureStrategy and
+provide a public constructor accepting a Map<String, Object>.
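+
+A sketch of configuring the default strategy with the three arguments named
+above, assuming the parameter names high_ratio, factor and flow used by the
+shipped default (the values are illustrative)::
+
+    back_pressure_strategy:
+        - class_name: org.apache.cassandra.net.RateBasedBackPressure
+          parameters:
+            - high_ratio: 0.90
+              factor: 5
+              flow: FAST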
+
+``otc_coalescing_strategy``
+---------------------------
+*This option is commented out by default.*
+
+Coalescing strategies:
+Coalescing multiple messages turns out to significantly boost message processing throughput (think doubling or more).
+On bare metal, the floor for packet processing throughput is high enough that many applications won't notice, but in
+virtualized environments, the point at which an application can be bound by network packet processing can be
+surprisingly low compared to the throughput of task processing that is possible inside a VM. It's not that bare metal
+doesn't benefit from coalescing messages, it's that the number of packets a bare metal network interface can process
+is sufficient for many applications such that no load starvation is experienced even without coalescing.
+There are other benefits to coalescing network messages that are harder to isolate with a simple metric like messages
+per second. By coalescing multiple tasks together, a network thread can process multiple messages for the cost of one
+trip to read from a socket, and all the task submission work can be done at the same time reducing context switching
+and increasing cache friendliness of network message processing.
+See CASSANDRA-8692 for details.
+
+Strategy to use for coalescing messages in OutboundTcpConnection.
+Can be fixed, movingaverage, timehorizon, disabled (default).
+You can also specify a subclass of CoalescingStrategies.CoalescingStrategy by name.
+
+*Default Value:* DISABLED
+
+``otc_coalescing_window_us``
+----------------------------
+*This option is commented out by default.*
+
+How many microseconds to wait for coalescing. For fixed strategy this is the amount of time after the first
+message is received before it will be sent with any accompanying messages. For moving average this is the
+maximum amount of time that will be waited as well as the interval at which messages must arrive on average
+for coalescing to be enabled.
+
+*Default Value:* 200
+
+``otc_coalescing_enough_coalesced_messages``
+--------------------------------------------
+*This option is commented out by default.*
+
+Do not try to coalesce messages if we already got that many messages. This should be more than 2 and less than 128.
+
+*Default Value:* 8
+
+``otc_backlog_expiration_interval_ms``
+--------------------------------------
+*This option is commented out by default.*
+
+How many milliseconds to wait between two expiration runs on the backlog (queue) of the OutboundTcpConnection.
+Expiration is done if messages are piling up in the backlog. Droppable messages are expired to free the memory
+taken by expired messages. The interval should be between 0 and 1000, and in most installations the default value
+will be appropriate. A smaller value could potentially expire messages slightly sooner at the expense of more CPU
+time and queue contention while iterating the backlog of messages.
+An interval of 0 disables any wait time, which is the behavior of former Cassandra versions.
+
+
+*Default Value:* 200
+
+``ideal_consistency_level``
+---------------------------
+*This option is commented out by default.*
+
+Track a metric per keyspace indicating whether replication achieved the ideal consistency
+level for writes without timing out. This is different from the consistency level requested by
+each write which may be lower in order to facilitate availability.
+
+*Default Value:* EACH_QUORUM
+
+``automatic_sstable_upgrade``
+-----------------------------
+*This option is commented out by default.*
+
+Automatically upgrade sstables after upgrade - if there is no ordinary compaction to do, the
+oldest non-upgraded sstable will get upgraded to the latest version.
+
+*Default Value:* false
+
+``max_concurrent_automatic_sstable_upgrades``
+---------------------------------------------
+*This option is commented out by default.*
+
+Limit the number of concurrent sstable upgrades.
+
+*Default Value:* 1
+
+``audit_logging_options``
+-------------------------
+
+Audit logging - Logs every incoming CQL command request, as well as authentication
+to a node. See the docs on audit_logging for full details about the various
+configuration options.
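+
+A minimal sketch, assuming the BinAuditLogger implementation shipped with
+Cassandra 4.0 (see the audit_logging docs for the full option list)::
+
+    audit_logging_options:
+        enabled: true
+        logger:
+          - class_name: BinAuditLogger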
+
+``full_query_logging_options``
+------------------------------
+*This option is commented out by default.*
+
+
+Default options for full query logging - these can be overridden from the
+command line when executing nodetool enablefullquerylog.
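+
+A hedged sketch of the shape these options take (the log_dir path is
+hypothetical; the remaining names follow the full query logging docs)::
+
+    full_query_logging_options:
+        log_dir: /var/log/cassandra/fql
+        roll_cycle: HOURLY
+        block: true
+        max_queue_weight: 268435456 # 256 MiB
+        max_log_size: 17179869184 # 16 GiB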
+
+``corrupted_tombstone_strategy``
+--------------------------------
+*This option is commented out by default.*
+
+Validate tombstones on reads and compaction.
+Can be either "disabled", "warn" or "exception".
+
+*Default Value:* disabled
+
+``diagnostic_events_enabled``
+-----------------------------
+
+Diagnostic events:
+If enabled, diagnostic events can be helpful for troubleshooting operational issues. Emitted events contain details
+on internal state and temporal relationships across events, accessible by clients via JMX.
+
+*Default Value:* false
+
+``native_transport_flush_in_batches_legacy``
+--------------------------------------------
+*This option is commented out by default.*
+
+Use native transport TCP message coalescing. If on upgrade to 4.0 you found your throughput decreasing, and in
+particular you run an old kernel or have very few client connections, this option might be worth evaluating.
+
+*Default Value:* false
+
+``repaired_data_tracking_for_range_reads_enabled``
+--------------------------------------------------
+
+Enable tracking of repaired state of data during reads and comparison between replicas
+Mismatches between the repaired sets of replicas can be characterized as either confirmed
+or unconfirmed. In this context, unconfirmed indicates that the presence of pending repair
+sessions, unrepaired partition tombstones, or some other condition means that the disparity
+cannot be considered conclusive. Confirmed mismatches should be a trigger for investigation
+as they may be indicative of corruption or data loss.
+There are separate flags for range vs partition reads as single partition reads are only tracked
+when CL > 1 and a digest mismatch occurs. Currently, range queries don't use digests so if
+enabled for range reads, all range reads will include repaired data tracking. As this adds
+some overhead, operators may wish to disable it whilst still enabling it for partition reads.
+
+*Default Value:* false
+
+``repaired_data_tracking_for_partition_reads_enabled``
+------------------------------------------------------
+
+*Default Value:* false
+
+``report_unconfirmed_repaired_data_mismatches``
+-----------------------------------------------
+If false, only confirmed mismatches will be reported. If true, a separate metric for unconfirmed
+mismatches will also be recorded. This is to avoid potential signal-to-noise issues, as unconfirmed
+mismatches are less actionable than confirmed ones.
+
+*Default Value:* false
+
+``enable_materialized_views``
+-----------------------------
+
+EXPERIMENTAL FEATURES
+
+Enables materialized view creation on this node.
+Materialized views are considered experimental and are not recommended for production use.
+
+*Default Value:* false
+
+``enable_sasi_indexes``
+-----------------------
+
+Enables SASI index creation on this node.
+SASI indexes are considered experimental and are not recommended for production use.
+
+*Default Value:* false
+
+``enable_transient_replication``
+--------------------------------
+
+Enables creation of transiently replicated keyspaces on this node.
+Transient replication is experimental and is not recommended for production use.
+
+*Default Value:* false
diff --git a/src/doc/4.0-beta4/_sources/configuration/cassandra_config_file.rst.txt b/src/doc/4.0-beta4/_sources/configuration/cassandra_config_file.rst.txt
new file mode 100644
index 0000000..b049201
--- /dev/null
+++ b/src/doc/4.0-beta4/_sources/configuration/cassandra_config_file.rst.txt
@@ -0,0 +1,2103 @@
+.. _cassandra-yaml:
+
+Cassandra Configuration File
+============================
+
+``cluster_name``
+----------------
+The name of the cluster. This is mainly used to prevent machines in
+one logical cluster from joining another.
+
+*Default Value:* 'Test Cluster'
+
+``num_tokens``
+--------------
+
+This defines the number of tokens randomly assigned to this node on the ring.
+The more tokens, relative to other nodes, the larger the proportion of data
+that this node will store. You probably want all nodes to have the same number
+of tokens assuming they have equal hardware capability.
+
+If you leave this unspecified, Cassandra will use the default of 1 token for legacy compatibility,
+and will use the initial_token as described below.
+
+Specifying initial_token will override this setting on the node's initial start;
+on subsequent starts, this setting will apply even if initial_token is set.
+
+
+*Default Value:* 256
+
+``allocate_tokens_for_keyspace``
+--------------------------------
+*This option is commented out by default.*
+
+Triggers automatic allocation of num_tokens tokens for this node. The allocation
+algorithm attempts to choose tokens in a way that optimizes replicated load over
+the nodes in the datacenter for the replica factor.
+
+The load assigned to each node will be close to proportional to its number of
+vnodes.
+
+Only supported with the Murmur3Partitioner.
+
+Replica factor is determined via the replication strategy used by the specified
+keyspace.
+
+*Default Value:* KEYSPACE
+
+``allocate_tokens_for_local_replication_factor``
+------------------------------------------------
+*This option is commented out by default.*
+
+The replica factor is explicitly set, regardless of keyspace or datacenter.
+This is the replica factor within the datacenter, as with NetworkTopologyStrategy (NTS).
+
+*Default Value:* 3
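+
+For illustration, a new cluster using fewer, evenly allocated tokens might
+combine this with num_tokens (the values here are illustrative)::
+
+    num_tokens: 16
+    allocate_tokens_for_local_replication_factor: 3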
+
+``initial_token``
+-----------------
+*This option is commented out by default.*
+
+initial_token allows you to specify tokens manually.  While you can use it with
+vnodes (num_tokens > 1, above) -- in which case you should provide a 
+comma-separated list -- it's primarily used when adding nodes to legacy clusters 
+that do not have vnodes enabled.
+
+``hinted_handoff_enabled``
+--------------------------
+
+May either be "true" or "false" to enable globally
+
+*Default Value:* true
+
+``hinted_handoff_disabled_datacenters``
+---------------------------------------
+*This option is commented out by default.*
+
+When hinted_handoff_enabled is true, a blacklist of datacenters that will not
+perform hinted handoff.
+
+*Default Value (complex option)*::
+
+    #    - DC1
+    #    - DC2
+
+``max_hint_window_in_ms``
+-------------------------
+This defines the maximum amount of time a dead host will have hints
+generated.  After it has been dead this long, new hints for it will not be
+created until it has been seen alive and gone down again.
+
+*Default Value:* 10800000 # 3 hours
+
+``hinted_handoff_throttle_in_kb``
+---------------------------------
+
+Maximum throttle in KBs per second, per delivery thread.  This will be
+reduced proportionally to the number of nodes in the cluster.  (If there
+are two nodes in the cluster, each delivery thread will use the maximum
+rate; if there are three, each will throttle to half of the maximum,
+since we expect two nodes to be delivering hints simultaneously.)
+
+*Default Value:* 1024
+
+``max_hints_delivery_threads``
+------------------------------
+
+Number of threads with which to deliver hints;
+consider increasing this number when you have multi-dc deployments, since
+cross-dc handoff tends to be slower.
+
+*Default Value:* 2
+
+``hints_directory``
+-------------------
+*This option is commented out by default.*
+
+Directory where Cassandra should store hints.
+If not set, the default directory is $CASSANDRA_HOME/data/hints.
+
+*Default Value:*  /var/lib/cassandra/hints
+
+``hints_flush_period_in_ms``
+----------------------------
+
+How often hints should be flushed from the internal buffers to disk.
+Will *not* trigger fsync.
+
+*Default Value:* 10000
+
+``max_hints_file_size_in_mb``
+-----------------------------
+
+Maximum size for a single hints file, in megabytes.
+
+*Default Value:* 128
+
+``hints_compression``
+---------------------
+*This option is commented out by default.*
+
+Compression to apply to the hint files. If omitted, hints files
+will be written uncompressed. LZ4, Snappy, and Deflate compressors
+are supported.
+
+*Default Value (complex option)*::
+
+    #   - class_name: LZ4Compressor
+    #     parameters:
+    #         -
+
+``batchlog_replay_throttle_in_kb``
+----------------------------------
+Maximum throttle in KBs per second, total. This will be
+reduced proportionally to the number of nodes in the cluster.
+
+*Default Value:* 1024
+
+``authenticator``
+-----------------
+
+Authentication backend, implementing IAuthenticator; used to identify users.
+Out of the box, Cassandra provides org.apache.cassandra.auth.{AllowAllAuthenticator,
+PasswordAuthenticator}.
+
+- AllowAllAuthenticator performs no checks - set it to disable authentication.
+- PasswordAuthenticator relies on username/password pairs to authenticate
+  users. It keeps usernames and hashed passwords in system_auth.roles table.
+  Please increase system_auth keyspace replication factor if you use this authenticator.
+  If using PasswordAuthenticator, CassandraRoleManager must also be used (see below)
+
+*Default Value:* AllowAllAuthenticator
+
+``authorizer``
+--------------
+
+Authorization backend, implementing IAuthorizer; used to limit access/provide permissions.
+Out of the box, Cassandra provides org.apache.cassandra.auth.{AllowAllAuthorizer,
+CassandraAuthorizer}.
+
+- AllowAllAuthorizer allows any action to any user - set it to disable authorization.
+- CassandraAuthorizer stores permissions in system_auth.role_permissions table. Please
+  increase system_auth keyspace replication factor if you use this authorizer.
+
+*Default Value:* AllowAllAuthorizer
+
+``role_manager``
+----------------
+
+Part of the Authentication & Authorization backend, implementing IRoleManager; used
+to maintain grants and memberships between roles.
+Out of the box, Cassandra provides org.apache.cassandra.auth.CassandraRoleManager,
+which stores role information in the system_auth keyspace. Most functions of the
+IRoleManager require an authenticated login, so unless the configured IAuthenticator
+actually implements authentication, most of this functionality will be unavailable.
+
+- CassandraRoleManager stores role data in the system_auth keyspace. Please
+  increase system_auth keyspace replication factor if you use this role manager.
+
+*Default Value:* CassandraRoleManager
+
+``network_authorizer``
+----------------------
+
+Network authorization backend, implementing INetworkAuthorizer; used to restrict user
+access to certain DCs.
+Out of the box, Cassandra provides org.apache.cassandra.auth.{AllowAllNetworkAuthorizer,
+CassandraNetworkAuthorizer}.
+
+- AllowAllNetworkAuthorizer allows access to any DC to any user - set it to disable authorization.
+- CassandraNetworkAuthorizer stores permissions in system_auth.network_permissions table. Please
+  increase system_auth keyspace replication factor if you use this authorizer.
+
+*Default Value:* AllowAllNetworkAuthorizer
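+
+Taken together, a sketch of enabling the internal (non-default) backends
+described in the preceding sections::
+
+    authenticator: PasswordAuthenticator
+    authorizer: CassandraAuthorizer
+    role_manager: CassandraRoleManager
+    network_authorizer: CassandraNetworkAuthorizer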
+
+``roles_validity_in_ms``
+------------------------
+
+Validity period for roles cache (fetching granted roles can be an expensive
+operation depending on the role manager; CassandraRoleManager is one example)
+Granted roles are cached for authenticated sessions in AuthenticatedUser and
+after the period specified here, become eligible for (async) reload.
+Defaults to 2000, set to 0 to disable caching entirely.
+Will be disabled automatically for AllowAllAuthenticator.
+
+*Default Value:* 2000
+
+``roles_update_interval_in_ms``
+-------------------------------
+*This option is commented out by default.*
+
+Refresh interval for roles cache (if enabled).
+After this interval, cache entries become eligible for refresh. Upon next
+access, an async reload is scheduled and the old value returned until it
+completes. If roles_validity_in_ms is non-zero, then this must be non-zero
+as well.
+Defaults to the same value as roles_validity_in_ms.
+
+*Default Value:* 2000
+
+``permissions_validity_in_ms``
+------------------------------
+
+Validity period for permissions cache (fetching permissions can be an
+expensive operation depending on the authorizer, CassandraAuthorizer is
+one example). Defaults to 2000, set to 0 to disable.
+Will be disabled automatically for AllowAllAuthorizer.
+
+*Default Value:* 2000
+
+``permissions_update_interval_in_ms``
+-------------------------------------
+*This option is commented out by default.*
+
+Refresh interval for permissions cache (if enabled).
+After this interval, cache entries become eligible for refresh. Upon next
+access, an async reload is scheduled and the old value returned until it
+completes. If permissions_validity_in_ms is non-zero, then this must be non-zero
+as well.
+Defaults to the same value as permissions_validity_in_ms.
+
+*Default Value:* 2000
+
+``credentials_validity_in_ms``
+------------------------------
+
+Validity period for credentials cache. This cache is tightly coupled to
+the provided PasswordAuthenticator implementation of IAuthenticator. If
+another IAuthenticator implementation is configured, this cache will not
+be automatically used and so the following settings will have no effect.
+Please note, credentials are cached in their encrypted form, so while
+activating this cache may reduce the number of queries made to the
+underlying table, it may not bring a significant reduction in the
+latency of individual authentication attempts.
+Defaults to 2000, set to 0 to disable credentials caching.
+
+*Default Value:* 2000
+
+``credentials_update_interval_in_ms``
+-------------------------------------
+*This option is commented out by default.*
+
+Refresh interval for credentials cache (if enabled).
+After this interval, cache entries become eligible for refresh. Upon next
+access, an async reload is scheduled and the old value returned until it
+completes. If credentials_validity_in_ms is non-zero, then this must be non-zero
+as well.
+Defaults to the same value as credentials_validity_in_ms.
+
+*Default Value:* 2000
+
+``partitioner``
+---------------
+
+The partitioner is responsible for distributing groups of rows (by
+partition key) across nodes in the cluster. The partitioner can NOT be
+changed without reloading all data.  If you are adding nodes or upgrading,
+you should set this to the same partitioner that you are currently using.
+
+The default partitioner is the Murmur3Partitioner. Older partitioners
+such as the RandomPartitioner, ByteOrderedPartitioner, and
+OrderPreservingPartitioner have been included for backward compatibility only.
+For new clusters, you should NOT change this value.
+
+
+*Default Value:* org.apache.cassandra.dht.Murmur3Partitioner
+
+``data_file_directories``
+-------------------------
+*This option is commented out by default.*
+
+Directories where Cassandra should store data on disk. If multiple
+directories are specified, Cassandra will spread data evenly across 
+them by partitioning the token ranges.
+If not set, the default directory is $CASSANDRA_HOME/data/data.
+
+*Default Value (complex option)*::
+
+    #     - /var/lib/cassandra/data
+
+``commitlog_directory``
+-----------------------
+*This option is commented out by default.*
+
+Directory where Cassandra should store the commit log. When running on
+magnetic HDD, this should be a separate spindle from the data directories.
+If not set, the default directory is $CASSANDRA_HOME/data/commitlog.
+
+*Default Value:*  /var/lib/cassandra/commitlog
+
+``cdc_enabled``
+---------------
+
+Enable / disable CDC functionality on a per-node basis. This modifies the logic used
+for write path allocation rejection (standard: never reject. cdc: reject Mutation
+containing a CDC-enabled table if at space limit in cdc_raw_directory).
+
+*Default Value:* false
+
+``cdc_raw_directory``
+---------------------
+*This option is commented out by default.*
+
+CommitLogSegments are moved to this directory on flush if cdc_enabled: true and the
+segment contains mutations for a CDC-enabled table. This should be placed on a
+separate spindle than the data directories. If not set, the default directory is
+$CASSANDRA_HOME/data/cdc_raw.
+
+*Default Value:*  /var/lib/cassandra/cdc_raw
+
+``disk_failure_policy``
+-----------------------
+
+Policy for data disk failures:
+
+die
+  shut down gossip and client transports and kill the JVM for any fs errors or
+  single-sstable errors, so the node can be replaced.
+
+stop_paranoid
+  shut down gossip and client transports even for single-sstable errors,
+  kill the JVM for errors during startup.
+
+stop
+  shut down gossip and client transports, leaving the node effectively dead, but
+  can still be inspected via JMX, kill the JVM for errors during startup.
+
+best_effort
+   stop using the failed disk and respond to requests based on
+   remaining available sstables.  This means you WILL see obsolete
+   data at CL.ONE!
+
+ignore
+   ignore fatal errors and let requests fail, as in pre-1.2 Cassandra
+
+*Default Value:* stop
+
+``commit_failure_policy``
+-------------------------
+
+Policy for commit disk failures:
+
+die
+  shut down the node and kill the JVM, so the node can be replaced.
+
+stop
+  shut down the node, leaving the node effectively dead, but
+  can still be inspected via JMX.
+
+stop_commit
+  shut down the commit log, letting writes collect but
+  continuing to service reads, as in pre-2.0.5 Cassandra
+
+ignore
+  ignore fatal errors and let the batches fail
+
+*Default Value:* stop
+
+``prepared_statements_cache_size_mb``
+-------------------------------------
+
+Maximum size of the native protocol prepared statement cache
+
+Valid values are either "auto" (omitting the value) or a value greater than 0.
+
+Note that specifying a too large value will result in long running GCs and possibly
+out-of-memory errors. Keep the value at a small fraction of the heap.
+
+If you constantly see "prepared statements discarded in the last minute because
+cache limit reached" messages, the first step is to investigate the root cause
+of these messages and check whether prepared statements are used correctly -
+i.e. use bind markers for variable parts.
+
+Only change the default value if you really have more prepared statements than
+fit in the cache. In most cases it is not necessary to change this value.
+Constantly re-preparing statements is a performance penalty.
+
+Default value ("auto") is 1/256th of the heap or 10MB, whichever is greater
+
+``key_cache_size_in_mb``
+------------------------
+
+Maximum size of the key cache in memory.
+
+Each key cache hit saves 1 seek and each row cache hit saves 2 seeks at the
+minimum, sometimes more. The key cache is fairly tiny for the amount of
+time it saves, so it's worthwhile to use it at large numbers.
+The row cache saves even more time, but must contain the entire row,
+so it is extremely space-intensive. It's best to only use the
+row cache if you have hot rows or static rows.
+
+NOTE: if you reduce the size, you may not get your hottest keys loaded on startup.
+
+Default value is empty to make it "auto" (min(5% of Heap (in MB), 100MB)). Set to 0 to disable key cache.
+
+``key_cache_save_period``
+-------------------------
+
+Duration in seconds after which Cassandra should
+save the key cache. Caches are saved to saved_caches_directory as
+specified in this configuration file.
+
+Saved caches greatly improve cold-start speeds, and are relatively cheap in
+terms of I/O for the key cache. Row cache saving is much more expensive and
+has limited use.
+
+Default is 14400 or 4 hours.
+
+*Default Value:* 14400
+
+``key_cache_keys_to_save``
+--------------------------
+*This option is commented out by default.*
+
+Number of keys from the key cache to save.
+Disabled by default, meaning all keys are going to be saved.
+
+*Default Value:* 100
+
+``row_cache_class_name``
+------------------------
+*This option is commented out by default.*
+
+Row cache implementation class name. Available implementations:
+
+org.apache.cassandra.cache.OHCProvider
+  Fully off-heap row cache implementation (default).
+
+org.apache.cassandra.cache.SerializingCacheProvider
+  This is the row cache implementation available
+  in previous releases of Cassandra.
+
+*Default Value:* org.apache.cassandra.cache.OHCProvider
+
+``row_cache_size_in_mb``
+------------------------
+
+Maximum size of the row cache in memory.
+Please note that the OHC cache implementation requires some additional off-heap memory to manage
+the map structures and some in-flight memory during operations before/after cache entries can be
+accounted against the cache capacity. This overhead is usually small compared to the whole capacity.
+Do not specify more memory than the system can afford in the worst usual situation, and leave some
+headroom for the OS block level cache. Never allow your system to swap.
+
+Default value is 0, to disable row caching.
+
+*Default Value:* 0
+
+``row_cache_save_period``
+-------------------------
+
+Duration in seconds after which Cassandra should save the row cache.
+Caches are saved to saved_caches_directory as specified in this configuration file.
+
+Saved caches greatly improve cold-start speeds, and are relatively cheap in
+terms of I/O for the key cache. Row cache saving is much more expensive and
+has limited use.
+
+Default is 0 to disable saving the row cache.
+
+*Default Value:* 0
+
+``row_cache_keys_to_save``
+--------------------------
+*This option is commented out by default.*
+
+Number of keys from the row cache to save.
+Specify 0 (which is the default), meaning all keys are going to be saved.
+
+*Default Value:* 100
+
+``counter_cache_size_in_mb``
+----------------------------
+
+Maximum size of the counter cache in memory.
+
+Counter cache helps to reduce counter locks' contention for hot counter cells.
+In case of RF = 1 a counter cache hit will cause Cassandra to skip the read before
+write entirely. With RF > 1 a counter cache hit will still help to reduce the duration
+of the lock hold, helping with hot counter cell updates, but will not allow skipping
+the read entirely. Only the local (clock, count) tuple of a counter cell is kept
+in memory, not the whole counter, so it's relatively cheap.
+
+NOTE: if you reduce the size, you may not get your hottest keys loaded on startup.
+
+Default value is empty to make it "auto" (min(2.5% of Heap (in MB), 50MB)). Set to 0 to disable counter cache.
+NOTE: if you perform counter deletes and rely on low gcgs, you should disable the counter cache.
+
+``counter_cache_save_period``
+-----------------------------
+
+Duration in seconds after which Cassandra should
+save the counter cache (keys only). Caches are saved to saved_caches_directory as
+specified in this configuration file.
+
+Default is 7200 or 2 hours.
+
+*Default Value:* 7200
+
+``counter_cache_keys_to_save``
+------------------------------
+*This option is commented out by default.*
+
+Number of keys from the counter cache to save.
+Disabled by default, meaning all keys are going to be saved.
+
+*Default Value:* 100
+
+``saved_caches_directory``
+--------------------------
+*This option is commented out by default.*
+
+Directory where Cassandra should store the saved caches.
+If not set, the default directory is $CASSANDRA_HOME/data/saved_caches.
+
+*Default Value:*  /var/lib/cassandra/saved_caches
+
+``commitlog_sync_batch_window_in_ms``
+-------------------------------------
+*This option is commented out by default.*
+
+commitlog_sync may be either "periodic", "group", or "batch." 
+
+When in batch mode, Cassandra won't ack writes until the commit log
+has been flushed to disk.  Each incoming write will trigger the flush task.
+commitlog_sync_batch_window_in_ms is a deprecated value. Previously it had
+almost no value, and is being removed.
+
+
+*Default Value:* 2
+
+``commitlog_sync_group_window_in_ms``
+-------------------------------------
+*This option is commented out by default.*
+
+group mode is similar to batch mode, where Cassandra will not ack writes
+until the commit log has been flushed to disk. The difference is group
+mode will wait up to commitlog_sync_group_window_in_ms between flushes.
+
+
+*Default Value:* 1000
+
+``commitlog_sync``
+------------------
+
+the default option is "periodic" where writes may be acked immediately
+and the CommitLog is simply synced every commitlog_sync_period_in_ms
+milliseconds.
+
+*Default Value:* periodic
+
+``commitlog_sync_period_in_ms``
+-------------------------------
+
+*Default Value:* 10000
+
+``periodic_commitlog_sync_lag_block_in_ms``
+-------------------------------------------
+*This option is commented out by default.*
+
+When in periodic commitlog mode, the number of milliseconds to block writes
+while waiting for a slow disk flush to complete.
+
+``commitlog_segment_size_in_mb``
+--------------------------------
+
+The size of the individual commitlog file segments.  A commitlog
+segment may be archived, deleted, or recycled once all the data
+in it (potentially from each columnfamily in the system) has been
+flushed to sstables.
+
+The default size is 32, which is almost always fine, but if you are
+archiving commitlog segments (see commitlog_archiving.properties),
+then you probably want a finer granularity of archiving; 8 or 16 MB
+is reasonable.
+Max mutation size is also configurable via max_mutation_size_in_kb setting in
+cassandra.yaml. The default is half of commitlog_segment_size_in_mb * 1024.
+This should be positive and less than 2048.
+
+NOTE: If max_mutation_size_in_kb is set explicitly then commitlog_segment_size_in_mb must
+be set to at least twice the size of max_mutation_size_in_kb / 1024.
+
+
+*Default Value:* 32
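+
+A worked example of the rule above (the values are illustrative): to allow
+32 MB mutations, max_mutation_size_in_kb is 32768, so the segment size must
+be at least 2 * 32768 / 1024 = 64 MB::
+
+    commitlog_segment_size_in_mb: 64
+    max_mutation_size_in_kb: 32768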
+
+``commitlog_compression``
+-------------------------
+*This option is commented out by default.*
+
+Compression to apply to the commit log. If omitted, the commit log
+will be written uncompressed.  LZ4, Snappy, and Deflate compressors
+are supported.
+
+*Default Value (complex option)*::
+
+    #   - class_name: LZ4Compressor
+    #     parameters:
+    #         -
+
+``flush_compression``
+---------------------
+*This option is commented out by default.*
+
+Compression to apply to SSTables as they flush for compressed tables.
+Note that tables without compression enabled do not respect this flag.
+
+As high ratio compressors like LZ4HC, Zstd, and Deflate can potentially
+block flushes for too long, the default is to flush with a known fast
+compressor in those cases. Options are:
+
+none
+  Flush without compressing blocks but while still doing checksums.
+
+fast
+  Flush with a fast compressor. If the table is already using a
+  fast compressor that compressor is used.
+
+table
+  Always flush with the same compressor that the table uses. This
+  was the pre 4.0 behavior.
+
+*Default Value:* fast
+
+``seed_provider``
+-----------------
+
+Any class that implements the SeedProvider interface and has a
+constructor that takes a Map<String, String> of parameters will do.
+
+*Default Value (complex option)*::
+
+        # Addresses of hosts that are deemed contact points. 
+        # Cassandra nodes use this list of hosts to find each other and learn
+        # the topology of the ring.  You must change this if you are running
+        # multiple nodes!
+        - class_name: org.apache.cassandra.locator.SimpleSeedProvider
+          parameters:
+              # seeds is actually a comma-delimited list of addresses.
+              # Ex: "<ip1>,<ip2>,<ip3>"
+              - seeds: "127.0.0.1:7000"
+
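+For a multi-node cluster the seed list is typically widened to a few
+stable nodes; a sketch with placeholder addresses::
+
+    seed_provider:
+      - class_name: org.apache.cassandra.locator.SimpleSeedProvider
+        parameters:
+          - seeds: "10.0.0.1:7000,10.0.0.2:7000"
+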
+``concurrent_reads``
+--------------------
+For workloads with more data than can fit in memory, Cassandra's
+bottleneck will be reads that need to fetch data from
+disk. "concurrent_reads" should be set to (16 * number_of_drives) in
+order to allow the operations to enqueue low enough in the stack
+that the OS and drives can reorder them. The same applies to
+"concurrent_counter_writes", since counter writes read the current
+values before incrementing and writing them back.
+
+On the other hand, since writes are almost never IO bound, the ideal
+number of "concurrent_writes" is dependent on the number of cores in
+your system; (8 * number_of_cores) is a good rule of thumb.
+
+*Default Value:* 32
+
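+Applying the rules of thumb above to a hypothetical node with 4 data
+drives and 16 cores::
+
+    concurrent_reads: 64             # 16 * number_of_drives
+    concurrent_counter_writes: 64    # same rule: reads before writing back
+    concurrent_writes: 128           # 8 * number_of_cores
+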
+``concurrent_writes``
+---------------------
+
+*Default Value:* 32
+
+``concurrent_counter_writes``
+-----------------------------
+
+*Default Value:* 32
+
+``concurrent_materialized_view_writes``
+---------------------------------------
+
+Because materialized view writes involve a read, this should be
+limited by the lesser of concurrent reads and concurrent writes.
+
+*Default Value:* 32
+
+``file_cache_size_in_mb``
+-------------------------
+*This option is commented out by default.*
+
+Maximum memory to use for sstable chunk cache and buffer pooling.
+32MB of this is reserved for pooling buffers; the rest is used as a
+cache that holds uncompressed sstable chunks.
+Defaults to the smaller of 1/4 of heap or 512MB. This pool is allocated off-heap,
+so is in addition to the memory allocated for heap. The cache also has on-heap
+overhead which is roughly 128 bytes per chunk (i.e. 0.2% of the reserved size
+if the default 64k chunk size is used).
+Memory is only allocated when needed.
+
+*Default Value:* 512
+
+``buffer_pool_use_heap_if_exhausted``
+-------------------------------------
+*This option is commented out by default.*
+
+Flag indicating whether to allocate on or off heap when the sstable buffer
+pool is exhausted, that is when it has exceeded the maximum memory
+file_cache_size_in_mb, beyond which it will not cache buffers but allocate on request.
+
+
+*Default Value:* true
+
+``disk_optimization_strategy``
+------------------------------
+*This option is commented out by default.*
+
+The strategy for optimizing disk read
+Possible values are:
+ssd (for solid state disks, the default)
+spinning (for spinning disks)
+
+*Default Value:* ssd
+
+``memtable_heap_space_in_mb``
+-----------------------------
+*This option is commented out by default.*
+
+Total permitted memory to use for memtables. Cassandra will stop
+accepting writes when the limit is exceeded until a flush completes,
+and will trigger a flush based on memtable_cleanup_threshold
+If omitted, Cassandra will set both to 1/4 the size of the heap.
+
+*Default Value:* 2048
+
+``memtable_offheap_space_in_mb``
+--------------------------------
+*This option is commented out by default.*
+
+*Default Value:* 2048
+
+``memtable_cleanup_threshold``
+------------------------------
+*This option is commented out by default.*
+
+memtable_cleanup_threshold is deprecated. The default calculation
+is the only reasonable choice. See the comments on  memtable_flush_writers
+for more information.
+
+Ratio of occupied non-flushing memtable size to total permitted size
+that will trigger a flush of the largest memtable. A larger threshold
+means larger, less frequent flushes and hence less compaction, but also
+less concurrent flush activity, which can make it difficult to keep your
+disks fed under heavy write load.
+
+memtable_cleanup_threshold defaults to 1 / (memtable_flush_writers + 1)
+
+*Default Value:* 0.11
+
+``memtable_allocation_type``
+----------------------------
+
+Specify the way Cassandra allocates and manages memtable memory.
+Options are:
+
+heap_buffers
+  on heap nio buffers
+
+offheap_buffers
+  off heap (direct) nio buffers
+
+offheap_objects
+   off heap objects
+
+*Default Value:* heap_buffers
+
+``repair_session_space_in_mb``
+------------------------------
+*This option is commented out by default.*
+
+Limit memory usage for Merkle tree calculations during repairs. The default
+is 1/16th of the available heap. The main tradeoff is that smaller trees
+have less resolution, which can lead to over-streaming data. If you see heap
+pressure during repairs, consider lowering this, but you cannot go below
+one megabyte. If you see lots of over-streaming, consider raising
+this or using subrange repair.
+
+For more details see https://issues.apache.org/jira/browse/CASSANDRA-14096.
+
+
+``commitlog_total_space_in_mb``
+-------------------------------
+*This option is commented out by default.*
+
+Total space to use for commit logs on disk.
+
+If space gets above this value, Cassandra will flush every dirty CF
+in the oldest segment and remove it.  So a small total commitlog space
+will tend to cause more flush activity on less-active columnfamilies.
+
+The default value is the smaller of 8192 and 1/4 of the total space
+of the commitlog volume.
+
+
+*Default Value:* 8192
+
+``memtable_flush_writers``
+--------------------------
+*This option is commented out by default.*
+
+This sets the number of memtable flush writer threads per disk
+as well as the total number of memtables that can be flushed concurrently.
+These are generally a combination of compute and IO bound.
+
+Memtable flushing is more CPU efficient than memtable ingest and a single thread
+can keep up with the ingest rate of a whole server on a single fast disk
+until it temporarily becomes IO bound under contention typically with compaction.
+At that point you need multiple flush threads. At some point in the future
+it may become CPU bound all the time.
+
+You can tell if flushing is falling behind using the MemtablePool.BlockedOnAllocation
+metric which should be 0, but will be non-zero if threads are blocked waiting on flushing
+to free memory.
+
+memtable_flush_writers defaults to two for a single data directory.
+This means that two memtables can be flushed concurrently to the single data directory.
+If you have multiple data directories the default is one memtable flushing at a time,
+but the flush will use a thread per data directory, so you will get two or more writers.
+
+Two is generally enough to flush on a fast disk [array] mounted as a single data directory.
+Adding more flush writers will result in smaller, more frequent flushes that introduce more
+compaction overhead.
+
+There is a direct tradeoff between the number of memtables that can be flushed concurrently
+and flush size and frequency. More is not better; you just need enough flush writers
+to never stall waiting for flushing to free memory.
+
+
+*Default Value:* 2
+
+``cdc_total_space_in_mb``
+-------------------------
+*This option is commented out by default.*
+
+Total space to use for change-data-capture logs on disk.
+
+If space gets above this value, Cassandra will throw WriteTimeoutException
+on Mutations including tables with CDC enabled. A CDCCompactor is responsible
+for parsing the raw CDC logs and deleting them when parsing is completed.
+
+The default value is the min of 4096 mb and 1/8th of the total space
+of the drive where cdc_raw_directory resides.
+
+*Default Value:* 4096
+
+``cdc_free_space_check_interval_ms``
+------------------------------------
+*This option is commented out by default.*
+
+When we hit our cdc_raw limit and the CDCCompactor is either running behind
+or experiencing backpressure, we check at the following interval to see if any
+new space for cdc-tracked tables has been made available. Defaults to 250ms.
+
+*Default Value:* 250
+
+``index_summary_capacity_in_mb``
+--------------------------------
+
+A fixed memory pool size in MB for SSTable index summaries. If left
+empty, this will default to 5% of the heap size. If the memory usage of
+all index summaries exceeds this limit, SSTables with low read rates will
+shrink their index summaries in order to meet this limit.  However, this
+is a best-effort process. In extreme conditions Cassandra may need to use
+more than this amount of memory.
+
+``index_summary_resize_interval_in_minutes``
+--------------------------------------------
+
+How frequently index summaries should be resampled.  This is done
+periodically to redistribute memory from the fixed-size pool to sstables
+proportional to their recent read rates.  Setting to -1 will disable this
+process, leaving existing index summaries at their current sampling level.
+
+*Default Value:* 60
+
+``trickle_fsync``
+-----------------
+
+Whether, when doing sequential writing, to fsync() at intervals in
+order to force the operating system to flush the dirty
+buffers. Enable this to avoid sudden dirty buffer flushing from
+impacting read latencies. Almost always a good idea on SSDs; not
+necessarily on platters.
+
+*Default Value:* false
+
+``trickle_fsync_interval_in_kb``
+--------------------------------
+
+*Default Value:* 10240
+
+``storage_port``
+----------------
+
+TCP port for commands and data.
+For security reasons, you should not expose this port to the internet.  Firewall it if needed.
+
+*Default Value:* 7000
+
+``ssl_storage_port``
+--------------------
+
+SSL port, for legacy encrypted communication. This property is unused unless enabled in
+server_encryption_options (see below). As of cassandra 4.0, this property is deprecated
+as a single port can be used for either/both secure and insecure connections.
+For security reasons, you should not expose this port to the internet. Firewall it if needed.
+
+*Default Value:* 7001
+
+``listen_address``
+------------------
+
+Address or interface to bind to and tell other Cassandra nodes to connect to.
+You _must_ change this if you want multiple nodes to be able to communicate!
+
+Set listen_address OR listen_interface, not both.
+
+Leaving it blank leaves it up to InetAddress.getLocalHost(). This
+will always do the Right Thing _if_ the node is properly configured
+(hostname, name resolution, etc), and the Right Thing is to use the
+address associated with the hostname (it might not be). If unresolvable
+it will fall back to InetAddress.getLoopbackAddress(), which is wrong for production systems.
+
+Setting listen_address to 0.0.0.0 is always wrong.
+
+
+*Default Value:* localhost
+
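+A sketch for a multi-node deployment, with a placeholder address (set
+exactly one of the address or the interface, never both)::
+
+    listen_address: 10.0.0.5
+    # listen_interface: eth0
+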
+``listen_interface``
+--------------------
+*This option is commented out by default.*
+
+Set listen_address OR listen_interface, not both. Interfaces must correspond
+to a single address, IP aliasing is not supported.
+
+*Default Value:* eth0
+
+``listen_interface_prefer_ipv6``
+--------------------------------
+*This option is commented out by default.*
+
+If you choose to specify the interface by name and the interface has an ipv4 and an ipv6 address
+you can specify which should be chosen using listen_interface_prefer_ipv6. If false the first ipv4
+address will be used. If true the first ipv6 address will be used. Defaults to false preferring
+ipv4. If there is only one address it will be selected regardless of ipv4/ipv6.
+
+*Default Value:* false
+
+``broadcast_address``
+---------------------
+*This option is commented out by default.*
+
+Address to broadcast to other Cassandra nodes.
+Leaving this blank will set it to the same value as listen_address.
+
+*Default Value:* 1.2.3.4
+
+``listen_on_broadcast_address``
+-------------------------------
+*This option is commented out by default.*
+
+When using multiple physical network interfaces, set this
+to true to listen on broadcast_address in addition to
+the listen_address, allowing nodes to communicate on both
+interfaces.
+Ignore this property if the network configuration automatically
+routes between the public and private networks, as on EC2.
+
+*Default Value:* false
+
+``internode_authenticator``
+---------------------------
+*This option is commented out by default.*
+
+Internode authentication backend, implementing IInternodeAuthenticator;
+used to allow/disallow connections from peer nodes.
+
+*Default Value:* org.apache.cassandra.auth.AllowAllInternodeAuthenticator
+
+``start_native_transport``
+--------------------------
+
+Whether to start the native transport server.
+The address on which the native transport is bound is defined by rpc_address.
+
+*Default Value:* true
+
+``native_transport_port``
+-------------------------
+Port for the CQL native transport to listen for clients on.
+For security reasons, you should not expose this port to the internet.  Firewall it if needed.
+
+*Default Value:* 9042
+
+``native_transport_port_ssl``
+-----------------------------
+*This option is commented out by default.*
+Enabling native transport encryption in client_encryption_options allows you to either use
+encryption for the standard port or to use a dedicated, additional port along with the unencrypted
+standard native_transport_port.
+Enabling client encryption and keeping native_transport_port_ssl disabled will use encryption
+for native_transport_port. Setting native_transport_port_ssl to a different value
+from native_transport_port will use encryption for native_transport_port_ssl while
+keeping native_transport_port unencrypted.
+
+*Default Value:* 9142
+
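+For example, to keep the standard port unencrypted while serving TLS on
+a dedicated port (a sketch; client_encryption_options must also be
+enabled for this to take effect)::
+
+    native_transport_port: 9042
+    native_transport_port_ssl: 9142
+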
+``native_transport_max_threads``
+--------------------------------
+*This option is commented out by default.*
+The maximum threads for handling requests (note that idle threads are stopped
+after 30 seconds so there is no corresponding minimum setting).
+
+*Default Value:* 128
+
+``native_transport_max_frame_size_in_mb``
+-----------------------------------------
+*This option is commented out by default.*
+
+The maximum allowed frame size. Frames (requests) larger than this will
+be rejected as invalid. The default is 256MB. If you're changing this parameter,
+you may want to adjust max_value_size_in_mb accordingly. This should be positive and less than 2048.
+
+*Default Value:* 256
+
+``native_transport_frame_block_size_in_kb``
+-------------------------------------------
+*This option is commented out by default.*
+
+If checksumming is enabled as a protocol option, denotes the size of the chunks into which
+frame bodies will be broken and checksummed.
+
+*Default Value:* 32
+
+``native_transport_max_concurrent_connections``
+-----------------------------------------------
+*This option is commented out by default.*
+
+The maximum number of concurrent client connections.
+The default is -1, which means unlimited.
+
+*Default Value:* -1
+
+``native_transport_max_concurrent_connections_per_ip``
+------------------------------------------------------
+*This option is commented out by default.*
+
+The maximum number of concurrent client connections per source ip.
+The default is -1, which means unlimited.
+
+*Default Value:* -1
+
+``native_transport_allow_older_protocols``
+------------------------------------------
+
+Controls whether Cassandra honors older, yet currently supported, protocol versions.
+The default is true, which means all supported protocols will be honored.
+
+*Default Value:* true
+
+``native_transport_idle_timeout_in_ms``
+---------------------------------------
+*This option is commented out by default.*
+
+Controls when idle client connections are closed. Idle connections are ones that had neither reads
+nor writes for a time period.
+
+Clients may implement heartbeats by sending OPTIONS native protocol message after a timeout, which
+will reset idle timeout timer on the server side. To close idle client connections, corresponding
+values for heartbeat intervals have to be set on the client side.
+
+Idle connection timeouts are disabled by default.
+
+*Default Value:* 60000
+
+``rpc_address``
+---------------
+
+The address or interface to bind the native transport server to.
+
+Set rpc_address OR rpc_interface, not both.
+
+Leaving rpc_address blank has the same effect as on listen_address
+(i.e. it will be based on the configured hostname of the node).
+
+Note that unlike listen_address, you can specify 0.0.0.0, but you must also
+set broadcast_rpc_address to a value other than 0.0.0.0.
+
+For security reasons, you should not expose this port to the internet.  Firewall it if needed.
+
+*Default Value:* localhost
+
+``rpc_interface``
+-----------------
+*This option is commented out by default.*
+
+Set rpc_address OR rpc_interface, not both. Interfaces must correspond
+to a single address, IP aliasing is not supported.
+
+*Default Value:* eth1
+
+``rpc_interface_prefer_ipv6``
+-----------------------------
+*This option is commented out by default.*
+
+If you choose to specify the interface by name and the interface has an ipv4 and an ipv6 address
+you can specify which should be chosen using rpc_interface_prefer_ipv6. If false the first ipv4
+address will be used. If true the first ipv6 address will be used. Defaults to false preferring
+ipv4. If there is only one address it will be selected regardless of ipv4/ipv6.
+
+*Default Value:* false
+
+``broadcast_rpc_address``
+-------------------------
+*This option is commented out by default.*
+
+RPC address to broadcast to drivers and other Cassandra nodes. This cannot
+be set to 0.0.0.0. If left blank, this will be set to the value of
+rpc_address. If rpc_address is set to 0.0.0.0, broadcast_rpc_address must
+be set.
+
+*Default Value:* 1.2.3.4
+
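+A common pattern behind NAT or in containers, sketched with placeholder
+addresses: bind all interfaces but advertise a routable address::
+
+    rpc_address: 0.0.0.0
+    broadcast_rpc_address: 203.0.113.10
+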
+``rpc_keepalive``
+-----------------
+
+Enable or disable keepalive on rpc/native connections.
+
+*Default Value:* true
+
+``internode_send_buff_size_in_bytes``
+-------------------------------------
+*This option is commented out by default.*
+
+Uncomment to set the socket buffer size for internode communication.
+Note that when setting this, the buffer size is limited by net.core.wmem_max,
+and when not setting it, it is defined by net.ipv4.tcp_wmem.
+See also:
+/proc/sys/net/core/wmem_max
+/proc/sys/net/core/rmem_max
+/proc/sys/net/ipv4/tcp_wmem
+/proc/sys/net/ipv4/tcp_rmem
+and 'man tcp'
+
+``internode_recv_buff_size_in_bytes``
+-------------------------------------
+*This option is commented out by default.*
+
+Uncomment to set the socket buffer size for internode communication.
+Note that when setting this, the buffer size is limited by net.core.rmem_max,
+and when not setting it, it is defined by net.ipv4.tcp_rmem.
+
+``incremental_backups``
+-----------------------
+
+Set to true to have Cassandra create a hard link to each sstable
+flushed or streamed locally in a backups/ subdirectory of the
+keyspace data.  Removing these links is the operator's
+responsibility.
+
+*Default Value:* false
+
+``snapshot_before_compaction``
+------------------------------
+
+Whether or not to take a snapshot before each compaction.  Be
+careful using this option, since Cassandra won't clean up the
+snapshots for you.  Mostly useful if you're paranoid when there
+is a data format change.
+
+*Default Value:* false
+
+``auto_snapshot``
+-----------------
+
+Whether or not a snapshot is taken of the data before keyspace truncation
+or dropping of column families. The STRONGLY advised default of true 
+should be used to provide data safety. If you set this flag to false, you will
+lose data on truncation or drop.
+
+*Default Value:* true
+
+``column_index_size_in_kb``
+---------------------------
+
+Granularity of the collation index of rows within a partition.
+Increase if your rows are large, or if you have a very large
+number of rows per partition.  The competing goals are these:
+
+- a smaller granularity means more index entries are generated
+  and looking up rows within the partition by collation column
+  is faster
+- but, Cassandra will keep the collation index in memory for hot
+  rows (as part of the key cache), so a larger granularity means
+  you can cache more hot rows
+
+*Default Value:* 64
+
+``column_index_cache_size_in_kb``
+---------------------------------
+
+Per sstable indexed key cache entries (the collation index in memory
+mentioned above) exceeding this size will not be held on heap.
+This means that only partition information is held on heap and the
+index entries are read from disk.
+
+Note that this size refers to the size of the
+serialized index information and not the size of the partition.
+
+*Default Value:* 2
+
+``concurrent_compactors``
+-------------------------
+*This option is commented out by default.*
+
+Number of simultaneous compactions to allow, NOT including
+validation "compactions" for anti-entropy repair.  Simultaneous
+compactions can help preserve read performance in a mixed read/write
+workload, by mitigating the tendency of small sstables to accumulate
+during a single long-running compaction. The default is usually
+fine and if you experience problems with compaction running too
+slowly or too fast, you should look at
+compaction_throughput_mb_per_sec first.
+
+concurrent_compactors defaults to the smaller of (number of disks,
+number of cores), with a minimum of 2 and a maximum of 8.
+
+If your data directories are backed by SSD, you should increase this
+to the number of cores.
+
+*Default Value:* 1
+
+``concurrent_validations``
+--------------------------
+*This option is commented out by default.*
+
+Number of simultaneous repair validations to allow. If not set or set to
+a value less than 1, it defaults to the value of concurrent_compactors.
+To set a value greater than concurrent_compactors at startup, the system
+property cassandra.allow_unlimited_concurrent_validations must be set to
+true. To dynamically resize to a value > concurrent_compactors on a running
+node, first call the bypassConcurrentValidatorsLimit method on the
+org.apache.cassandra.db:type=StorageService mbean.
+
+*Default Value:* 0
+
+``concurrent_materialized_view_builders``
+-----------------------------------------
+
+Number of simultaneous materialized view builder tasks to allow.
+
+*Default Value:* 1
+
+``compaction_throughput_mb_per_sec``
+------------------------------------
+
+Throttles compaction to the given total throughput across the entire
+system. The faster you insert data, the faster you need to compact in
+order to keep the sstable count down, but in general, setting this to
+16 to 32 times the rate you are inserting data is more than sufficient.
+Setting this to 0 disables throttling. Note that this accounts for all types
+of compaction, including validation compaction (building Merkle trees
+for repairs).
+
+*Default Value:* 64
+
+``sstable_preemptive_open_interval_in_mb``
+------------------------------------------
+
+When compacting, the replacement sstable(s) can be opened before they
+are completely written, and used in place of the prior sstables for
+any range that has been written. This helps to smoothly transfer reads
+between the sstables, reducing page cache churn and keeping hot rows hot.
+
+*Default Value:* 50
+
+``stream_entire_sstables``
+--------------------------
+*This option is commented out by default.*
+
+When enabled, permits Cassandra to zero-copy stream entire eligible
+SSTables between nodes, including every component.
+This speeds up the network transfer significantly subject to
+throttling specified by stream_throughput_outbound_megabits_per_sec.
+Enabling this will reduce the GC pressure on the sending and receiving nodes.
+When unset, the default is enabled. While this feature tries to keep the
+disks balanced, it cannot guarantee it. This feature will be automatically
+disabled if internode encryption is enabled. Currently this can be used with
+Leveled Compaction. Once CASSANDRA-14586 is fixed other compaction strategies
+will benefit as well when used in combination with CASSANDRA-6696.
+
+*Default Value:* true
+
+``stream_throughput_outbound_megabits_per_sec``
+-----------------------------------------------
+*This option is commented out by default.*
+
+Throttles all outbound streaming file transfers on this node to the
+given total throughput in Mbps. This is necessary because Cassandra does
+mostly sequential IO when streaming data during bootstrap or repair, which
+can lead to saturating the network connection and degrading rpc performance.
+When unset, the default is 200 Mbps or 25 MB/s.
+
+*Default Value:* 200
+
+``inter_dc_stream_throughput_outbound_megabits_per_sec``
+--------------------------------------------------------
+*This option is commented out by default.*
+
+Throttles all streaming file transfers between datacenters.
+This setting allows users to throttle inter-dc stream throughput in addition
+to throttling all network stream traffic as configured with
+stream_throughput_outbound_megabits_per_sec.
+When unset, the default is 200 Mbps or 25 MB/s.
+
+*Default Value:* 200
+
+``read_request_timeout_in_ms``
+------------------------------
+
+How long the coordinator should wait for read operations to complete.
+Lowest acceptable value is 10 ms.
+
+*Default Value:* 5000
+
+``range_request_timeout_in_ms``
+-------------------------------
+How long the coordinator should wait for seq or index scans to complete.
+Lowest acceptable value is 10 ms.
+
+*Default Value:* 10000
+
+``write_request_timeout_in_ms``
+-------------------------------
+How long the coordinator should wait for writes to complete.
+Lowest acceptable value is 10 ms.
+
+*Default Value:* 2000
+
+``counter_write_request_timeout_in_ms``
+---------------------------------------
+How long the coordinator should wait for counter writes to complete.
+Lowest acceptable value is 10 ms.
+
+*Default Value:* 5000
+
+``cas_contention_timeout_in_ms``
+--------------------------------
+How long a coordinator should continue to retry a CAS operation
+that contends with other proposals for the same row.
+Lowest acceptable value is 10 ms.
+
+*Default Value:* 1000
+
+``truncate_request_timeout_in_ms``
+----------------------------------
+How long the coordinator should wait for truncates to complete
+(This can be much longer, because unless auto_snapshot is disabled
+we need to flush first so we can snapshot before removing the data.)
+Lowest acceptable value is 10 ms.
+
+*Default Value:* 60000
+
+``request_timeout_in_ms``
+-------------------------
+The default timeout for other, miscellaneous operations.
+Lowest acceptable value is 10 ms.
+
+*Default Value:* 10000
+
+``internode_application_send_queue_capacity_in_bytes``
+------------------------------------------------------
+*This option is commented out by default.*
+
+Defensive settings for protecting Cassandra from true network partitions.
+See (CASSANDRA-14358) for details.
+
+The amount of time to wait for internode tcp connections to establish.
+internode_tcp_connect_timeout_in_ms = 2000
+
+The amount of time unacknowledged data is allowed on a connection before we throw out the connection
+Note this is only supported on Linux + epoll, and it appears to behave oddly above a setting of 30000
+(it takes much longer than 30s) as of Linux 4.12. If you want something that high set this to 0
+which picks up the OS default and configure the net.ipv4.tcp_retries2 sysctl to be ~8.
+internode_tcp_user_timeout_in_ms = 30000
+
+The maximum continuous period a connection may be unwritable in application space
+internode_application_timeout_in_ms = 30000
+
+Global, per-endpoint and per-connection limits imposed on messages queued for delivery to other nodes
+and waiting to be processed on arrival from other nodes in the cluster.  These limits are applied to the on-wire
+size of the message being sent or received.
+
+The basic per-link limit is consumed in isolation before any endpoint or global limit is imposed.
+Each node-pair has three links: urgent, small and large.  So any given node may have a maximum of
+N*3*(internode_application_send_queue_capacity_in_bytes+internode_application_receive_queue_capacity_in_bytes)
+messages queued without any coordination between them, although in practice, with token-aware routing, only RF*tokens
+nodes should need to communicate with significant bandwidth.
+
+The per-endpoint limit is imposed on all messages exceeding the per-link limit, simultaneously with the global limit,
+on all links to or from a single node in the cluster.
+The global limit is imposed on all messages exceeding the per-link limit, simultaneously with the per-endpoint limit,
+on all links to or from any node in the cluster.
+
+
+*Default Value:* 4194304  # 4MiB
+
+``internode_application_send_queue_reserve_endpoint_capacity_in_bytes``
+-----------------------------------------------------------------------
+*This option is commented out by default.*
+
+*Default Value:* 134217728  # 128MiB
+
+``internode_application_send_queue_reserve_global_capacity_in_bytes``
+---------------------------------------------------------------------
+*This option is commented out by default.*
+
+*Default Value:* 536870912  # 512MiB
+
+``internode_application_receive_queue_capacity_in_bytes``
+---------------------------------------------------------
+*This option is commented out by default.*
+
+*Default Value:* 4194304  # 4MiB
+
+``internode_application_receive_queue_reserve_endpoint_capacity_in_bytes``
+--------------------------------------------------------------------------
+*This option is commented out by default.*
+
+*Default Value:* 134217728  # 128MiB
+
+``internode_application_receive_queue_reserve_global_capacity_in_bytes``
+------------------------------------------------------------------------
+*This option is commented out by default.*
+
+*Default Value:* 536870912  # 512MiB
+
+``slow_query_log_timeout_in_ms``
+--------------------------------
+
+
+How long before a node logs slow queries. Select queries that take longer than
+this timeout to execute will generate an aggregated log message, so that slow queries
+can be identified. Set this value to zero to disable slow query logging.
+
+*Default Value:* 500
+
+``cross_node_timeout``
+----------------------
+*This option is commented out by default.*
+
+Enable operation timeout information exchange between nodes to accurately
+measure request timeouts.  If disabled, replicas will assume that requests
+were forwarded to them instantly by the coordinator, which means that
+under overload conditions we will waste that much extra time processing 
+already-timed-out requests.
+
+Warning: It is generally assumed that users have setup NTP on their clusters, and that clocks are modestly in sync, 
+since this is a requirement for general correctness of last write wins.
+
+*Default Value:* true
+
+``streaming_keep_alive_period_in_secs``
+---------------------------------------
+*This option is commented out by default.*
+
+Set the keep-alive period for streaming.
+This node will send a keep-alive message periodically with this period.
+If the node does not receive a keep-alive message from the peer for
+2 keep-alive cycles, the stream session times out and fails.
+The default value is 300s (5 minutes), which means a stalled stream
+times out in 10 minutes by default.
+
+*Default Value:* 300
+
+``streaming_connections_per_host``
+----------------------------------
+*This option is commented out by default.*
+
+Limit the number of connections per host for streaming.
+Increase this when you notice that joins are CPU-bound rather than network
+bound (for example a few nodes with big files).
+
+*Default Value:* 1
+
+``phi_convict_threshold``
+-------------------------
+*This option is commented out by default.*
+
+
+The phi value that must be reached for a host to be marked down.
+Most users should never need to adjust this.
+
+*Default Value:* 8
+
+``endpoint_snitch``
+-------------------
+
+endpoint_snitch -- Set this to a class that implements
+IEndpointSnitch.  The snitch has two functions:
+
+- it teaches Cassandra enough about your network topology to route
+  requests efficiently
+- it allows Cassandra to spread replicas around your cluster to avoid
+  correlated failures. It does this by grouping machines into
+  "datacenters" and "racks."  Cassandra will do its best not to have
+  more than one replica on the same "rack" (which may not actually
+  be a physical location)
+
+CASSANDRA WILL NOT ALLOW YOU TO SWITCH TO AN INCOMPATIBLE SNITCH
+ONCE DATA IS INSERTED INTO THE CLUSTER.  This would cause data loss.
+This means that if you start with the default SimpleSnitch, which
+locates every node on "rack1" in "datacenter1", your only options
+if you need to add another datacenter are GossipingPropertyFileSnitch
+(and the older PFS).  From there, if you want to migrate to an
+incompatible snitch like Ec2Snitch you can do it by adding new nodes
+under Ec2Snitch (which will locate them in a new "datacenter") and
+decommissioning the old ones.
+
+Out of the box, Cassandra provides:
+
+SimpleSnitch:
+   Treats Strategy order as proximity. This can improve cache
+   locality when disabling read repair.  Only appropriate for
+   single-datacenter deployments.
+
+GossipingPropertyFileSnitch
+   This should be your go-to snitch for production use.  The rack
+   and datacenter for the local node are defined in
+   cassandra-rackdc.properties and propagated to other nodes via
+   gossip.  If cassandra-topology.properties exists, it is used as a
+   fallback, allowing migration from the PropertyFileSnitch.
+
+PropertyFileSnitch:
+   Proximity is determined by rack and data center, which are
+   explicitly configured in cassandra-topology.properties.
+
+Ec2Snitch:
+   Appropriate for EC2 deployments in a single Region. Loads Region
+   and Availability Zone information from the EC2 API. The Region is
+   treated as the datacenter, and the Availability Zone as the rack.
+   Only private IPs are used, so this will not work across multiple
+   Regions.
+
+Ec2MultiRegionSnitch:
+   Uses public IPs as broadcast_address to allow cross-region
+   connectivity.  (Thus, you should set seed addresses to the public
+   IP as well.) You will need to open the storage_port or
+   ssl_storage_port on the public IP firewall.  (For intra-Region
+   traffic, Cassandra will switch to the private IP after
+   establishing a connection.)
+
+RackInferringSnitch:
+   Proximity is determined by rack and data center, which are
+   assumed to correspond to the 3rd and 2nd octet of each node's IP
+   address, respectively.  Unless this happens to match your
+   deployment conventions, this is best used as an example of
+   writing a custom Snitch class and is provided in that spirit.
+
+You can use a custom Snitch by setting this to the full class name
+of the snitch, which will be assumed to be on your classpath.
+
+*Default Value:* SimpleSnitch
+
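+A production-style sketch: select the gossip-based snitch here and
+describe the node's location in cassandra-rackdc.properties (the dc and
+rack names are placeholders)::
+
+    # cassandra.yaml
+    endpoint_snitch: GossipingPropertyFileSnitch
+
+    # cassandra-rackdc.properties
+    # dc=dc1
+    # rack=rack1
+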
+``dynamic_snitch_update_interval_in_ms``
+----------------------------------------
+
+Controls how often to perform the more expensive part of host score
+calculation.
+
+*Default Value:* 100 
+
+``dynamic_snitch_reset_interval_in_ms``
+---------------------------------------
+Controls how often to reset all host scores, allowing a bad host to
+possibly recover.
+
+*Default Value:* 600000
+
+``dynamic_snitch_badness_threshold``
+------------------------------------
+If set greater than zero, this will allow
+'pinning' of replicas to hosts in order to increase cache capacity.
+The badness threshold will control how much worse the pinned host has to be
+before the dynamic snitch will prefer other replicas over it.  This is
+expressed as a double which represents a percentage.  Thus, a value of
+0.2 means Cassandra would continue to prefer the static snitch values
+until the pinned host was 20% worse than the fastest.
+
+*Default Value:* 0.1
+
+``server_encryption_options``
+-----------------------------
+
+Configure server-to-server internode encryption
+
+JVM and netty defaults for supported SSL socket protocols and cipher suites can
+be replaced using custom encryption options. This is not recommended
+unless you have policies in place that dictate certain settings, or
+need to disable vulnerable ciphers or protocols in case the JVM cannot
+be updated.
+
+FIPS compliant settings can be configured at JVM level and should not
+involve changing encryption settings here:
+https://docs.oracle.com/javase/8/docs/technotes/guides/security/jsse/FIPS.html
+
+**NOTE** this default configuration is an insecure configuration. If you need to
+enable server-to-server encryption generate server keystores (and truststores for mutual
+authentication) per:
+http://download.oracle.com/javase/8/docs/technotes/guides/security/jsse/JSSERefGuide.html#CreateKeystore
+Then perform the following configuration changes:
+
+Step 1: Set internode_encryption=<dc|rack|all> and explicitly set optional=true. Restart all nodes
+
+Step 2: Set optional=false (or remove it) and if you generated truststores and want to use mutual
+auth set require_client_auth=true. Restart all nodes
+
+*Default Value (complex option)*::
+
+        # On outbound connections, determine which type of peers to securely connect to.
+        #   The available options are :
+        #     none : Do not encrypt outgoing connections
+        #     dc   : Encrypt connections to peers in other datacenters but not within datacenters
+        #     rack : Encrypt connections to peers in other racks but not within racks
+        #     all  : Always use encrypted connections
+        internode_encryption: none
+        # When set to true, encrypted and unencrypted connections are allowed on the storage_port
+        # This should _only be true_ while in unencrypted or transitional operation
+        # optional defaults to true if internode_encryption is none
+        # optional: true
+        # If enabled, will open up an encrypted listening socket on ssl_storage_port. Should only be used
+        # during upgrade to 4.0; otherwise, set to false.
+        enable_legacy_ssl_storage_port: false
+        # Set to a valid keystore if internode_encryption is dc, rack or all
+        keystore: conf/.keystore
+        keystore_password: cassandra
+        # Verify peer server certificates
+        require_client_auth: false
+        # Set to a valid trustore if require_client_auth is true
+        truststore: conf/.truststore
+        truststore_password: cassandra
+        # Verify that the host name in the certificate matches the connected host
+        require_endpoint_verification: false
+        # More advanced defaults:
+        # protocol: TLS
+        # store_type: JKS
+        # cipher_suites: [
+        #   TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
+        #   TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,
+        #   TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA,
+        #   TLS_RSA_WITH_AES_256_CBC_SHA
+        # ]
+
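+As a concrete illustration of step 1 above, once keystores exist (paths
+and passwords here are placeholders)::
+
+    server_encryption_options:
+        internode_encryption: all
+        optional: true
+        keystore: /etc/cassandra/server-keystore.jks
+        keystore_password: changeit
+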
+``client_encryption_options``
+-----------------------------
+Configure client-to-server encryption.
+
+**NOTE** this default configuration is an insecure configuration. If you need to
+enable client-to-server encryption generate server keystores (and truststores for mutual
+authentication) per:
+http://download.oracle.com/javase/8/docs/technotes/guides/security/jsse/JSSERefGuide.html#CreateKeystore
+Then perform the following configuration changes:
+
+Step 1: Set enabled=true and explicitly set optional=true. Restart all nodes
+
+Step 2: Set optional=false (or remove it) and if you generated truststores and want to use mutual
+auth set require_client_auth=true. Restart all nodes
+
+*Default Value (complex option)*::
+
+        # Enable client-to-server encryption
+        enabled: false
+        # When set to true, encrypted and unencrypted connections are allowed on the native_transport_port
+        # This should _only be true_ while in unencrypted or transitional operation
+        # optional defaults to true when enabled is false, and false when enabled is true.
+        # optional: true
+        # Set keystore and keystore_password to valid keystores if enabled is true
+        keystore: conf/.keystore
+        keystore_password: cassandra
+        # Verify client certificates
+        require_client_auth: false
+        # Set trustore and truststore_password if require_client_auth is true
+        # truststore: conf/.truststore
+        # truststore_password: cassandra
+        # More advanced defaults:
+        # protocol: TLS
+        # store_type: JKS
+        # cipher_suites: [
+        #   TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
+        #   TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,
+        #   TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA,
+        #   TLS_RSA_WITH_AES_256_CBC_SHA
+        # ]
+
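+And the analogous step 1 for client-to-server encryption, again with
+placeholder values::
+
+    client_encryption_options:
+        enabled: true
+        optional: true
+        keystore: /etc/cassandra/client-keystore.jks
+        keystore_password: changeit
+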
+``internode_compression``
+-------------------------
+internode_compression controls whether traffic between nodes is
+compressed.
+Can be:
+
+all
+  all traffic is compressed
+
+dc
+  traffic between different datacenters is compressed
+
+none
+  nothing is compressed.
+
+*Default Value:* dc
+
+``inter_dc_tcp_nodelay``
+------------------------
+
+Enable or disable tcp_nodelay for inter-dc communication.
+Disabling it will result in larger (but fewer) network packets being sent,
+reducing overhead from the TCP protocol itself, at the cost of increasing
+latency if you block for cross-datacenter responses.
+
+*Default Value:* false
+
+``tracetype_query_ttl``
+-----------------------
+
+TTL for different trace types used during logging of the repair process.
+
+*Default Value:* 86400
+
+``tracetype_repair_ttl``
+------------------------
+
+*Default Value:* 604800
+
+``enable_user_defined_functions``
+---------------------------------
+
+UDFs (user defined functions) are disabled by default.
+As of Cassandra 3.0 there is a sandbox in place that should prevent execution of evil code.
+
+*Default Value:* false
+
+``enable_scripted_user_defined_functions``
+------------------------------------------
+
+Enables scripted UDFs (JavaScript UDFs).
+Java UDFs are always enabled, if enable_user_defined_functions is true.
+Enable this option to be able to use UDFs with "language javascript" or any custom JSR-223 provider.
+This option has no effect, if enable_user_defined_functions is false.
+
+*Default Value:* false
+
+``windows_timer_interval``
+--------------------------
+
+The default Windows kernel timer and scheduling resolution is 15.6ms for power conservation.
+Lowering this value on Windows can provide much tighter latency and better throughput, however
+some virtualized environments may see a negative performance impact from changing this setting
+below their system default. The sysinternals 'clockres' tool can confirm your system's default
+setting.
+
+*Default Value:* 1
+
+``transparent_data_encryption_options``
+---------------------------------------
+
+
+Enables encrypting data at-rest (on disk). Different key providers can be plugged in, but the default reads from
+a JCE-style keystore. A single keystore can hold multiple keys, but the one referenced by
+the "key_alias" is the only key that will be used for encrypt opertaions; previously used keys
+can still (and should!) be in the keystore and will be used on decrypt operations
+(to handle the case of key rotation).
+
+It is strongly recommended to download and install Java Cryptography Extension (JCE)
+Unlimited Strength Jurisdiction Policy Files for your version of the JDK.
+(current link: http://www.oracle.com/technetwork/java/javase/downloads/jce8-download-2133166.html)
+
+Currently, only the following file types are supported for transparent data encryption, although
+more are coming in future cassandra releases: commitlog, hints
+
+*Default Value (complex option)*::
+
+        enabled: false
+        chunk_length_kb: 64
+        cipher: AES/CBC/PKCS5Padding
+        key_alias: testing:1
+        # CBC IV length for AES needs to be 16 bytes (which is also the default size)
+        # iv_length: 16
+        key_provider:
+          - class_name: org.apache.cassandra.security.JKSKeyProvider
+            parameters:
+              - keystore: conf/.keystore
+                keystore_password: cassandra
+                store_type: JCEKS
+                key_password: cassandra
+
+``tombstone_warn_threshold``
+----------------------------
+
+When executing a scan, within or across a partition, we need to keep the
+tombstones seen in memory so we can return them to the coordinator, which
+will use them to make sure other replicas also know about the deleted rows.
+With workloads that generate a lot of tombstones, this can cause performance
+problems and even exhaust the server heap.
+(http://www.datastax.com/dev/blog/cassandra-anti-patterns-queues-and-queue-like-datasets)
+Adjust the thresholds here if you understand the dangers and want to
+scan more tombstones anyway.  These thresholds may also be adjusted at runtime
+using the StorageService mbean.
+
+*Default Value:* 1000
+
+``tombstone_failure_threshold``
+-------------------------------
+
+*Default Value:* 100000
+
+``replica_filtering_protection``
+--------------------------------
+
+Filtering and secondary index queries at read consistency levels above ONE/LOCAL_ONE use a
+mechanism called replica filtering protection to ensure that results from stale replicas do
+not violate consistency. (See CASSANDRA-8272 and CASSANDRA-15907 for more details.) This
+mechanism materializes replica results by partition on-heap at the coordinator. The more possibly
+stale results returned by the replicas, the more rows materialized during the query.
+
+``batch_size_warn_threshold_in_kb``
+-----------------------------------
+
+Log WARN on any multiple-partition batch size exceeding this value. 5kb per batch by default.
+Caution should be taken when increasing the size of this threshold as it can lead to node instability.
+
+*Default Value:* 5
+
+``batch_size_fail_threshold_in_kb``
+-----------------------------------
+
+Fail any multiple-partition batch exceeding this value. 50kb (10x warn threshold) by default.
+
+*Default Value:* 50
+
+``unlogged_batch_across_partitions_warn_threshold``
+---------------------------------------------------
+
+Log WARN on any batches not of type LOGGED that span more partitions than this limit.
+
+*Default Value:* 10
+
+``compaction_large_partition_warning_threshold_mb``
+---------------------------------------------------
+
+Log a warning when compacting partitions larger than this value.
+
+*Default Value:* 100
+
+``gc_log_threshold_in_ms``
+--------------------------
+*This option is commented out by default.*
+
+GC Pauses greater than 200 ms will be logged at INFO level.
+This threshold can be adjusted to minimize logging if necessary.
+
+*Default Value:* 200
+
+``gc_warn_threshold_in_ms``
+---------------------------
+*This option is commented out by default.*
+
+GC Pauses greater than gc_warn_threshold_in_ms will be logged at WARN level
+Adjust the threshold based on your application throughput requirement. Setting to 0
+will deactivate the feature.
+
+*Default Value:* 1000
+
+``max_value_size_in_mb``
+------------------------
+*This option is commented out by default.*
+
+Maximum size of any value in SSTables. Safety measure to detect SSTable corruption
+early. Any value size larger than this threshold will result in marking an SSTable
+as corrupted. This should be positive and less than 2048.
+
+*Default Value:* 256
+
+``otc_coalescing_strategy``
+---------------------------
+*This option is commented out by default.*
+
+Coalescing multiple messages turns out to significantly boost message processing throughput (think doubling or more).
+On bare metal, the floor for packet processing throughput is high enough that many applications won't notice, but in
+virtualized environments, the point at which an application can be bound by network packet processing can be
+surprisingly low compared to the throughput of task processing that is possible inside a VM. It's not that bare metal
+doesn't benefit from coalescing messages, it's that the number of packets a bare metal network interface can process
+is sufficient for many applications such that no load starvation is experienced even without coalescing.
+There are other benefits to coalescing network messages that are harder to isolate with a simple metric like messages
+per second. By coalescing multiple tasks together, a network thread can process multiple messages for the cost of one
+trip to read from a socket, and all the task submission work can be done at the same time reducing context switching
+and increasing cache friendliness of network message processing.
+See CASSANDRA-8692 for details.
+
+Strategy to use for coalescing messages in OutboundTcpConnection.
+Can be fixed, movingaverage, timehorizon, disabled (default).
+You can also specify a subclass of CoalescingStrategies.CoalescingStrategy by name.
+
+*Default Value:* DISABLED
+
+``otc_coalescing_window_us``
+----------------------------
+*This option is commented out by default.*
+
+How many microseconds to wait for coalescing. For fixed strategy this is the amount of time after the first
+message is received before it will be sent with any accompanying messages. For moving average this is the
+maximum amount of time that will be waited as well as the interval at which messages must arrive on average
+for coalescing to be enabled.
+
+*Default Value:* 200
+
+``otc_coalescing_enough_coalesced_messages``
+--------------------------------------------
+*This option is commented out by default.*
+
+Do not try to coalesce messages if we already got that many messages. This should be more than 2 and less than 128.
+
+*Default Value:* 8
+
+``otc_backlog_expiration_interval_ms``
+--------------------------------------
+*This option is commented out by default.*
+
+How many milliseconds to wait between two expiration runs on the backlog (queue) of the OutboundTcpConnection.
+Expiration is done if messages are piling up in the backlog. Droppable messages are expired to free the memory
+taken by expired messages. The interval should be between 0 and 1000, and in most installations the default value
+will be appropriate. A smaller value could potentially expire messages slightly sooner at the expense of more CPU
+time and queue contention while iterating the backlog of messages.
+An interval of 0 disables any wait time, which is the behavior of former Cassandra versions.
+
+
+*Default Value:* 200
+
+``ideal_consistency_level``
+---------------------------
+*This option is commented out by default.*
+
+Track a metric per keyspace indicating whether replication achieved the ideal consistency
+level for writes without timing out. This is different from the consistency level requested by
+each write which may be lower in order to facilitate availability.
+
+*Default Value:* EACH_QUORUM
+
+``automatic_sstable_upgrade``
+-----------------------------
+*This option is commented out by default.*
+
+Automatically upgrade sstables after upgrade - if there is no ordinary compaction to do, the
+oldest non-upgraded sstable will get upgraded to the latest version.
+
+*Default Value:* false
+
+``max_concurrent_automatic_sstable_upgrades``
+---------------------------------------------
+*This option is commented out by default.*
+Limit the number of concurrent sstable upgrades
+
+*Default Value:* 1
+
+``audit_logging_options``
+-------------------------
+
+Audit logging - Logs every incoming CQL command request, as well as authentication to a node. See the docs
+on audit_logging for full details about the various configuration options.
+
+``full_query_logging_options``
+------------------------------
+*This option is commented out by default.*
+
+
+Default options for full query logging - these can be overridden from the command line when executing
+nodetool enablefullquerylog.
+
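+For example, enabling it at runtime with an explicit log directory (a
+sketch; check the nodetool help output for the flags your version
+supports)::
+
+    nodetool enablefullquerylog --path /var/log/cassandra/fql
+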
+``corrupted_tombstone_strategy``
+--------------------------------
+*This option is commented out by default.*
+
+Validate tombstones on reads and compaction.
+Can be either "disabled", "warn" or "exception".
+
+*Default Value:* disabled
+
+``diagnostic_events_enabled``
+-----------------------------
+
+If enabled, diagnostic events can be helpful for troubleshooting operational issues. Emitted events contain details
+on internal state and temporal relationships across events, accessible by clients via JMX.
+
+*Default Value:* false
+
+``native_transport_flush_in_batches_legacy``
+--------------------------------------------
+*This option is commented out by default.*
+
+Use native transport TCP message coalescing. If on upgrade to 4.0 you found your throughput decreasing, and in
+particular you run an old kernel or have very few client connections, this option might be worth evaluating.
+
+*Default Value:* false
+
+``repaired_data_tracking_for_range_reads_enabled``
+--------------------------------------------------
+
+Enable tracking of the repaired state of data during reads, and comparison between replicas.
+Mismatches between the repaired sets of replicas can be characterized as either confirmed
+or unconfirmed. In this context, unconfirmed indicates that the presence of pending repair
+sessions, unrepaired partition tombstones, or some other condition means that the disparity
+cannot be considered conclusive. Confirmed mismatches should be a trigger for investigation
+as they may be indicative of corruption or data loss.
+There are separate flags for range vs partition reads as single partition reads are only tracked
+when CL > 1 and a digest mismatch occurs. Currently, range queries don't use digests so if
+enabled for range reads, all range reads will include repaired data tracking. As this adds
+some overhead, operators may wish to disable it whilst still enabling it for partition reads.
+
+*Default Value:* false
+
+``repaired_data_tracking_for_partition_reads_enabled``
+------------------------------------------------------
+
+*Default Value:* false
+
+``report_unconfirmed_repaired_data_mismatches``
+-----------------------------------------------
+If false, only confirmed mismatches will be reported. If true, a separate metric for unconfirmed
+mismatches will also be recorded. This is to avoid potential signal:noise issues, as unconfirmed
+mismatches are less actionable than confirmed ones.
+
+*Default Value:* false
+
+``enable_materialized_views``
+-----------------------------
+
+Enables materialized view creation on this node.
+Materialized views are considered experimental and are not recommended for production use.
+
+*Default Value:* false
+
+``enable_sasi_indexes``
+-----------------------
+
+Enables SASI index creation on this node.
+SASI indexes are considered experimental and are not recommended for production use.
+
+*Default Value:* false
+
+``enable_transient_replication``
+--------------------------------
+
+Enables creation of transiently replicated keyspaces on this node.
+Transient replication is experimental and is not recommended for production use.
+
+*Default Value:* false
diff --git a/src/doc/4.0-beta4/_sources/configuration/index.rst.txt b/src/doc/4.0-beta4/_sources/configuration/index.rst.txt
new file mode 100644
index 0000000..ea34af3
--- /dev/null
+++ b/src/doc/4.0-beta4/_sources/configuration/index.rst.txt
@@ -0,0 +1,31 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+..
+..     http://www.apache.org/licenses/LICENSE-2.0
+..
+.. Unless required by applicable law or agreed to in writing, software
+.. distributed under the License is distributed on an "AS IS" BASIS,
+.. WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+.. See the License for the specific language governing permissions and
+.. limitations under the License.
+
+Configuring Cassandra
+=====================
+
+This section describes how to configure Apache Cassandra.
+
+.. toctree::
+   :maxdepth: 1
+
+   cass_yaml_file
+   cass_rackdc_file
+   cass_env_sh_file
+   cass_topo_file
+   cass_cl_archive_file
+   cass_logback_xml_file
+   cass_jvm_options_file
diff --git a/src/doc/4.0-beta4/_sources/contactus.rst.txt b/src/doc/4.0-beta4/_sources/contactus.rst.txt
new file mode 100644
index 0000000..3ed9004
--- /dev/null
+++ b/src/doc/4.0-beta4/_sources/contactus.rst.txt
@@ -0,0 +1,50 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+..
+..     http://www.apache.org/licenses/LICENSE-2.0
+..
+.. Unless required by applicable law or agreed to in writing, software
+.. distributed under the License is distributed on an "AS IS" BASIS,
+.. WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+.. See the License for the specific language governing permissions and
+.. limitations under the License.
+
+Contact us
+==========
+
+You can get in touch with the Cassandra community either via the mailing lists or :ref:`Slack rooms <slack>`.
+
+.. _mailing-lists:
+
+Mailing lists
+-------------
+
+The following mailing lists are available:
+
+- `Users <http://www.mail-archive.com/user@cassandra.apache.org/>`__ – General discussion list for users - `Subscribe
+  <us...@cassandra.apache.org>`__
+- `Developers <http://www.mail-archive.com/dev@cassandra.apache.org/>`__ – Development related discussion - `Subscribe
+  <de...@cassandra.apache.org>`__
+- `Commits <http://www.mail-archive.com/commits@cassandra.apache.org/>`__ – Commit notifications from the source
+  repository - `Subscribe <co...@cassandra.apache.org>`__
+- `Client Libraries <http://www.mail-archive.com/client-dev@cassandra.apache.org/>`__ – Discussion related to the
+  development of idiomatic client APIs - `Subscribe <cl...@cassandra.apache.org>`__
+
+Subscribe by sending an email to the email address in the Subscribe links above. Follow the instructions in the welcome
+email to confirm your subscription. Make sure to keep the welcome email as it contains instructions on how to
+unsubscribe.
+
+.. _slack:
+
+Slack
+-----
+To chat with developers or users in real-time, join our rooms on `ASF Slack <https://s.apache.org/slack-invite>`__:
+
+- ``cassandra`` - for user questions and general discussions.
+- ``cassandra-dev`` - strictly for questions or discussions related to Cassandra development.
+
diff --git a/src/doc/4.0-beta4/_sources/cql/appendices.rst.txt b/src/doc/4.0-beta4/_sources/cql/appendices.rst.txt
new file mode 100644
index 0000000..480b78e
--- /dev/null
+++ b/src/doc/4.0-beta4/_sources/cql/appendices.rst.txt
@@ -0,0 +1,330 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+..
+..     http://www.apache.org/licenses/LICENSE-2.0
+..
+.. Unless required by applicable law or agreed to in writing, software
+.. distributed under the License is distributed on an "AS IS" BASIS,
+.. WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+.. See the License for the specific language governing permissions and
+.. limitations under the License.
+
+.. highlight:: cql
+
+Appendices
+----------
+
+.. _appendix-A:
+
+Appendix A: CQL Keywords
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+CQL distinguishes between *reserved* and *non-reserved* keywords.
+Reserved keywords cannot be used as identifiers; they are truly reserved
+for the language (but one can enclose a reserved keyword in
+double-quotes to use it as an identifier). Non-reserved keywords, however,
+only have a specific meaning in certain contexts but can be used as
+identifiers otherwise. The only *raison d’être* of these non-reserved
+keywords is convenience: a keyword is non-reserved when it was
+always easy for the parser to decide whether it was used as a keyword
+or not.
+
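+For example, a reserved keyword such as ``batch`` can still be used as an identifier by
+double-quoting it (the table below is hypothetical, shown only to illustrate the rule)::
+
+    CREATE TABLE jobs (
+        id uuid PRIMARY KEY,
+        "batch" text   -- reserved keyword, usable as a column name only when quoted
+    );
+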
++--------------------+-------------+
+| Keyword            | Reserved?   |
++====================+=============+
+| ``ADD``            | yes         |
++--------------------+-------------+
+| ``AGGREGATE``      | no          |
++--------------------+-------------+
+| ``ALL``            | no          |
++--------------------+-------------+
+| ``ALLOW``          | yes         |
++--------------------+-------------+
+| ``ALTER``          | yes         |
++--------------------+-------------+
+| ``AND``            | yes         |
++--------------------+-------------+
+| ``APPLY``          | yes         |
++--------------------+-------------+
+| ``AS``             | no          |
++--------------------+-------------+
+| ``ASC``            | yes         |
++--------------------+-------------+
+| ``ASCII``          | no          |
++--------------------+-------------+
+| ``AUTHORIZE``      | yes         |
++--------------------+-------------+
+| ``BATCH``          | yes         |
++--------------------+-------------+
+| ``BEGIN``          | yes         |
++--------------------+-------------+
+| ``BIGINT``         | no          |
++--------------------+-------------+
+| ``BLOB``           | no          |
++--------------------+-------------+
+| ``BOOLEAN``        | no          |
++--------------------+-------------+
+| ``BY``             | yes         |
++--------------------+-------------+
+| ``CALLED``         | no          |
++--------------------+-------------+
+| ``CLUSTERING``     | no          |
++--------------------+-------------+
+| ``COLUMNFAMILY``   | yes         |
++--------------------+-------------+
+| ``COMPACT``        | no          |
++--------------------+-------------+
+| ``CONTAINS``       | no          |
++--------------------+-------------+
+| ``COUNT``          | no          |
++--------------------+-------------+
+| ``COUNTER``        | no          |
++--------------------+-------------+
+| ``CREATE``         | yes         |
++--------------------+-------------+
+| ``CUSTOM``         | no          |
++--------------------+-------------+
+| ``DATE``           | no          |
++--------------------+-------------+
+| ``DECIMAL``        | no          |
++--------------------+-------------+
+| ``DELETE``         | yes         |
++--------------------+-------------+
+| ``DESC``           | yes         |
++--------------------+-------------+
+| ``DESCRIBE``       | yes         |
++--------------------+-------------+
+| ``DISTINCT``       | no          |
++--------------------+-------------+
+| ``DOUBLE``         | no          |
++--------------------+-------------+
+| ``DROP``           | yes         |
++--------------------+-------------+
+| ``ENTRIES``        | yes         |
++--------------------+-------------+
+| ``EXECUTE``        | yes         |
++--------------------+-------------+
+| ``EXISTS``         | no          |
++--------------------+-------------+
+| ``FILTERING``      | no          |
++--------------------+-------------+
+| ``FINALFUNC``      | no          |
++--------------------+-------------+
+| ``FLOAT``          | no          |
++--------------------+-------------+
+| ``FROM``           | yes         |
++--------------------+-------------+
+| ``FROZEN``         | no          |
++--------------------+-------------+
+| ``FULL``           | yes         |
++--------------------+-------------+
+| ``FUNCTION``       | no          |
++--------------------+-------------+
+| ``FUNCTIONS``      | no          |
++--------------------+-------------+
+| ``GRANT``          | yes         |
++--------------------+-------------+
+| ``IF``             | yes         |
++--------------------+-------------+
+| ``IN``             | yes         |
++--------------------+-------------+
+| ``INDEX``          | yes         |
++--------------------+-------------+
+| ``INET``           | no          |
++--------------------+-------------+
+| ``INFINITY``       | yes         |
++--------------------+-------------+
+| ``INITCOND``       | no          |
++--------------------+-------------+
+| ``INPUT``          | no          |
++--------------------+-------------+
+| ``INSERT``         | yes         |
++--------------------+-------------+
+| ``INT``            | no          |
++--------------------+-------------+
+| ``INTO``           | yes         |
++--------------------+-------------+
+| ``JSON``           | no          |
++--------------------+-------------+
+| ``KEY``            | no          |
++--------------------+-------------+
+| ``KEYS``           | no          |
++--------------------+-------------+
+| ``KEYSPACE``       | yes         |
++--------------------+-------------+
+| ``KEYSPACES``      | no          |
++--------------------+-------------+
+| ``LANGUAGE``       | no          |
++--------------------+-------------+
+| ``LIMIT``          | yes         |
++--------------------+-------------+
+| ``LIST``           | no          |
++--------------------+-------------+
+| ``LOGIN``          | no          |
++--------------------+-------------+
+| ``MAP``            | no          |
++--------------------+-------------+
+| ``MODIFY``         | yes         |
++--------------------+-------------+
+| ``NAN``            | yes         |
++--------------------+-------------+
+| ``NOLOGIN``        | no          |
++--------------------+-------------+
+| ``NORECURSIVE``    | yes         |
++--------------------+-------------+
+| ``NOSUPERUSER``    | no          |
++--------------------+-------------+
+| ``NOT``            | yes         |
++--------------------+-------------+
+| ``NULL``           | yes         |
++--------------------+-------------+
+| ``OF``             | yes         |
++--------------------+-------------+
+| ``ON``             | yes         |
++--------------------+-------------+
+| ``OPTIONS``        | no          |
++--------------------+-------------+
+| ``OR``             | yes         |
++--------------------+-------------+
+| ``ORDER``          | yes         |
++--------------------+-------------+
+| ``PASSWORD``       | no          |
++--------------------+-------------+
+| ``PERMISSION``     | no          |
++--------------------+-------------+
+| ``PERMISSIONS``    | no          |
++--------------------+-------------+
+| ``PRIMARY``        | yes         |
++--------------------+-------------+
+| ``RENAME``         | yes         |
++--------------------+-------------+
+| ``REPLACE``        | yes         |
++--------------------+-------------+
+| ``RETURNS``        | no          |
++--------------------+-------------+
+| ``REVOKE``         | yes         |
++--------------------+-------------+
+| ``ROLE``           | no          |
++--------------------+-------------+
+| ``ROLES``          | no          |
++--------------------+-------------+
+| ``SCHEMA``         | yes         |
++--------------------+-------------+
+| ``SELECT``         | yes         |
++--------------------+-------------+
+| ``SET``            | yes         |
++--------------------+-------------+
+| ``SFUNC``          | no          |
++--------------------+-------------+
+| ``SMALLINT``       | no          |
++--------------------+-------------+
+| ``STATIC``         | no          |
++--------------------+-------------+
+| ``STORAGE``        | no          |
++--------------------+-------------+
+| ``STYPE``          | no          |
++--------------------+-------------+
+| ``SUPERUSER``      | no          |
++--------------------+-------------+
+| ``TABLE``          | yes         |
++--------------------+-------------+
+| ``TEXT``           | no          |
++--------------------+-------------+
+| ``TIME``           | no          |
++--------------------+-------------+
+| ``TIMESTAMP``      | no          |
++--------------------+-------------+
+| ``TIMEUUID``       | no          |
++--------------------+-------------+
+| ``TINYINT``        | no          |
++--------------------+-------------+
+| ``TO``             | yes         |
++--------------------+-------------+
+| ``TOKEN``          | yes         |
++--------------------+-------------+
+| ``TRIGGER``        | no          |
++--------------------+-------------+
+| ``TRUNCATE``       | yes         |
++--------------------+-------------+
+| ``TTL``            | no          |
++--------------------+-------------+
+| ``TUPLE``          | no          |
++--------------------+-------------+
+| ``TYPE``           | no          |
++--------------------+-------------+
+| ``UNLOGGED``       | yes         |
++--------------------+-------------+
+| ``UPDATE``         | yes         |
++--------------------+-------------+
+| ``USE``            | yes         |
++--------------------+-------------+
+| ``USER``           | no          |
++--------------------+-------------+
+| ``USERS``          | no          |
++--------------------+-------------+
+| ``USING``          | yes         |
++--------------------+-------------+
+| ``UUID``           | no          |
++--------------------+-------------+
+| ``VALUES``         | no          |
++--------------------+-------------+
+| ``VARCHAR``        | no          |
++--------------------+-------------+
+| ``VARINT``         | no          |
++--------------------+-------------+
+| ``WHERE``          | yes         |
++--------------------+-------------+
+| ``WITH``           | yes         |
++--------------------+-------------+
+| ``WRITETIME``      | no          |
++--------------------+-------------+
+
+Appendix B: CQL Reserved Types
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The following type names are not currently used by CQL, but are reserved
+for potential future use. User-defined types may not use reserved type
+names as their name.
+
++-----------------+
+| type            |
++=================+
+| ``bitstring``   |
++-----------------+
+| ``byte``        |
++-----------------+
+| ``complex``     |
++-----------------+
+| ``enum``        |
++-----------------+
+| ``interval``    |
++-----------------+
+| ``macaddr``     |
++-----------------+
+
+
+Appendix C: Dropping Compact Storage
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Starting with version 4.0, Thrift and COMPACT STORAGE are no longer supported.
+
+The ``ALTER ... DROP COMPACT STORAGE`` statement makes compact tables CQL-compatible,
+exposing the internal structure of Thrift/compact tables:
+
+- CQL-created compact tables that have no clustering columns will expose an
+  additional clustering column ``column1`` with ``UTF8Type``.
+- CQL-created compact tables that had no regular columns will expose a
+  regular column ``value`` with ``BytesType``.
+- For CQL-created compact tables, all columns originally defined as
+  ``regular`` will become ``static``.
+- CQL-created compact tables that have clustering but no regular
+  columns will have an empty value column (of ``EmptyType``).
+- SuperColumn tables (which can only be created through Thrift) will expose
+  a compact value map with an empty name.
+- Thrift-created compact tables will have types corresponding to their
+  Thrift definition.
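+
+For instance, a minimal sketch of the statement on a hypothetical compact table ``t``::
+
+    ALTER TABLE t DROP COMPACT STORAGE;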
diff --git a/src/doc/4.0-beta4/_sources/cql/changes.rst.txt b/src/doc/4.0-beta4/_sources/cql/changes.rst.txt
new file mode 100644
index 0000000..6691f15
--- /dev/null
+++ b/src/doc/4.0-beta4/_sources/cql/changes.rst.txt
@@ -0,0 +1,211 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+..
+..     http://www.apache.org/licenses/LICENSE-2.0
+..
+.. Unless required by applicable law or agreed to in writing, software
+.. distributed under the License is distributed on an "AS IS" BASIS,
+.. WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+.. See the License for the specific language governing permissions and
+.. limitations under the License.
+
+.. highlight:: cql
+
+Changes
+-------
+
+The following describes the changes in each version of CQL.
+
+3.4.5
+^^^^^
+
+- Adds support for arithmetic operators (:jira:`11935`)
+- Adds support for ``+`` and ``-`` operations on dates (:jira:`11936`)
+- Adds ``currentTimestamp``, ``currentDate``, ``currentTime`` and ``currentTimeUUID`` functions (:jira:`13132`)
+
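+An illustrative sketch of these additions (the ``events`` table is hypothetical)::
+
+    SELECT value * 2,          -- arithmetic operator
+           created_at + 2d,    -- adding a duration to a timestamp
+           currentTimestamp()  -- new current* function
+    FROM events WHERE id = 1;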
+
+3.4.4
+^^^^^
+
+- ``ALTER TABLE`` ``ALTER`` has been removed; a column's type may not be changed after creation (:jira:`12443`).
+- ``ALTER TYPE`` ``ALTER`` has been removed; a field's type may not be changed after creation (:jira:`12443`).
+
+3.4.3
+^^^^^
+
+- Adds a new ``duration`` :ref:`data type <data-types>` (:jira:`11873`).
+- Support for ``GROUP BY`` (:jira:`10707`).
+- Adds a ``DEFAULT UNSET`` option for ``INSERT JSON`` to ignore omitted columns (:jira:`11424`).
+- Allows ``null`` as a legal value for TTL on insert and update. It will be treated as equivalent to inserting a 0 (:jira:`12216`).
+
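+For instance, a sketch using the new type and ``GROUP BY`` (hypothetical table)::
+
+    CREATE TABLE laps (
+        driver text,
+        lap int,
+        lap_time duration,   -- the new duration type
+        PRIMARY KEY (driver, lap)
+    );
+
+    SELECT driver, count(*) FROM laps GROUP BY driver;
+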
+3.4.2
+^^^^^
+
+- If a table has a non-zero ``default_time_to_live``, then explicitly specifying a TTL of 0 in an ``INSERT`` or
+  ``UPDATE`` statement will result in the new writes not having any expiration (that is, an explicit TTL of 0 cancels
+  the ``default_time_to_live``). This wasn't the case before, and the ``default_time_to_live`` was applied even though a
+  TTL had been explicitly set.
+- ``ALTER TABLE`` ``ADD`` and ``DROP`` now allow multiple columns to be added/removed.
+- New ``PER PARTITION LIMIT`` option for ``SELECT`` statements (see `CASSANDRA-7017
+  <https://issues.apache.org/jira/browse/CASSANDRA-7017>`__).
+- :ref:`User-defined functions <cql-functions>` can now instantiate ``UDTValue`` and ``TupleValue`` instances via the
+  new ``UDFContext`` interface (see `CASSANDRA-10818 <https://issues.apache.org/jira/browse/CASSANDRA-10818>`__).
+- :ref:`User-defined types <udts>` may now be stored in a non-frozen form, allowing individual fields to be updated and
+  deleted in ``UPDATE`` statements and ``DELETE`` statements, respectively (`CASSANDRA-7423
+  <https://issues.apache.org/jira/browse/CASSANDRA-7423>`__).
+
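+For instance, a sketch of the new ``PER PARTITION LIMIT`` option (hypothetical table)::
+
+    -- at most 2 rows returned from each partition
+    SELECT * FROM events PER PARTITION LIMIT 2;
+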
+3.4.1
+^^^^^
+
+- Adds ``CAST`` functions.
+
+3.4.0
+^^^^^
+
+- Support for :ref:`materialized views <materialized-views>`.
+- ``DELETE`` support for inequality expressions and ``IN`` restrictions on any primary key columns.
+- ``UPDATE`` support for ``IN`` restrictions on any primary key columns.
+
+3.3.1
+^^^^^
+
+- The syntax ``TRUNCATE TABLE X`` is now accepted as an alias for ``TRUNCATE X``.
+
+3.3.0
+^^^^^
+
+- :ref:`User-defined functions and aggregates <cql-functions>` are now supported.
+- Allows double-dollar enclosed string literals as an alternative to single-quote enclosed strings.
+- Introduces roles to supersede user-based authentication and access control.
+- New ``date``, ``time``, ``tinyint`` and ``smallint`` :ref:`data types <data-types>` have been added.
+- :ref:`JSON support <cql-json>` has been added.
+- Adds new time conversion functions and deprecates ``dateOf`` and ``unixTimestampOf``.
+
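+For instance, a double-dollar enclosed string avoids escaping embedded single quotes
+(hypothetical table)::
+
+    INSERT INTO users (id, bio) VALUES (1, $$it's simpler without '' escapes$$);
+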
+3.2.0
+^^^^^
+
+- :ref:`User-defined types <udts>` are now supported.
+- ``CREATE INDEX`` now supports indexing collection columns, including indexing the keys of map collections through the
+  ``keys()`` function.
+- Indexes on collections may be queried using the new ``CONTAINS`` and ``CONTAINS KEY`` operators.
+- :ref:`Tuple types <tuples>` were added to hold fixed-length sets of typed positional fields.
+- ``DROP INDEX`` now supports optionally specifying a keyspace.
+
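+For instance, a sketch of collection indexing (hypothetical ``users`` table with a
+``favs`` map column)::
+
+    CREATE INDEX ON users (keys(favs));
+    SELECT * FROM users WHERE favs CONTAINS KEY 'color';
+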
+3.1.7
+^^^^^
+
+- ``SELECT`` statements now support selecting multiple rows in a single partition using an ``IN`` clause on combinations
+  of clustering columns.
+- ``IF NOT EXISTS`` and ``IF EXISTS`` syntax is now supported by ``CREATE USER`` and ``DROP USER`` statements,
+  respectively.
+
+3.1.6
+^^^^^
+
+- A new ``uuid()`` method has been added.
+- Support for ``DELETE ... IF EXISTS`` syntax.
+
+3.1.5
+^^^^^
+
+- It is now possible to group clustering columns in a relation; see :ref:`WHERE <where-clause>` clauses.
+- Added support for :ref:`static columns <static-columns>`.
+
+3.1.4
+^^^^^
+
+- ``CREATE INDEX`` now allows specifying options when creating CUSTOM indexes.
+
+3.1.3
+^^^^^
+
+- Millisecond precision formats have been added to the :ref:`timestamp <timestamps>` parser.
+
+3.1.2
+^^^^^
+
+- ``NaN`` and ``Infinity`` have been added as valid float constants. They are now reserved keywords. In the unlikely
+  case you were using them as a column identifier (or keyspace/table name), you will now need to double-quote them.
+
+3.1.1
+^^^^^
+
+- ``SELECT`` statement now allows listing the partition keys (using the ``DISTINCT`` modifier). See `CASSANDRA-4536
+  <https://issues.apache.org/jira/browse/CASSANDRA-4536>`__.
+- The syntax ``c IN ?`` is now supported in ``WHERE`` clauses. In that case, the value expected for the bind variable
+  will be a list of whatever type ``c`` is.
+- It is now possible to use named bind variables (using ``:name`` instead of ``?``).
+
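+For instance, a sketch of a named bind variable (hypothetical table)::
+
+    SELECT * FROM users WHERE id = :user_id;
+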
+3.1.0
+^^^^^
+
+- ``ALTER TABLE`` ``DROP`` option added.
+- ``SELECT`` statement now supports aliases in the select clause. Aliases in ``WHERE`` and ``ORDER BY`` clauses are not supported.
+- ``CREATE`` statements for ``KEYSPACE``, ``TABLE`` and ``INDEX`` now support an ``IF NOT EXISTS`` condition.
+  Similarly, ``DROP`` statements support an ``IF EXISTS`` condition.
+- ``INSERT`` statements optionally support an ``IF NOT EXISTS`` condition, and ``UPDATE`` supports ``IF`` conditions.
+
+3.0.5
+^^^^^
+
+- ``SELECT``, ``UPDATE``, and ``DELETE`` statements now allow empty ``IN`` relations (see `CASSANDRA-5626
+  <https://issues.apache.org/jira/browse/CASSANDRA-5626>`__).
+
+3.0.4
+^^^^^
+
+- Updated the syntax for custom :ref:`secondary indexes <secondary-indexes>`.
+- Non-equality conditions on the partition key are now never supported, even for ordering partitioners, as this was not
+  correct (the order was **not** that of the type of the partition key). Instead, the ``token`` method should always
+  be used for range queries on the partition key (see :ref:`WHERE clauses <where-clause>`).
+
+3.0.3
+^^^^^
+
+- Support for custom :ref:`secondary indexes <secondary-indexes>` has been added.
+
+3.0.2
+^^^^^
+
+- Type validation for the :ref:`constants <constants>` has been fixed. For instance, the implementation used to allow
+  ``'2'`` as a valid value for an ``int`` column (interpreting it as the equivalent of ``2``), or ``42`` as a valid
+  ``blob`` value (in which case ``42`` was interpreted as a hexadecimal representation of the blob). This is no longer
+  the case; type validation of constants is now stricter. See the :ref:`data types <data-types>` section for details
+  on which constants are allowed for which types.
+- The type validation fix of the previous point has led to the introduction of blob constants to allow the input of
+  blobs. Do note that while the input of blobs as string constants is still supported by this version (to allow a
+  smoother transition to blob constants), it is now deprecated and will be removed by a future version. If you were
+  using strings as blobs, you should thus update your client code ASAP to switch to blob constants.
+- A number of functions to convert native types to blobs have also been introduced. Furthermore, the token function is
+  now also allowed in select clauses. See the :ref:`section on functions <cql-functions>` for details.
+
+3.0.1
+^^^^^
+
+- Date strings (and timestamps) are no longer accepted as valid ``timeuuid`` values. Doing so was a bug in the sense
+  that date strings are not valid ``timeuuid`` values, and it was thus resulting in `confusing behaviors
+  <https://issues.apache.org/jira/browse/CASSANDRA-4936>`__. However, the following new methods have been added to help
+  work with ``timeuuid``: ``now``, ``minTimeuuid``, ``maxTimeuuid``,
+  ``dateOf`` and ``unixTimestampOf``.
+- Float constants now support the exponent notation. In other words, ``4.2E10`` is now a valid floating point value.
+
+Versioning
+^^^^^^^^^^
+
+Versioning of the CQL language adheres to the `Semantic Versioning <http://semver.org>`__ guidelines. Versions take the
+form X.Y.Z where X, Y, and Z are integer values representing major, minor, and patch level respectively. There is no
+correlation between Cassandra release versions and the CQL language version.
+
+========= =============================================================================================================
+ version   description
+========= =============================================================================================================
+ Major     The major version *must* be bumped when backward incompatible changes are introduced. This should rarely
+           occur.
+ Minor     Minor version increments occur when new, but backward compatible, functionality is introduced.
+ Patch     The patch version is incremented when bugs are fixed.
+========= =============================================================================================================
diff --git a/src/doc/4.0-beta4/_sources/cql/ddl.rst.txt b/src/doc/4.0-beta4/_sources/cql/ddl.rst.txt
new file mode 100644
index 0000000..d0e92ec
--- /dev/null
+++ b/src/doc/4.0-beta4/_sources/cql/ddl.rst.txt
@@ -0,0 +1,1151 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+..
+..     http://www.apache.org/licenses/LICENSE-2.0
+..
+.. Unless required by applicable law or agreed to in writing, software
+.. distributed under the License is distributed on an "AS IS" BASIS,
+.. WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+.. See the License for the specific language governing permissions and
+.. limitations under the License.
+
+.. highlight:: cql
+
+.. _data-definition:
+
+Data Definition
+---------------
+
+CQL stores data in *tables*, whose schema defines the layout of the data in the table, and those tables are grouped in
+*keyspaces*. A keyspace defines a number of options that apply to all the tables it contains, the most prominent of
+which is the :ref:`replication strategy <replication-strategy>` used by the keyspace. It is generally encouraged to use
+one keyspace per *application*, and thus a cluster may define only one keyspace.
+
+This section describes the statements used to create, modify, and remove those keyspaces and tables.
+
+Common definitions
+^^^^^^^^^^^^^^^^^^
+
+The names of the keyspaces and tables are defined by the following grammar:
+
+.. productionlist::
+   keyspace_name: `name`
+   table_name: [ `keyspace_name` '.' ] `name`
+   name: `unquoted_name` | `quoted_name`
+   unquoted_name: re('[a-zA-Z_0-9]{1, 48}')
+   quoted_name: '"' `unquoted_name` '"'
+
+Both keyspace and table names should be comprised of only alphanumeric characters, cannot be empty, and are limited in
+size to 48 characters (that limit exists mostly to keep filenames, which may include the keyspace and table name,
+under the limits of certain file systems). By default, keyspace and table names are case insensitive (``myTable`` is
+equivalent to ``mytable``) but case sensitivity can be forced by using double-quotes (``"myTable"`` is different from
+``mytable``).
+
+Further, a table is always part of a keyspace and a table name can be provided fully-qualified by the keyspace it is
+part of. If it is not fully-qualified, the table is assumed to be in the *current* keyspace (see :ref:`USE statement
+<use-statement>`).
+
+The valid names for columns are simply defined as:
+
+.. productionlist::
+   column_name: `identifier`
+
+We also define the notion of statement options for use in the following section:
+
+.. productionlist::
+   options: `option` ( AND `option` )*
+   option: `identifier` '=' ( `identifier` | `constant` | `map_literal` )
+
+.. _create-keyspace-statement:
+
+CREATE KEYSPACE
+^^^^^^^^^^^^^^^
+
+A keyspace is created using a ``CREATE KEYSPACE`` statement:
+
+.. productionlist::
+   create_keyspace_statement: CREATE KEYSPACE [ IF NOT EXISTS ] `keyspace_name` WITH `options`
+
+For instance::
+
+    CREATE KEYSPACE excelsior
+        WITH replication = {'class': 'SimpleStrategy', 'replication_factor' : 3};
+
+    CREATE KEYSPACE excalibur
+        WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1' : 1, 'DC2' : 3}
+        AND durable_writes = false;
+
+Attempting to create a keyspace that already exists will return an error unless the ``IF NOT EXISTS`` option is used. If
+it is used, the statement will be a no-op if the keyspace already exists.
+
+The supported ``options`` are:
+
+=================== ========== =========== ========= ===================================================================
+name                 kind       mandatory   default   description
+=================== ========== =========== ========= ===================================================================
+``replication``      *map*      yes                   The replication strategy and options to use for the keyspace (see
+                                                      details below).
+``durable_writes``   *simple*   no          true      Whether to use the commit log for updates on this keyspace
+                                                      (disable this option at your own risk!).
+=================== ========== =========== ========= ===================================================================
+
+The ``replication`` property is mandatory and must at least contain the ``'class'`` sub-option, which defines the
+:ref:`replication strategy <replication-strategy>` class to use. The rest of the sub-options depend on which replication
+strategy is used. By default, Cassandra supports the following ``'class'`` values:
+
+.. _replication-strategy:
+
+``SimpleStrategy``
+""""""""""""""""""
+
+A simple strategy that defines a replication factor for data to be spread
+across the entire cluster. This is generally not a wise choice for production
+because it does not respect datacenter layouts and can lead to wildly varying
+query latency. For a production ready strategy, see
+``NetworkTopologyStrategy``. ``SimpleStrategy`` supports a single mandatory argument:
+
+========================= ====== ======= =============================================
+sub-option                 type   since   description
+========================= ====== ======= =============================================
+``'replication_factor'``   int    all     The number of replicas to store per range
+========================= ====== ======= =============================================
+
+``NetworkTopologyStrategy``
+"""""""""""""""""""""""""""
+
+A production-ready replication strategy that allows the replication
+factor to be set independently for each datacenter. The rest of the sub-options are
+key-value pairs, where a key is a datacenter name and its value is the
+associated replication factor. Options:
+
+===================================== ====== ====== =============================================
+sub-option                             type   since  description
+===================================== ====== ====== =============================================
+``'<datacenter>'``                     int    all    The number of replicas to store per range in
+                                                     the provided datacenter.
+``'replication_factor'``               int    4.0    The number of replicas to use as a default
+                                                     per datacenter if not specifically provided.
+                                                     Note that this always defers to existing
+                                                     definitions or explicit datacenter settings.
+                                                     For example, to have three replicas per
+                                                     datacenter, supply this with a value of 3.
+===================================== ====== ====== =============================================
+
+Note that when ``ALTER`` ing keyspaces and supplying ``replication_factor``,
+auto-expansion will only *add* new datacenters for safety; it will not alter
+existing datacenters or remove any, even if they are no longer in the cluster.
+If you want to remove datacenters while still supplying ``replication_factor``,
+explicitly zero out the datacenter you want to have zero replicas.
+
+An example of auto-expanding datacenters with two datacenters: ``DC1`` and ``DC2``::
+
+    CREATE KEYSPACE excalibur
+        WITH replication = {'class': 'NetworkTopologyStrategy', 'replication_factor' : 3}
+
+    DESCRIBE KEYSPACE excalibur
+        CREATE KEYSPACE excalibur WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': '3', 'DC2': '3'} AND durable_writes = true;
+
+
+An example of auto-expanding and overriding a datacenter::
+
+    CREATE KEYSPACE excalibur
+        WITH replication = {'class': 'NetworkTopologyStrategy', 'replication_factor' : 3, 'DC2': 2}
+
+    DESCRIBE KEYSPACE excalibur
+        CREATE KEYSPACE excalibur WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': '3', 'DC2': '2'} AND durable_writes = true;
+
+An example that excludes a datacenter while using ``replication_factor``::
+
+    CREATE KEYSPACE excalibur
+        WITH replication = {'class': 'NetworkTopologyStrategy', 'replication_factor' : 3, 'DC2': 0} ;
+
+    DESCRIBE KEYSPACE excalibur
+        CREATE KEYSPACE excalibur WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': '3'} AND durable_writes = true;
+
+If transient replication has been enabled, transient replicas can be configured for both
+``SimpleStrategy`` and ``NetworkTopologyStrategy`` by defining replication factors in the format
+``'<total_replicas>/<transient_replicas>'``.
+
+For instance, this keyspace will have 3 replicas in DC1, 1 of which is transient, and 5 replicas in DC2, 2 of which are transient::
+
+    CREATE KEYSPACE some_keyspace
+        WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1' : '3/1', 'DC2' : '5/2'};
+
+.. _use-statement:
+
+USE
+^^^
+
+The ``USE`` statement changes the *current* keyspace (for the *connection* on which it is executed). A number
+of objects in CQL are bound to a keyspace (tables, user-defined types, functions, ...) and the current keyspace is the
+default keyspace used when those objects are referred to without a fully-qualified name (that is, without being prefixed
+by a keyspace name). A ``USE`` statement simply takes the keyspace to use as its argument:
+
+.. productionlist::
+   use_statement: USE `keyspace_name`
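+
+For instance, to make the ``excelsior`` keyspace created earlier the current one::
+
+    USE excelsior;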
+
+.. _alter-keyspace-statement:
+
+ALTER KEYSPACE
+^^^^^^^^^^^^^^
+
+An ``ALTER KEYSPACE`` statement modifies the options of a keyspace:
+
+.. productionlist::
+   alter_keyspace_statement: ALTER KEYSPACE `keyspace_name` WITH `options`
+
+For instance::
+
+    ALTER KEYSPACE Excelsior
+        WITH replication = {'class': 'SimpleStrategy', 'replication_factor' : 4};
+
+The supported options are the same as for :ref:`creating a keyspace <create-keyspace-statement>`.
+
+.. _drop-keyspace-statement:
+
+DROP KEYSPACE
+^^^^^^^^^^^^^
+
+Dropping a keyspace can be done using the ``DROP KEYSPACE`` statement:
+
+.. productionlist::
+   drop_keyspace_statement: DROP KEYSPACE [ IF EXISTS ] `keyspace_name`
+
+For instance::
+
+    DROP KEYSPACE Excelsior;
+
+Dropping a keyspace results in the immediate, irreversible removal of that keyspace, including all the tables, UDTs and
+functions in it, and all the data contained in those tables.
+
+If the keyspace does not exist, the statement will return an error, unless ``IF EXISTS`` is used, in which case the
+operation is a no-op.
+
+.. _create-table-statement:
+
+CREATE TABLE
+^^^^^^^^^^^^
+
+Creating a new table uses the ``CREATE TABLE`` statement:
+
+.. productionlist::
+   create_table_statement: CREATE TABLE [ IF NOT EXISTS ] `table_name`
+                         : '('
+                         :     `column_definition`
+                         :     ( ',' `column_definition` )*
+                         :     [ ',' PRIMARY KEY '(' `primary_key` ')' ]
+                         : ')' [ WITH `table_options` ]
+   column_definition: `column_name` `cql_type` [ STATIC ] [ PRIMARY KEY ]
+   primary_key: `partition_key` [ ',' `clustering_columns` ]
+   partition_key: `column_name`
+                : | '(' `column_name` ( ',' `column_name` )* ')'
+   clustering_columns: `column_name` ( ',' `column_name` )*
+   table_options: CLUSTERING ORDER BY '(' `clustering_order` ')' [ AND `options` ]
+                   : | `options`
+   clustering_order: `column_name` (ASC | DESC) ( ',' `column_name` (ASC | DESC) )*
+
+For instance::
+
+    CREATE TABLE monkeySpecies (
+        species text PRIMARY KEY,
+        common_name text,
+        population varint,
+        average_size int
+    ) WITH comment='Important biological records';
+
+    CREATE TABLE timeline (
+        userid uuid,
+        posted_month int,
+        posted_time uuid,
+        body text,
+        posted_by text,
+        PRIMARY KEY (userid, posted_month, posted_time)
+    ) WITH compaction = { 'class' : 'LeveledCompactionStrategy' };
+
+    CREATE TABLE loads (
+        machine inet,
+        cpu int,
+        mtime timeuuid,
+        load float,
+        PRIMARY KEY ((machine, cpu), mtime)
+    ) WITH CLUSTERING ORDER BY (mtime DESC);
+
+A CQL table has a name and is composed of a set of *rows*. Creating a table amounts to defining which :ref:`columns
+<column-definition>` the rows will be composed of, which of those columns compose the :ref:`primary key <primary-key>`,
+as well as optional :ref:`options <create-table-options>` for the table.
+
+Attempting to create an already existing table will return an error unless the ``IF NOT EXISTS`` directive is used. If
+it is used, the statement will be a no-op if the table already exists.
+
+
+.. _column-definition:
+
+Column definitions
+~~~~~~~~~~~~~~~~~~
+
+Every row in a CQL table has a set of predefined columns defined at the time of the table creation (or added later
+using an :ref:`alter statement<alter-table-statement>`).
+
+A :token:`column_definition` is primarily comprised of the name of the column defined and its :ref:`type <data-types>`,
+which restricts which values are accepted for that column. Additionally, a column definition can have the following
+modifiers:
+
+``STATIC``
+    it declares the column as being a :ref:`static column <static-columns>`.
+
+``PRIMARY KEY``
+    it declares the column as being the sole component of the :ref:`primary key <primary-key>` of the table.
+
+.. _static-columns:
+
+Static columns
+``````````````
+Some columns can be declared as ``STATIC`` in a table definition. A column that is static will be “shared” by all the
+rows belonging to the same partition (having the same :ref:`partition key <partition-key>`). For instance::
+
+    CREATE TABLE t (
+        pk int,
+        t int,
+        v text,
+        s text static,
+        PRIMARY KEY (pk, t)
+    );
+
+    INSERT INTO t (pk, t, v, s) VALUES (0, 0, 'val0', 'static0');
+    INSERT INTO t (pk, t, v, s) VALUES (0, 1, 'val1', 'static1');
+
+    SELECT * FROM t;
+       pk | t | v      | s
+      ----+---+--------+-----------
+       0  | 0 | 'val0' | 'static1'
+       0  | 1 | 'val1' | 'static1'
+
+As can be seen, the ``s`` value is the same (``static1``) for both of the rows in the partition (the partition key in
+that example being ``pk``, both rows are in that same partition): the 2nd insertion has overridden the value for ``s``.
+
+The use of static columns has the following restrictions:
+
+- a table without clustering columns cannot have static columns (in a table without clustering columns, every partition
+  has only one row, and so every column is inherently static).
+- only non ``PRIMARY KEY`` columns can be static.
+
+.. _primary-key:
+
+The Primary key
+~~~~~~~~~~~~~~~
+
+Within a table, a row is uniquely identified by its ``PRIMARY KEY``, and hence all tables **must** define a PRIMARY KEY
+(and only one). A ``PRIMARY KEY`` definition is composed of one or more of the columns defined in the table.
+Syntactically, the primary key is defined by the keywords ``PRIMARY KEY`` followed by a comma-separated list of the
+column names composing it within parentheses, but if the primary key has only one column, one can alternatively follow
+that column definition by the ``PRIMARY KEY`` keywords. The order of the columns in the primary key definition matters.
+
+A CQL primary key is composed of 2 parts:
+
+- the :ref:`partition key <partition-key>` part. It is the first component of the primary key definition. It can be a
+  single column or, using additional parentheses, can be multiple columns. A table always has at least a partition key;
+  the smallest possible table definition is::
+
+      CREATE TABLE t (k text PRIMARY KEY);
+
+- the :ref:`clustering columns <clustering-columns>`. Those are the columns after the first component of the primary key
+  definition, and the order of those columns defines the *clustering order*.
+
+Some examples of primary key definitions are:
+
+- ``PRIMARY KEY (a)``: ``a`` is the partition key and there is no clustering columns.
+- ``PRIMARY KEY (a, b, c)`` : ``a`` is the partition key and ``b`` and ``c`` are the clustering columns.
+- ``PRIMARY KEY ((a, b), c)`` : ``a`` and ``b`` compose the partition key (this is often called a *composite* partition
+  key) and ``c`` is the clustering column.
+
+
+.. _partition-key:
+
+The partition key
+`````````````````
+
+Within a table, CQL defines the notion of a *partition*. A partition is simply the set of rows that share the same value
+for their partition key. Note that if the partition key is composed of multiple columns, then rows belong to the same
+partition only if they have the same values for all those partition key columns. So for instance, given the following
+table definition and content::
+
+    CREATE TABLE t (
+        a int,
+        b int,
+        c int,
+        d int,
+        PRIMARY KEY ((a, b), c, d)
+    );
+
+    SELECT * FROM t;
+       a | b | c | d
+      ---+---+---+---
+       0 | 0 | 0 | 0    // row 1
+       0 | 0 | 1 | 1    // row 2
+       0 | 1 | 2 | 2    // row 3
+       0 | 1 | 3 | 3    // row 4
+       1 | 1 | 4 | 4    // row 5
+
+``row 1`` and ``row 2`` are in the same partition, ``row 3`` and ``row 4`` are also in the same partition (but a
+different one) and ``row 5`` is in yet another partition.
+
+Note that a table always has a partition key, and that if the table has no :ref:`clustering columns
+<clustering-columns>`, then every partition of that table is only comprised of a single row (since the primary key
+uniquely identifies rows and the primary key is equal to the partition key if there are no clustering columns).
+
+The most important property of partitions is that all the rows belonging to the same partition are guaranteed to be
+stored on the same set of replica nodes. In other words, the partition key of a table defines which of the rows will be
+colocated in the cluster, and it is thus important to choose your partition key wisely so that rows that need
+to be fetched together are in the same partition (so that querying those rows together requires contacting a minimum of
+nodes).
+
+Please note however that there is a flip-side to this guarantee: as all rows sharing a partition key are guaranteed to
+be stored on the same set of replica nodes, a partition key that groups too much data can create a hotspot.
+
+Another useful property of a partition is that when writing data, all the updates belonging to a single partition are
+done *atomically* and in *isolation*, which is not the case across partitions.
+
+The proper choice of the partition key and clustering columns for a table is probably one of the most important aspects
+of data modeling in Cassandra, and it largely impacts which queries can be performed and how efficient they are.
+
+
+.. _clustering-columns:
+
+The clustering columns
+``````````````````````
+
+The clustering columns of a table define the clustering order for the partitions of that table. For a given
+:ref:`partition <partition-key>`, all the rows are physically ordered inside Cassandra by that clustering order. For
+instance, given::
+
+    CREATE TABLE t (
+        a int,
+        b int,
+        c int,
+        PRIMARY KEY (a, b, c)
+    );
+
+    SELECT * FROM t;
+       a | b | c
+      ---+---+---
+       0 | 0 | 4     // row 1
+       0 | 1 | 9     // row 2
+       0 | 2 | 2     // row 3
+       0 | 3 | 3     // row 4
+
+then the rows (which all belong to the same partition) are all stored internally in the order of the values of their
+``b`` column (the order they are displayed above). So where the partition key of the table allows grouping rows on the
+same replica set, the clustering columns control how those rows are stored on the replica. That sorting allows the
+retrieval of a range of rows within a partition (for instance, in the example above, ``SELECT * FROM t WHERE a = 0 AND b
+> 1 and b <= 3``) to be very efficient.
+
+
+.. _create-table-options:
+
+Table options
+~~~~~~~~~~~~~
+
+A CQL table has a number of options that can be set at creation (and, for most of them, :ref:`altered
+<alter-table-statement>` later). These options are specified after the ``WITH`` keyword and are described
+in the following sections.
+
+Please bear in mind that the ``CLUSTERING ORDER`` option cannot be changed after table creation and
+influences the performance of some queries.
+
+.. _clustering-order:
+
+Reversing the clustering order
+``````````````````````````````
+
+The clustering order of a table is defined by the :ref:`clustering columns <clustering-columns>` of that table. By
+default, that ordering is based on the natural order of those clustering columns, but the ``CLUSTERING ORDER`` option
+allows changing that clustering order to use the *reverse* natural order for some (potentially all) of the columns.
+
+The ``CLUSTERING ORDER`` option takes the comma-separated list of the clustering columns, each with an ``ASC`` (for
+*ascending*, i.e. the natural order) or ``DESC`` (for *descending*, i.e. the reverse natural order) modifier. Note in
+particular that the default (if the ``CLUSTERING ORDER`` option is not used) is strictly equivalent to using the option
+with all clustering columns using the ``ASC`` modifier.
+
+Note that this option is basically a hint for the storage engine to change the order in which it stores the rows, but it
+has 3 visible consequences, illustrated by the sketch after this list:
+
+#. it limits which ``ORDER BY`` clauses are allowed for :ref:`selects <select-statement>` on that table. You can only
+   order results by the clustering order or the reverse clustering order. Meaning that if a table has 2 clustering
+   columns ``a`` and ``b`` and you defined ``WITH CLUSTERING ORDER (a DESC, b ASC)``, then in queries you will be
+   allowed to use ``ORDER BY (a DESC, b ASC)`` and (reverse clustering order) ``ORDER BY (a ASC, b DESC)`` but **not**
+   ``ORDER BY (a ASC, b ASC)`` (nor ``ORDER BY (a DESC, b DESC)``).
+#. it also changes the default order of results when queried (if no ``ORDER BY`` is provided). Results are always
+   returned in clustering order (within a partition).
+#. it has a small performance impact on some queries, as queries in reverse clustering order are slower than those in
+   forward clustering order. In practice, this means that if you plan on querying mostly in the reverse natural order of
+   your columns (which is common with time series, for instance, where you often want data from the newest to the
+   oldest), it is an optimization to declare a descending clustering order.
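+
+A minimal sketch tying these consequences together (hypothetical time-series table)::
+
+    CREATE TABLE events_by_day (
+        day date,
+        ts timestamp,
+        payload text,
+        PRIMARY KEY (day, ts)
+    ) WITH CLUSTERING ORDER BY (ts DESC);
+
+    -- rows come back newest-first by default; the exact reverse order is also allowed
+    SELECT * FROM events_by_day WHERE day = '2021-04-25' ORDER BY ts ASC;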
+
+.. _create-table-general-options:
+
+Other table options
+```````````````````
+
+.. todo:: review (misses cdc if nothing else) and link to proper categories when appropriate (compaction for instance)
+
+A table supports the following options:
+
++--------------------------------+----------+-------------+-----------------------------------------------------------+
+| option                         | kind     | default     | description                                               |
++================================+==========+=============+===========================================================+
+| ``comment``                    | *simple* | none        | A free-form, human-readable comment.                      |
++--------------------------------+----------+-------------+-----------------------------------------------------------+
+| ``speculative_retry``          | *simple* | 99PERCENTILE| :ref:`Speculative retry options                           |
+|                                |          |             | <speculative-retry-options>`.                             |
++--------------------------------+----------+-------------+-----------------------------------------------------------+
+| ``cdc``                        | *boolean*| false       | Create a Change Data Capture (CDC) log on the table.      |
++--------------------------------+----------+-------------+-----------------------------------------------------------+
+| ``additional_write_policy``    | *simple* | 99PERCENTILE| :ref:`Speculative retry options                           |
+|                                |          |             | <speculative-retry-options>`.                             |
++--------------------------------+----------+-------------+-----------------------------------------------------------+
+| ``gc_grace_seconds``           | *simple* | 864000      | Time to wait before garbage collecting tombstones         |
+|                                |          |             | (deletion markers).                                       |
++--------------------------------+----------+-------------+-----------------------------------------------------------+
+| ``bloom_filter_fp_chance``     | *simple* | 0.00075     | The target probability of false positive of the sstable   |
+|                                |          |             | bloom filters. Said bloom filters will be sized to provide|
+|                                |          |             | the provided probability (thus lowering this value impact |
+|                                |          |             | the size of bloom filters in-memory and on-disk)          |
++--------------------------------+----------+-------------+-----------------------------------------------------------+
+| ``default_time_to_live``       | *simple* | 0           | The default expiration time (“TTL”) in seconds for a      |
+|                                |          |             | table.                                                    |
++--------------------------------+----------+-------------+-----------------------------------------------------------+
+| ``compaction``                 | *map*    | *see below* | :ref:`Compaction options <cql-compaction-options>`.       |
++--------------------------------+----------+-------------+-----------------------------------------------------------+
+| ``compression``                | *map*    | *see below* | :ref:`Compression options <cql-compression-options>`.     |
++--------------------------------+----------+-------------+-----------------------------------------------------------+
+| ``caching``                    | *map*    | *see below* | :ref:`Caching options <cql-caching-options>`.             |
++--------------------------------+----------+-------------+-----------------------------------------------------------+
+| ``memtable_flush_period_in_ms``| *simple* | 0           | Time (in ms) before Cassandra flushes memtables to disk.  |
++--------------------------------+----------+-------------+-----------------------------------------------------------+
+| ``read_repair``                | *simple* | BLOCKING    | Sets read repair behavior (see below)                     |
++--------------------------------+----------+-------------+-----------------------------------------------------------+
+
+.. _speculative-retry-options:
+
+Speculative retry options
+#########################
+
+By default, Cassandra read coordinators only query as many replicas as necessary to satisfy
+consistency levels: one for consistency level ``ONE``, a quorum for ``QUORUM``, and so on.
+``speculative_retry`` determines when coordinators may query additional replicas, which is useful
+when replicas are slow or unresponsive. Speculative retries reduce latency by providing rapid read
+protection, whereby a coordinator sends more requests than needed to satisfy the consistency level.
+
+The pre-4.0 speculative retry policy takes a single string as a parameter: ``NONE``, ``ALWAYS``, ``99PERCENTILE``
+(PERCENTILE), or ``50MS`` (CUSTOM).
+
+Examples of setting speculative retry are:
+
+::
+
+  ALTER TABLE users WITH speculative_retry = '10ms';
+
+
+Or,
+
+::
+
+  ALTER TABLE users WITH speculative_retry = '99PERCENTILE';
+
+The problem with these settings is that when a single host goes into an unavailable state, it drags up the percentiles.
+This means that if we are set to use ``p99`` alone, we might not speculate when we intended to because the value at the
+specified percentile has gone so high. As a fix, 4.0 adds support for hybrid ``MIN()``, ``MAX()`` speculative retry
+policies (`CASSANDRA-14293 <https://issues.apache.org/jira/browse/CASSANDRA-14293>`_). This means that if the normal
+``p99`` for the table is <50ms, we will still speculate at this value and not drag the tail latencies up, but if the
+``p99`` goes above what we know we should never exceed, we use that upper bound instead.
+
+In 4.0 the values (case-insensitive) discussed in the following table are supported:
+
+============================ ======================== =============================================================================
+ Format                       Example                  Description
+============================ ======================== =============================================================================
+ ``XPERCENTILE``             90.5PERCENTILE           Coordinators record average per-table response times for all replicas.
+                                                      If a replica takes longer than ``X`` percent of this table's average
+                                                      response time, the coordinator queries an additional replica.
+                                                      ``X`` must be between 0 and 100.
+ ``XP``                      90.5P                    Synonym for ``XPERCENTILE``
+ ``Yms``                     25ms                     If a replica takes more than ``Y`` milliseconds to respond,
+                                                      the coordinator queries an additional replica.
+ ``MIN(XPERCENTILE,YMS)``    MIN(99PERCENTILE,35MS)   A hybrid policy that will use either the specified percentile or fixed
+                                                      milliseconds depending on which value is lower at the time of calculation.
+                                                      Parameters are ``XPERCENTILE``, ``XP``, or ``Yms``.
+                                                      This helps protect against a single slow instance; in the
+                                                      happy case the 99th percentile is normally lower than the specified
+                                                      fixed value, however a slow host may skew the percentile very high,
+                                                      meaning the slower the cluster gets, the higher the value of the percentile,
+                                                      and the higher the calculated time used to determine whether we should
+                                                      speculate or not. This allows us to set an upper limit that we want to
+                                                      speculate at, but avoid skewing the tail latencies by speculating at the
+                                                      lower value when the percentile is less than the specified fixed upper bound.
+ ``MAX(XPERCENTILE,YMS)``    MAX(90.5P,25ms)          A hybrid policy that will use either the specified percentile or fixed
+                                                      milliseconds depending on which value is higher at the time of calculation.
+ ``ALWAYS``                                           Coordinators always query all replicas.
+ ``NEVER``                                            Coordinators never query additional replicas.
+============================ ======================== =============================================================================
+
+As of version 4.0, speculative retry allows more friendly parameters (`CASSANDRA-13876
+<https://issues.apache.org/jira/browse/CASSANDRA-13876>`_). The ``speculative_retry`` value is now more flexible with case. As an example, a
+value does not have to be ``NONE``; the following are supported alternatives:
+
+::
+
+  alter table users WITH speculative_retry = 'none';
+  alter table users WITH speculative_retry = 'None';
+
+The text component is case insensitive, and for ``nPERCENTILE`` version 4.0 allows the shorthand ``nP``, for instance ``99p``.
+In a hybrid value for speculative retry, one of the two values must be a fixed millisecond value and the other a percentile value.
+
+Some examples:
+
+::
+
+ min(99percentile,50ms)
+ max(99p,50MS)
+ MAX(99P,50ms)
+ MIN(99.9PERCENTILE,50ms)
+ max(90percentile,100MS)
+ MAX(100.0PERCENTILE,60ms)
+
+Two values of the same kind cannot be specified, such as ``min(90percentile,99percentile)``, as that would not be a hybrid value.
+This setting does not affect reads with consistency level ``ALL`` because they already query all replicas.
+
+Note that frequently reading from additional replicas can hurt cluster performance.
+When in doubt, keep the default ``99PERCENTILE``.
+
+
+``additional_write_policy`` specifies the threshold at which a cheap quorum write will be upgraded to include transient replicas.
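+It accepts the same values as ``speculative_retry``; for illustration, a sketch against the ``users`` table used in the
+examples above (the threshold chosen is arbitrary)::
+
+  ALTER TABLE users WITH additional_write_policy = '99p';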
+
+.. _cql-compaction-options:
+
+Compaction options
+##################
+
+The ``compaction`` options must at least define the ``'class'`` sub-option, which defines the compaction strategy class
+to use. The supported classes are ``'SizeTieredCompactionStrategy'`` (:ref:`STCS <STCS>`),
+``'LeveledCompactionStrategy'`` (:ref:`LCS <LCS>`) and ``'TimeWindowCompactionStrategy'`` (:ref:`TWCS <TWCS>`) (the
+``'DateTieredCompactionStrategy'`` is also supported but is deprecated; ``'TimeWindowCompactionStrategy'`` should be
+preferred instead). The default is ``'SizeTieredCompactionStrategy'``. A custom strategy can be provided by specifying the full class name as a :ref:`string constant
+<constants>`.
+
+All default strategies support a number of :ref:`common options <compaction-options>`, as well as options specific to
+the strategy chosen (see the section corresponding to your strategy for details: :ref:`STCS <stcs-options>`, :ref:`LCS
+<lcs-options>` and :ref:`TWCS <TWCS>`).
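+
+For instance, a minimal sketch switching a table to LCS (reusing the ``users`` table from the examples above;
+``sstable_size_in_mb`` is one of the LCS-specific options and the value shown is illustrative only)::
+
+    ALTER TABLE users WITH compaction = {'class': 'LeveledCompactionStrategy', 'sstable_size_in_mb': 160};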
+
+.. _cql-compression-options:
+
+Compression options
+###################
+
+The ``compression`` options define if and how the sstables of the table are compressed. Compression is configured on a per-table
+basis as an optional argument to ``CREATE TABLE`` or ``ALTER TABLE``. The following sub-options are
+available:
+
+========================= =============== =============================================================================
+ Option                    Default         Description
+========================= =============== =============================================================================
+ ``class``                 LZ4Compressor   The compression algorithm to use. The default compressors are: LZ4Compressor,
+                                           SnappyCompressor, DeflateCompressor and ZstdCompressor. Use ``'enabled' : false`` to disable
+                                           compression. A custom compressor can be provided by specifying the full class
+                                           name as a :ref:`string constant <constants>`.
+
+ ``enabled``               true            Enable/disable sstable compression. If the ``enabled`` option is set to ``false`` no other
+                                           options must be specified.
+
+ ``chunk_length_in_kb``    64              On disk, SSTables are compressed by block (to allow random reads). This
+                                           defines the size (in KB) of said block. Bigger values may improve the
+                                           compression rate, but increase the minimum size of data to be read from disk
+                                           for a read. The default value is an optimal value for compressing tables. Chunk length must
+                                           be a power of 2 because that is assumed when computing the chunk number from an uncompressed
+                                           file offset.  Block size may be adjusted based on read/write access patterns such as:
+
+                                             - How much data is typically requested at once
+                                             - Average size of rows in the table
+
+ ``crc_check_chance``      1.0             Determines how likely Cassandra is to verify the checksum on each compression chunk during
+                                           reads.
+
+ ``compression_level``     3               Compression level. It is only applicable for ``ZstdCompressor`` and accepts values between
+                                           ``-131072`` and ``22``.
+========================= =============== =============================================================================
+
+
+For instance, to create a table with LZ4Compressor and a chunk_length_in_kb of 4KB::
+
+   CREATE TABLE simple (
+      id int,
+      key text,
+      value text,
+      PRIMARY KEY (key, value)
+   ) WITH compression = {'class': 'LZ4Compressor', 'chunk_length_in_kb': 4};
+
+
+.. _cql-caching-options:
+
+Caching options
+###############
+
+Caching optimizes the use of cache memory of a table. The cached data is weighed by size and access frequency. The ``caching``
+options allow configuring both the *key cache* and the *row cache* for the table. The following
+sub-options are available:
+
+======================== ========= ====================================================================================
+ Option                   Default   Description
+======================== ========= ====================================================================================
+ ``keys``                 ALL       Whether to cache keys (“key cache”) for this table. Valid values are: ``ALL`` and
+                                    ``NONE``.
+ ``rows_per_partition``   NONE      The number of rows to cache per partition (“row cache”). If an integer ``n`` is
+                                    specified, the first ``n`` queried rows of a partition will be cached. Other
+                                    possible options are ``ALL``, to cache all rows of a queried partition, or ``NONE``
+                                    to disable row caching.
+======================== ========= ====================================================================================
+
+
+For instance, to create a table with both a key cache and 10 rows per partition::
+
+    CREATE TABLE simple (
+    id int,
+    key text,
+    value text,
+    PRIMARY KEY (key, value)
+    ) WITH caching = {'keys': 'ALL', 'rows_per_partition': 10};
+
+
+Read Repair options
+###################
+
+The ``read_repair`` option configures the read repair behavior, allowing tuning for various performance and
+consistency behaviors. Two consistency properties are affected by read repair behavior.
+
+- Monotonic Quorum Reads: Provided by ``BLOCKING``. Monotonic quorum reads prevent reads from appearing to go back
+  in time in some circumstances. When monotonic quorum reads are not provided and a write fails to reach a quorum of
+  replicas, it may be visible in one read, and then disappear in a subsequent read.
+- Write Atomicity: Provided by ``NONE``. Write atomicity prevents reads from returning partially applied writes.
+  Cassandra attempts to provide partition level write atomicity, but since only the data covered by a SELECT statement
+  is repaired by a read repair, read repair can break write atomicity when data is read at a more granular level than it
+  is written. For example read repair can break write atomicity if you write multiple rows to a clustered partition in a
+  batch, but then select a single row by specifying the clustering column in a SELECT statement.
+
+The available read repair settings are:
+
+Blocking
+````````
+The default setting. When ``read_repair`` is set to ``BLOCKING``, and a read repair is triggered, the read will block
+on writes sent to other replicas until the CL is reached by the writes. This provides monotonic quorum reads, but not partition
+level write atomicity.
+
+None
+````
+
+When ``read_repair`` is set to ``NONE``, the coordinator will reconcile any differences between replicas, but will not
+attempt to repair them. This provides partition level write atomicity, but not monotonic quorum reads.
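+
+For instance, a minimal sketch opting a table out of read repair (reusing the ``users`` table from earlier examples)::
+
+    ALTER TABLE users WITH read_repair = 'NONE';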
+
+
+Other considerations:
+#####################
+
+- Adding new columns (see ``ALTER TABLE`` below) is a constant time operation. There is thus no need to try to
+  anticipate future usage when creating a table.
+
+.. _alter-table-statement:
+
+ALTER TABLE
+^^^^^^^^^^^
+
+Altering an existing table uses the ``ALTER TABLE`` statement:
+
+.. productionlist::
+   alter_table_statement: ALTER TABLE `table_name` `alter_table_instruction`
+   alter_table_instruction: ADD `column_name` `cql_type` ( ',' `column_name` `cql_type` )*
+                          : | DROP `column_name` ( `column_name` )*
+                          : | WITH `options`
+
+For instance::
+
+    ALTER TABLE addamsFamily ADD gravesite varchar;
+
+    ALTER TABLE addamsFamily
+           WITH comment = 'A most excellent and useful table';
+
+The ``ALTER TABLE`` statement can:
+
+- Add new column(s) to the table (through the ``ADD`` instruction). Note that the primary key of a table cannot be
+  changed and thus newly added columns will, by extension, never be part of the primary key. Also note that :ref:`compact
+  tables <compact-tables>` have restrictions regarding column addition. Note that this is a constant-time operation (in the amount of
+  data the cluster contains).
+- Remove column(s) from the table (through the ``DROP`` instruction). This drops both the column and all its content, but note that while the column
+  becomes immediately unavailable, its content is only removed lazily during compaction. Please also see the warnings
+  below. Due to lazy removal, the alteration itself is a constant-time operation (in the amount of data removed or contained in the
+  cluster).
+- Change some of the table options (through the ``WITH`` instruction). The :ref:`supported options
+  <create-table-options>` are the same as when creating a table (outside of ``CLUSTERING
+  ORDER``, which cannot be changed after creation). Note that setting any ``compaction`` sub-options has the effect of
+  erasing all previous ``compaction`` options, so you need to re-specify all the sub-options if you want to keep them
+  (see the example after this list). The same note applies to the set of ``compression`` sub-options.
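+
+For instance, a sketch of dropping a column and of re-specifying the full set of ``compaction`` sub-options
+(``min_threshold`` is one of the STCS common options; the values are illustrative only)::
+
+    ALTER TABLE addamsFamily DROP gravesite;
+
+    ALTER TABLE addamsFamily
+           WITH compaction = {'class': 'SizeTieredCompactionStrategy', 'min_threshold': 6};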
+
+.. warning:: Dropping a column assumes that the timestamps used for the values of this column are "real" timestamps in
+   microseconds. Using "real" timestamps in microseconds is the default and is **strongly** recommended, but as
+   Cassandra allows the client to provide any timestamp on any table, it is theoretically possible to use another
+   convention. Please be aware that if you do so, dropping a column will not work correctly.
+
+.. warning:: Once a column is dropped, it is allowed to re-add a column with the same name as the dropped one
+   **unless** the type of the dropped column was a (non-frozen) collection (due to an internal technical limitation).
+
+
+.. _drop-table-statement:
+
+DROP TABLE
+^^^^^^^^^^
+
+Dropping a table uses the ``DROP TABLE`` statement:
+
+.. productionlist::
+   drop_table_statement: DROP TABLE [ IF EXISTS ] `table_name`
+
+Dropping a table results in the immediate, irreversible removal of the table, including all data it contains.
+
+If the table does not exist, the statement will return an error, unless ``IF EXISTS`` is used in which case the
+operation is a no-op.
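+
+For instance (reusing the ``addamsFamily`` table from above)::
+
+    DROP TABLE IF EXISTS addamsFamily;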
+
+.. _truncate-statement:
+
+TRUNCATE
+^^^^^^^^
+
+A table can be truncated using the ``TRUNCATE`` statement:
+
+.. productionlist::
+   truncate_statement: TRUNCATE [ TABLE ] `table_name`
+
+Note that ``TRUNCATE TABLE foo`` is allowed for consistency with other DDL statements, but tables are currently the only objects
+that can be truncated, so the ``TABLE`` keyword can be omitted.
+
+Truncating a table permanently removes all existing data from the table, but without removing the table itself.
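+
+For instance (reusing the ``addamsFamily`` table from above, with the optional ``TABLE`` keyword omitted)::
+
+    TRUNCATE addamsFamily;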
+
+.. _describe-statements:
+
+DESCRIBE
+^^^^^^^^
+
+Statements used to output information about the connected Cassandra cluster,
+or about the data objects stored in the cluster.
+
+.. warning:: Describe statement result sets that exceed the page size are paged. If a schema change is detected between
+   two pages, the query will fail with an ``InvalidRequestException``. It is safe to retry the whole ``DESCRIBE``
+   statement after such an error.
+ 
+DESCRIBE KEYSPACES
+""""""""""""""""""
+
+Output the names of all keyspaces.
+
+.. productionlist::
+   describe_keyspaces_statement: DESCRIBE KEYSPACES
+
+Returned columns:
+
+======================== ========= ====================================================================================
+ Columns                  Type      Description
+======================== ========= ====================================================================================
+ keyspace_name                text  The keyspace name
+ type                         text  The schema element type
+ name                         text  The schema element name
+======================== ========= ====================================================================================
+
+DESCRIBE KEYSPACE
+"""""""""""""""""
+
+Output CQL commands that could be used to recreate the given keyspace, and the objects in it 
+(such as tables, types, functions, etc.).
+
+.. productionlist::
+   describe_keyspace_statement: DESCRIBE [ONLY] KEYSPACE [`keyspace_name`] [WITH INTERNALS]
+
+The ``keyspace_name`` argument may be omitted, in which case the current keyspace will be described.
+
+If ``WITH INTERNALS`` is specified, the output contains the table IDs and is adapted to represent the DDL necessary
+to "re-create" dropped columns.
+
+If ``ONLY`` is specified, only the DDL to recreate the keyspace itself will be output. All keyspace elements, like tables,
+types, functions, etc., will be omitted.
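+
+For instance, sketches of describing the current keyspace and a hypothetical ``cycling`` keyspace::
+
+    DESCRIBE KEYSPACE;
+    DESCRIBE ONLY KEYSPACE cycling;
+    DESCRIBE KEYSPACE cycling WITH INTERNALS;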
+
+Returned columns:
+
+======================== ========= ====================================================================================
+ Columns                  Type      Description
+======================== ========= ====================================================================================
+ keyspace_name                text  The keyspace name
+ type                         text  The schema element type
+ name                         text  The schema element name
+ create_statement             text  The CQL statement to use to recreate the schema element
+======================== ========= ====================================================================================
+
+DESCRIBE TABLES
+"""""""""""""""
+
+Output the names of all tables in the current keyspace, or in all keyspaces if there is no current keyspace.
+
+.. productionlist::
+   describe_tables_statement: DESCRIBE TABLES
+
+Returned columns:
+
+======================== ========= ====================================================================================
+ Columns                  Type      Description
+======================== ========= ====================================================================================
+ keyspace_name                text  The keyspace name
+ type                         text  The schema element type
+ name                         text  The schema element name
+======================== ========= ====================================================================================
+
+DESCRIBE TABLE
+""""""""""""""
+
+Output CQL commands that could be used to recreate the given table.
+
+.. productionlist::
+   describe_table_statement: DESCRIBE TABLE [`keyspace_name`.]`table_name` [WITH INTERNALS]
+
+If ``WITH INTERNALS`` is specified, the output contains the table ID and is adapted to represent the DDL necessary
+to "re-create" dropped columns.
+
+Returned columns:
+
+======================== ========= ====================================================================================
+ Columns                  Type      Description
+======================== ========= ====================================================================================
+ keyspace_name                text  The keyspace name
+ type                         text  The schema element type
+ name                         text  The schema element name
+ create_statement             text  The CQL statement to use to recreate the schema element
+======================== ========= ====================================================================================
+
+DESCRIBE INDEX
+""""""""""""""
+
+Output the CQL command that could be used to recreate the given index.
+
+.. productionlist::
+   describe_index_statement: DESCRIBE INDEX [`keyspace_name`.]`index_name`
+
+Returned columns:
+
+======================== ========= ====================================================================================
+ Columns                  Type      Description
+======================== ========= ====================================================================================
+ keyspace_name                text  The keyspace name
+ type                         text  The schema element type
+ name                         text  The schema element name
+ create_statement             text  The CQL statement to use to recreate the schema element
+======================== ========= ====================================================================================
+
+
+DESCRIBE MATERIALIZED VIEW
+""""""""""""""""""""""""""
+
+Output the CQL command that could be used to recreate the given materialized view.
+
+.. productionlist::
+   describe_materialized_view_statement: DESCRIBE MATERIALIZED VIEW [`keyspace_name`.]`view_name`
+
+Returned columns:
+
+======================== ========= ====================================================================================
+ Columns                  Type      Description
+======================== ========= ====================================================================================
+ keyspace_name                text  The keyspace name
+ type                         text  The schema element type
+ name                         text  The schema element name
+ create_statement             text  The CQL statement to use to recreate the schema element
+======================== ========= ====================================================================================
+
+DESCRIBE CLUSTER
+""""""""""""""""
+
+Output information about the connected Cassandra cluster, such as the cluster name, and the partitioner and snitch
+in use. When you are connected to a non-system keyspace, it also shows endpoint-range ownership information for
+the Cassandra ring.
+
+.. productionlist::
+   describe_cluster_statement: DESCRIBE CLUSTER
+
+Returned columns:
+
+======================== ====================== ========================================================================
+ Columns                  Type                   Description
+======================== ====================== ========================================================================
+ cluster                                   text  The cluster name
+ partitioner                               text  The partitioner being used by the cluster
+ snitch                                    text  The snitch being used by the cluster
+ range_ownership          map<text, list<text>>  The endpoint-range ownership information for the Cassandra ring
+======================== ====================== ========================================================================
+
+DESCRIBE SCHEMA
+"""""""""""""""
+
+Output CQL commands that could be used to recreate the entire (non-system) schema.
+Works as though ``DESCRIBE KEYSPACE k`` was invoked for each non-system keyspace.
+
+.. productionlist::
+   describe_schema_statement: DESCRIBE [FULL] SCHEMA [WITH INTERNALS]
+
+Use ``DESCRIBE FULL SCHEMA`` to include the system keyspaces.
+
+If ``WITH INTERNALS`` is specified, the output contains the table IDs and is adapted to represent the DDL necessary
+to "re-create" dropped columns.
+
+Returned columns:
+
+======================== ========= ====================================================================================
+ Columns                  Type      Description
+======================== ========= ====================================================================================
+ keyspace_name                text  The keyspace name
+ type                         text  The schema element type
+ name                         text  The schema element name
+ create_statement             text  The CQL statement to use to recreate the schema element
+======================== ========= ====================================================================================
+
+DESCRIBE TYPES
+""""""""""""""
+
+Output the names of all user-defined-types in the current keyspace, or in all keyspaces if there is no current keyspace.
+
+.. productionlist::
+   describe_types_statement: DESCRIBE TYPES
+
+Returned columns:
+
+======================== ========= ====================================================================================
+ Columns                  Type      Description
+======================== ========= ====================================================================================
+ keyspace_name                text  The keyspace name
+ type                         text  The schema element type
+ name                         text  The schema element name
+======================== ========= ====================================================================================
+
+DESCRIBE TYPE
+"""""""""""""
+
+Output the CQL command that could be used to recreate the given user-defined-type.
+
+.. productionlist::
+   describe_type_statement: DESCRIBE TYPE [`keyspace_name`.]`type_name`
+
+Returned columns:
+
+======================== ========= ====================================================================================
+ Columns                  Type      Description
+======================== ========= ====================================================================================
+ keyspace_name                text  The keyspace name
+ type                         text  The schema element type
+ name                         text  The schema element name
+ create_statement             text  The CQL statement to use to recreate the schema element
+======================== ========= ====================================================================================
+
+DESCRIBE FUNCTIONS
+""""""""""""""""""
+
+Output the names of all user-defined-functions in the current keyspace, or in all keyspaces if there is no current
+keyspace.
+
+.. productionlist::
+   describe_functions_statement: DESCRIBE FUNCTIONS
+
+Returned columns:
+
+======================== ========= ====================================================================================
+ Columns                  Type      Description
+======================== ========= ====================================================================================
+ keyspace_name                text  The keyspace name
+ type                         text  The schema element type
+ name                         text  The schema element name
+======================== ========= ====================================================================================
+
+DESCRIBE FUNCTION
+"""""""""""""""""
+
+Output the CQL command that could be used to recreate the given user-defined-function.
+
+.. productionlist::
+   describe_function_statement: DESCRIBE FUNCTION [`keyspace_name`.]`function_name`
+
+Returned columns:
+
+======================== ========= ====================================================================================
+ Columns                  Type      Description
+======================== ========= ====================================================================================
+ keyspace_name                text  The keyspace name
+ type                         text  The schema element type
+ name                         text  The schema element name
+ create_statement             text  The CQL statement to use to recreate the schema element
+======================== ========= ====================================================================================
+
+DESCRIBE AGGREGATES
+"""""""""""""""""""
+
+Output the names of all user-defined-aggregates in the current keyspace, or in all keyspaces if there is no current
+keyspace.
+
+.. productionlist::
+   describe_aggregates_statement: DESCRIBE AGGREGATES
+
+Returned columns:
+
+======================== ========= ====================================================================================
+ Columns                  Type      Description
+======================== ========= ====================================================================================
+ keyspace_name                text  The keyspace name
+ type                         text  The schema element type
+ name                         text  The schema element name
+======================== ========= ====================================================================================
+
+DESCRIBE AGGREGATE
+""""""""""""""""""
+
+Output the CQL command that could be used to recreate the given user-defined-aggregate.
+
+.. productionlist::
+   describe_aggregate_statement: DESCRIBE AGGREGATE [`keyspace_name`.]`aggregate_name`
+
+Returned columns:
+
+======================== ========= ====================================================================================
+ Columns                  Type      Description
+======================== ========= ====================================================================================
+ keyspace_name                text  The keyspace name
+ type                         text  The schema element type
+ name                         text  The schema element name
+ create_statement             text  The CQL statement to use to recreate the schema element
+======================== ========= ====================================================================================
+
+DESCRIBE object
+"""""""""""""""
+
+Output CQL commands that could be used to recreate the entire object schema, where object can be either a keyspace,
+a table, an index or a materialized view (the name is looked up in that order).
+
+.. productionlist::
+   describe_object_statement: DESCRIBE `object_name` [WITH INTERNALS]
+
+If ``WITH INTERNALS`` is specified and ``object_name`` represents a keyspace or table, the output contains the table IDs
+and is adapted to represent the DDL necessary to "re-create" dropped columns.
+
+``object_name`` cannot be any of the "describe what" qualifiers like "cluster", "table", etc.
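+
+For instance, a sketch where ``cycling`` could name a keyspace, table, index or materialized view::
+
+    DESCRIBE cycling;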
+
+Returned columns:
+
+======================== ========= ====================================================================================
+ Columns                  Type      Description
+======================== ========= ====================================================================================
+ keyspace_name                text  The keyspace name
+ type                         text  The schema element type
+ name                         text  The schema element name
+ create_statement             text  The CQL statement to use to recreate the schema element
+======================== ========= ====================================================================================
+
diff --git a/src/doc/4.0-beta4/_sources/cql/definitions.rst.txt b/src/doc/4.0-beta4/_sources/cql/definitions.rst.txt
new file mode 100644
index 0000000..3df6f20
--- /dev/null
+++ b/src/doc/4.0-beta4/_sources/cql/definitions.rst.txt
@@ -0,0 +1,234 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+..
+..     http://www.apache.org/licenses/LICENSE-2.0
+..
+.. Unless required by applicable law or agreed to in writing, software
+.. distributed under the License is distributed on an "AS IS" BASIS,
+.. WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+.. See the License for the specific language governing permissions and
+.. limitations under the License.
+
+.. _UUID: https://en.wikipedia.org/wiki/Universally_unique_identifier
+
+.. highlight:: cql
+
+Definitions
+-----------
+
+.. _conventions:
+
+Conventions
+^^^^^^^^^^^
+
+To aid in specifying the CQL syntax, we will use the following conventions in this document:
+
+- Language rules will be given in an informal `BNF variant
+  <http://en.wikipedia.org/wiki/Backus%E2%80%93Naur_Form#Variants>`_ notation. In particular, we'll use square brackets
+  (``[ item ]``) for optional items, ``*`` and ``+`` for repeated items (where ``+`` implies at least one).
+- The grammar will also use the following convention for convenience: non-terminal terms will be lowercase (and link to
+  their definitions) while terminal keywords will be provided "all caps". Note however that keywords are
+  :ref:`identifiers` and are thus case insensitive in practice. We will also define some early constructions using
+  regexps, which we'll indicate with ``re(<some regular expression>)``.
+- The grammar is provided for documentation purposes and leaves some minor details out.  For instance, the comma on the
+  last column definition in a ``CREATE TABLE`` statement is optional but supported if present even though the grammar in
+  this document suggests otherwise. Also, not everything accepted by the grammar is necessarily valid CQL.
+- References to keywords or pieces of CQL code in running text will be shown in a ``fixed-width font``.
+
+
+.. _identifiers:
+
+Identifiers and keywords
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+The CQL language uses *identifiers* (or *names*) to identify tables, columns and other objects. An identifier is a token
+matching the regular expression ``[a-zA-Z][a-zA-Z0-9_]*``.
+
+A number of such identifiers, like ``SELECT`` or ``WITH``, are *keywords*. They have a fixed meaning for the language
+and most are reserved. The list of those keywords can be found in :ref:`appendix-A`.
+
+Identifiers and (unquoted) keywords are case insensitive. Thus ``SELECT`` is the same as ``select`` or ``sElEcT``, and
+``myId`` is the same as ``myid`` or ``MYID``. A convention often used (in particular by the samples in this
+documentation) is to use upper case for keywords and lower case for other identifiers.
+
+There is a second kind of identifiers called *quoted identifiers* defined by enclosing an arbitrary sequence of
+characters (non empty) in double-quotes (``"``). Quoted identifiers are never keywords. Thus ``"select"`` is not a
+reserved keyword and can be used to refer to a column (note that doing so is particularly ill advised), while ``select``
+would raise a parsing error. Also, contrary to unquoted identifiers and keywords, quoted identifiers are case
+sensitive (``"My Quoted Id"`` is *different* from ``"my quoted id"``). A fully lowercase quoted identifier that matches
+``[a-zA-Z][a-zA-Z0-9_]*`` is however *equivalent* to the unquoted identifier obtained by removing the double-quotes (so
+``"myid"`` is equivalent to ``myid`` and to ``myId`` but different from ``"myId"``).  Inside a quoted identifier, the
+double-quote character can be repeated to escape it, so ``"foo "" bar"`` is a valid identifier.
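+
+For instance, a sketch using a hypothetical table with quoted identifiers::
+
+    CREATE TABLE "MyTable" ("myId" int PRIMARY KEY, "select" text);
+
+    -- "select" is usable as a column name only because it is quoted;
+    -- "MyTable" and "myId" are case sensitive and must be quoted exactly.
+    SELECT "myId", "select" FROM "MyTable";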
+
+.. note:: *quoted identifiers* allow declaring columns with arbitrary names, and those can sometimes clash with
+   specific names used by the server. For instance, when using conditional update, the server will respond with a
+   result-set containing a special result named ``"[applied]"``. If you’ve declared a column with such a name, this
+   could potentially confuse some tools and should be avoided. In general, unquoted identifiers should be preferred but
+   if you use quoted identifiers, it is strongly advised to avoid any name enclosed by square brackets (like
+   ``"[applied]"``) and any name that looks like a function call (like ``"f(x)"``).
+
+More formally, we have:
+
+.. productionlist::
+   identifier: `unquoted_identifier` | `quoted_identifier`
+   unquoted_identifier: re('[a-zA-Z][a-zA-Z0-9_]*')
+   quoted_identifier: '"' (any character where " can appear if doubled)+ '"'
+
+.. _constants:
+
+Constants
+^^^^^^^^^
+
+CQL defines the following kind of *constants*:
+
+.. productionlist::
+   constant: `string` | `integer` | `float` | `boolean` | `uuid` | `blob` | NULL
+   string: '\'' (any character where ' can appear if doubled)+ '\''
+         : '$$' (any character other than '$$') '$$'
+   integer: re('-?[0-9]+')
+   float: re('-?[0-9]+(\.[0-9]*)?([eE][+-]?[0-9+])?') | NAN | INFINITY
+   boolean: TRUE | FALSE
+   uuid: `hex`{8}-`hex`{4}-`hex`{4}-`hex`{4}-`hex`{12}
+   hex: re("[0-9a-fA-F]")
+   blob: '0' ('x' | 'X') `hex`+
+
+In other words:
+
+- A string constant is an arbitrary sequence of characters enclosed by single-quotes (``'``). A single-quote
+  can be included by repeating it, e.g. ``'It''s raining today'``. Those are not to be confused with quoted
+  :ref:`identifiers` that use double-quotes. Alternatively, a string can be defined by enclosing the arbitrary sequence
+  of characters by two dollar characters, in which case single-quotes can be used without escaping (``$$It's raining
+  today$$``). That latter form is often used when defining :ref:`user-defined functions <udfs>` to avoid having to
+  escape single-quote characters in the function body (as they are more likely to occur than ``$$``).
+- Integer, float and boolean constants are defined as expected. Note however that float allows the special ``NaN`` and
+  ``Infinity`` constants.
+- CQL supports UUID_ constants.
+- Blob content is provided in hexadecimal and prefixed by ``0x``.
+- The special ``NULL`` constant denotes the absence of value.
+
+For how these constants are typed, see the :ref:`data-types` section.
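+
+For illustration, a sketch exercising each kind of constant (the ``example`` table and its columns are hypothetical)::
+
+    -- uuid, float, boolean, blob and string constants, in that order:
+    INSERT INTO example (id, score, active, payload, note)
+    VALUES (123e4567-e89b-12d3-a456-426614174000, -1.5e3, true, 0xCAFE, $$It's raining today$$);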
+
+Terms
+^^^^^
+
+CQL has the notion of a *term*, which denotes the kind of values that CQL supports. Terms are defined by:
+
+.. productionlist::
+   term: `constant` | `literal` | `function_call` | `arithmetic_operation` | `type_hint` | `bind_marker`
+   literal: `collection_literal` | `udt_literal` | `tuple_literal`
+   function_call: `identifier` '(' [ `term` (',' `term`)* ] ')'
+   arithmetic_operation: '-' `term` | `term` ('+' | '-' | '*' | '/' | '%') `term`
+   type_hint: '(' `cql_type` ')' `term`
+   bind_marker: '?' | ':' `identifier`
+
+A term is thus one of:
+
+- A :ref:`constant <constants>`.
+- A literal for either :ref:`a collection <collections>`, :ref:`a user-defined type <udts>` or :ref:`a tuple <tuples>`
+  (see the linked sections for details).
+- A function call: see :ref:`the section on functions <cql-functions>` for details on which :ref:`native function
+  <native-functions>` exists and how to define your own :ref:`user-defined ones <udfs>`.
+- An arithmetic operation between terms; see :ref:`the section on arithmetic operations <arithmetic_operators>`.
+- A *type hint*: see the :ref:`related section <type-hints>` for details.
+- A bind marker, which denotes a variable to be bound at execution time. See the section on :ref:`prepared-statements`
+  for details. A bind marker can be either anonymous (``?``) or named (``:some_name``). The latter form provides a more
+  convenient way to refer to the variable for binding it and should generally be preferred.
+
+
+Comments
+^^^^^^^^
+
+A comment in CQL is a line beginning with either double dashes (``--``) or a double slash (``//``).
+
+Multi-line comments are also supported through enclosure within ``/*`` and ``*/`` (but nesting is not supported).
+
+::
+
+    -- This is a comment
+    // This is a comment too
+    /* This is
+       a multi-line comment */
+
+Statements
+^^^^^^^^^^
+
+CQL consists of statements that can be divided into the following categories:
+
+- :ref:`data-definition` statements, to define and change how the data is stored (keyspaces and tables).
+- :ref:`data-manipulation` statements, for selecting, inserting and deleting data.
+- :ref:`secondary-indexes` statements.
+- :ref:`materialized-views` statements.
+- :ref:`cql-roles` statements.
+- :ref:`cql-permissions` statements.
+- :ref:`User-Defined Functions <udfs>` statements.
+- :ref:`udts` statements.
+- :ref:`cql-triggers` statements.
+
+All the statements are listed below and are described in the rest of this documentation (see links above):
+
+.. productionlist::
+   cql_statement: `statement` [ ';' ]
+   statement: `ddl_statement`
+            : | `dml_statement`
+            : | `secondary_index_statement`
+            : | `materialized_view_statement`
+            : | `role_or_permission_statement`
+            : | `udf_statement`
+            : | `udt_statement`
+            : | `trigger_statement`
+   ddl_statement: `use_statement`
+                : | `create_keyspace_statement`
+                : | `alter_keyspace_statement`
+                : | `drop_keyspace_statement`
+                : | `create_table_statement`
+                : | `alter_table_statement`
+                : | `drop_table_statement`
+                : | `truncate_statement`
+    dml_statement: `select_statement`
+                 : | `insert_statement`
+                 : | `update_statement`
+                 : | `delete_statement`
+                 : | `batch_statement`
+    secondary_index_statement: `create_index_statement`
+                             : | `drop_index_statement`
+    materialized_view_statement: `create_materialized_view_statement`
+                               : | `drop_materialized_view_statement`
+    role_or_permission_statement: `create_role_statement`
+                                : | `alter_role_statement`
+                                : | `drop_role_statement`
+                                : | `grant_role_statement`
+                                : | `revoke_role_statement`
+                                : | `list_roles_statement`
+                                : | `grant_permission_statement`
+                                : | `revoke_permission_statement`
+                                : | `list_permissions_statement`
+                                : | `create_user_statement`
+                                : | `alter_user_statement`
+                                : | `drop_user_statement`
+                                : | `list_users_statement`
+    udf_statement: `create_function_statement`
+                 : | `drop_function_statement`
+                 : | `create_aggregate_statement`
+                 : | `drop_aggregate_statement`
+    udt_statement: `create_type_statement`
+                 : | `alter_type_statement`
+                 : | `drop_type_statement`
+    trigger_statement: `create_trigger_statement`
+                     : | `drop_trigger_statement`
+
+.. _prepared-statements:
+
+Prepared Statements
+^^^^^^^^^^^^^^^^^^^
+
+CQL supports *prepared statements*. Prepared statements are an optimization that allows parsing a query only once but
+executing it multiple times with different concrete values.
+
+Any statement that uses at least one bind marker (see :token:`bind_marker`) will need to be *prepared*, after which the statement
+can be *executed* by providing concrete values for each of its markers. The exact details of how a statement is prepared
+and then executed depend on the CQL driver used, so you should refer to your driver's documentation.
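+
+For illustration, sketches of statements one would typically prepare (the ``users`` table and its columns are
+hypothetical; the first uses an anonymous marker, the second named ones)::
+
+    SELECT name, occupation FROM users WHERE userid = ?;
+    UPDATE users SET occupation = :occupation WHERE userid = :id;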
diff --git a/src/doc/4.0-beta4/_sources/cql/dml.rst.txt b/src/doc/4.0-beta4/_sources/cql/dml.rst.txt
new file mode 100644
index 0000000..1308de5
--- /dev/null
+++ b/src/doc/4.0-beta4/_sources/cql/dml.rst.txt
@@ -0,0 +1,522 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+..
+..     http://www.apache.org/licenses/LICENSE-2.0
+..
+.. Unless required by applicable law or agreed to in writing, software
+.. distributed under the License is distributed on an "AS IS" BASIS,
+.. WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+.. See the License for the specific language governing permissions and
+.. limitations under the License.
+
+.. highlight:: cql
+
+.. _data-manipulation:
+
+Data Manipulation
+-----------------
+
+This section describes the statements supported by CQL to insert, update, delete and query data.
+
+.. _select-statement:
+
+SELECT
+^^^^^^
+
+Querying data from a table is done using a ``SELECT`` statement:
+
+.. productionlist::
+   select_statement: SELECT [ JSON | DISTINCT ] ( `select_clause` | '*' )
+                   : FROM `table_name`
+                   : [ WHERE `where_clause` ]
+                   : [ GROUP BY `group_by_clause` ]
+                   : [ ORDER BY `ordering_clause` ]
+                   : [ PER PARTITION LIMIT (`integer` | `bind_marker`) ]
+                   : [ LIMIT (`integer` | `bind_marker`) ]
+                   : [ ALLOW FILTERING ]
+   select_clause: `selector` [ AS `identifier` ] ( ',' `selector` [ AS `identifier` ] )*
+   selector: `column_name`
+           : | `term`
+           : | CAST '(' `selector` AS `cql_type` ')'
+           : | `function_name` '(' [ `selector` ( ',' `selector` )* ] ')'
+           : | COUNT '(' '*' ')'
+   where_clause: `relation` ( AND `relation` )*
+   relation: `column_name` `operator` `term`
+           : '(' `column_name` ( ',' `column_name` )* ')' `operator` `tuple_literal`
+           : TOKEN '(' `column_name` ( ',' `column_name` )* ')' `operator` `term`
+   operator: '=' | '<' | '>' | '<=' | '>=' | '!=' | IN | CONTAINS | CONTAINS KEY
+   group_by_clause: `column_name` ( ',' `column_name` )*
+   ordering_clause: `column_name` [ ASC | DESC ] ( ',' `column_name` [ ASC | DESC ] )*
+
+For instance::
+
+    SELECT name, occupation FROM users WHERE userid IN (199, 200, 207);
+    SELECT JSON name, occupation FROM users WHERE userid = 199;
+    SELECT name AS user_name, occupation AS user_occupation FROM users;
+
+    SELECT time, value
+    FROM events
+    WHERE event_type = 'myEvent'
+      AND time > '2011-02-03'
+      AND time <= '2012-01-01'
+
+    SELECT COUNT (*) AS user_count FROM users;
+
+The ``SELECT`` statement reads one or more columns for one or more rows in a table. It returns a result-set of the rows
+matching the request, where each row contains the values for the selection corresponding to the query. Additionally,
+:ref:`functions <cql-functions>` including :ref:`aggregation <aggregate-functions>` ones can be applied to the result.
+
+A ``SELECT`` statement contains at least a :ref:`selection clause <selection-clause>` and the name of the table on which
+the selection is made (note that CQL does **not** support joins or sub-queries and thus a select statement only applies to a single
+table). In most cases, a select will also have a :ref:`where clause <where-clause>` and it can optionally have additional
+clauses to :ref:`order <ordering-clause>` or :ref:`limit <limit-clause>` the results. Lastly, :ref:`queries that require
+filtering <allow-filtering>` can be allowed if the ``ALLOW FILTERING`` flag is provided.
+
+.. _selection-clause:
+
+Selection clause
+~~~~~~~~~~~~~~~~
+
+The :token:`select_clause` determines which columns need to be queried and returned in the result-set, as well as any
+transformation to apply to this result before returning. It consists of a comma-separated list of *selectors* or,
+alternatively, of the wildcard character (``*``) to select all the columns defined in the table.
+
+Selectors
+`````````
+
+A :token:`selector` can be one of:
+
+- A column name of the table selected, to retrieve the values for that column.
+- A term, which is usually used nested inside other selectors like functions (if a term is selected directly, then the
+  corresponding column of the result-set will simply have the value of this term for every row returned).
+- A casting, which allows converting a nested selector to a (compatible) type.
+- A function call, where the arguments are selectors themselves. See the section on :ref:`functions <cql-functions>` for
+  more details.
+- The special call ``COUNT(*)`` to the :ref:`COUNT function <count-function>`, which counts all non-null results.
+
+Aliases
+```````
+
+Every *top-level* selector can also be aliased (using ``AS``). If so, the name of the corresponding column in the result
+set will be that of the alias. For instance::
+
+    // Without alias
+    SELECT intAsBlob(4) FROM t;
+
+    //  intAsBlob(4)
+    // --------------
+    //  0x00000004
+
+    // With alias
+    SELECT intAsBlob(4) AS four FROM t;
+
+    //  four
+    // ------------
+    //  0x00000004
+
+.. note:: Currently, aliases aren't recognized anywhere else in the statement where they are used (not in the ``WHERE``
+   clause, not in the ``ORDER BY`` clause, ...). You must use the original column name instead.
+
+
+``WRITETIME`` and ``TTL`` functions
+```````````````````````````````````
+
+Selection supports two special functions (that aren't allowed anywhere else): ``WRITETIME`` and ``TTL``. Both functions
+take only one argument and that argument *must* be a column name (so for instance ``TTL(3)`` is invalid).
+
+Those functions allow retrieving meta-information that is stored internally for each column, namely:
+
+- the timestamp of the value of the column for ``WRITETIME``.
+- the remaining time to live (in seconds) for the value of the column if it is set to expire (and ``null`` otherwise).
+
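+For instance, a sketch against a hypothetical table ``t`` (partition key ``k``, regular column ``v``)::
+
+    SELECT WRITETIME(v), TTL(v) FROM t WHERE k = 1;
+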
+.. _where-clause:
+
+The ``WHERE`` clause
+~~~~~~~~~~~~~~~~~~~~
+
+The ``WHERE`` clause specifies which rows must be queried. It is composed of relations on the columns that are part of
+the ``PRIMARY KEY`` and/or have a `secondary index <#createIndexStmt>`__ defined on them.
+
+Not all relations are allowed in a query. For instance, non-equal relations (where ``IN`` is considered as an equal
+relation) on a partition key are not supported (but see the use of the ``TOKEN`` method below to do non-equal queries on
+the partition key). Moreover, for a given partition key, the clustering columns induce an ordering of rows and relations
+on them are restricted to the relations that allow selecting a **contiguous** (for the ordering) set of rows. For
+instance, given::
+
+    CREATE TABLE posts (
+        userid text,
+        blog_title text,
+        posted_at timestamp,
+        entry_title text,
+        content text,
+        category int,
+        PRIMARY KEY (userid, blog_title, posted_at)
+    )
+
+The following query is allowed::
+
+    SELECT entry_title, content FROM posts
+     WHERE userid = 'john doe'
+       AND blog_title='John''s Blog'
+       AND posted_at >= '2012-01-01' AND posted_at < '2012-01-31'
+
+But the following one is not, as it does not select a contiguous set of rows (and we suppose no secondary indexes are
+set)::
+
+    // Needs a blog_title to be set to select ranges of posted_at
+    SELECT entry_title, content FROM posts
+     WHERE userid = 'john doe'
+       AND posted_at >= '2012-01-01' AND posted_at < '2012-01-31'
+
+When specifying relations, the ``TOKEN`` function can be used on the ``PARTITION KEY`` column to query. In that case,
+rows will be selected based on the token of their ``PARTITION_KEY`` rather than on the value. Note that the token of a
+key depends on the partitioner in use, and that in particular the RandomPartitioner won't yield a meaningful order. Also
+note that ordering partitioners always order token values by bytes (so even if the partition key is of type int,
+``token(-1) > token(0)`` in particular). Example::
+
+    SELECT * FROM posts
+     WHERE token(userid) > token('tom') AND token(userid) < token('bob')
+
+Moreover, the ``IN`` relation is only allowed on the last column of the partition key and on the last column of the full
+primary key.
+
+It is also possible to “group” ``CLUSTERING COLUMNS`` together in a relation using the tuple notation. For instance::
+
+    SELECT * FROM posts
+     WHERE userid = 'john doe'
+       AND (blog_title, posted_at) > ('John''s Blog', '2012-01-01')
+
+will request all rows that sort after the one having “John's Blog” as ``blog_title`` and '2012-01-01' for ``posted_at``
+in the clustering order. In particular, rows having a ``posted_at <= '2012-01-01'`` will be returned as long as their
+``blog_title > 'John''s Blog'``, which would not be the case for::
+
+    SELECT * FROM posts
+     WHERE userid = 'john doe'
+       AND blog_title > 'John''s Blog'
+       AND posted_at > '2012-01-01'
+
+The tuple notation may also be used for ``IN`` clauses on clustering columns::
+
+    SELECT * FROM posts
+     WHERE userid = 'john doe'
+       AND (blog_title, posted_at) IN (('John''s Blog', '2012-01-01'), ('Extreme Chess', '2014-06-01'))
+
+The ``CONTAINS`` operator may only be used on collection columns (lists, sets, and maps). In the case of maps,
+``CONTAINS`` applies to the map values. The ``CONTAINS KEY`` operator may only be used on map columns and applies to the
+map keys.
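+
+For illustration, a sketch assuming a hypothetical ``articles`` table with indexed collection columns ``tags
+set<text>`` and ``attributes map<text, text>``::
+
+    SELECT * FROM articles WHERE tags CONTAINS 'cql';
+    SELECT * FROM articles WHERE attributes CONTAINS KEY 'severity';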
+
+.. _group-by-clause:
+
+Grouping results
+~~~~~~~~~~~~~~~~
+
+The ``GROUP BY`` option allows condensing into a single row all selected rows that share the same values for a set
+of columns.
+
+Using the ``GROUP BY`` option, it is only possible to group rows at the partition key level or at a clustering column
+level. Consequently, the ``GROUP BY`` option only accepts as arguments primary key column names in the primary key
+order. If a primary key column is restricted by an equality restriction, it is not required to be present in the
+``GROUP BY`` clause.
+
+Aggregate functions will produce a separate value for each group. If no ``GROUP BY`` clause is specified,
+aggregate functions will produce a single value for all the rows.
+
+If a column is selected without an aggregate function, in a statement with a ``GROUP BY``, the first value encountered
+in each group will be returned.
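+
+For instance, reusing the ``posts`` table defined above (grouping at the partition key level)::
+
+    SELECT userid, count(*) FROM posts GROUP BY userid;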
+
+.. _ordering-clause:
+
+Ordering results
+~~~~~~~~~~~~~~~~
+
+The ``ORDER BY`` clause allows selecting the order of the returned results. It takes as argument a list of column names
+along with the order for the column (``ASC`` for ascending and ``DESC`` for descending, omitting the order being
+equivalent to ``ASC``). Currently the possible orderings are limited by the :ref:`clustering order <clustering-order>`
+defined on the table:
+
+- if the table has been defined without any specific ``CLUSTERING ORDER``, then the allowed orderings are the order
+  induced by the clustering columns and the reverse of that one.
+- otherwise, the orderings allowed are the order of the ``CLUSTERING ORDER`` option and the reversed one.
+
+.. _limit-clause:
+
+Limiting results
+~~~~~~~~~~~~~~~~
+
+The ``LIMIT`` option to a ``SELECT`` statement limits the number of rows returned by a query, while the ``PER PARTITION
+LIMIT`` option limits the number of rows returned for a given partition by the query. Note that both types of limit can be
+used in the same statement, as in the sketch below.
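+
+For instance, reusing the ``posts`` table defined above (the limit values are arbitrary)::
+
+    SELECT * FROM posts PER PARTITION LIMIT 2 LIMIT 10;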
+
+.. _allow-filtering:
+
+Allowing filtering
+~~~~~~~~~~~~~~~~~~
+
+By default, CQL only allows select queries that don't involve “filtering” server side, i.e. queries where we know that
+all (live) records read will be returned (maybe partly) in the result set. The reasoning is that those “non filtering”
+queries have predictable performance in the sense that they will execute in a time that is proportional to the amount of
+data **returned** by the query (which can be controlled through ``LIMIT``).
+
+The ``ALLOW FILTERING`` option explicitly allows (some) queries that require filtering. Please note that a
+query using ``ALLOW FILTERING`` may thus have unpredictable performance (for the definition above), i.e. even a query
+that selects a handful of records **may** exhibit performance that depends on the total amount of data stored in the
+cluster.
+
+For instance, considering the following table holding user profiles with their year of birth (with a secondary index on
+it) and country of residence::
+
+    CREATE TABLE users (
+        username text PRIMARY KEY,
+        firstname text,
+        lastname text,
+        birth_year int,
+        country text
+    )
+
+    CREATE INDEX ON users(birth_year);
+
+Then the following queries are valid::
+
+    SELECT * FROM users;
+    SELECT * FROM users WHERE birth_year = 1981;
+
+because in both cases, Cassandra guarantees that the performance of these queries will be proportional to the amount of data
+returned. In particular, if no users are born in 1981, then the second query's performance will not depend on the number
+of user profiles stored in the database (not directly at least: due to secondary index implementation considerations, this
+query may still depend on the number of nodes in the cluster, which indirectly depends on the amount of data stored.
+Nevertheless, the number of nodes will always be multiple orders of magnitude lower than the number of user profiles
+stored). Of course, both queries may return very large result sets in practice, but the amount of data returned can always
+be controlled by adding a ``LIMIT``.
+
+However, the following query will be rejected::
+
+    SELECT * FROM users WHERE birth_year = 1981 AND country = 'FR';
+
+because Cassandra cannot guarantee that it won't have to scan a large amount of data even if the result of the query is
+small. Typically, it will scan all the index entries for users born in 1981 even if only a handful are actually from
+France. However, if you “know what you are doing”, you can force the execution of this query by using ``ALLOW
+FILTERING`` and so the following query is valid::
+
+    SELECT * FROM users WHERE birth_year = 1981 AND country = 'FR' ALLOW FILTERING;
+
+.. _insert-statement:
+
+INSERT
+^^^^^^
+
+Inserting data for a row is done using an ``INSERT`` statement:
+
+.. productionlist::
+   insert_statement: INSERT INTO `table_name` ( `names_values` | `json_clause` )
+                   : [ IF NOT EXISTS ]
+                   : [ USING `update_parameter` ( AND `update_parameter` )* ]
+   names_values: `names` VALUES `tuple_literal`
+   json_clause: JSON `string` [ DEFAULT ( NULL | UNSET ) ]
+   names: '(' `column_name` ( ',' `column_name` )* ')'
+
+For instance::
+
+    INSERT INTO NerdMovies (movie, director, main_actor, year)
+                    VALUES ('Serenity', 'Joss Whedon', 'Nathan Fillion', 2005)
+          USING TTL 86400;
+
+    INSERT INTO NerdMovies JSON '{"movie": "Serenity",
+                                  "director": "Joss Whedon",
+                                  "year": 2005}';
+
+The ``INSERT`` statement writes one or more columns for a given row in a table. Note that since a row is identified by
+its ``PRIMARY KEY``, at least the columns composing it must be specified. The list of columns to insert must be
+supplied when using the ``VALUES`` syntax. When using the ``JSON`` syntax, they are optional. See the
+section on :ref:`JSON support <cql-json>` for more detail.
+
+Note that unlike in SQL, ``INSERT`` does not check the prior existence of the row by default: the row is created if none
+existed before, and updated otherwise. Furthermore, there is no means to know whether a creation or an update occurred.
+
+It is however possible to use the ``IF NOT EXISTS`` condition to only insert if the row does not exist prior to the
+insertion. But please note that using ``IF NOT EXISTS`` will incur a non-negligible performance cost (internally, Paxos
+will be used) so this should be used sparingly.
+
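+For instance, reusing the ``users`` table from the filtering example above, a conditional insertion looks like::
+
+    INSERT INTO users (username, firstname, lastname) VALUES ('jdoe', 'John', 'Doe') IF NOT EXISTS;
+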
+All updates for an ``INSERT`` are applied atomically and in isolation.
+
+Please refer to the :ref:`UPDATE <update-parameters>` section for information on the :token:`update_parameter`.
+
+Also note that ``INSERT`` does not support counters, while ``UPDATE`` does.
+
+.. _update-statement:
+
+UPDATE
+^^^^^^
+
+Updating a row is done using an ``UPDATE`` statement:
+
+.. productionlist::
+   update_statement: UPDATE `table_name`
+                   : [ USING `update_parameter` ( AND `update_parameter` )* ]
+                   : SET `assignment` ( ',' `assignment` )*
+                   : WHERE `where_clause`
+                   : [ IF ( EXISTS | `condition` ( AND `condition` )*) ]
+   update_parameter: ( TIMESTAMP | TTL ) ( `integer` | `bind_marker` )
+   assignment: `simple_selection` '=' `term`
+             :| `column_name` '=' `column_name` ( '+' | '-' ) `term`
+             :| `column_name` '=' `list_literal` '+' `column_name`
+   simple_selection: `column_name`
+                   :| `column_name` '[' `term` ']'
+                   :| `column_name` '.' `field_name`
+   condition: `simple_selection` `operator` `term`
+
+For instance::
+
+    UPDATE NerdMovies USING TTL 400
+       SET director   = 'Joss Whedon',
+           main_actor = 'Nathan Fillion',
+           year       = 2005
+     WHERE movie = 'Serenity';
+
+    UPDATE UserActions
+       SET total = total + 2
+       WHERE user = B70DE1D0-9908-4AE3-BE34-5573E5B09F14
+         AND action = 'click';
+
+The ``UPDATE`` statement writes one or more columns for a given row in a table. The :token:`where_clause` is used to
+select the row to update and must include all columns composing the ``PRIMARY KEY``. Non primary key columns are then
+set using the ``SET`` keyword.
+
+Note that unlike in SQL, ``UPDATE`` does not check the prior existence of the row by default (except through ``IF``, see
+below): the row is created if none existed before, and updated otherwise. Furthermore, there are no means to know
+whether a creation or update occurred.
+
+It is however possible to use conditions on some columns through ``IF``, in which case the row will not be updated
+unless the conditions are met. But, please note that using ``IF`` conditions will incur a non-negligible performance
+cost (internally, Paxos will be used) so this should be used sparingly.
+
+In an ``UPDATE`` statement, all updates within the same partition key are applied atomically and in isolation.
+
+Regarding the :token:`assignment`:
+
+- ``c = c + 3`` is used to increment/decrement counters. The column name after the '=' sign **must** be the same as
+  the one before the '=' sign. Note that increments/decrements are only allowed on counters, and they are the *only*
+  update operations allowed on counters. See the section on :ref:`counters <counters>` for details.
+- ``id = id + <some-collection>`` and ``id[value1] = value2`` are for collections, see the :ref:`relevant section
+  <collections>` for details.
+- ``id.field = 3`` is for setting the value of a field on a non-frozen user-defined type. See the :ref:`relevant section
+  <udts>` for details (a short sketch of these assignments follows below).
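+
+As a minimal sketch of the collection and UDT assignments above (assuming a hypothetical ``profiles`` table with a
+``set<text>`` column ``favs`` and a non-frozen UDT column ``addr`` that has a ``city`` field)::
+
+    UPDATE profiles SET favs = favs + {'movies'} WHERE username = 'jdoe';
+    UPDATE profiles SET addr.city = 'Paris' WHERE username = 'jdoe';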
+
+.. _update-parameters:
+
+Update parameters
+~~~~~~~~~~~~~~~~~
+
+The ``UPDATE``, ``INSERT`` (and ``DELETE`` and ``BATCH`` for the ``TIMESTAMP``) statements support the following
+parameters:
+
+- ``TIMESTAMP``: sets the timestamp for the operation. If not specified, the coordinator will use the current time (in
+  microseconds) at the start of statement execution as the timestamp. This is usually a suitable default.
+- ``TTL``: specifies an optional Time To Live (in seconds) for the inserted values. If set, the inserted values are
+  automatically removed from the database after the specified time. Note that the TTL concerns the inserted values, not
+  the columns themselves. This means that any subsequent update of the column will also reset the TTL (to whatever TTL
+  is specified in that update). By default, values never expire. A TTL of 0 is equivalent to no TTL. If the table has a
+  ``default_time_to_live``, a TTL of 0 will remove the TTL for the inserted or updated values. A TTL of ``null`` is
+  equivalent to inserting with a TTL of 0.
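+
+As a minimal sketch, reusing the ``users`` table from earlier, the remaining TTL of a value can be inspected with the
+``TTL`` function after a TTL'd insert::
+
+    INSERT INTO users (username, country) VALUES ('jdoe', 'FR') USING TTL 3600;
+    SELECT TTL(country) FROM users WHERE username = 'jdoe';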
+
+.. _delete_statement:
+
+DELETE
+^^^^^^
+
+Deleting rows or parts of rows uses the ``DELETE`` statement:
+
+.. productionlist::
+   delete_statement: DELETE [ `simple_selection` ( ',' `simple_selection` )* ]
+                   : FROM `table_name`
+                   : [ USING `update_parameter` ( AND `update_parameter` )* ]
+                   : WHERE `where_clause`
+                   : [ IF ( EXISTS | `condition` ( AND `condition` )*) ]
+
+For instance::
+
+    DELETE FROM NerdMovies USING TIMESTAMP 1240003134
+     WHERE movie = 'Serenity';
+
+    DELETE phone FROM Users
+     WHERE userid IN (C73DE1D3-AF08-40F3-B124-3FF3E5109F22, B70DE1D0-9908-4AE3-BE34-5573E5B09F14);
+
+The ``DELETE`` statement deletes columns and rows. If column names are provided directly after the ``DELETE`` keyword,
+only those columns are deleted from the row indicated by the ``WHERE`` clause. Otherwise, whole rows are removed.
+
+The ``WHERE`` clause specifies which rows are to be deleted. Multiple rows may be deleted with one statement by using an
+``IN`` operator. A range of rows may be deleted using an inequality operator (such as ``>=``).
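+
+As a sketch (assuming a hypothetical ``events`` table partitioned by ``id`` and clustered by a ``created_at``
+timestamp), a range of rows within a partition can be deleted like so::
+
+    DELETE FROM events WHERE id = 'sensor-1' AND created_at >= '2021-01-01 00:00+0000';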
+
+``DELETE`` supports the ``TIMESTAMP`` option with the same semantics as in :ref:`updates <update-parameters>`.
+
+In a ``DELETE`` statement, all deletions within the same partition key are applied atomically and in isolation.
+
+A ``DELETE`` operation can be conditional through the use of an ``IF`` clause, similar to ``UPDATE`` and ``INSERT``
+statements. However, as with ``INSERT`` and ``UPDATE`` statements, this will incur a non-negligible performance cost
+(internally, Paxos will be used) and so should be used sparingly.
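+
+For instance, reusing the ``Users`` table from the examples above::
+
+    DELETE FROM Users WHERE userid = C73DE1D3-AF08-40F3-B124-3FF3E5109F22 IF EXISTS;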
+
+.. _batch_statement:
+
+BATCH
+^^^^^
+
+Multiple ``INSERT``, ``UPDATE`` and ``DELETE`` can be executed in a single statement by grouping them through a
+``BATCH`` statement:
+
+.. productionlist::
+   batch_statement: BEGIN [ UNLOGGED | COUNTER ] BATCH
+                   : [ USING `update_parameter` ( AND `update_parameter` )* ]
+                   : `modification_statement` ( ';' `modification_statement` )*
+                   : APPLY BATCH
+   modification_statement: `insert_statement` | `update_statement` | `delete_statement`
+
+For instance::
+
+    BEGIN BATCH
+       INSERT INTO users (userid, password, name) VALUES ('user2', 'ch@ngem3b', 'second user');
+       UPDATE users SET password = 'ps22dhds' WHERE userid = 'user3';
+       INSERT INTO users (userid, password) VALUES ('user4', 'ch@ngem3c');
+       DELETE name FROM users WHERE userid = 'user1';
+    APPLY BATCH;
+
+The ``BATCH`` statement groups multiple modification statements (insertions/updates and deletions) into a single
+statement. It serves several purposes:
+
+- It saves network round-trips between the client and the server (and sometimes between the server coordinator and the
+  replicas) when batching multiple updates.
+- All updates in a ``BATCH`` belonging to a given partition key are performed in isolation.
+- By default, all operations in the batch are performed as *logged*, to ensure all mutations eventually complete (or
+  none will). See the notes on :ref:`UNLOGGED batches <unlogged-batches>` for more details.
+
+Note that:
+
+- ``BATCH`` statements may only contain ``UPDATE``, ``INSERT`` and ``DELETE`` statements (not other batches for instance).
+- Batches are *not* a full analogue for SQL transactions.
+- If a timestamp is not specified for each operation, then all operations will be applied with the same timestamp
+  (either one generated automatically, or the timestamp provided at the batch level). Due to Cassandra's conflict
+  resolution procedure in the case of `timestamp ties <http://wiki.apache.org/cassandra/FAQ#clocktie>`__, operations may
+  be applied in an order that is different from the order they are listed in the ``BATCH`` statement. To force a
+  particular operation ordering, you must specify per-operation timestamps (see the sketch after this list).
+- A LOGGED batch to a single partition will be converted to an UNLOGGED batch as an optimization.
+
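+As a minimal sketch of per-operation timestamps (reusing the ``users`` table above; the timestamp values are
+illustrative)::
+
+    BEGIN BATCH
+       INSERT INTO users (userid, password) VALUES ('user5', 'ch@ngem3d') USING TIMESTAMP 1623692401000001;
+       UPDATE users USING TIMESTAMP 1623692401000002 SET password = 'ps22dhds' WHERE userid = 'user3';
+    APPLY BATCH;
+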
+.. _unlogged-batches:
+
+``UNLOGGED`` batches
+~~~~~~~~~~~~~~~~~~~~
+
+By default, Cassandra uses a batch log to ensure all operations in a batch eventually complete or none will (note
+however that operations are only isolated within a single partition).
+
+There is a performance penalty for batch atomicity when a batch spans multiple partitions. If you do not want to incur
+this penalty, you can tell Cassandra to skip the batchlog with the ``UNLOGGED`` option. If the ``UNLOGGED`` option is
+used, a failed batch might leave the batch only partly applied.
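+
+For instance, skipping the batchlog looks like this (again with the ``users`` table)::
+
+    BEGIN UNLOGGED BATCH
+       INSERT INTO users (userid, password) VALUES ('user5', 'ch@ngem3d');
+       INSERT INTO users (userid, password) VALUES ('user6', 'ch@ngem3e');
+    APPLY BATCH;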
+
+``COUNTER`` batches
+~~~~~~~~~~~~~~~~~~~
+
+Use the ``COUNTER`` option for batched counter updates. Unlike other
+updates in Cassandra, counter updates are not idempotent.
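+
+For instance, reusing the ``UserActions`` counter table from the ``UPDATE`` examples::
+
+    BEGIN COUNTER BATCH
+       UPDATE UserActions SET total = total + 1 WHERE user = B70DE1D0-9908-4AE3-BE34-5573E5B09F14 AND action = 'click';
+       UPDATE UserActions SET total = total + 1 WHERE user = B70DE1D0-9908-4AE3-BE34-5573E5B09F14 AND action = 'view';
+    APPLY BATCH;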
diff --git a/src/doc/4.0-beta4/_sources/cql/functions.rst.txt b/src/doc/4.0-beta4/_sources/cql/functions.rst.txt
new file mode 100644
index 0000000..965125a
--- /dev/null
+++ b/src/doc/4.0-beta4/_sources/cql/functions.rst.txt
@@ -0,0 +1,581 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+..
+..     http://www.apache.org/licenses/LICENSE-2.0
+..
+.. Unless required by applicable law or agreed to in writing, software
+.. distributed under the License is distributed on an "AS IS" BASIS,
+.. WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+.. See the License for the specific language governing permissions and
+.. limitations under the License.
+
+.. highlight:: cql
+
+.. _cql-functions:
+
+.. Need some intro for UDF and native functions in general and point those to it.
+.. _udfs:
+.. _native-functions:
+
+Functions
+---------
+
+CQL supports 2 main categories of functions:
+
+- the :ref:`scalar functions <scalar-functions>`, which simply take a number of values and produce an output from them.
+- the :ref:`aggregate functions <aggregate-functions>`, which are used to aggregate the results of multiple rows from a
+  ``SELECT`` statement.
+
+In both cases, CQL provides a number of native "hard-coded" functions as well as the ability to create new user-defined
+functions.
+
+.. note:: The use of user-defined functions is disabled by default for security concerns (even when
+   enabled, the execution of user-defined functions is sandboxed and a "rogue" function should not be allowed to do
+   evil, but no sandbox is perfect so using user-defined functions is opt-in). See the ``enable_user_defined_functions``
+   option in ``cassandra.yaml`` to enable them.
+
+A function is identified by its name:
+
+.. productionlist::
+   function_name: [ `keyspace_name` '.' ] `name`
+
+.. _scalar-functions:
+
+Scalar functions
+^^^^^^^^^^^^^^^^
+
+.. _scalar-native-functions:
+
+Native functions
+~~~~~~~~~~~~~~~~
+
+Cast
+````
+
+The ``cast`` function can be used to convert one native datatype to another.
+
+The following table describes the conversions supported by the ``cast`` function. Cassandra will silently ignore any
+cast converting a datatype into its own datatype.
+
+=============== =======================================================================================================
+ From            To
+=============== =======================================================================================================
+ ``ascii``       ``text``, ``varchar``
+ ``bigint``      ``tinyint``, ``smallint``, ``int``, ``float``, ``double``, ``decimal``, ``varint``, ``text``,
+                 ``varchar``
+ ``boolean``     ``text``, ``varchar``
+ ``counter``     ``tinyint``, ``smallint``, ``int``, ``bigint``, ``float``, ``double``, ``decimal``, ``varint``,
+                 ``text``, ``varchar``
+ ``date``        ``timestamp``
+ ``decimal``     ``tinyint``, ``smallint``, ``int``, ``bigint``, ``float``, ``double``, ``varint``, ``text``,
+                 ``varchar``
+ ``double``      ``tinyint``, ``smallint``, ``int``, ``bigint``, ``float``, ``decimal``, ``varint``, ``text``,
+                 ``varchar``
+ ``float``       ``tinyint``, ``smallint``, ``int``, ``bigint``, ``double``, ``decimal``, ``varint``, ``text``,
+                 ``varchar``
+ ``inet``        ``text``, ``varchar``
+ ``int``         ``tinyint``, ``smallint``, ``bigint``, ``float``, ``double``, ``decimal``, ``varint``, ``text``,
+                 ``varchar``
+ ``smallint``    ``tinyint``, ``int``, ``bigint``, ``float``, ``double``, ``decimal``, ``varint``, ``text``,
+                 ``varchar``
+ ``time``        ``text``, ``varchar``
+ ``timestamp``   ``date``, ``text``, ``varchar``
+ ``timeuuid``    ``timestamp``, ``date``, ``text``, ``varchar``
+ ``tinyint``     ``smallint``, ``int``, ``bigint``, ``float``, ``double``, ``decimal``, ``varint``,
+                 ``text``, ``varchar``
+ ``uuid``        ``text``, ``varchar``
+ ``varint``      ``tinyint``, ``smallint``, ``int``, ``bigint``, ``float``, ``double``, ``decimal``, ``text``,
+                 ``varchar``
+=============== =======================================================================================================
+
+The conversions rely strictly on Java's semantics. For example, the double value 1 will be converted to the text value
+'1.0'. For instance::
+
+    SELECT avg(cast(count as double)) FROM myTable
+
+Token
+`````
+
+The ``token`` function computes the token for a given partition key. The exact signature of the token function
+depends on the table concerned and on the partitioner used by the cluster.
+
+The type of the arguments of ``token`` depends on the types of the partition key columns. The return type depends on
+the partitioner in use:
+
+- For Murmur3Partitioner, the return type is ``bigint``.
+- For RandomPartitioner, the return type is ``varint``.
+- For ByteOrderedPartitioner, the return type is ``blob``.
+
+For instance, in a cluster using the default Murmur3Partitioner, if a table is defined by::
+
+    CREATE TABLE users (
+        userid text PRIMARY KEY,
+        username text
+    )
+
+then the ``token`` function will take a single argument of type ``text`` (in that case, the partition key is ``userid``;
+there are no clustering columns so the partition key is the same as the primary key), and the return type will be
+``bigint``.
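+
+A common use of ``token`` is to page through all rows of a table by token ranges. For instance, with the table above::
+
+    SELECT * FROM users WHERE token(userid) > token('tom') AND token(userid) < token('bob');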
+
+Uuid
+````
+The ``uuid`` function takes no parameters and generates a random type 4 uuid suitable for use in ``INSERT`` or
+``UPDATE`` statements.
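+
+For instance (assuming a hypothetical ``events`` table whose ``id`` column is of type ``uuid``)::
+
+    INSERT INTO events (id, payload) VALUES (uuid(), 'some event');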
+
+.. _timeuuid-functions:
+
+Timeuuid functions
+``````````````````
+
+``now``
+#######
+
+The ``now`` function takes no arguments and generates, on the coordinator node, a new unique timeuuid at the
+time the function is invoked. Note that this method is useful for insertion but is largely nonsensical in
+``WHERE`` clauses. For instance, a query of the form::
+
+    SELECT * FROM myTable WHERE t = now()
+
+will never return any result by design, since the value returned by ``now()`` is guaranteed to be unique.
+
+``currentTimeUUID`` is an alias of ``now``.
+
+``minTimeuuid`` and ``maxTimeuuid``
+###################################
+
+The ``minTimeuuid`` (resp. ``maxTimeuuid``) function takes a ``timestamp`` value ``t`` (which can be `either a timestamp
+or a date string <timestamps>`) and returns a *fake* ``timeuuid`` corresponding to the *smallest* (resp. *biggest*)
+possible ``timeuuid`` having ``t`` as its timestamp. So for instance::
+
+    SELECT * FROM myTable
+     WHERE t > maxTimeuuid('2013-01-01 00:05+0000')
+       AND t < minTimeuuid('2013-02-02 10:00+0000')
+
+will select all rows where the ``timeuuid`` column ``t`` is strictly older than ``'2013-01-01 00:05+0000'`` but strictly
+younger than ``'2013-02-02 10:00+0000'``. Please note that ``t >= maxTimeuuid('2013-01-01 00:05+0000')`` would still
+*not* select a ``timeuuid`` generated exactly at '2013-01-01 00:05+0000' and is essentially equivalent to ``t >
+maxTimeuuid('2013-01-01 00:05+0000')``.
+
+.. note:: We call the values generated by ``minTimeuuid`` and ``maxTimeuuid`` *fake* UUIDs because they do not respect
+   the Time-Based UUID generation process specified by `RFC 4122 <http://www.ietf.org/rfc/rfc4122.txt>`__. In
+   particular, the values returned by these 2 methods are not unique. This means you should only use those methods
+   for querying (as in the example above). Inserting the result of those methods is almost certainly *a bad idea*.
+
+Datetime functions
+``````````````````
+
+Retrieving the current date/time
+################################
+
+The following functions can be used to retrieve the date/time at the time where the function is invoked:
+
+===================== ===============
+ Function name         Output type
+===================== ===============
+ ``currentTimestamp``  ``timestamp``
+ ``currentDate``       ``date``
+ ``currentTime``       ``time``
+ ``currentTimeUUID``   ``timeUUID``
+===================== ===============
+
+For example, the last 2 days of data can be retrieved using::
+
+    SELECT * FROM myTable WHERE date >= currentDate() - 2d
+
+Time conversion functions
+#########################
+
+A number of functions are provided to “convert” a ``timeuuid``, a ``timestamp`` or a ``date`` into another ``native``
+type.
+
+===================== =============== ===================================================================
+ Function name         Input type      Description
+===================== =============== ===================================================================
+ ``toDate``            ``timeuuid``    Converts the ``timeuuid`` argument into a ``date`` type
+ ``toDate``            ``timestamp``   Converts the ``timestamp`` argument into a ``date`` type
+ ``toTimestamp``       ``timeuuid``    Converts the ``timeuuid`` argument into a ``timestamp`` type
+ ``toTimestamp``       ``date``        Converts the ``date`` argument into a ``timestamp`` type
+ ``toUnixTimestamp``   ``timeuuid``    Converts the ``timeuuid`` argument into a ``bigInt`` raw value
+ ``toUnixTimestamp``   ``timestamp``   Converts the ``timestamp`` argument into a ``bigInt`` raw value
+ ``toUnixTimestamp``   ``date``        Converts the ``date`` argument into a ``bigInt`` raw value
+ ``dateOf``            ``timeuuid``    Similar to ``toTimestamp(timeuuid)`` (DEPRECATED)
+ ``unixTimestampOf``   ``timeuuid``    Similar to ``toUnixTimestamp(timeuuid)`` (DEPRECATED)
+===================== =============== ===================================================================
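+
+For instance, assuming ``myTable`` from the earlier examples has a ``timeuuid`` column ``t``::
+
+    SELECT toDate(t), toTimestamp(t), toUnixTimestamp(t) FROM myTable;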
+
+Blob conversion functions
+`````````````````````````
+A number of functions are provided to “convert” the native types into binary data (``blob``). For every
+``<native-type>`` ``type`` supported by CQL (a notable exception is ``blob``, for obvious reasons), the function
+``typeAsBlob`` takes an argument of type ``type`` and returns it as a ``blob``. Conversely, the function ``blobAsType``
+takes a ``blob`` argument and converts it back to a value of type ``type``. So for instance, ``bigintAsBlob(3)`` is
+``0x0000000000000003`` and ``blobAsBigint(0x0000000000000003)`` is ``3``.
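+
+For instance, reusing the ``users`` table from the ``token`` example, a ``text`` column can be read back as raw
+bytes::
+
+    SELECT textAsBlob(username) FROM users;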
+
+.. _user-defined-scalar-functions:
+
+User-defined functions
+~~~~~~~~~~~~~~~~~~~~~~
+
+User-defined functions allow execution of user-provided code in Cassandra. By default, Cassandra supports defining
+functions in *Java* and *JavaScript*. Support for other JSR 223 compliant scripting languages (such as Python, Ruby, and
+Scala) can be added by adding a JAR to the classpath.
+
+UDFs are part of the Cassandra schema. As such, they are automatically propagated to all nodes in the cluster.
+
+UDFs can be *overloaded*, i.e. multiple UDFs with different argument types can share the same function name. Example::
+
+    CREATE FUNCTION sample ( arg int ) ...;
+    CREATE FUNCTION sample ( arg text ) ...;
+
+User-defined functions are susceptible to all of the normal problems with the chosen programming language. Accordingly,
+implementations should be safe against null pointer exceptions, illegal arguments, or any other potential source of
+exceptions. An exception during function execution will result in the entire statement failing.
+
+It is valid to use *complex* types like collections, tuple types and user-defined types as argument and return types.
+Tuple types and user-defined types are handled by the conversion functions of the DataStax Java Driver. Please see the
+documentation of the Java Driver for details on handling tuple types and user-defined types.
+
+Arguments for functions can be literals or terms. Prepared statement placeholders can be used, too.
+
+Note that you can use the dollar-quoted string syntax (``$$ ... $$``) to enclose the UDF source code. For example::
+
+    CREATE FUNCTION some_function ( arg int )
+        RETURNS NULL ON NULL INPUT
+        RETURNS int
+        LANGUAGE java
+        AS $$ return arg; $$;
+
+    SELECT some_function(column) FROM atable ...;
+    UPDATE atable SET col = some_function(?) ...;
+
+    CREATE TYPE custom_type (txt text, i int);
+    CREATE FUNCTION fct_using_udt ( udtarg frozen<custom_type> )
+        RETURNS NULL ON NULL INPUT
+        RETURNS text
+        LANGUAGE java
+        AS $$ return udtarg.getString("txt"); $$;
+
+User-defined functions can be used in ``SELECT``, ``INSERT`` and ``UPDATE`` statements.
+
+The implicitly available ``udfContext`` field (or binding for script UDFs) provides the necessary functionality to
+create new UDT and tuple values::
+
+    CREATE TYPE custom_type (txt text, i int);
+    CREATE FUNCTION fct_using_udt ( somearg int )
+        RETURNS NULL ON NULL INPUT
+        RETURNS custom_type
+        LANGUAGE java
+        AS $$
+            UDTValue udt = udfContext.newReturnUDTValue();
+            udt.setString("txt", "some string");
+            udt.setInt("i", 42);
+            return udt;
+        $$;
+
+The definition of the ``UDFContext`` interface can be found in the Apache Cassandra source code for
+``org.apache.cassandra.cql3.functions.UDFContext``.
+
+.. code-block:: java
+
+    public interface UDFContext
+    {
+        UDTValue newArgUDTValue(String argName);
+        UDTValue newArgUDTValue(int argNum);
+        UDTValue newReturnUDTValue();
+        UDTValue newUDTValue(String udtName);
+        TupleValue newArgTupleValue(String argName);
+        TupleValue newArgTupleValue(int argNum);
+        TupleValue newReturnTupleValue();
+        TupleValue newTupleValue(String cqlDefinition);
+    }
+
+Java UDFs already have some imports for common interfaces and classes defined. These imports are:
+
+.. code-block:: java
+
+    import java.nio.ByteBuffer;
+    import java.util.List;
+    import java.util.Map;
+    import java.util.Set;
+    import org.apache.cassandra.cql3.functions.UDFContext;
+    import com.datastax.driver.core.TypeCodec;
+    import com.datastax.driver.core.TupleValue;
+    import com.datastax.driver.core.UDTValue;
+
+Please note that these convenience imports are not available for script UDFs.
+
+.. _create-function-statement:
+
+CREATE FUNCTION
+```````````````
+
+Creating a new user-defined function uses the ``CREATE FUNCTION`` statement:
+
+.. productionlist::
+   create_function_statement: CREATE [ OR REPLACE ] FUNCTION [ IF NOT EXISTS ]
+                            :     `function_name` '(' `arguments_declaration` ')'
+                            :     [ CALLED | RETURNS NULL ] ON NULL INPUT
+                            :     RETURNS `cql_type`
+                            :     LANGUAGE `identifier`
+                            :     AS `string`
+   arguments_declaration: `identifier` `cql_type` ( ',' `identifier` `cql_type` )*
+
+For instance::
+
+    CREATE OR REPLACE FUNCTION somefunction(somearg int, anotherarg text, complexarg frozen<someUDT>, listarg list<bigint>)
+        RETURNS NULL ON NULL INPUT
+        RETURNS text
+        LANGUAGE java
+        AS $$
+            // some Java code
+        $$;
+
+    CREATE FUNCTION IF NOT EXISTS akeyspace.fname(someArg int)
+        CALLED ON NULL INPUT
+        RETURNS text
+        LANGUAGE java
+        AS $$
+            // some Java code
+        $$;
+
+``CREATE FUNCTION`` with the optional ``OR REPLACE`` keywords either creates a function or replaces an existing one with
+the same signature. A ``CREATE FUNCTION`` without ``OR REPLACE`` fails if a function with the same signature already
+exists.
+
+If the optional ``IF NOT EXISTS`` keywords are used, the function will
+only be created if another function with the same signature does not
+exist.
+
+``OR REPLACE`` and ``IF NOT EXISTS`` cannot be used together.
+
+Behavior on invocation with ``null`` values must be defined for each
+function. There are two options:
+
+#. ``RETURNS NULL ON NULL INPUT`` declares that the function will always
+   return ``null`` if any of the input arguments is ``null``.
+#. ``CALLED ON NULL INPUT`` declares that the function will always be
+   executed.
+
+Function Signature
+##################
+
+Signatures are used to distinguish individual functions. The signature consists of:
+
+#. The fully qualified function name - i.e. *keyspace* plus *function-name*
+#. The concatenated list of all argument types
+
+Note that keyspace names, function names and argument types are subject to the default naming conventions and
+case-sensitivity rules.
+
+Functions belong to a keyspace. If no keyspace is specified in the :token:`function_name`, the current keyspace is used (i.e.
+the keyspace specified using the ``USE`` statement). It is not possible to create a user-defined function in one of the
+system keyspaces.
+
+.. _drop-function-statement:
+
+DROP FUNCTION
+`````````````
+
+Dropping a function uses the ``DROP FUNCTION`` statement:
+
+.. productionlist::
+   drop_function_statement: DROP FUNCTION [ IF EXISTS ] `function_name` [ '(' `arguments_signature` ')' ]
+   arguments_signature: `cql_type` ( ',' `cql_type` )*
+
+For instance::
+
+    DROP FUNCTION myfunction;
+    DROP FUNCTION mykeyspace.afunction;
+    DROP FUNCTION afunction ( int );
+    DROP FUNCTION afunction ( text );
+
+You must specify the argument types (:token:`arguments_signature`) of the function to drop if there are multiple
+functions with the same name but a different signature (overloaded functions).
+
+``DROP FUNCTION`` with the optional ``IF EXISTS`` keywords drops a function if it exists, but does not throw an error if
+it doesn't.
+
+.. _aggregate-functions:
+
+Aggregate functions
+^^^^^^^^^^^^^^^^^^^
+
+Aggregate functions work on a set of rows. They receive values for each row and return one value for the whole set.
+
+If ``normal`` columns, ``scalar functions``, ``UDT`` fields, ``writetime`` or ``ttl`` are selected together with
+aggregate functions, the values returned for them will be the ones of the first row matching the query.
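+
+For instance (a sketch assuming the ``plays`` table used in the examples below), ``game`` is taken from the first row
+matching the query::
+
+    SELECT COUNT (*), game FROM plays;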
+
+Native aggregates
+~~~~~~~~~~~~~~~~~
+
+.. _count-function:
+
+Count
+`````
+
+The ``count`` function can be used to count the rows returned by a query. Example::
+
+    SELECT COUNT (*) FROM plays;
+    SELECT COUNT (1) FROM plays;
+
+It also can be used to count the non-null values of a given column::
+
+    SELECT COUNT (scores) FROM plays;
+
+Max and Min
+```````````
+
+The ``max`` and ``min`` functions can be used to compute the maximum and the minimum value returned by a query for a
+given column. For instance::
+
+    SELECT MIN (players), MAX (players) FROM plays WHERE game = 'quake';
+
+Sum
+```
+
+The ``sum`` function can be used to sum up all the values returned by a query for a given column. For instance::
+
+    SELECT SUM (players) FROM plays;
+
+Avg
+```
+
+The ``avg`` function can be used to compute the average of all the values returned by a query for a given column. For
+instance::
+
+    SELECT AVG (players) FROM plays;
+
+.. _user-defined-aggregates-functions:
+
+User-Defined Aggregates
+~~~~~~~~~~~~~~~~~~~~~~~
+
+User-defined aggregates allow the creation of custom aggregate functions. Common examples of aggregate functions are
+*count*, *min*, and *max*.
+
+Each aggregate requires an *initial state* (``INITCOND``, which defaults to ``null``) of type ``STYPE``. The first
+argument of the state function must have type ``STYPE``. The remaining arguments of the state function must match the
+types of the user-defined aggregate arguments. The state function is called once for each row, and the value returned by
+the state function becomes the new state. After all rows are processed, the optional ``FINALFUNC`` is executed with the
+last state value as its argument.
+
+``STYPE`` is mandatory in order to be able to distinguish possibly overloaded versions of the state and/or final
+function (since the overload can appear after creation of the aggregate).
+
+User-defined aggregates can be used in ``SELECT`` statements.
+
+A complete working example for user-defined aggregates (assuming that a keyspace has been selected using the ``USE``
+statement)::
+
+    CREATE OR REPLACE FUNCTION averageState(state tuple<int,bigint>, val int)
+        CALLED ON NULL INPUT
+        RETURNS tuple<int,bigint>
+        LANGUAGE java
+        AS $$
+            if (val != null) {
+                state.setInt(0, state.getInt(0)+1);
+                state.setLong(1, state.getLong(1)+val.intValue());
+            }
+            return state;
+        $$;
+
+    CREATE OR REPLACE FUNCTION averageFinal (state tuple<int,bigint>)
+        CALLED ON NULL INPUT
+        RETURNS double
+        LANGUAGE java
+        AS $$
+            double r = 0;
+            if (state.getInt(0) == 0) return null;
+            r = state.getLong(1);
+            r /= state.getInt(0);
+            return Double.valueOf(r);
+        $$;
+
+    CREATE OR REPLACE AGGREGATE average(int)
+        SFUNC averageState
+        STYPE tuple<int,bigint>
+        FINALFUNC averageFinal
+        INITCOND (0, 0);
+
+    CREATE TABLE atable (
+        pk int PRIMARY KEY,
+        val int
+    );
+
+    INSERT INTO atable (pk, val) VALUES (1,1);
+    INSERT INTO atable (pk, val) VALUES (2,2);
+    INSERT INTO atable (pk, val) VALUES (3,3);
+    INSERT INTO atable (pk, val) VALUES (4,4);
+
+    SELECT average(val) FROM atable;
+
+.. _create-aggregate-statement:
+
+CREATE AGGREGATE
+````````````````
+
+Creating (or replacing) a user-defined aggregate function uses the ``CREATE AGGREGATE`` statement:
+
+.. productionlist::
+   create_aggregate_statement: CREATE [ OR REPLACE ] AGGREGATE [ IF NOT EXISTS ]
+                             :     `function_name` '(' `arguments_signature` ')'
+                             :     SFUNC `function_name`
+                             :     STYPE `cql_type`
+                             :     [ FINALFUNC `function_name` ]
+                             :     [ INITCOND `term` ]
+
+See above for a complete example.
+
+``CREATE AGGREGATE`` with the optional ``OR REPLACE`` keywords either creates an aggregate or replaces an existing one
+with the same signature. A ``CREATE AGGREGATE`` without ``OR REPLACE`` fails if an aggregate with the same signature
+already exists.
+
+``CREATE AGGREGATE`` with the optional ``IF NOT EXISTS`` keywords creates an aggregate if it does not already
+exist.
+
+``OR REPLACE`` and ``IF NOT EXISTS`` cannot be used together.
+
+``STYPE`` defines the type of the state value and must be specified.
+
+The optional ``INITCOND`` defines the initial state value for the aggregate. It defaults to ``null``. A non-\ ``null``
+``INITCOND`` must be specified for state functions that are declared with ``RETURNS NULL ON NULL INPUT``.
+
+``SFUNC`` references an existing function to be used as the state-modifying function. The type of the first argument of the
+state function must match ``STYPE``. The remaining argument types of the state function must match the argument types of
+the aggregate function. State is not updated for state functions declared with ``RETURNS NULL ON NULL INPUT`` and called
+with ``null``.
+
+The optional ``FINALFUNC`` is called just before the aggregate result is returned. It must take only one argument with
+type ``STYPE``. The return type of the ``FINALFUNC`` may be a different type. A final function declared with ``RETURNS
+NULL ON NULL INPUT`` means that the aggregate's return value will be ``null`` if the last state is ``null``.
+
+If no ``FINALFUNC`` is defined, the overall return type of the aggregate function is ``STYPE``. If a ``FINALFUNC`` is
+defined, it is the return type of that function.
+
+.. _drop-aggregate-statement:
+
+DROP AGGREGATE
+``````````````
+
+Dropping a user-defined aggregate function uses the ``DROP AGGREGATE`` statement:
+
+.. productionlist::
+   drop_aggregate_statement: DROP AGGREGATE [ IF EXISTS ] `function_name` [ '(' `arguments_signature` ')' ]
+
+For instance::
+
+    DROP AGGREGATE myAggregate;
+    DROP AGGREGATE myKeyspace.anAggregate;
+    DROP AGGREGATE someAggregate ( int );
+    DROP AGGREGATE someAggregate ( text );
+
+The ``DROP AGGREGATE`` statement removes an aggregate created using ``CREATE AGGREGATE``. You must specify the argument
+types of the aggregate to drop if there are multiple aggregates with the same name but a different signature (overloaded
+aggregates).
+
+``DROP AGGREGATE`` with the optional ``IF EXISTS`` keywords drops an aggregate if it exists, and does nothing if an
+aggregate with that signature does not exist.
diff --git a/src/doc/4.0-beta4/_sources/cql/index.rst.txt b/src/doc/4.0-beta4/_sources/cql/index.rst.txt
new file mode 100644
index 0000000..b4c21cf
--- /dev/null
+++ b/src/doc/4.0-beta4/_sources/cql/index.rst.txt
@@ -0,0 +1,47 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+..
+..     http://www.apache.org/licenses/LICENSE-2.0
+..
+.. Unless required by applicable law or agreed to in writing, software
+.. distributed under the License is distributed on an "AS IS" BASIS,
+.. WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+.. See the License for the specific language governing permissions and
+.. limitations under the License.
+
+.. _cql:
+
+The Cassandra Query Language (CQL)
+==================================
+
+This document describes the Cassandra Query Language (CQL) [#]_. Note that this document describes the latest version of
+the language. However, the `changes <#changes>`_ section provides the diff between the different versions of CQL.
+
+CQL offers a model close to SQL in the sense that data is put in *tables* containing *rows* of *columns*. For
+that reason, when used in this document, these terms (tables, rows and columns) have the same definition as they have
+in SQL.
+
+.. toctree::
+   :maxdepth: 2
+
+   definitions
+   types
+   ddl
+   dml
+   indexes
+   mvs
+   security
+   functions
+   operators
+   json
+   triggers
+   appendices
+   changes
+
+.. [#] Technically, this document describes CQL version 3, which is not backward compatible with CQL versions 1 and 2
+   (which have been deprecated and removed) and differs from them in numerous ways.
diff --git a/src/doc/4.0-beta4/_sources/cql/indexes.rst.txt b/src/doc/4.0-beta4/_sources/cql/indexes.rst.txt
new file mode 100644
index 0000000..81fe429
--- /dev/null
+++ b/src/doc/4.0-beta4/_sources/cql/indexes.rst.txt
@@ -0,0 +1,83 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+..
+..     http://www.apache.org/licenses/LICENSE-2.0
+..
+.. Unless required by applicable law or agreed to in writing, software
+.. distributed under the License is distributed on an "AS IS" BASIS,
+.. WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+.. See the License for the specific language governing permissions and
+.. limitations under the License.
+
+.. highlight:: cql
+
+.. _secondary-indexes:
+
+Secondary Indexes
+-----------------
+
+CQL supports creating secondary indexes on tables, allowing queries on the table to use those indexes. A secondary index
+is identified by a name defined by:
+
+.. productionlist::
+   index_name: re('[a-zA-Z_0-9]+')
+
+.. _create-index-statement:
+
+CREATE INDEX
+^^^^^^^^^^^^
+
+Creating a secondary index on a table uses the ``CREATE INDEX`` statement:
+
+.. productionlist::
+   create_index_statement: CREATE [ CUSTOM ] INDEX [ IF NOT EXISTS ] [ `index_name` ]
+                         :     ON `table_name` '(' `index_identifier` ')'
+                         :     [ USING `string` [ WITH OPTIONS = `map_literal` ] ]
+   index_identifier: `column_name`
+                   :| ( KEYS | VALUES | ENTRIES | FULL ) '(' `column_name` ')'
+
+For instance::
+
+    CREATE INDEX userIndex ON NerdMovies (user);
+    CREATE INDEX ON Mutants (abilityId);
+    CREATE INDEX ON users (keys(favs));
+    CREATE CUSTOM INDEX ON users (email) USING 'path.to.the.IndexClass';
+    CREATE CUSTOM INDEX ON users (email) USING 'path.to.the.IndexClass' WITH OPTIONS = {'storage': '/mnt/ssd/indexes/'};
+
+The ``CREATE INDEX`` statement is used to create a new (automatic) secondary index for a given (existing) column in a
+given table. A name for the index itself can be specified before the ``ON`` keyword, if desired. If data already exists
+for the column, it will be indexed asynchronously. After the index is created, new data for the column is indexed
+automatically at insertion time.
+
+Attempting to create an already existing index will return an error unless the ``IF NOT EXISTS`` option is used. If it
+is used, the statement will be a no-op if the index already exists.
+
+Indexes on Map Keys
+~~~~~~~~~~~~~~~~~~~
+
+When creating an index on a :ref:`map <maps>`, you may index either the keys or the values. If the column identifier is
+placed within the ``keys()`` function, the index will be on the map keys, allowing you to use ``CONTAINS KEY`` in
+``WHERE`` clauses. Otherwise, the index will be on the map values.
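+
+For instance, assuming ``favs`` is a map column of the ``users`` table (as in the examples above)::
+
+    CREATE INDEX ON users (keys(favs));
+    SELECT * FROM users WHERE favs CONTAINS KEY 'movies';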
+
+.. _drop-index-statement:
+
+DROP INDEX
+^^^^^^^^^^
+
+Dropping a secondary index uses the ``DROP INDEX`` statement:
+
+.. productionlist::
+   drop_index_statement: DROP INDEX [ IF EXISTS ] `index_name`
+
+The ``DROP INDEX`` statement is used to drop an existing secondary index. The argument of the statement is the index
+name, which may optionally specify the keyspace of the index.
+
+If the index does not exist, the statement will return an error, unless ``IF EXISTS`` is used, in which case the
+operation is a no-op.
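+
+For instance::
+
+    DROP INDEX userIndex;
+    DROP INDEX IF EXISTS mykeyspace.userIndex;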
diff --git a/src/doc/4.0-beta4/_sources/cql/json.rst.txt b/src/doc/4.0-beta4/_sources/cql/json.rst.txt
new file mode 100644
index 0000000..539180a
--- /dev/null
+++ b/src/doc/4.0-beta4/_sources/cql/json.rst.txt
@@ -0,0 +1,115 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+..
+..     http://www.apache.org/licenses/LICENSE-2.0
+..
+.. Unless required by applicable law or agreed to in writing, software
+.. distributed under the License is distributed on an "AS IS" BASIS,
+.. WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+.. See the License for the specific language governing permissions and
+.. limitations under the License.
+
+.. highlight:: cql
+
+.. _cql-json:
+
+JSON Support
+------------
+
+Cassandra 2.2 introduces JSON support to :ref:`SELECT <select-statement>` and :ref:`INSERT <insert-statement>`
+statements. This support does not fundamentally alter the CQL API (for example, the schema is still enforced); it simply
+provides a convenient way to work with JSON documents.
+
+SELECT JSON
+^^^^^^^^^^^
+
+With ``SELECT`` statements, the ``JSON`` keyword can be used to return each row as a single ``JSON`` encoded map. The
+remainder of the ``SELECT`` statement behavior is the same.
+
+The result map keys are the same as the column names in a normal result set. For example, a statement like ``SELECT JSON
+a, ttl(b) FROM ...`` would result in a map with keys ``"a"`` and ``"ttl(b)"``. However, there is one notable exception:
+for symmetry with ``INSERT JSON`` behavior, case-sensitive column names with upper-case letters will be surrounded with
+double quotes. For example, ``SELECT JSON myColumn FROM ...`` would result in a map key ``"\"myColumn\""`` (note the
+escaped quotes).
+
+The map values will be ``JSON``-encoded representations (as described below) of the result set values.
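+
+For instance (a sketch assuming a hypothetical ``users`` table with ``username`` and ``birth_year`` columns), each row
+is returned as a single ``json`` column::
+
+    SELECT JSON username, birth_year FROM users;
+    -- returns rows such as: {"username": "jdoe", "birth_year": 1981}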
+
+INSERT JSON
+^^^^^^^^^^^
+
+With ``INSERT`` statements, the new ``JSON`` keyword can be used to enable inserting a ``JSON`` encoded map as a single
+row. The format of the ``JSON`` map should generally match that returned by a ``SELECT JSON`` statement on the same
+table. In particular, case-sensitive column names should be surrounded with double quotes. For example, to insert into a
+table with two columns named "myKey" and "value", you would do the following::
+
+    INSERT INTO mytable JSON '{ "\"myKey\"": 0, "value": 0}'
+
+By default (or if ``DEFAULT NULL`` is explicitly used), a column omitted from the ``JSON`` map will be set to ``NULL``,
+meaning that any pre-existing value for that column will be removed (resulting in a tombstone being created).
+Alternatively, if the ``DEFAULT UNSET`` directive is used after the value, omitted column values will be left unset,
+meaning that pre-existing values for those columns will be preserved.
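+
+For instance, the following insert (reusing the two-column table above) updates ``"myKey"`` without touching any
+pre-existing ``value``::
+
+    INSERT INTO mytable JSON '{ "\"myKey\"": 1 }' DEFAULT UNSET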
+
+
+JSON Encoding of Cassandra Data Types
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Where possible, Cassandra will represent and accept data types in their native ``JSON`` representation. Cassandra will
+also accept string representations matching the CQL literal format for all single-field types. For example, floats,
+ints, UUIDs, and dates can be represented by CQL literal strings. However, compound types, such as collections, tuples,
+and user-defined types must be represented by native ``JSON`` collections (maps and lists) or a JSON-encoded string
+representation of the collection.
+
+The following table describes the encodings that Cassandra will accept in ``INSERT JSON`` values (and ``fromJson()``
+arguments) as well as the format Cassandra will use when returning data for ``SELECT JSON`` statements (and
+``toJson()``):
+
+=============== ======================== =============== ==============================================================
+ Type            Formats accepted         Return format   Notes
+=============== ======================== =============== ==============================================================
+ ``ascii``       string                   string          Uses JSON's ``\u`` character escape
+ ``bigint``      integer, string          integer         String must be valid 64 bit integer
+ ``blob``        string                   string          String should be 0x followed by an even number of hex digits
+ ``boolean``     boolean, string          boolean         String must be "true" or "false"
+ ``date``        string                   string          Date in format ``YYYY-MM-DD``, timezone UTC
+ ``decimal``     integer, float, string   float           May exceed 32 or 64-bit IEEE-754 floating point precision in
+                                                          client-side decoder
+ ``double``      integer, float, string   float           String must be valid integer or float
+ ``float``       integer, float, string   float           String must be valid integer or float
+ ``inet``        string                   string          IPv4 or IPv6 address
+ ``int``         integer, string          integer         String must be valid 32 bit integer
+ ``list``        list, string             list            Uses JSON's native list representation
+ ``map``         map, string              map             Uses JSON's native map representation
+ ``smallint``    integer, string          integer         String must be valid 16 bit integer
+ ``set``         list, string             list            Uses JSON's native list representation
+ ``text``        string                   string          Uses JSON's ``\u`` character escape
+ ``time``        string                   string          Time of day in format ``HH-MM-SS[.fffffffff]``
+ ``timestamp``   integer, string          string          A timestamp. String constants allow inputting :ref:`timestamps
+                                                          as dates <timestamps>`. Datestamps with format ``YYYY-MM-DD
+                                                          HH:MM:SS.SSS`` are returned.
+ ``timeuuid``    string                   string          Type 1 UUID. See :token:`constant` for the UUID format
+ ``tinyint``     integer, string          integer         String must be valid 8 bit integer
+ ``tuple``       list, string             list            Uses JSON's native list representation
+ ``UDT``         map, string              map             Uses JSON's native map representation with field names as keys
+ ``uuid``        string                   string          See :token:`constant` for the UUID format
+ ``varchar``     string                   string          Uses JSON's ``\u`` character escape
+ ``varint``      integer, string          integer         Variable length; may overflow 32 or 64 bit integers in
+                                                          client-side decoder
+=============== ======================== =============== ==============================================================
+
+The fromJson() Function
+^^^^^^^^^^^^^^^^^^^^^^^
+
+The ``fromJson()`` function may be used similarly to ``INSERT JSON``, but for a single column value. It may only be used
+in the ``VALUES`` clause of an ``INSERT`` statement or as one of the column values in an ``UPDATE``, ``DELETE``, or
+``SELECT`` statement. For example, it cannot be used in the selection clause of a ``SELECT`` statement.
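+
+For instance, reusing the hypothetical ``users`` table from the ``SELECT JSON`` sketch above::
+
+    INSERT INTO users (username, birth_year) VALUES ('jdoe', fromJson('1981'));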
+
+The toJson() Function
+^^^^^^^^^^^^^^^^^^^^^
+
+The ``toJson()`` function may be used similarly to ``SELECT JSON``, but for a single column value. It may only be used
+in the selection clause of a ``SELECT`` statement.
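+
+For instance, again with the hypothetical ``users`` table::
+
+    SELECT toJson(username), toJson(birth_year) FROM users WHERE username = 'jdoe';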
diff --git a/src/doc/4.0-beta4/_sources/cql/mvs.rst.txt b/src/doc/4.0-beta4/_sources/cql/mvs.rst.txt
new file mode 100644
index 0000000..200090a
... 86365 lines suppressed ...

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@cassandra.apache.org
For additional commands, e-mail: commits-help@cassandra.apache.org