Posted to commits@cassandra.apache.org by sl...@apache.org on 2016/06/27 18:33:57 UTC

[02/34] cassandra git commit: Add initial in-tree documentation (very incomplete so far)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/cad277be/doc/source/cql.rst
----------------------------------------------------------------------
diff --git a/doc/source/cql.rst b/doc/source/cql.rst
new file mode 100644
index 0000000..8d36258
--- /dev/null
+++ b/doc/source/cql.rst
@@ -0,0 +1,3916 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+..
+..     http://www.apache.org/licenses/LICENSE-2.0
+..
+.. Unless required by applicable law or agreed to in writing, software
+.. distributed under the License is distributed on an "AS IS" BASIS,
+.. WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+.. See the License for the specific language governing permissions and
+.. limitations under the License.
+
+.. highlight:: sql
+
+The Cassandra Query Language (CQL)
+==================================
+
+CQL Syntax
+----------
+
+Preamble
+^^^^^^^^
+
+This document describes the Cassandra Query Language (CQL) [#]_. Note that this document describes the latest version
+of the language. The `changes <#changes>`_ section lists the differences between the successive versions of CQL.
+
+CQL offers a model close to SQL in the sense that data is put in *tables* containing *rows* of *columns*. For
+that reason, when used in this document, these terms (tables, rows and columns) have the same definition as they have
+in SQL. But please note that, as such, they do **not** refer to the concept of rows and columns found in the deprecated
+Thrift API (and in versions 1 and 2 of CQL).
+
+Conventions
+^^^^^^^^^^^
+
+To aid in specifying the CQL syntax, we will use the following conventions in this document:
+
+- Language rules will be given in an informal `BNF variant
+  <http://en.wikipedia.org/wiki/Backus%E2%80%93Naur_Form#Variants>`_ notation. In particular, we'll use square brackets
+  (``[ item ]``) for optional items, and ``*`` and ``+`` for repeated items (where ``+`` implies at least one occurrence).
+- The grammar is provided for documentation purposes and leaves some minor details out (only conveniences that can be
+  ignored). For instance, the comma on the last column definition in a ``CREATE TABLE`` statement is optional but
+  supported if present, even though the grammar in this document suggests otherwise. Also, not everything accepted by the
+  grammar is valid CQL.
+- References to keywords or pieces of CQL code in running text will be shown in a ``fixed-width font``.
+
+Identifiers and keywords
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+The CQL language uses *identifiers* (or *names*) to identify tables, columns and other objects. An identifier is a token
+matching the regular expression ``[a-zA-Z][a-zA-Z0-9_]*``.
+
+A number of such identifiers, like ``SELECT`` or ``WITH``, are *keywords*. They have a fixed meaning for the language
+and most are reserved. The list of those keywords can be found in `Appendix A <#appendixA>`__.
+
+Identifiers and (unquoted) keywords are case insensitive. Thus ``SELECT`` is the same as ``select`` or ``sElEcT``, and
+``myId`` is the same as ``myid`` or ``MYID``. A convention often used (in particular by the samples of this
+documentation) is to use upper case for keywords and lower case for other identifiers.
+
+There is a second kind of identifier called a *quoted identifier*, defined by enclosing an arbitrary (non-empty)
+sequence of characters in double-quotes (``"``). Quoted identifiers are never keywords. Thus ``"select"`` is not a
+reserved keyword and can be used to refer to a column (though doing so is not particularly advised), while ``select``
+would raise a parsing error. Also, contrary to unquoted identifiers and keywords, quoted identifiers are case
+sensitive (``"My Quoted Id"`` is *different* from ``"my quoted id"``). A fully lowercase quoted identifier that matches
+``[a-zA-Z][a-zA-Z0-9_]*`` is however *equivalent* to the unquoted identifier obtained by removing the double-quotes (so
+``"myid"`` is equivalent to ``myid`` and to ``myId`` but different from ``"myId"``).  Inside a quoted identifier, the
+double-quote character can be escaped by repeating it, so ``"foo "" bar"`` is a valid identifier.
+
+**Warning**: *quoted identifiers* allow declaring columns with arbitrary names, and those can sometimes clash with
+specific names used by the server. For instance, when using conditional updates, the server will respond with a
+result-set containing a special result named ``"[applied]"``. If you've declared a column with such a name, this could
+potentially confuse some tools and should be avoided. In general, unquoted identifiers should be preferred, but if you
+use quoted identifiers, it is strongly advised to avoid any name enclosed in square brackets (like ``"[applied]"``) and
+any name that looks like a function call (like ``"f(x)"``).
+
+To sum up, we have:
+
+.. productionlist::
+   identifier: `unquoted_identifier` | `quoted_identifier`
+   unquoted_identifier: [a-zA-Z][a-zA-Z0-9_]*
+   quoted_identifier: "\"" [any character where " can appear if doubled]+ "\""
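+
+For illustration, the case-sensitivity rules above play out as follows (the ``users`` table here is hypothetical and
+used only for this example)::
+
+    -- Hypothetical table, for illustration only
+    CREATE TABLE users ( "userID" int PRIMARY KEY, name text );
+
+    -- Fails: the unquoted userid is equivalent to "userid", which does not match "userID"
+    SELECT userid FROM users;
+
+    -- Works: quoting preserves the original case
+    SELECT "userID" FROM users;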
+
+Constants
+^^^^^^^^^
+
+CQL defines the following kinds of *constants*: strings, integers,
+floats, booleans, uuids and blobs:
+
+-  A string constant is an arbitrary sequence of characters
+   enclosed in single-quotes (``'``). One can include a single-quote in
+   a string by repeating it, e.g. ``'It''s raining today'``. Those are
+   not to be confused with quoted identifiers, which use double-quotes.
+-  An integer constant is defined by ``'-'?[0-9]+``.
+-  A float constant is defined by
+   ``'-'?[0-9]+('.'[0-9]*)?([eE][+-]?[0-9]+)?``. On top of that, ``NaN``
+   and ``Infinity`` are also float constants.
+-  A boolean constant is either ``true`` or ``false``, up to
+   case-insensitivity (i.e. ``True`` is a valid boolean constant).
+-  A
+   `UUID <http://en.wikipedia.org/wiki/Universally_unique_identifier>`__
+   constant is defined by ``hex{8}-hex{4}-hex{4}-hex{4}-hex{12}`` where
+   ``hex`` is a hexadecimal character, e.g. ``[0-9a-fA-F]``, and ``{4}``
+   is the number of such characters.
+-  A blob constant is a hexadecimal number defined by ``0[xX](hex)+``
+   where ``hex`` is a hexadecimal character, e.g. ``[0-9a-fA-F]``.
+
+For how these constants are typed, see the `data types
+section <#types>`__.
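+
+As a brief illustration (the ``events`` table used here is hypothetical), a single statement can mix several kinds of
+constants::
+
+    -- Hypothetical table; shows a uuid, a string (with an escaped quote), a boolean, a float and a blob constant
+    INSERT INTO events (id, name, enabled, score, payload)
+    VALUES (123e4567-e89b-12d3-a456-426655440000, 'It''s a test', true, 4.2, 0xCAFEBABE);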
+
+Comments
+^^^^^^^^
+
+A comment in CQL is a line beginning with either double dashes (``--``) or
+a double slash (``//``).
+
+Multi-line comments are also supported through enclosure within ``/*``
+and ``*/`` (but nesting is not supported).
+
+::
+
+    -- This is a comment
+    // This is a comment too
+    /* This is
+       a multi-line comment */
+
+Statements
+^^^^^^^^^^
+
+CQL consists of statements. As in SQL, these statements can be divided
+into three categories:
+
+-  Data definition statements, which allow setting and changing the way data
+   is stored.
+-  Data manipulation statements, which allow changing data.
+-  Queries, to look up data.
+
+All statements end with a semicolon (``;``), but that semicolon can be
+omitted when dealing with a single statement. The supported statements
+are described in the following sections. When describing the grammar of
+said statements, we will reuse the non-terminal symbols defined below:
+
+::
+
+    <identifier> ::= any quoted or unquoted identifier, excluding reserved keywords
+     <tablename> ::= (<identifier> '.')? <identifier>
+
+        <string> ::= a string constant
+       <integer> ::= an integer constant
+         <float> ::= a float constant
+        <number> ::= <integer> | <float>
+          <uuid> ::= a uuid constant
+       <boolean> ::= a boolean constant
+           <hex> ::= a blob constant
+
+      <constant> ::= <string>
+                   | <number>
+                   | <uuid>
+                   | <boolean>
+                   | <hex>
+      <variable> ::= '?'
+                   | ':' <identifier>
+          <term> ::= <constant>
+                   | <collection-literal>
+                   | <variable>
+                   | <function> '(' (<term> (',' <term>)*)? ')'
+
+      <collection-literal> ::= <map-literal>
+                             | <set-literal>
+                             | <list-literal>
+             <map-literal> ::= '{' ( <term> ':' <term> ( ',' <term> ':' <term> )* )? '}'
+             <set-literal> ::= '{' ( <term> ( ',' <term> )* )? '}'
+            <list-literal> ::= '[' ( <term> ( ',' <term> )* )? ']'
+
+      <function> ::= <identifier>
+
+    <properties> ::= <property> (AND <property>)*
+      <property> ::= <identifier> '=' ( <identifier> | <constant> | <map-literal> )
+Please note that not every possible production of the grammar above
+will be valid in practice. Most notably, ``<variable>`` and nested
+``<collection-literal>`` are currently not allowed inside
+``<collection-literal>``.
+
+A ``<variable>`` can be either anonymous (a question mark (``?``)) or
+named (an identifier preceded by ``:``). Both declare a bind variable
+for `prepared statements <#preparedStatement>`__. The only difference
+between an anonymous and a named variable is that a named one will be
+easier to refer to (how exactly depends on the client driver used).
+
+The ``<properties>`` production is used by statements that create and
+alter keyspaces and tables. Each ``<property>`` is either a *simple*
+one, in which case it just has a value, or a *map* one, in which case
+its value is a map grouping sub-options. The following will refer to
+one or the other as the *kind* (*simple* or *map*) of the property.
+
+A ``<tablename>`` will be used to identify a table. This is an
+identifier representing the table name that can be preceded by a
+keyspace name. The keyspace name, if provided, allows identifying a table
+in a keyspace other than the currently active one (the currently active
+keyspace is set through the ``USE`` statement).
+
+For supported ``<function>``, see the section on
+`functions <#functions>`__.
+
+Strings can be enclosed either in single quotes or in two dollar
+characters (``$$``). The second syntax has been introduced to allow strings that
+contain single quotes. Typical candidates for such strings are source
+code fragments for user-defined functions.
+
+*Sample:*
+
+::
+
+    'some string value'
+
+    $$double-dollar string can contain single ' quotes$$
+
+Prepared Statement
+^^^^^^^^^^^^^^^^^^
+
+CQL supports *prepared statements*. A prepared statement is an
+optimization that allows parsing a query only once but executing it
+multiple times with different concrete values.
+
+In a statement, each time a column value is expected (in the data
+manipulation and query statements), a ``<variable>`` (see above) can be
+used instead. A statement with bind variables must then be *prepared*.
+Once it has been prepared, it can be executed by providing concrete values
+for the bind variables. The exact procedure to prepare a statement and
+execute a prepared statement depends on the CQL driver used and is
+beyond the scope of this document.
+
+In addition to providing column values, bind markers may be used to
+provide values for ``LIMIT``, ``TIMESTAMP``, and ``TTL`` clauses. If
+anonymous bind markers are used, the names for the query parameters will
+be ``[limit]``, ``[timestamp]``, and ``[ttl]``, respectively.
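+
+As a sketch (the ``users`` table is hypothetical), a single statement may combine a named bind marker for a column
+value with an anonymous one for the ``LIMIT`` clause::
+
+    SELECT * FROM users WHERE userid = :id LIMIT ?;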
+
+Data Definition
+---------------
+
+CREATE KEYSPACE
+^^^^^^^^^^^^^^^
+
+*Syntax:*
+
+::
+
+    <create-keyspace-stmt> ::= CREATE KEYSPACE (IF NOT EXISTS)? <identifier> WITH <properties>
+
+*Sample:*
+
+::
+
+    CREATE KEYSPACE Excelsior
+        WITH replication = {'class': 'SimpleStrategy', 'replication_factor' : 3};
+
+    CREATE KEYSPACE Excalibur
+        WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1' : 1, 'DC2' : 3}
+        AND durable_writes = false;
+
+The ``CREATE KEYSPACE`` statement creates a new top-level *keyspace*.
+A keyspace is a namespace that defines a replication strategy and some
+options for a set of tables. Valid keyspace names are identifiers
+composed exclusively of alphanumerical characters and whose length is
+at most 32 characters. Note that, as identifiers, keyspace names are
+case insensitive: use a quoted identifier for case sensitive keyspace
+names.
+
+The supported ``<properties>`` for ``CREATE KEYSPACE`` are:
+
++----------------------+------------+-------------+-----------+-------------------------------------------------------------------------------------------------------+
+| name                 | kind       | mandatory   | default   | description                                                                                           |
++======================+============+=============+===========+=======================================================================================================+
+| ``replication``      | *map*      | yes         |           | The replication strategy and options to use for the keyspace.                                         |
++----------------------+------------+-------------+-----------+-------------------------------------------------------------------------------------------------------+
+| ``durable_writes``   | *simple*   | no          | true      | Whether to use the commit log for updates on this keyspace (disable this option at your own risk!).   |
++----------------------+------------+-------------+-----------+-------------------------------------------------------------------------------------------------------+
+
+The ``replication`` ``<property>`` is mandatory. It must at least
+contain the ``'class'`` sub-option, which defines the replication
+strategy class to use. The rest of the sub-options depend on that
+replication strategy class. By default, Cassandra supports the following
+``'class'`` values:
+
+-  ``'SimpleStrategy'``: A simple strategy that defines a single
+   replication factor for the whole cluster. The only sub-option
+   supported is ``'replication_factor'``, which defines that replication
+   factor and is mandatory.
+-  ``'NetworkTopologyStrategy'``: A replication strategy that allows setting
+   the replication factor independently for each data-center. The
+   rest of the sub-options are key-value pairs where the key
+   is the name of a data-center and the value is the replication factor for
+   that data-center.
+-  ``'OldNetworkTopologyStrategy'``: A legacy replication strategy. It
+   should be avoided for new keyspaces in favor of
+   ``'NetworkTopologyStrategy'``.
+
+Attempting to create an already existing keyspace will return an error
+unless the ``IF NOT EXISTS`` option is used. If it is used, the
+statement will be a no-op if the keyspace already exists.
+
+USE
+^^^
+
+*Syntax:*
+
+::
+
+    <use-stmt> ::= USE <identifier>
+
+*Sample:*
+
+::
+
+    USE myApp;
+
+The ``USE`` statement takes an existing keyspace name as argument and
+sets it as the per-connection current working keyspace. All subsequent
+keyspace-specific actions will be performed in the context of the
+selected keyspace, unless `otherwise specified <#statements>`__, until
+another ``USE`` statement is issued or the connection terminates.
+
+ALTER KEYSPACE
+^^^^^^^^^^^^^^
+
+*Syntax:*
+
+::
+
+    <alter-keyspace-stmt> ::= ALTER KEYSPACE <identifier> WITH <properties>
+
+*Sample:*
+
+::
+
+    ALTER KEYSPACE Excelsior
+        WITH replication = {'class': 'SimpleStrategy', 'replication_factor' : 4};
+
+The ``ALTER KEYSPACE`` statement alters the properties of an existing
+keyspace. The supported ``<properties>`` are the same as for the
+`CREATE KEYSPACE <#createKeyspaceStmt>`__ statement.
+
+DROP KEYSPACE
+^^^^^^^^^^^^^
+
+*Syntax:*
+
+::
+
+    <drop-keyspace-stmt> ::= DROP KEYSPACE ( IF EXISTS )? <identifier>
+
+*Sample:*
+
+::
+
+    DROP KEYSPACE myApp;
+
+A ``DROP KEYSPACE`` statement results in the immediate, irreversible
+removal of an existing keyspace, including all column families in it,
+and all data contained in those column families.
+
+If the keyspace does not exist, the statement will return an error,
+unless ``IF EXISTS`` is used, in which case the operation is a no-op.
+
+CREATE TABLE
+^^^^^^^^^^^^
+
+*Syntax:*
+
+::
+
+    <create-table-stmt> ::= CREATE ( TABLE | COLUMNFAMILY ) ( IF NOT EXISTS )? <tablename>
+                              '(' <column-definition> ( ',' <column-definition> )* ')'
+                              ( WITH <option> ( AND <option>)* )?
+
+    <column-definition> ::= <identifier> <type> ( STATIC )? ( PRIMARY KEY )?
+                          | PRIMARY KEY '(' <partition-key> ( ',' <identifier> )* ')'
+
+    <partition-key> ::= <identifier>
+                      | '(' <identifier> ( ',' <identifier> )* ')'
+
+    <option> ::= <property>
+               | COMPACT STORAGE
+               | CLUSTERING ORDER
+
+*Sample:*
+
+::
+
+    CREATE TABLE monkeySpecies (
+        species text PRIMARY KEY,
+        common_name text,
+        population varint,
+        average_size int
+    ) WITH comment='Important biological records'
+       AND read_repair_chance = 1.0;
+
+    CREATE TABLE timeline (
+        userid uuid,
+        posted_month int,
+        posted_time uuid,
+        body text,
+        posted_by text,
+        PRIMARY KEY (userid, posted_month, posted_time)
+    ) WITH compaction = { 'class' : 'LeveledCompactionStrategy' };
+
+The ``CREATE TABLE`` statement creates a new table. Each such table is
+a set of *rows* (usually representing related entities) for which it
+defines a number of properties. A table is defined by a
+`name <#createTableName>`__, the columns composing its rows,
+and a number of `options <#createTableOptions>`__. Note
+that the ``CREATE COLUMNFAMILY`` syntax is supported as an alias for
+``CREATE TABLE`` (for historical reasons).
+
+Attempting to create an already existing table will return an error
+unless the ``IF NOT EXISTS`` option is used. If it is used, the
+statement will be a no-op if the table already exists.
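+
+For instance, the following statement is a no-op if a ``monkeySpecies`` table already exists in the current keyspace::
+
+    CREATE TABLE IF NOT EXISTS monkeySpecies (
+        species text PRIMARY KEY,
+        common_name text
+    );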
+
+``<tablename>``
+^^^^^^^^^^^^^^^
+
+Valid table names are the same as valid `keyspace
+names <#createKeyspaceStmt>`__ (alphanumerical identifiers of up to 32
+characters). If the table name is provided alone, the table is created
+within the current keyspace (see ``USE``), but if it is prefixed by
+an existing keyspace name (see the ``<tablename>`` grammar `above <#statements>`__),
+it is created in the specified keyspace (but does **not**
+change the current keyspace).
+
+``<column-definition>``
+^^^^^^^^^^^^^^^^^^^^^^^
+
+A ``CREATE TABLE`` statement defines the columns that rows of the table
+can have. A *column* is defined by its name (an identifier) and its type
+(see the `data types <#types>`__ section for more details on allowed
+types and their properties).
+
+Within a table, a row is uniquely identified by its ``PRIMARY KEY`` (or
+more simply the key), and hence all table definitions **must** define a
+``PRIMARY KEY`` (and only one). A ``PRIMARY KEY`` is composed of one or more
+of the columns defined in the table. If the ``PRIMARY KEY`` is only one
+column, this can be specified directly after the column definition.
+Otherwise, it must be specified by following ``PRIMARY KEY`` with the
+comma-separated list of column names composing the key within
+parentheses. Note that:
+
+::
+
+    CREATE TABLE t (
+        k int PRIMARY KEY,
+        other text
+    )
+
+is equivalent to
+
+::
+
+    CREATE TABLE t (
+        k int,
+        other text,
+        PRIMARY KEY (k)
+    )
+
+Partition key and clustering columns
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+In CQL, the order in which columns are defined for the ``PRIMARY KEY``
+matters. The first column of the key is called the *partition key*. It
+has the property that all the rows sharing the same partition key (even
+across tables in fact) are stored on the same physical node. Also,
+insertions/updates/deletions on rows sharing the same partition key for a
+given table are performed *atomically* and in *isolation*. Note that it
+is possible to have a composite partition key, i.e. a partition key
+formed of multiple columns, using an extra set of parentheses to define
+which columns form the partition key.
+
+The remaining columns of the ``PRIMARY KEY`` definition, if any, are
+called *clustering columns*. On a given physical node, rows for a
+given partition key are stored in the order induced by the clustering
+columns, making the retrieval of rows in that clustering order
+particularly efficient (see ``SELECT``).
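+
+As an illustrative sketch (the ``sensor_readings`` table is hypothetical and not used elsewhere in this document),
+the following definition uses a composite partition key of ``(sensor_id, day)`` and one clustering column,
+``reading_time``::
+
+    -- Hypothetical table, for illustration only
+    CREATE TABLE sensor_readings (
+        sensor_id uuid,
+        day date,
+        reading_time timestamp,
+        value double,
+        PRIMARY KEY ((sensor_id, day), reading_time)
+    );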
+
+``STATIC`` columns
+^^^^^^^^^^^^^^^^^^
+
+Some columns can be declared as ``STATIC`` in a table definition. A
+column that is static will be "shared" by all the rows belonging to the
+same partition (having the same partition key). For instance, in:
+
+::
+
+    CREATE TABLE test (
+        pk int,
+        t int,
+        v text,
+        s text static,
+        PRIMARY KEY (pk, t)
+    );
+    INSERT INTO test(pk, t, v, s) VALUES (0, 0, 'val0', 'static0');
+    INSERT INTO test(pk, t, v, s) VALUES (0, 1, 'val1', 'static1');
+    SELECT * FROM test WHERE pk=0 AND t=0;
+
+the last query will return ``'static1'`` as the value for ``s``, since ``s``
+is static and thus the 2nd insertion modified this "shared" value. Note
+however that static columns are only static within a given partition,
+and if in the example above both rows were from different partitions
+(i.e. if they had different values for ``pk``), then the 2nd insertion
+would not have modified the value of ``s`` for the first row.
+
+A few restrictions apply to when static columns are allowed:
+
+-  tables with the ``COMPACT STORAGE`` option (see below) cannot have
+   them
+-  a table without clustering columns cannot have static columns (in a
+   table without clustering columns, every partition has only one row,
+   and so every column is inherently static).
+-  only non ``PRIMARY KEY`` columns can be static
+
+``<option>``
+^^^^^^^^^^^^
+
+The ``CREATE TABLE`` statement supports a number of options that
+control the configuration of a new table. These options can be
+specified after the ``WITH`` keyword.
+
+The first of these options is ``COMPACT STORAGE``. This option is mainly
+targeted towards backward compatibility for definitions created before
+CQL3 (see
+`www.datastax.com/dev/blog/thrift-to-cql3 <http://www.datastax.com/dev/blog/thrift-to-cql3>`__
+for more details). The option also provides a slightly more compact
+layout of data on disk, but at the price of diminished flexibility and
+extensibility for the table. Most notably, ``COMPACT STORAGE`` tables
+cannot have collections nor static columns, and a ``COMPACT STORAGE``
+table with at least one clustering column supports exactly one (as in
+not 0 nor more than 1) column not part of the ``PRIMARY KEY`` definition
+(which implies in particular that you cannot add nor remove columns after
+creation). For those reasons, ``COMPACT STORAGE`` is not recommended
+outside of the backward compatibility reasons mentioned above.
+
+Another option is ``CLUSTERING ORDER``. It allows defining the ordering
+of rows on disk. It takes the list of the clustering column names with,
+for each of them, the on-disk order (ascending or descending). Note that
+this option affects which ``ORDER BY`` clauses are allowed during
+`SELECT <#selectOrderBy>`__.
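+
+For instance, a sketch of ``CLUSTERING ORDER`` usage (reusing the ``timeline`` table shown earlier, with a descending
+read order assumed to be the desired one)::
+
+    CREATE TABLE timeline (
+        userid uuid,
+        posted_month int,
+        posted_time uuid,
+        body text,
+        posted_by text,
+        PRIMARY KEY (userid, posted_month, posted_time)
+    ) WITH CLUSTERING ORDER BY (posted_month DESC, posted_time DESC);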
+
+Table creation supports the following other ``<property>``:
+
++----------------------------------+------------+---------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| option                           | kind       | default       | description                                                                                                                                                                                                                     |
++==================================+============+===============+=================================================================================================================================================================================================================================+
+| ``comment``                      | *simple*   | none          | A free-form, human-readable comment.                                                                                                                                                                                            |
++----------------------------------+------------+---------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| ``read_repair_chance``           | *simple*   | 0.1           | The probability with which to query extra nodes (e.g. more nodes than required by the consistency level) for the purpose of read repairs.                                                                                       |
++----------------------------------+------------+---------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| ``dclocal_read_repair_chance``   | *simple*   | 0             | The probability with which to query extra nodes (e.g. more nodes than required by the consistency level) belonging to the same data center than the read coordinator for the purpose of read repairs.                           |
++----------------------------------+------------+---------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| ``gc_grace_seconds``             | *simple*   | 864000        | Time to wait before garbage collecting tombstones (deletion markers).                                                                                                                                                           |
++----------------------------------+------------+---------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| ``bloom_filter_fp_chance``       | *simple*   | 0.00075       | The target probability of false positive of the sstable bloom filters. Said bloom filters will be sized to provide the provided probability (thus lowering this value impact the size of bloom filters in-memory and on-disk)   |
++----------------------------------+------------+---------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| ``default_time_to_live``         | *simple*   | 0             | The default expiration time (\u201cTTL\u201d) in seconds for a table.                                                                                                                                                                     |
++----------------------------------+------------+---------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| ``compaction``                   | *map*      | *see below*   | Compaction options, see \u201cbelow\u201d:#compactionOptions.                                                                                                                                                                             |
++----------------------------------+------------+---------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| ``compression``                  | *map*      | *see below*   | Compression options, see \u201cbelow\u201d:#compressionOptions.                                                                                                                                                                           |
++----------------------------------+------------+---------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| ``caching``                      | *map*      | *see below*   | Caching options, see \u201cbelow\u201d:#cachingOptions.                                                                                                                                                                                   |
++----------------------------------+------------+---------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+
+Compaction options
+^^^^^^^^^^^^^^^^^^
+
+The ``compaction`` property must at least define the ``'class'``
+sub-option, which defines the compaction strategy class to use. The
+default supported classes are ``'SizeTieredCompactionStrategy'``,
+``'LeveledCompactionStrategy'``, ``'DateTieredCompactionStrategy'`` and
+``'TimeWindowCompactionStrategy'``. A custom strategy can be provided by
+specifying the full class name as a `string constant <#constants>`__.
+The rest of the sub-options depend on the chosen class. The sub-options
+supported by the default classes are:
+
++--------------------------------------+---------------------------------+----------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| option                               | supported compaction strategy   | default        | description                                                                                                                                                                                                                                                                                                                            |
++======================================+=================================+================+========================================================================================================================================================================================================================================================================================================================================+
+| ``enabled``                          | *all*                           | true           | A boolean denoting whether compaction should be enabled or not.                                                                                                                                                                                                                                                                        |
++--------------------------------------+---------------------------------+----------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| ``tombstone_threshold``              | *all*                           | 0.2            | A ratio such that if a sstable has more than this ratio of gcable tombstones over all contained columns, the sstable will be compacted (with no other sstables) for the purpose of purging those tombstones.                                                                                                                           |
++--------------------------------------+---------------------------------+----------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| ``tombstone_compaction_interval``    | *all*                           | 1 day          | The minimum time to wait after an sstable creation time before considering it for \u201ctombstone compaction\u201d, where \u201ctombstone compaction\u201d is the compaction triggered if the sstable has more gcable tombstones than ``tombstone_threshold``.                                                                                             |
++--------------------------------------+---------------------------------+----------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| ``unchecked_tombstone_compaction``   | *all*                           | false          | Setting this to true enables more aggressive tombstone compactions - single sstable tombstone compactions will run without checking how likely it is that they will be successful.                                                                                                                                                     |
++--------------------------------------+---------------------------------+----------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| ``min_sstable_size``                 | SizeTieredCompactionStrategy    | 50MB           | The size tiered strategy groups SSTables to compact in buckets. A bucket groups SSTables that differs from less than 50% in size. However, for small sizes, this would result in a bucketing that is too fine grained. ``min_sstable_size`` defines a size threshold (in bytes) below which all SSTables belong to one unique bucket   |
++--------------------------------------+---------------------------------+----------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| ``min_threshold``                    | SizeTieredCompactionStrategy    | 4              | Minimum number of SSTables needed to start a minor compaction.                                                                                                                                                                                                                                                                         |
++--------------------------------------+---------------------------------+----------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| ``max_threshold``                    | SizeTieredCompactionStrategy    | 32             | Maximum number of SSTables processed by one minor compaction.                                                                                                                                                                                                                                                                          |
++--------------------------------------+---------------------------------+----------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| ``bucket_low``                       | SizeTieredCompactionStrategy    | 0.5            | Size tiered consider sstables to be within the same bucket if their size is within [average\_size \* ``bucket_low``, average\_size \* ``bucket_high`` ] (i.e the default groups sstable whose sizes diverges by at most 50%)                                                                                                           |
++--------------------------------------+---------------------------------+----------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| ``bucket_high``                      | SizeTieredCompactionStrategy    | 1.5            | Size tiered consider sstables to be within the same bucket if their size is within [average\_size \* ``bucket_low``, average\_size \* ``bucket_high`` ] (i.e the default groups sstable whose sizes diverges by at most 50%).                                                                                                          |
++--------------------------------------+---------------------------------+----------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| ``sstable_size_in_mb``               | LeveledCompactionStrategy       | 5MB            | The target size (in MB) for sstables in the leveled strategy. Note that while sstable sizes should stay less or equal to ``sstable_size_in_mb``, it is possible to exceptionally have a larger sstable as during compaction, data for a given partition key are never split into 2 sstables                                            |
++--------------------------------------+---------------------------------+----------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| ``timestamp_resolution``             | DateTieredCompactionStrategy    | MICROSECONDS   | The timestamp resolution used when inserting data, could be MILLISECONDS, MICROSECONDS etc (should be understandable by Java TimeUnit) - don\u2019t change this unless you do mutations with USING TIMESTAMP (or equivalent directly in the client)                                                                                         |
++--------------------------------------+---------------------------------+----------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| ``base_time_seconds``                | DateTieredCompactionStrategy    | 60             | The base size of the time windows.                                                                                                                                                                                                                                                                                                     |
++--------------------------------------+---------------------------------+----------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| ``max_sstable_age_days``             | DateTieredCompactionStrategy    | 365            | SSTables only containing data that is older than this will never be compacted.                                                                                                                                                                                                                                                         |
++--------------------------------------+---------------------------------+----------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| ``timestamp_resolution``             | TimeWindowCompactionStrategy    | MICROSECONDS   | The timestamp resolution used when inserting data, could be MILLISECONDS, MICROSECONDS etc (should be understandable by Java TimeUnit) - don\u2019t change this unless you do mutations with USING TIMESTAMP (or equivalent directly in the client)                                                                                         |
++--------------------------------------+---------------------------------+----------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| ``compaction_window_unit``           | TimeWindowCompactionStrategy    | DAYS           | The Java TimeUnit used for the window size, set in conjunction with ``compaction_window_size``. Must be one of DAYS, HOURS, MINUTES                                                                                                                                                                                                    |
++--------------------------------------+---------------------------------+----------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| ``compaction_window_size``           | TimeWindowCompactionStrategy    | 1              | The number of ``compaction_window_unit`` units that make up a time window.                                                                                                                                                                                                                                                             |
++--------------------------------------+---------------------------------+----------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+
+Compression options
+^^^^^^^^^^^^^^^^^^^
+
+For the ``compression`` property, the following sub-options are
+available:
+
++------------------------+-----------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| option                 | default         | description                                                                                                                                                                                                                                                                                                                                                                                                           |
++========================+=================+=======================================================================================================================================================================================================================================================================================================================================================================================================================+
+| ``class``              | LZ4Compressor   | The compression algorithm to use. Default compressor are: LZ4Compressor, SnappyCompressor and DeflateCompressor. Use ``'enabled' : false`` to disable compression. Custom compressor can be provided by specifying the full class name as a \u201cstring constant\u201d:#constants.                                                                                                                                             |
++------------------------+-----------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| ``enabled``            | true            | By default compression is enabled. To disable it, set ``enabled`` to ``false``                                                                                                                                                                                                                                                                                                                                        |
++------------------------+-----------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+|``chunk_length_in_kb``  | 64KB            | On disk SSTables are compressed by block (to allow random reads). This defines the size (in KB) of said block. Bigger values may improve the compression rate, but increases the minimum size of data to be read from disk for a read                                                                                                                                                                                 |
++------------------------+-----------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| ``crc_check_chance``   | 1.0             | When compression is enabled, each compressed block includes a checksum of that block for the purpose of detecting disk bitrot and avoiding the propagation of corruption to other replica. This option defines the probability with which those checksums are checked during read. By default they are always checked. Set to 0 to disable checksum checking and to 0.5 for instance to check them every other read   |
++------------------------+-----------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+
+Caching options
+^^^^^^^^^^^^^^^
+
+For the ``caching`` property, the following sub-options are available:
+
++--------------------------+-----------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| option                   | default   | description                                                                                                                                                                                                                                                                |
++==========================+===========+============================================================================================================================================================================================================================================================================+
+| ``keys``                 | ALL       | Whether to cache keys (\u201ckey cache\u201d) for this table. Valid values are: ``ALL`` and ``NONE``.                                                                                                                                                                                |
++--------------------------+-----------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| ``rows_per_partition``   | NONE      | The amount of rows to cache per partition (\u201crow cache\u201d). If an integer ``n`` is specified, the first ``n`` queried rows of a partition will be cached. Other possible options are ``ALL``, to cache all rows of a queried partition, or ``NONE`` to disable row caching.   |
++--------------------------+-----------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+
+Other considerations:
+^^^^^^^^^^^^^^^^^^^^^
+
+-  When `inserting <#insertStmt>`__ / `updating <#updateStmt>`__ a given
+   row, not all columns need to be defined (except for those that are part of
+   the key), and missing columns occupy no space on disk. Furthermore,
+   adding new columns (see ``ALTER TABLE``) is a constant time
+   operation. There is thus no need to try to anticipate future usage
+   (or to cry when you haven't) when creating a table.
+
+ALTER TABLE
+^^^^^^^^^^^
+
+*Syntax:*
+
+::
+
+    <alter-table-stmt> ::= ALTER ( TABLE | COLUMNFAMILY ) <tablename> <instruction>
+
+    <instruction> ::= ALTER <identifier> TYPE <type>
+                    | ADD   <identifier> <type>
+                    | ADD   ( <identifier> <type> ( , <identifier> <type> )* )
+                    | DROP  <identifier>
+                    | DROP  ( <identifier> ( , <identifier> )* )
+                    | WITH  <option> ( AND <option> )*
+
+*Sample:*
+
+::
+
+    ALTER TABLE addamsFamily
+    ALTER lastKnownLocation TYPE uuid;
+
+    ALTER TABLE addamsFamily
+    ADD gravesite varchar;
+
+    ALTER TABLE addamsFamily
+    WITH comment = 'A most excellent and useful column family'
+     AND read_repair_chance = 0.2;
+
+The ``ALTER`` statement is used to manipulate table definitions. It
+allows for adding new columns, dropping existing ones, changing the
+type of existing columns, or updating the table options. As with table
+creation, ``ALTER COLUMNFAMILY`` is allowed as an alias for
+``ALTER TABLE``.
+
+The ``<tablename>`` is the table name optionally preceded by the
+keyspace name. The ``<instruction>`` defines the alteration to perform:
+
+-  ``ALTER``: Update the type of a given defined column. Note that the
+   type of the `clustering columns <#createTablepartitionClustering>`__
+   can be modified only in very limited cases, as it induces the on-disk
+   ordering of rows. Columns on which a `secondary
+   index <#createIndexStmt>`__ is defined have the same restriction. To
+   change the type of any other column, the column must already exist in
+   the table definition and its type should be compatible with the new type.
+   No validation of existing data is performed. The compatibility table
+   is available below.
+-  ``ADD``: Adds a new column to the table. The ``<identifier>`` for the
+   new column must not conflict with an existing column. Moreover,
+   columns cannot be added to tables defined with the
+   ``COMPACT STORAGE`` option.
+-  ``DROP``: Removes a column from the table. Dropped columns will
+   immediately become unavailable in queries and will not be
+   included in compacted sstables in the future. If a column is re-added,
+   queries won't return values written before the column was last
+   dropped. It is assumed that timestamps represent actual time, so if
+   this is not your case, you should NOT re-add previously dropped
+   columns. Columns can't be dropped from tables defined with the
+   ``COMPACT STORAGE`` option.
+-  ``WITH``: Allows updating the options of the table. The supported
+   `options <#createTableOptions>`__ (and syntax) are the same as
+   for the ``CREATE TABLE`` statement, except that ``COMPACT STORAGE`` is
+   not supported. Note that setting any ``compaction`` sub-options has
+   the effect of erasing all previous ``compaction`` options, so you
+   need to re-specify all the sub-options if you want to keep them (as
+   sketched below). The same note applies to the set of ``compression`` sub-options.
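+
+For instance, a sketch of the ``WITH`` form (the sub-option values are arbitrary; note that every ``compaction``
+sub-option you want to keep must be repeated, not just the one being changed)::
+
+    ALTER TABLE addamsFamily
+        WITH compaction = { 'class' : 'LeveledCompactionStrategy', 'sstable_size_in_mb' : 160 };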
+
+CQL type compatibility:
+^^^^^^^^^^^^^^^^^^^^^^^
+
+CQL data types may only be converted as described in the following table, where each type listed
+in the first column may be altered to the type in the second column:
+
++----------------------------------------------------------------------------------------------------------------------------------------------+-------------+
+| Data type may be altered to:                                                                                                                 | Data type   |
++==============================================================================================================================================+=============+
+| timestamp                                                                                                                                    | bigint      |
++----------------------------------------------------------------------------------------------------------------------------------------------+-------------+
+| ascii, bigint, boolean, date, decimal, double, float, inet, int, smallint, text, time, timestamp, timeuuid, tinyint, uuid, varchar, varint   | blob        |
++----------------------------------------------------------------------------------------------------------------------------------------------+-------------+
+| int                                                                                                                                          | date        |
++----------------------------------------------------------------------------------------------------------------------------------------------+-------------+
+| ascii, varchar                                                                                                                               | text        |
++----------------------------------------------------------------------------------------------------------------------------------------------+-------------+
+| bigint                                                                                                                                       | time        |
++----------------------------------------------------------------------------------------------------------------------------------------------+-------------+
+| bigint                                                                                                                                       | timestamp   |
++----------------------------------------------------------------------------------------------------------------------------------------------+-------------+
+| timeuuid                                                                                                                                     | uuid        |
++----------------------------------------------------------------------------------------------------------------------------------------------+-------------+
+| ascii, text                                                                                                                                  | varchar     |
++----------------------------------------------------------------------------------------------------------------------------------------------+-------------+
+| bigint, int, timestamp                                                                                                                       | varint      |
++----------------------------------------------------------------------------------------------------------------------------------------------+-------------+
+
+Clustering columns have stricter requirements; only the conversions
+listed below are allowed:
+
++--------------------------------+-------------+
+| Data type may be altered to:   | Data type   |
++================================+=============+
+| ascii, text, varchar           | blob        |
++--------------------------------+-------------+
+| ascii, varchar                 | text        |
++--------------------------------+-------------+
+| ascii, text                    | varchar     |
++--------------------------------+-------------+
+
+DROP TABLE
+^^^^^^^^^^
+
+*Syntax:*
+
+bc(syntax). <drop-table-stmt> ::= DROP TABLE ( IF EXISTS )? <tablename>
+
+*Sample:*
+
+bc(sample). DROP TABLE worldSeriesAttendees;
+
+The ``DROP TABLE`` statement results in the immediate, irreversible
+removal of a table, including all data contained in it. As for table
+creation, ``DROP COLUMNFAMILY`` is allowed as an alias for
+``DROP TABLE``.
+
+If the table does not exist, the statement will return an error, unless
+``IF EXISTS`` is used in which case the operation is a no-op.
+
+TRUNCATE
+^^^^^^^^
+
+*Syntax:*
+
+bc(syntax). <truncate-stmt> ::= TRUNCATE ( TABLE \| COLUMNFAMILY )? <tablename>
+
+*Sample:*
+
+bc(sample). TRUNCATE superImportantData;
+
+The ``TRUNCATE`` statement permanently removes all data from a table.
+
+CREATE INDEX
+^^^^^^^^^^^^
+
+*Syntax:*
+
+| bc(syntax)..
+|  <create-index-stmt> ::= CREATE ( CUSTOM )? INDEX ( IF NOT EXISTS )? ( <indexname> )?
+|  ON <tablename> '(' <index-identifier> ')'
+|  ( USING <string> ( WITH OPTIONS = <map-literal> )? )?
+
+|  <index-identifier> ::= <identifier>
+|  \| keys( <identifier> )
+
+*Sample:*
+
+| bc(sample).
+| CREATE INDEX userIndex ON NerdMovies (user);
+| CREATE INDEX ON Mutants (abilityId);
+| CREATE INDEX ON users (keys(favs));
+| CREATE CUSTOM INDEX ON users (email) USING 'path.to.the.IndexClass';
+| CREATE CUSTOM INDEX ON users (email) USING 'path.to.the.IndexClass'
+  WITH OPTIONS = {'storage': '/mnt/ssd/indexes/'};
+
+The ``CREATE INDEX`` statement is used to create a new (automatic)
+secondary index for a given (existing) column in a given table. A name
+for the index itself can be specified before the ``ON`` keyword, if
+desired. If data already exists for the column, it will be indexed
+asynchronously. After the index is created, new data for the column is
+indexed automatically at insertion time.
+
+Attempting to create an already existing index will return an error
+unless the ``IF NOT EXISTS`` option is used. If it is used, the
+statement will be a no-op if the index already exists.
+
+Indexes on Map Keys
+^^^^^^^^^^^^^^^^^^^
+
+When creating an index on a `map column <#map>`__, you may index either
+the keys or the values. If the column identifier is placed within the
+``keys()`` function, the index will be on the map keys, allowing you to
+use ``CONTAINS KEY`` in ``WHERE`` clauses. Otherwise, the index will be
+on the map values.
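+
+For instance, the following sketch (assuming a hypothetical ``users``
+table with a ``favs`` map column) indexes the map keys and then queries
+on them with ``CONTAINS KEY``:
+
+| bc(sample).
+| CREATE INDEX ON users (keys(favs));
+| SELECT \* FROM users WHERE favs CONTAINS KEY 'color';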
+
+DROP INDEX
+^^^^^^^^^^
+
+*Syntax:*
+
+bc(syntax). <drop-index-stmt> ::= DROP INDEX ( IF EXISTS )? ( <keyspace> '.' )? <index-name>
+
+*Sample:*
+
+| bc(sample)..
+| DROP INDEX userIndex;
+
+| DROP INDEX userkeyspace.address\_index;
+
+The ``DROP INDEX`` statement is used to drop an existing secondary
+index. The argument of the statement is the index name, which may
+optionally specify the keyspace of the index.
+
+If the index does not exist, the statement will return an error, unless
+``IF EXISTS`` is used in which case the operation is a no-op.
+
+CREATE MATERIALIZED VIEW
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+*Syntax:*
+
+| bc(syntax)..
+|  <create-materialized-view-stmt> ::= CREATE MATERIALIZED VIEW ( IF NOT EXISTS )? <viewname> AS
+|  SELECT ( '(' <identifier> ( ',' <identifier> )\* ')' \| '\*' )
+|  FROM <tablename>
+|  ( WHERE <where-clause> )?
+|  PRIMARY KEY '(' <identifier> ( ',' <identifier> )\* ')'
+|  ( WITH <option> ( AND <option> )\* )?
+
+*Sample:*
+
+| bc(sample)..
+| CREATE MATERIALIZED VIEW monkeySpecies\_by\_population AS
+|  SELECT \*
+|  FROM monkeySpecies
+|  WHERE population IS NOT NULL AND species IS NOT NULL
+|  PRIMARY KEY (population, species)
+|  WITH comment='Allow query by population instead of species';
+
+The ``CREATE MATERIALIZED VIEW`` statement creates a new materialized
+view. Each such view is a set of *rows* corresponding to rows that are
+present in the underlying, or base, table specified in the ``SELECT``
+statement. A materialized view cannot be directly updated, but updates
+to the base table will cause corresponding updates in the view.
+
+Attempting to create an already existing materialized view will return
+an error unless the ``IF NOT EXISTS`` option is used. If it is used, the
+statement will be a no-op if the materialized view already exists.
+
+``WHERE`` Clause
+^^^^^^^^^^^^^^^^
+
+The ``<where-clause>`` is similar to the `where clause of a ``SELECT``
+statement <#selectWhere>`__, with a few differences. First, the where
+clause must contain an expression that disallows ``NULL`` values in
+columns in the view's primary key. If no other restriction is desired,
+this can be accomplished with an ``IS NOT NULL`` expression. Second,
+only columns which are in the base table's primary key may be restricted
+with expressions other than ``IS NOT NULL``. (Note that this second
+restriction may be lifted in the future.)
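+
+As a hedged illustration reusing the ``monkeySpecies`` base table from
+the sample above (and assuming ``species`` is part of the base table's
+primary key), the view below restricts ``species`` with an equality
+while the other primary key column only carries the mandatory
+``IS NOT NULL``:
+
+| bc(sample).
+| CREATE MATERIALIZED VIEW capuchins\_by\_population AS
+|  SELECT \*
+|  FROM monkeySpecies
+|  WHERE population IS NOT NULL AND species = 'capuchin'
+|  PRIMARY KEY (population, species);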
+
+ALTER MATERIALIZED VIEW
+^^^^^^^^^^^^^^^^^^^^^^^
+
+*Syntax:*
+
+| bc(syntax). <alter-materialized-view-stmt> ::= ALTER MATERIALIZED VIEW <viewname>
+|  WITH <option> ( AND <option> )\*
+
+The ``ALTER MATERIALIZED VIEW`` statement allows the options of a view
+to be updated; these options are the same as ``CREATE TABLE``'s options.
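+
+For example, a minimal sketch updating an option of the view created
+earlier:
+
+| bc(sample).
+| ALTER MATERIALIZED VIEW monkeySpecies\_by\_population
+|  WITH comment = 'Allow query by population instead of species (updated)';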
+
+DROP MATERIALIZED VIEW
+^^^^^^^^^^^^^^^^^^^^^^
+
+*Syntax:*
+
+bc(syntax). <drop-materialized-view-stmt> ::= DROP MATERIALIZED VIEW ( IF EXISTS )? <viewname>
+
+*Sample:*
+
+bc(sample). DROP MATERIALIZED VIEW monkeySpecies\_by\_population;
+
+The ``DROP MATERIALIZED VIEW`` statement is used to drop an existing
+materialized view.
+
+If the materialized view does not exist, the statement will return an
+error, unless ``IF EXISTS`` is used in which case the operation is a
+no-op.
+
+CREATE TYPE
+^^^^^^^^^^^
+
+*Syntax:*
+
+| bc(syntax)..
+|  <create-type-stmt> ::= CREATE TYPE ( IF NOT EXISTS )? <typename>
+|  '(' <field-definition> ( ',' <field-definition> )\* ')'
+
+ <typename> ::= ( <keyspace-name> '.' )? <identifier>
+
+ <field-definition> ::= <identifier> <type>
+
+*Sample:*
+
+| bc(sample)..
+| CREATE TYPE address (
+|  street\_name text,
+|  street\_number int,
+|  city text,
+|  state text,
+|  zip int
+| )
+
+| CREATE TYPE work\_and\_home\_addresses (
+|  home\_address address,
+|  work\_address address
+| )
+
+The ``CREATE TYPE`` statement creates a new user-defined type. Each
+type is a set of named, typed fields. Field types may be any valid
+type, including collections and other existing user-defined types.
+
+Attempting to create an already existing type will result in an error
+unless the ``IF NOT EXISTS`` option is used. If it is used, the
+statement will be a no-op if the type already exists.
+
+``<typename>``
+^^^^^^^^^^^^^^
+
+Valid type names are identifiers. The names of existing CQL types and
+`reserved type names <#appendixB>`__ may not be used.
+
+If the type name is provided alone, the type is created within the current
+keyspace (see \ ``USE``\ ). If it is prefixed by an existing keyspace
+name, the type is created within the specified keyspace instead of the
+current keyspace.
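+
+For instance, assuming a hypothetical ``taxonomy`` keyspace already
+exists, the following sketch creates one type in the current keyspace
+and one in ``taxonomy``:
+
+| bc(sample).
+| CREATE TYPE species\_name (genus text, species text);
+| CREATE TYPE taxonomy.species\_name (genus text, species text);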
+
+ALTER TYPE
+^^^^^^^^^^
+
+*Syntax:*
+
+| bc(syntax)..
+|  <alter-type-stmt> ::= ALTER TYPE <typename> <instruction>
+
+|  <instruction> ::= ALTER <field-name> TYPE <type>
+|  \| ADD <field-name> <type>
+|  \| RENAME <field-name> TO <field-name> ( AND <field-name> TO <field-name> )\*
+
+*Sample:*
+
+| bc(sample)..
+| ALTER TYPE address ALTER zip TYPE varint
+
+ALTER TYPE address ADD country text
+
+| ALTER TYPE address RENAME zip TO zipcode AND street\_name TO street
+
+The ``ALTER TYPE`` statement is used to manipulate type definitions.
+It allows for adding new fields, renaming existing fields, or changing
+the type of existing fields.
+
+When altering the type of a column, the new type must be compatible with
+the previous type.
+
+DROP TYPE
+^^^^^^^^^
+
+*Syntax:*
+
+| bc(syntax)..
+|  <drop-type-stmt> ::= DROP TYPE ( IF EXISTS )? <typename>
+
+The ``DROP TYPE`` statement results in the immediate, irreversible
+removal of a type. Attempting to drop a type that is still in use by
+another type or a table will result in an error.
+
+If the type does not exist, an error will be returned unless
+``IF EXISTS`` is used, in which case the operation is a no-op.
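+
+A minimal sketch, reusing the types created above (the dependent
+``work_and_home_addresses`` type must be dropped before ``address``,
+since a type still in use cannot be dropped):
+
+| bc(sample).
+| DROP TYPE IF EXISTS work\_and\_home\_addresses;
+| DROP TYPE IF EXISTS address;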
+
+CREATE TRIGGER
+^^^^^^^^^^^^^^
+
+*Syntax:*
+
+| bc(syntax)..
+|  <create-trigger-stmt> ::= CREATE TRIGGER ( IF NOT EXISTS )? ( <triggername> )?
+|  ON <tablename>
+|  USING <string>
+
+*Sample:*
+
+| bc(sample).
+| CREATE TRIGGER myTrigger ON myTable USING
+  'org.apache.cassandra.triggers.InvertedIndex';
+
+The actual logic that makes up the trigger can be written in any Java
+(JVM) language and exists outside the database. Place the trigger
+code in a ``lib/triggers`` subdirectory of the Cassandra installation
+directory; it is loaded during cluster startup and must exist on every
+node that participates in the cluster. A trigger defined on a table
+fires before a requested DML statement occurs, which ensures the
+atomicity of the transaction.
+
+DROP TRIGGER
+^^^^^^^^^^^^
+
+*Syntax:*
+
+| bc(syntax)..
+|  <drop-trigger-stmt> ::= DROP TRIGGER ( IF EXISTS )? ( <triggername> )?
+|  ON <tablename>
+
+*Sample:*
+
+| bc(sample).
+| DROP TRIGGER myTrigger ON myTable;
+
+The ``DROP TRIGGER`` statement removes the registration of a trigger created
+using ``CREATE TRIGGER``.
+
+CREATE FUNCTION
+^^^^^^^^^^^^^^^
+
+*Syntax:*
+
+| bc(syntax)..
+|  <create-function-stmt> ::= CREATE ( OR REPLACE )?
+|  FUNCTION ( IF NOT EXISTS )?
+|  ( <keyspace> '.' )? <function-name>
+|  '(' <arg-name> <arg-type> ( ',' <arg-name> <arg-type> )\* ')'
+|  ( CALLED \| RETURNS NULL ) ON NULL INPUT
+|  RETURNS <type>
+|  LANGUAGE <language>
+|  AS <body>
+
+*Sample:*
+
+| bc(sample).
+| CREATE OR REPLACE FUNCTION somefunction
+|  ( somearg int, anotherarg text, complexarg frozen<someudt>, listarg list<bigint> )
+|  RETURNS NULL ON NULL INPUT
+|  RETURNS text
+|  LANGUAGE java
+|  AS $$
+|  // some Java code
+|  $$;
+| CREATE FUNCTION akeyspace.fname IF NOT EXISTS
+|  ( someArg int )
+|  CALLED ON NULL INPUT
+|  RETURNS text
+|  LANGUAGE java
+|  AS $$
+|  // some Java code
+|  $$;
+
+``CREATE FUNCTION`` creates or replaces a user-defined function.
+
+Function Signature
+^^^^^^^^^^^^^^^^^^
+
+Signatures are used to distinguish individual functions. The signature
+consists of:
+
+#. The fully qualified function name - i.e *keyspace* plus
+   *function-name*
+#. The concatenated list of all argument types
+
+Note that keyspace names, function names and argument types are subject
+to the default naming conventions and case-sensitivity rules.
+
+``CREATE FUNCTION`` with the optional ``OR REPLACE`` keywords either
+creates a function or replaces an existing one with the same signature.
+A ``CREATE FUNCTION`` without ``OR REPLACE`` fails if a function with
+the same signature already exists.
+
+Behavior on invocation with ``null`` values must be defined for each
+function. There are two options:
+
+#. ``RETURNS NULL ON NULL INPUT`` declares that the function will always
+   return ``null`` if any of the input arguments is ``null``.
+#. ``CALLED ON NULL INPUT`` declares that the function will always be
+   executed.
+
+If the optional ``IF NOT EXISTS`` keywords are used, the function will
+only be created if another function with the same signature does not
+exist.
+
+``OR REPLACE`` and ``IF NOT EXISTS`` cannot be used together.
+
+Functions belong to a keyspace. If no keyspace is specified in
+``<function-name>``, the current keyspace is used (i.e. the keyspace
+specified using the ```USE`` <#useStmt>`__ statement). It is not
+possible to create a user-defined function in one of the system
+keyspaces.
+
+See the section on `user-defined functions <#udfs>`__ for more
+information.
+
+DROP FUNCTION
+^^^^^^^^^^^^^
+
+*Syntax:*
+
+| bc(syntax)..
+|  <drop-function-stmt> ::= DROP FUNCTION ( IF EXISTS )?
+|  ( <keyspace> '.' )? <function-name>
+|  ( '(' <arg-type> ( ',' <arg-type> )\* ')' )?
+
+*Sample:*
+
+| bc(sample).
+| DROP FUNCTION myfunction;
+| DROP FUNCTION mykeyspace.afunction;
+| DROP FUNCTION afunction ( int );
+| DROP FUNCTION afunction ( text );
+
+The ``DROP FUNCTION`` statement removes a function created using
+``CREATE FUNCTION``. You must specify the argument types
+(`signature <#functionSignature>`__) of the function to drop if there
+are multiple functions with the same name but different signatures
+(overloaded functions).
+
+``DROP FUNCTION`` with the optional ``IF EXISTS`` keywords drops a
+function if it exists.
+
+CREATE AGGREGATE
+^^^^^^^^^^^^^^^^
+
+*Syntax:*
+
+| bc(syntax)..
+|  <create-aggregate-stmt> ::= CREATE ( OR REPLACE )?
+|  AGGREGATE ( IF NOT EXISTS )?
+|  ( <keyspace> '.' )? <aggregate-name>
+|  '(' <arg-type> ( ',' <arg-type> )\* ')'
+|  SFUNC <state-functionname>
+|  STYPE <state-type>
+|  ( FINALFUNC <final-functionname> )?
+|  ( INITCOND <init-cond> )?
+
+*Sample:*
+
+| bc(sample).
+| CREATE AGGREGATE myaggregate ( val text )
+|  SFUNC myaggregate\_state
+|  STYPE text
+|  FINALFUNC myaggregate\_final
+|  INITCOND 'foo';
+
+See the section on `user-defined aggregates <#udas>`__ for a complete
+example.
+
+``CREATE AGGREGATE`` creates or replaces a user-defined aggregate.
+
+``CREATE AGGREGATE`` with the optional ``OR REPLACE`` keywords either
+creates an aggregate or replaces an existing one with the same
+signature. A ``CREATE AGGREGATE`` without ``OR REPLACE`` fails if an
+aggregate with the same signature already exists.
+
+``CREATE AGGREGATE`` with the optional ``IF NOT EXISTS`` keywords
+creates an aggregate only if it does not already exist.
+
+``OR REPLACE`` and ``IF NOT EXISTS`` cannot be used together.
+
+Aggregates belong to a keyspace. If no keyspace is specified in
+``<aggregate-name>``, the current keyspace is used (i.e. the keyspace
+specified using the ```USE`` <#useStmt>`__ statement). It is not
+possible to create a user-defined aggregate in one of the system
+keyspaces.
+
+Signatures for user-defined aggregates follow the `same
+rules <#functionSignature>`__ as for user-defined functions.
+
+``STYPE`` defines the type of the state value and must be specified.
+
+The optional ``INITCOND`` defines the initial state value for the
+aggregate. It defaults to ``null``. A non-\ ``null`` ``INITCOND`` must
+be specified for state functions that are declared with
+``RETURNS NULL ON NULL INPUT``.
+
+``SFUNC`` references an existing function to be used as the state
+modifying function. The type of the first argument of the state function
+must match ``STYPE``. The remaining argument types of the state function
+must match the argument types of the aggregate function. State is not
+updated for state functions declared with ``RETURNS NULL ON NULL INPUT``
+and called with ``null``.
+
+The optional ``FINALFUNC`` is called just before the aggregate result is
+returned. It must take only one argument with type ``STYPE``. The return
+type of the ``FINALFUNC`` may be a different type. A final function
+declared with ``RETURNS NULL ON NULL INPUT`` means that the aggregate's
+return value will be ``null`` if the last state is ``null``.
+
+If no ``FINALFUNC`` is defined, the overall return type of the aggregate
+function is ``STYPE``. If a ``FINALFUNC`` is defined, it is the return
+type of that function.
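+
+To show how these pieces fit together, here is a hedged sketch of a
+hypothetical ``average`` aggregate whose state is a
+``tuple<int, bigint>`` holding a count and a running sum; it assumes
+state and final functions named ``averageState`` and ``averageFinal``
+have already been created with ``CREATE FUNCTION``:
+
+| bc(sample).
+| CREATE OR REPLACE AGGREGATE average ( int )
+|  SFUNC averageState
+|  STYPE tuple<int, bigint>
+|  FINALFUNC averageFinal
+|  INITCOND (0, 0);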
+
+See the section on `user-defined aggregates <#udas>`__ for more
+information.
+
+DROP AGGREGATE
+^^^^^^^^^^^^^^
+
+*Syntax:*
+
+| bc(syntax)..
+|  <drop-aggregate-stmt> ::= DROP AGGREGATE ( IF EXISTS )?
+|  ( <keyspace> '.' )? <aggregate-name>
+|  ( '(' <arg-type> ( ',' <arg-type> )\* ')' )?
+
+*Sample:*
+
+| bc(sample).
+| DROP AGGREGATE myAggregate;
+| DROP AGGREGATE myKeyspace.anAggregate;
+| DROP AGGREGATE someAggregate ( int );
+| DROP AGGREGATE someAggregate ( text );
+
+The ``DROP AGGREGATE`` statement removes an aggregate created using
+``CREATE AGGREGATE``. You must specify the argument types of the
+aggregate to drop if there are multiple aggregates with the same name
+but a different signature (overloaded aggregates).
+
+``DROP AGGREGATE`` with the optional ``IF EXISTS`` keywords drops an
+aggregate if it exists, and does nothing if an aggregate with that
+signature does not exist.
+
+Signatures for user-defined aggregates follow the `same
+rules <#functionSignature>`__ as for user-defined functions.
+
+Data Manipulation
+-----------------
+
+INSERT
+^^^^^^
+
+*Syntax:*
+
+| bc(syntax)..
+|  <insert-stmt> ::= INSERT INTO <tablename>
+|  ( ( <name-list> VALUES <value-list> )
+|  \| ( JSON <string> ))
+|  ( IF NOT EXISTS )?
+|  ( USING <option> ( AND <option> )\* )?
+
+ <name-list> ::= '(' <identifier> ( ',' <identifier> )\* ')'
+
+ <value-list> ::= '(' <term-or-literal> ( ',' <term-or-literal> )\* ')'
+
+|  <term-or-literal> ::= <term>
+|  \| <collection-literal>
+
+|  <option> ::= TIMESTAMP <integer>
+|  \| TTL <integer>
+
+*Sample:*
+
+| bc(sample)..
+| INSERT INTO NerdMovies (movie, director, main\_actor, year)
+|  VALUES ('Serenity', 'Joss Whedon', 'Nathan Fillion', 2005)
+| USING TTL 86400;
+
+| INSERT INTO NerdMovies JSON '{"movie": "Serenity", "director": "Joss Whedon", "year": 2005}';
+
+The ``INSERT`` statement writes one or more columns for a given row in
+a table. Note that since a row is identified by its ``PRIMARY KEY``,
+at least the columns composing it must be specified. The list of
+columns to insert must be supplied when using the ``VALUES``
+syntax. When using the ``JSON`` syntax, they are optional. See the
+section on ```INSERT JSON`` <#insertJson>`__ for more details.
+
+Note that unlike in SQL, ``INSERT`` does not check the prior existence
+of the row by default: the row is created if none existed before, and
+updated otherwise. Furthermore, there is no means to know whether a
+creation or an update occurred.
+
+It is however possible to use the ``IF NOT EXISTS`` condition to only
+insert if the row does not exist prior to the insertion. But please note
+that using ``IF NOT EXISTS`` will incur a non-negligible performance
+cost (internally, Paxos will be used) so this should be used sparingly.
+
+All updates for an ``INSERT`` are applied atomically and in isolation.
+
+Please refer to the ```UPDATE`` <#updateOptions>`__ section for
+information on the ``<option>`` available and to the
+`collections <#collections>`__ section for use of
+``<collection-literal>``. Also note that ``INSERT`` does not support
+counters, while ``UPDATE`` does.
+
+UPDATE
+^^^^^^
+
+*Syntax:*
+
+| bc(syntax)..
+|  <update-stmt> ::= UPDATE <tablename>
+|  ( USING <option> ( AND <option> )\* )?
+|  SET <assignment> ( ',' <assignment> )\*
+|  WHERE <where-clause>
+|  ( IF <condition> ( AND condition )\* )?
+
+|  <assignment> ::= <identifier> '=' <term>
+|  \| <identifier> '=' <identifier> ('+' \| '-') ( <int-term> \| <set-literal> \| <list-literal> )
+|  \| <identifier> '=' <identifier> '+' <map-literal>
+|  \| <identifier> '[' <term> ']' '=' <term>
+|  \| <identifier> '.' <field> '=' <term>
+
+|  <condition> ::= <identifier> <op> <term>
+|  \| <identifier> IN <in-values>
+|  \| <identifier> '[' <term> ']' <op> <term>
+|  \| <identifier> '[' <term> ']' IN <in-values>
+|  \| <identifier> '.' <field> <op> <term>
+|  \| <identifier> '.' <field> IN <in-values>
+
+|  <op> ::= '<' \| '<=' \| '=' \| '!=' \| '>=' \| '>'
+|  <in-values> ::= ( <variable> \| '(' ( <term> ( ',' <term> )\* )? ')' )
+
+ <where-clause> ::= <relation> ( AND <relation> )\*
+
+|  <relation> ::= <identifier> '=' <term>
+|  \| '(' <identifier> ( ',' <identifier> )\* ')' '=' <term-tuple>
+|  \| <identifier> IN '(' ( <term> ( ',' <term> )\* )? ')'
+|  \| <identifier> IN <variable>
+|  \| '(' <identifier> ( ',' <identifier> )\* ')' IN '(' ( <term-tuple> ( ',' <term-tuple> )\* )? ')'
+|  \| '(' <identifier> ( ',' <identifier> )\* ')' IN <variable>
+
+|  <option> ::= TIMESTAMP <integer>
+|  \| TTL <integer>
+
+*Sample:*
+
+| bc(sample)..
+| UPDATE NerdMovies USING TTL 400
+| SET director = 'Joss Whedon',
+|  main\_actor = 'Nathan Fillion',
+|  year = 2005
+| WHERE movie = 'Serenity';
+
+| UPDATE UserActions SET total = total + 2 WHERE user =
+  B70DE1D0-9908-4AE3-BE34-5573E5B09F14 AND action = 'click';
+
+The ``UPDATE`` statement writes one or more columns for a given row in
+a table. The ``<where-clause>`` is used to select the row to update
+and must include all columns composing the ``PRIMARY KEY``. Other
+column values are specified through ``<assignment>`` after the
+``SET`` keyword.
+
+Note that unlike in SQL, ``UPDATE`` does not check the prior existence
+of the row by default (except through the use of ``<condition>``, see
+below): the row is created if none existed before, and updated
+otherwise. Furthermore, there are no means to know whether a creation or
+update occurred.
+
+It is however possible to use conditions on some columns through
+``IF``, in which case the row will not be updated unless the conditions
+are met. But, please note that using ``IF`` conditions will incur a
+non-negligible performance cost (internally, Paxos will be used) so this
+should be used sparingly.
+
+In an ``UPDATE`` statement, all updates within the same partition key
+are applied atomically and in isolation.
+
+The ``c = c + 3`` form of ``<assignment>`` is used to
+increment/decrement counters. The identifier after the '=' sign **must**
+be the same as the one before the '=' sign (only increment/decrement
+is supported on counters, not the assignment of a specific value).
+
+The ``id = id + <collection-literal>`` and ``id[value1] = value2`` forms
+of ``<assignment>`` are for collections. Please refer to the `relevant
+section <#collections>`__ for more details.
+
+The ``id.field = <term>`` form of ``<assignment>`` is for setting the
+value of a single field of a non-frozen user-defined type column.
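+
+For instance, a minimal sketch assuming a hypothetical ``users`` table
+with a non-frozen user-defined type column named ``addr`` that has a
+``zip`` field:
+
+| bc(sample).
+| UPDATE users SET addr.zip = 94110 WHERE userid = 'jsmith';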
+
+``<options>``
+^^^^^^^^^^^^^
+
+The ``UPDATE`` and ``INSERT`` statements support the following options:
+
+-  ``TIMESTAMP``: sets the timestamp for the operation. If not
+   specified, the coordinator will use the current time (in
+   microseconds) at the start of statement execution as the timestamp.
+   This is usually a suitable default.
+-  ``TTL``: specifies an optional Time To Live (in seconds) for the
+   inserted values. If set, the inserted values are automatically
+   removed from the database after the specified time. Note that the TTL
+   concerns the inserted values, not the columns themselves. This means
+   that any subsequent update of the column will also reset the TTL (to
+   whatever TTL is specified in that update). By default, values never
+   expire. A TTL of 0 is equivalent to no TTL. If the table has a
+   default\_time\_to\_live, a TTL of 0 will remove the TTL for the
+   inserted or updated values (see the sketch after this list).
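+
+A short, hedged sketch combining both options on a hypothetical
+``sessions`` table (the second statement uses ``TTL 0``, so the updated
+value is written without a TTL):
+
+| bc(sample).
+| INSERT INTO sessions (userid, token) VALUES ('jsmith', 'xyz')
+|  USING TTL 3600 AND TIMESTAMP 1467312000000000;
+| UPDATE sessions USING TTL 0 SET token = 'abc' WHERE userid = 'jsmith';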
+
+DELETE
+^^^^^^
+
+*Syntax:*
+
+| bc(syntax)..
+|  <delete-stmt> ::= DELETE ( <selection> ( ',' <selection> )\* )?
+|  FROM <tablename>
+|  ( USING TIMESTAMP <integer> )?
+|  WHERE <where-clause>
+|  ( IF ( EXISTS \| ( <condition> ( AND <condition> )\* ) ) )?
+
+|  <selection> ::= <identifier>
+|  \| <identifier> '[' <term> ']'
+|  \| <identifier> '.' <field>
+
+ <where-clause> ::= <relation> ( AND <relation> )\*
+
+|  <relation> ::= <identifier> <op> <term>
+|  \| '(' <identifier> ( ',' <identifier> )\* ')' <op> <term-tuple>
+|  \| <identifier> IN '(' ( <term> ( ',' <term> )\* )? ')'
+|  \| <identifier> IN <variable>
+|  \| '(' <identifier> ( ',' <identifier> )\* ')' IN '(' ( <term-tuple> ( ',' <term-tuple> )\* )? ')'
+|  \| '(' <identifier> ( ',' <identifier> )\* ')' IN <variable>
+
+|  <op> ::= '=' \| '<' \| '>' \| '<=' \| '>='
+|  <in-values> ::= ( <variable> \| '(' ( <term> ( ',' <term> )\* )? ')' )
+
+|  <condition> ::= <identifier> ( <op> \| '!=' ) <term>
+|  \| <identifier> IN <in-values>
+|  \| <identifier> '[' <term> ']' ( <op> \| '!=' ) <term>
+|  \| <identifier> '[' <term> ']' IN <in-values>
+|  \| <identifier> '.' <field> ( <op> \| '!=' ) <term>
+|  \| <identifier> '.' <field> IN <in-values>
+
+*Sample:*
+
+| bc(sample)..
+| DELETE FROM NerdMovies USING TIMESTAMP 1240003134 WHERE movie =
+  'Serenity';
+
+| DELETE phone FROM Users WHERE userid IN
+  (C73DE1D3-AF08-40F3-B124-3FF3E5109F22,
+  B70DE1D0-9908-4AE3-BE34-5573E5B09F14);
+
+The ``DELETE`` statement deletes columns and rows. If column names are
+provided directly after the ``DELETE`` keyword, only those columns are
+deleted from the row indicated by the ``<where-clause>``. The
+``id[value]`` syntax in ``<selection>`` is for non-frozen collections
+(please refer to the `collection section <#collections>`__ for more
+details). The ``id.field`` syntax is for the deletion of non-frozen
+user-defined types. Otherwise, whole rows are removed. The
+``<where-clause>`` specifies which rows are to be deleted. Multiple
+rows may be deleted with one statement by using an ``IN`` clause. A
+range of rows may be deleted using an inequality operator (such as
+``>=``).
+
+``DELETE`` supports the ``TIMESTAMP`` option with the same semantics as
+the ```UPDATE`` <#updateStmt>`__ statement.
+
+In a ``DELETE`` statement, all deletions within the same partition key
+are applied atomically and in isolation.
+
+A ``DELETE`` operation can be conditional through the use of an ``IF``
+clause, similar to ``UPDATE`` and ``INSERT`` statements. However, as
+with ``INSERT`` and ``UPDATE`` statements, this will incur a
+non-negligible performance cost (internally, Paxos will be used) and so
+should be used sparingly.
+
+BATCH
+^^^^^
+
+*Syntax:*
+
+| bc(syntax)..
+|  <batch-stmt> ::= BEGIN ( UNLOGGED \| COUNTER ) BATCH
+|  ( USING <option> ( AND <option> )\* )?
+|  <modification-stmt> ( ';' <modification-stmt> )\*
+|  APPLY BATCH
+
+|  <modification-stmt> ::= <insert-stmt>
+|  \| <update-stmt>
+|  \| <delete-stmt>
+
+|  <option> ::= TIMESTAMP <integer>
+
+*Sample:*
+
+| bc(sample).
+| BEGIN BATCH
+|  INSERT INTO users (userid, password, name) VALUES ('user2',
+  'ch@ngem3b', 'second user');
+|  UPDATE users SET password = 'ps22dhds' WHERE userid = 'user3';
+|  INSERT INTO users (userid, password) VALUES ('user4', 'ch@ngem3c');
+|  DELETE name FROM users WHERE userid = 'user1';
+| APPLY BATCH;
+
+The ``BATCH`` statement groups multiple modification statements
+(insertions/updates and deletions) into a single statement. It serves
+several purposes:
+
+#. It saves network round-trips between the client and the server (and
+   sometimes between the server coordinator and the replicas) when
+   batching multiple updates.
+#. All updates in a ``BATCH`` belonging to a given partition key are
+   performed in isolation.
+#. By default, all operations in the batch are performed as ``LOGGED``,
+   to ensure all mutations eventually complete (or none will). See the
+   notes on ```UNLOGGED`` <#unloggedBatch>`__ for more details.
+
+Note that:
+
+-  ``BATCH`` statements may only contain ``UPDATE``, ``INSERT`` and
+   ``DELETE`` statements.
+-  Batches are *not* a full analogue for SQL transactions.
+-  If a timestamp is not specified for each operation, then all
+   operations will be applied with the same timestamp. Due to
+   Cassandra's conflict resolution procedure in the case of `timestamp
+   ties <http://wiki.apache.org/cassandra/FAQ#clocktie>`__, operations
+   may be applied in an order that is different from the order they are
+   listed in the ``BATCH`` statement. To force a particular operation
+   ordering, you must specify per-operation timestamps.
+
+``UNLOGGED``
+^^^^^^^^^^^^
+
+By default, Cassandra uses a batch log to ensure all operations in a
+batch eventually complete or none will (note however that operations are
+only isolated within a single partition).
+
+There is a performance penalty for batch atomicity when a batch spans
+multiple partitions. If you do not want to incur this penalty, you can
+tell Cassandra to skip the batchlog with the ``UNLOGGED`` option. If the
+``UNLOGGED`` option is used, a failed batch might leave the batch only
+partly applied.
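+
+For example, a sketch of an unlogged batch touching a single partition
+(where the batchlog adds little benefit), reusing the ``users`` table
+from the sample above:
+
+| bc(sample).
+| BEGIN UNLOGGED BATCH
+|  INSERT INTO users (userid, password) VALUES ('user5', 'ch@ngem3d');
+|  UPDATE users SET name = 'fifth user' WHERE userid = 'user5';
+| APPLY BATCH;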
+
+``COUNTER``
+^^^^^^^^^^^
+
+Use the ``COUNTER`` option for batched counter updates. Unlike other
+updates in Cassandra, counter updates are not idempotent.
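+
+A minimal sketch, assuming a hypothetical ``page_views`` table with a
+``views`` counter column:
+
+| bc(sample).
+| BEGIN COUNTER BATCH
+|  UPDATE page\_views SET views = views + 1 WHERE page = '/home';
+|  UPDATE page\_views SET views = views + 1 WHERE page = '/about';
+| APPLY BATCH;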
+
+``<option>``
+^^^^^^^^^^^^
+
+``BATCH`` supports the ``TIMESTAMP`` option, with similar semantics
+to the one described in the ```UPDATE`` <#updateOptions>`__ statement
+(the timestamp applies to all the statements inside the batch).
+However, if used, ``TIMESTAMP`` **must not** be used in the statements
+within the batch.
+
+Queries
+-------
+
+SELECT
+^^^^^^
+
+*Syntax:*
+
+| bc(syntax)..
+|  <select-stmt> ::= SELECT ( JSON )? <select-clause>
+|  FROM <tablename>
+|  ( WHERE <where-clause> )?
+|  ( ORDER BY <order-by> )?
+|  ( PER PARTITION LIMIT <integer> )?
+|  ( LIMIT <integer> )?
+|  ( ALLOW FILTERING )?
+
+|  <select-clause> ::= DISTINCT? <selection-list>
+|  \| COUNT '(' ( '\*' \| '1' ) ')' ( AS <identifier> )?
+
+|  <selection-list> ::= <selector> ( AS <identifier> )? ( ',' <selector> ( AS <identifier> )? )\*
+|  \| '\*'
+
+|  <selector> ::= <identifier>
+|  \| WRITETIME '(' <identifier> ')'
+|  \| TTL '(' <identifier> ')'
+|  \| CAST '(' <selector> AS <type> ')'
+|  \| <function> '(' ( <selector> ( ',' <selector> )\* )? ')'
+
+ <where-clause> ::= <relation> ( AND <relation> )\*
+
+|  <relation> ::= <identifier> <op> <term>
+|  \| '(' <identifier> ( ',' <identifier> )\* ')' <op> <term-tuple>
+|  \| <identifier> IN '(' ( <term> ( ',' <term> )\* )? ')'
+|  \| '(' <identifier> ( ',' <identifier> )\* ')' IN '(' ( <term-tuple> ( ',' <term-tuple> )\* )? ')'
+|  \| TOKEN '(' <identifier> ( ',' <identifier> )\* ')' <op> <term>
+
+|  <op> ::= '=' \| '<' \| '>' \| '<=' \| '>=' \| CONTAINS \| CONTAINS KEY
+|  <order-by> ::= <ordering> ( ',' <ordering> )\*
+|  <ordering> ::= <identifier> ( ASC \| DESC )?
+|  <term-tuple> ::= '(' <term> ( ',' <term> )\* ')'
+
+*Sample:*
+
+| bc(sample)..
+| SELECT name, occupation FROM users WHERE userid IN (199, 200, 207);
+
+SELECT JSON name, occupation FROM users WHERE userid = 199;
+
+SELECT name AS user\_name, occupation AS user\_occupation FROM users;
+
+| SELECT time, value
+| FROM events
+| WHERE event\_type = 'myEvent'
+|  AND time > '2011-02-03'
+|  AND time <= '2012-01-01'
+
+SELECT COUNT (\*) FROM users;
+
+SELECT COUNT (\*) AS user\_count FROM users;
+
+The ``SELECT`` statement reads one or more columns for one or more rows
+in a table. It returns a result-set of rows, where each row contains the
+collection of columns corresponding to the query. If the ``JSON``
+keyword is used, the results for each row will contain only a single
+column named "json". See the section on
+```SELECT JSON`` <#selectJson>`__ for more details.
+
+``<select-clause>``
+^^^^^^^^^^^^^^^^^^^
+
+The ``<select-clause>`` determines which columns need to be queried and
+returned in the result-set. It consists of either the comma-separated
+list of ``<selector>``\ s or the wildcard character (``*``) to select all
+the columns defined for the table.
+
+A ``<selector>`` is either a column name to retrieve or a ``<function>``
+of one or more ``<term>``\ s. The functions allowed are the same as for
+``<term>`` and are described in the `function section <#functions>`__.
+In addition to these generic functions, the ``WRITETIME`` (resp.
+``TTL``) function allows selecting the timestamp of when the column was
+inserted (resp. the time to live (in seconds) for the column, or null if
+the column has no expiration set), and the ```CAST`` <#castFun>`__
+function can be used to convert one data type to another.
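+
+To illustrate, a hedged sketch against the ``users`` table used in the
+samples above, assuming it also has a ``birth_year`` column of type
+``int``:
+
+| bc(sample).
+| SELECT WRITETIME (name), TTL (name) FROM users WHERE userid = 199;
+| SELECT CAST (birth\_year AS text) FROM users WHERE userid = 199;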
+
+Any ``<selector>`` can be aliased using ``AS`` keyword (see examples).
+Please note that ``<where-clause>`` and ``<order-by>`` clause should
+refer to the columns by their original names and not by their aliases.
+
+The ``COUNT`` keyword can be used with parentheses enclosing ``*``. If
+so, the query will return a single result: the number of rows matching
+the query. Note that ``COUNT(1)`` is supported as an alias.
+
+``<where-clause>``
+^^^^^^^^^^^^^^^^^^
+
+The ``<where-clause>`` specifies which rows must be queried. It is
+composed of relations on the columns that are part of the
+``PRIMARY KEY`` and/or have a `secondary index <#createIndexStmt>`__
+defined on them.
+
+Not all relations are allowed in a query. For instance, non-equal
+relations (where ``IN`` is considered as an equal relation) on a
+partition key are not supported (but see the use of the ``TOKEN`` method
+below to do non-equal queries on the partition key). Moreover, for a
+given partition key, the clustering columns induce an ordering of rows
+and relations on them are restricted to those that allow selecting a
+**contiguous** (for the ordering) set of rows. For instance,
+given
+
+| bc(sample).
+| CREATE TABLE posts (
+|  userid text,
+|  blog\_title text,
+|  posted\_at timestamp,
+|  entry\_title text,
+|  content text,
+|  category int,
+|  PRIMARY KEY (userid, blog\_title, posted\_at)
+| )
+
+The following query is allowed:
+
+| bc(sample).
+| SELECT entry\_title, content FROM posts WHERE userid='john doe' AND
+  blog\_title='John''s Blog' AND posted\_at >= '2012-01-01' AND
+  posted\_at < '2012-01-31'
+
+But the following one is not, as it does not select a contiguous set of
+rows (and we suppose no secondary indexes are set):
+
+| bc(sample).
+| // Needs a blog\_title to be set to select ranges of posted\_at
+| SELECT entry\_title, content FROM posts WHERE userid='john doe' AND
+  posted\_at >= '2012-01-01' AND posted\_at < '2012-01-31'
+
+When specifying relations, the ``TOKEN`` function can be used on the
+``PARTITION KEY`` column to query. In that case, rows will be selected
+based on the token of their ``PARTITION_KEY`` rather than on the value.
+Note that the token of a key depends on the partitioner in use, and that
+in particular the RandomPartitioner won't yield a meaningful order. Also
+note that ordering partitioners always order token values by bytes (so
+even if the partition key is of type int, ``token(-1) > token(0)`` in
+particular). Example:
+
+| bc(sample).
+| SELECT \* FROM posts WHERE token(userid) > token('tom') AND
+  token(userid) < token('bob')
+
+Moreover, the ``IN`` relation is only allowed on the last column of the
+partition key and on the last column of the full primary key.
+
+It is also possible to "group" ``CLUSTERING COLUMNS`` together in a
+relation using the tuple notation. For instance:
+
+| bc(sample).
+| SELECT \* FROM posts WHERE userid='john doe' AND (blog\_title,
+  posted\_at) > ('John''s Blog', '2012-01-01')
+
+will request all rows that sort after the one having "John's Blog" as
+``blog_title`` and '2012-01-01' for ``posted_at`` in the clustering
+order. In particular, rows having a ``post_at <= '

<TRUNCATED>