Posted to commits@cassandra.apache.org by sl...@apache.org on 2016/06/27 18:34:16 UTC

[21/34] cassandra git commit: Reorganize document

http://git-wip-us.apache.org/repos/asf/cassandra/blob/54f7335c/doc/source/cql/definitions.rst
----------------------------------------------------------------------
diff --git a/doc/source/cql/definitions.rst b/doc/source/cql/definitions.rst
new file mode 100644
index 0000000..61ed47c
--- /dev/null
+++ b/doc/source/cql/definitions.rst
@@ -0,0 +1,225 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+..
+..     http://www.apache.org/licenses/LICENSE-2.0
+..
+.. Unless required by applicable law or agreed to in writing, software
+.. distributed under the License is distributed on an "AS IS" BASIS,
+.. WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+.. See the License for the specific language governing permissions and
+.. limitations under the License.
+
+Definitions
+-----------
+
+.. _conventions:
+
+Conventions
+^^^^^^^^^^^
+
+To aid in specifying the CQL syntax, we will use the following conventions in this document:
+
+- Language rules will be given in an informal `BNF variant
+  <http://en.wikipedia.org/wiki/Backus%E2%80%93Naur_Form#Variants>`_ notation. In particular, we'll use square brackets
+  (``[ item ]``) for optional items, ``*`` and ``+`` for repeated items (where ``+`` implies at least one).
+- The grammar will also use the following convention for convenience: non-terminal terms will be lowercase (and link to
+  their definition) while terminal keywords will be provided in "all caps". Note however that keywords are
+  :ref:`identifiers` and are thus case insensitive in practice. We will also define some early constructions using
+  regexps, which we'll indicate with ``re(<some regular expression>)``.
+- The grammar is provided for documentation purposes and leaves some minor details out.  For instance, the comma on the
+  last column definition in a ``CREATE TABLE`` statement is optional but supported if present even though the grammar in
+  this document suggests otherwise. Also, not everything accepted by the grammar is necessarily valid CQL.
+- References to keywords or pieces of CQL code in running text will be shown in a ``fixed-width font``.
+
+
+.. _identifiers:
+
+Identifiers and keywords
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+The CQL language uses *identifiers* (or *names*) to identify tables, columns and other objects. An identifier is a token
+matching the regular expression ``[a-zA-Z][a-zA-Z0-9_]*``.
+
+A number of such identifiers, like ``SELECT`` or ``WITH``, are *keywords*. They have a fixed meaning for the language
+and most are reserved. The list of those keywords can be found in :ref:`appendix-A`.
+
+Identifiers and (unquoted) keywords are case insensitive. Thus ``SELECT`` is the same as ``select`` or ``sElEcT``, and
+``myId`` is the same as ``myid`` or ``MYID``. A convention often used (in particular by the samples of this
+documentation) is to use upper case for keywords and lower case for other identifiers.
+
+There is a second kind of identifier, called *quoted identifiers*, defined by enclosing an arbitrary sequence of
+characters (non-empty) in double-quotes (``"``). Quoted identifiers are never keywords. Thus ``"select"`` is not a
+reserved keyword and can be used to refer to a column (note that doing so is not particularly advised), while ``select``
+would raise a parsing error. Also, contrary to unquoted identifiers and keywords, quoted identifiers are case
+sensitive (``"My Quoted Id"`` is *different* from ``"my quoted id"``). A fully lowercase quoted identifier that matches
+``[a-zA-Z][a-zA-Z0-9_]*`` is however *equivalent* to the unquoted identifier obtained by removing the double-quote (so
+``"myid"`` is equivalent to ``myid`` and to ``myId`` but different from ``"myId"``).  Inside a quoted identifier, the
+double-quote character can be repeated to escape it, so ``"foo "" bar"`` is a valid identifier.
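+
+For example, assuming a hypothetical table ``t`` (not defined elsewhere in this document) with a column declared as
+``"myId"``, the following illustrates how quoting affects identifier resolution::
+
+    SELECT "myId" FROM t;  -- matches the column declared as "myId"
+    SELECT myid   FROM t;  -- matches a column declared as myid, myId or "myid", but not "myId"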
+
+.. note:: *quoted identifiers* allow declaring columns with arbitrary names, and those can sometimes clash with
+   specific names used by the server. For instance, when using conditional updates, the server will respond with a
+   result-set containing a special result named ``"[applied]"``. If you've declared a column with such a name, this
+   could potentially confuse some tools and should be avoided. In general, unquoted identifiers should be preferred but
+   if you use quoted identifiers, it is strongly advised to avoid any name enclosed by square brackets (like
+   ``"[applied]"``) and any name that looks like a function call (like ``"f(x)"``).
+
+More formally, we have:
+
+.. productionlist::
+   identifier: `unquoted_identifier` | `quoted_identifier`
+   unquoted_identifier: re('[a-zA-Z][a-zA-Z0-9_]*')
+   quoted_identifier: '"' (any character where " can appear if doubled)+ '"'
+
+.. _constants:
+
+Constants
+^^^^^^^^^
+
+CQL defines the following kinds of *constants*:
+
+.. productionlist::
+   constant: `string` | `integer` | `float` | `boolean` | `uuid` | `blob` | NULL
+   string: '\'' (any character where ' can appear if doubled)+ '\''
+         : '$$' (any character other than '$$') '$$'
+   integer: re('-?[0-9]+')
+   float: re('-?[0-9]+(\.[0-9]*)?([eE][+-]?[0-9]+)?') | NAN | INFINITY
+   boolean: TRUE | FALSE
+   uuid: `hex`{8}-`hex`{4}-`hex`{4}-`hex`{4}-`hex`{12}
+   hex: re("[0-9a-fA-F]")
+   blob: '0' ('x' | 'X') `hex`+
+
+In other words:
+
+- A string constant is an arbitrary sequence of characters enclosed by single-quotes (``'``). A single-quote
+  can be included by repeating it, e.g. ``'It''s raining today'``. Those are not to be confused with quoted
+  :ref:`identifiers` that use double-quotes. Alternatively, a string can be defined by enclosing the arbitrary sequence
+  of characters by two dollar characters, in which case single-quotes can be used without escaping (``$$It's raining
+  today$$``). That latter form is often used when defining :ref:`user-defined functions <udfs>` to avoid having to
+  escape single-quote characters in the function body (as they are more likely to occur than ``$$``).
+- Integer, float and boolean constants are defined as expected. Note however that float allows the special ``NaN`` and
+  ``Infinity`` constants.
+- CQL supports UUID_ constants.
+- Blob content is provided in hexadecimal and prefixed by ``0x``.
+- The special ``NULL`` constant denotes the absence of value.
+
+For how these constants are typed, see the :ref:`data-types` section.
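+
+As an illustration, the following (purely hypothetical) ``INSERT`` uses a uuid constant, two string constants (one
+single-quoted, one dollar-quoted), a boolean, a float and a blob constant::
+
+    INSERT INTO sensors (id, name, label, enabled, factor, raw)
+    VALUES (123e4567-e89b-12d3-a456-426655440000, 'It''s the roof sensor', $$O'Brien's sensor$$, true, 1.5e3, 0x0A0B0C);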
+
+Terms
+^^^^^
+
+CQL has the notion of a *term*, which denotes the kind of values that CQL supports. Terms are defined by:
+
+.. productionlist::
+   term: `constant` | `literal` | `function_call` | `type_hint` | `bind_marker`
+   literal: `collection_literal` | `udt_literal` | `tuple_literal`
+   function_call: `identifier` '(' [ `term` (',' `term`)* ] ')'
+   type_hint: '(' `cql_type` ')' `term`
+   bind_marker: '?' | ':' `identifier`
+
+A term is thus one of:
+
+- A :ref:`constant <constants>`.
+- A literal for either :ref:`a collection <collections>`, :ref:`a user-defined type <udts>` or :ref:`a tuple <tuples>`
+  (see the linked sections for details).
+- A function call: see :ref:`the section on functions <functions>` for details on which :ref:`native function
+  <native-functions>` exists and how to define your own :ref:`user-defined ones <user-defined-functions>`.
+- A *type hint*: see the :ref:`related section <type-hints>` for details.
+- A bind marker, which denotes a variable to be bound at execution time. See the section on :ref:`prepared-statements`
+  for details. A bind marker can be either anonymous (``?``) or named (``:some_name``). The latter form provides a more
+  convenient way to refer to the variable for binding it and should generally be preferred.
+
+
+Comments
+^^^^^^^^
+
+A comment in CQL is a line beginning with either a double dash (``--``) or a double slash (``//``).
+
+Multi-line comments are also supported through enclosure within ``/*`` and ``*/`` (but nesting is not supported).
+
+::
+
+    -- This is a comment
+    // This is a comment too
+    /* This is
+       a multi-line comment */
+
+Statements
+^^^^^^^^^^
+
+CQL consists of statements that can be divided into the following categories:
+
+- :ref:`data-definition` statements, to define and change how the data is stored (keyspaces and tables).
+- :ref:`data-manipulation` statements, for selecting, inserting and deleting data.
+- :ref:`index-and-views` statements.
+- :ref:`roles-and-permissions` statements.
+- :ref:`udfs` statements.
+- :ref:`udts` statements.
+- :ref:`triggers` statements.
+
+All the statements are listed below and are described in the rest of this documentation (see links above):
+
+.. productionlist::
+   cql_statement: `statement` [ ';' ]
+   statement: `ddl_statement`
+            : | `dml_statement`
+            : | `index_or_view_statement`
+            : | `role_or_permission_statement`
+            : | `udf_statement`
+            : | `udt_statement`
+            : | `trigger_statement`
+   ddl_statement: `use_statement`
+                : | `create_keyspace_statement`
+                : | `alter_keyspace_statement`
+                : | `drop_keyspace_statement`
+                : | `create_table_statement`
+                : | `alter_table_statement`
+                : | `drop_table_statement`
+                : | `truncate_statement`
+   dml_statement: `select_statement`
+                : | `insert_statement`
+                : | `update_statement`
+                : | `delete_statement`
+                : | `batch_statement`
+   index_or_view_statement: `create_index_statement`
+                          : | `drop_index_statement`
+                          : | `create_materialized_view_statement`
+                          : | `drop_materialized_view_statement`
+   role_or_permission_statement: `create_role_statement`
+                               : | `alter_role_statement`
+                               : | `drop_role_statement`
+                               : | `grant_role_statement`
+                               : | `revoke_role_statement`
+                               : | `list_role_statement`
+                               : | `grant_permission_statement`
+                               : | `revoke_permission_statement`
+                               : | `list_permission_statement`
+                               : | `create_user_statement`
+                               : | `alter_user_statement`
+                               : | `drop_user_statement`
+                               : | `list_user_statement`
+   udf_statement: `create_function_statement`
+                : | `drop_function_statement`
+                : | `create_aggregate_statement`
+                : | `drop_aggregate_statement`
+   udt_statement: `create_type_statement`
+                : | `alter_type_statement`
+                : | `drop_type_statement`
+   trigger_statement: `create_trigger_statement`
+                    : | `drop_trigger_statement`
+
+.. _prepared-statements:
+
+Prepared Statements
+^^^^^^^^^^^^^^^^^^^
+
+CQL supports *prepared statements*. Prepared statements are an optimization that allows parsing a query only once and
+executing it multiple times with different concrete values.
+
+Any statement that uses at least one bind marker (see :token:`bind_marker`) will need to be *prepared*, after which the statement
+can be *executed* by providing concrete values for each of its markers. The exact details of how a statement is prepared
+and then executed depend on the CQL driver used; refer to your driver's documentation.
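+
+For instance, the following statements (using the ``users`` table that appears in the data manipulation samples) could
+be prepared, the first with a named marker and the second with anonymous ones::
+
+    SELECT name, occupation FROM users WHERE userid = :id;
+    UPDATE users SET occupation = ? WHERE userid = ?;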

http://git-wip-us.apache.org/repos/asf/cassandra/blob/54f7335c/doc/source/cql/dml.rst
----------------------------------------------------------------------
diff --git a/doc/source/cql/dml.rst b/doc/source/cql/dml.rst
new file mode 100644
index 0000000..4cf34eb
--- /dev/null
+++ b/doc/source/cql/dml.rst
@@ -0,0 +1,606 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+..
+..     http://www.apache.org/licenses/LICENSE-2.0
+..
+.. Unless required by applicable law or agreed to in writing, software
+.. distributed under the License is distributed on an "AS IS" BASIS,
+.. WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+.. See the License for the specific language governing permissions and
+.. limitations under the License.
+
+.. highlight:: sql
+
+.. _data-manipulation:
+
+Data Manipulation
+-----------------
+
+SELECT
+^^^^^^
+
+*Syntax:*
+
+::
+
+    <select-statement> ::= SELECT ( JSON )? <select-clause>
+                           FROM <tablename>
+                           ( WHERE <where-clause> )?
+                           ( ORDER BY <order-by> )?
+                           ( PER PARTITION LIMIT <integer> )?
+                           ( LIMIT <integer> )?
+                           ( ALLOW FILTERING )?
+
+    <select-clause> ::= DISTINCT? <selection-list>
+                      | COUNT '(' ( '*' | '1' ) ')' ( AS <identifier> )?
+
+    <selection-list> ::= <selector> ( AS <identifier> )? ( ',' <selector> ( AS <identifier> )? )*
+                       | '*'
+
+    <selector> ::= <identifier>
+                 | WRITETIME '(' <identifier> ')'
+                 | TTL '(' <identifier> ')'
+                 | CAST '(' <selector> AS <type> ')'
+                 | <function> '(' ( <selector> ( ',' <selector> )* )? ')'
+
+    <where-clause> ::= <relation> ( AND <relation> )*
+
+    <relation> ::= <identifier> <op> <term>
+                 | '(' <identifier> ( ',' <identifier> )* ')' <op> <term-tuple>
+                 | <identifier> IN '(' ( <term> ( ',' <term> )* )? ')'
+                 | '(' <identifier> ( ',' <identifier> )* ')' IN '(' ( <term-tuple> ( ',' <term-tuple> )* )? ')'
+                 | TOKEN '(' <identifier> ( ',' <identifier> )* ')' <op> <term>
+
+    <op> ::= '=' | '<' | '>' | '<=' | '>=' | CONTAINS | CONTAINS KEY
+    <order-by> ::= <ordering> ( ',' <ordering> )*
+    <ordering> ::= <identifier> ( ASC | DESC )?
+    <term-tuple> ::= '(' <term> ( ',' <term> )* ')'
+
+*Sample:*
+
+::
+
+    SELECT name, occupation FROM users WHERE userid IN (199, 200, 207);
+
+    SELECT JSON name, occupation FROM users WHERE userid = 199;
+
+    SELECT name AS user_name, occupation AS user_occupation FROM users;
+
+    SELECT time, value
+    FROM events
+    WHERE event_type = 'myEvent'
+      AND time > '2011-02-03'
+      AND time <= '2012-01-01'
+
+    SELECT COUNT (*) FROM users;
+
+    SELECT COUNT (*) AS user_count FROM users;
+
+The ``SELECT`` statement reads one or more columns for one or more rows
+in a table. It returns a result-set of rows, where each row contains the
+collection of columns corresponding to the query. If the ``JSON``
+keyword is used, the results for each row will contain only a single
+column named "json". See the section on
+```SELECT JSON`` <#selectJson>`__ for more details.
+
+``<select-clause>``
+~~~~~~~~~~~~~~~~~~~
+
+The ``<select-clause>`` determines which columns need to be queried and
+returned in the result-set. It consists of either the comma-separated
+list of ``<selector>``\ s or the wildcard character (``*``) to select all
+the columns defined for the table.
+
+A ``<selector>`` is either a column name to retrieve or a ``<function>``
+of one or more ``<term>``\ s. The functions allowed are the same as for
+``<term>`` and are described in the `function section <#functions>`__.
+In addition to these generic functions, the ``WRITETIME`` (resp.
+``TTL``) function allows selecting the timestamp of when the column was
+inserted (resp. the time to live (in seconds) for the column (or null if
+the column has no expiration set)) and the ```CAST`` <#castFun>`__
+function can be used to convert one data type to another.
+
+Any ``<selector>`` can be aliased using the ``AS`` keyword (see examples).
+Please note that the ``<where-clause>`` and ``<order-by>`` clauses should
+refer to the columns by their original names and not by their aliases.
+
+The ``COUNT`` keyword can be used with parentheses enclosing ``*``. If
+so, the query will return a single result: the number of rows matching
+the query. Note that ``COUNT(1)`` is supported as an alias.
+
+``<where-clause>``
+~~~~~~~~~~~~~~~~~~
+
+The ``<where-clause>`` specifies which rows must be queried. It is
+composed of relations on the columns that are part of the
+``PRIMARY KEY`` and/or have a `secondary index <#createIndexStmt>`__
+defined on them.
+
+Not all relations are allowed in a query. For instance, non-equal
+relations (where ``IN`` is considered as an equal relation) on a
+partition key are not supported (but see the use of the ``TOKEN`` method
+below to do non-equal queries on the partition key). Moreover, for a
+given partition key, the clustering columns induce an ordering of rows
+and relations on them are restricted to those that select a
+**contiguous** (for the ordering) set of rows. For instance,
+given
+
+::
+
+    CREATE TABLE posts (
+        userid text,
+        blog_title text,
+        posted_at timestamp,
+        entry_title text,
+        content text,
+        category int,
+        PRIMARY KEY (userid, blog_title, posted_at)
+    )
+
+The following query is allowed:
+
+::
+
+    SELECT entry_title, content FROM posts
+     WHERE userid = 'john doe'
+       AND blog_title = 'John''s Blog'
+       AND posted_at >= '2012-01-01' AND posted_at < '2012-01-31'
+
+But the following one is not, as it does not select a contiguous set of
+rows (and we suppose no secondary indexes are set):
+
+::
+
+    // Needs a blog_title to be set to select ranges of posted_at
+    SELECT entry_title, content FROM posts
+     WHERE userid = 'john doe'
+       AND posted_at >= '2012-01-01' AND posted_at < '2012-01-31'
+
+When specifying relations, the ``TOKEN`` function can be used on the
+``PARTITION KEY`` column to query. In that case, rows will be selected
+based on the token of their ``PARTITION_KEY`` rather than on the value.
+Note that the token of a key depends on the partitioner in use, and that
+in particular the RandomPartitioner won't yield a meaningful order. Also
+note that ordering partitioners always order token values by bytes (so
+even if the partition key is of type int, ``token(-1) > token(0)`` in
+particular). Example:
+
+::
+
+    SELECT * FROM posts WHERE token(userid) > token('tom') AND token(userid) < token('bob')
+
+Moreover, the ``IN`` relation is only allowed on the last column of the
+partition key and on the last column of the full primary key.
+
+It is also possible to "group" ``CLUSTERING COLUMNS`` together in a
+relation using the tuple notation. For instance:
+
+::
+
+    SELECT * FROM posts WHERE userid = 'john doe' AND (blog_title, posted_at) > ('John''s Blog', '2012-01-01')
+
+will request all rows that sort after the one having "John's Blog" as
+``blog_title`` and '2012-01-01' for ``posted_at`` in the clustering
+order. In particular, rows having a ``posted_at <= '2012-01-01'`` will be
+returned as long as their ``blog_title > 'John''s Blog'``, which
+wouldn't be the case for:
+
+::
+
+    SELECT * FROM posts WHERE userid = 'john doe' AND blog_title > 'John''s Blog' AND posted_at > '2012-01-01'
+
+The tuple notation may also be used for ``IN`` clauses on
+``CLUSTERING COLUMNS``:
+
+::
+
+    SELECT * FROM posts WHERE userid = 'john doe' AND (blog_title, posted_at) IN (('John''s Blog', '2012-01-01'), ('Extreme Chess', '2014-06-01'))
+
+The ``CONTAINS`` operator may only be used on collection columns (lists,
+sets, and maps). In the case of maps, ``CONTAINS`` applies to the map
+values. The ``CONTAINS KEY`` operator may only be used on map columns
+and applies to the map keys.
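+
+As a sketch (the table and indexes below are hypothetical, not part of the examples above), ``CONTAINS`` and
+``CONTAINS KEY`` could be used as follows::
+
+    CREATE TABLE user_profiles (
+        username text PRIMARY KEY,
+        hobbies set<text>,
+        settings map<text, text>
+    );
+    CREATE INDEX ON user_profiles (hobbies);
+    CREATE INDEX ON user_profiles (KEYS(settings));
+
+    SELECT username FROM user_profiles WHERE hobbies CONTAINS 'chess';
+    SELECT username FROM user_profiles WHERE settings CONTAINS KEY 'theme';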
+
+``<order-by>``
+~~~~~~~~~~~~~~
+
+The ``ORDER BY`` option allows selecting the order of the returned
+results. It takes as argument a list of column names along with the
+order for the column (``ASC`` for ascending and ``DESC`` for descending,
+omitting the order being equivalent to ``ASC``). Currently the possible
+orderings are limited (which depends on the table
+```CLUSTERING ORDER`` <#createTableOptions>`__ ):
+
+-  if the table has been defined without any specific
+   ``CLUSTERING ORDER``, then the allowed orderings are the order
+   induced by the clustering columns and the reverse of that one.
+-  otherwise, the orderings allowed are the order of the
+   ``CLUSTERING ORDER`` option and the reversed one.
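+
+For instance, with the ``posts`` table defined earlier (which has no explicit ``CLUSTERING ORDER``), the full reverse
+of the clustering order can be requested::
+
+    SELECT entry_title FROM posts WHERE userid = 'john doe' ORDER BY blog_title DESC, posted_at DESC;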
+
+``LIMIT`` and ``PER PARTITION LIMIT``
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The ``LIMIT`` option to a ``SELECT`` statement limits the number of rows
+returned by a query, while the ``PER PARTITION LIMIT`` option limits the
+number of rows returned for a given partition by the query. Note that
+both types of limit can be used in the same statement.
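+
+For instance, still using the ``posts`` table, the following query returns at most 2 rows per partition and at most
+100 rows overall::
+
+    SELECT * FROM posts PER PARTITION LIMIT 2 LIMIT 100;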
+
+``ALLOW FILTERING``
+~~~~~~~~~~~~~~~~~~~
+
+By default, CQL only allows select queries that don't involve
+"filtering" server side, i.e. queries where we know that all (live)
+records read will be returned (maybe partly) in the result set. The
+reasoning is that those "non-filtering" queries have predictable
+performance in the sense that they will execute in a time that is
+proportional to the amount of data **returned** by the query (which can
+be controlled through ``LIMIT``).
+
+The ``ALLOW FILTERING`` option explicitly allows (some) queries
+that require filtering. Please note that a query using
+``ALLOW FILTERING`` may thus have unpredictable performance (for the
+definition above), i.e. even a query that selects a handful of records
+**may** exhibit performance that depends on the total amount of data
+stored in the cluster.
+
+For instance, considering the following table holding user profiles with
+their year of birth (with a secondary index on it) and country of
+residence:
+
+::
+
+    CREATE TABLE users (
+        username text PRIMARY KEY,
+        firstname text,
+        lastname text,
+        birth_year int,
+        country text
+    )
+
+    CREATE INDEX ON users(birth_year);
+
+Then the following queries are valid:
+
+::
+
+    SELECT * FROM users;
+    SELECT firstname, lastname FROM users WHERE birth_year = 1981;
+
+because in both cases, Cassandra guarantees that the performance of these
+queries will be proportional to the amount of data returned. In
+particular, if no users are born in 1981, then the second query's
+performance will not depend on the number of user profiles stored in the
+database (not directly at least: due to secondary index implementation
+considerations, this query may still depend on the number of nodes in the
+cluster, which indirectly depends on the amount of data stored.
+Nevertheless, the number of nodes will always be many orders of
+magnitude lower than the number of user profiles stored). Of course, both
+queries may return very large result sets in practice, but the amount of
+data returned can always be controlled by adding a ``LIMIT``.
+
+However, the following query will be rejected:
+
+::
+
+    SELECT firstname, lastname FROM users WHERE birth_year = 1981 AND country = 'FR';
+
+because Cassandra cannot guarantee that it won't have to scan a large
+amount of data even if the result of the query is small. Typically, it
+will scan all the index entries for users born in 1981 even if only a
+handful are actually from France. However, if you "know what you are
+doing", you can force the execution of this query by using
+``ALLOW FILTERING``, and so the following query is valid:
+
+::
+
+    SELECT firstname, lastname FROM users WHERE birth_year = 1981 AND country = 'FR' ALLOW FILTERING;
+
+INSERT
+^^^^^^
+
+*Syntax:*
+
+::
+
+    <insert-statement> ::= INSERT INTO <tablename>
+                           ( ( <names-list> VALUES <values-list> )
+                           | ( JSON <string> ))
+                           ( IF NOT EXISTS )?
+                           ( USING <option> ( AND <option> )* )?
+
+    <names-list> ::= '(' <identifier> ( ',' <identifier> )* ')'
+
+    <values-list> ::= '(' <term-or-literal> ( ',' <term-or-literal> )* ')'
+
+    <term-or-literal> ::= <term>
+                        | <collection-literal>
+
+    <option> ::= TIMESTAMP <integer>
+               | TTL <integer>
+
+*Sample:*
+
+::
+
+    INSERT INTO NerdMovies (movie, director, main_actor, year)
+        VALUES ('Serenity', 'Joss Whedon', 'Nathan Fillion', 2005)
+    USING TTL 86400;
+
+    INSERT INTO NerdMovies JSON '{"movie": "Serenity", "director": "Joss Whedon", "year": 2005}'
+
+The ``INSERT`` statement writes one or more columns for a given row in
+a table. Note that since a row is identified by its ``PRIMARY KEY``,
+at least the columns composing it must be specified. The list of
+columns to insert to must be supplied when using the ``VALUES``
+syntax. When using the ``JSON`` syntax, they are optional. See the
+section on ```INSERT JSON`` <#insertJson>`__ for more details.
+
+Note that unlike in SQL, ``INSERT`` does not check the prior existence
+of the row by default: the row is created if none existed before, and
+updated otherwise. Furthermore, there is no means to know whether a
+creation or an update happened.
+
+It is however possible to use the ``IF NOT EXISTS`` condition to only
+insert if the row does not exist prior to the insertion. But please note
+that using ``IF NOT EXISTS`` will incur a non-negligible performance
+cost (internally, Paxos will be used) so this should be used sparingly.
+
+All updates for an ``INSERT`` are applied atomically and in isolation.
+
+Please refer to the ```UPDATE`` <#updateOptions>`__ section for
+information on the ``<option>`` available and to the
+`collections <#collections>`__ section for use of
+``<collection-literal>``. Also note that ``INSERT`` does not support
+counters, while ``UPDATE`` does.
+
+UPDATE
+^^^^^^
+
+*Syntax:*
+
+::
+
+    <update-statement> ::= UPDATE <tablename>
+                           ( USING <option> ( AND <option> )* )?
+                           SET <assignment> ( ',' <assignment> )*
+                           WHERE <where-clause>
+                           ( IF <condition> ( AND <condition> )* )?
+
+    <assignment> ::= <identifier> '=' <term>
+                   | <identifier> '=' <identifier> ('+' | '-') (<int-term> | <set-literal> | <map-literal>)
+                   | <identifier> '=' <identifier> '+' <list-literal>
+                   | <identifier> '[' <term> ']' '=' <term>
+                   | <identifier> '.' <field> '=' <term>
+
+    <condition> ::= <identifier> <op> <term>
+                  | <identifier> IN <in-values>
+                  | <identifier> '[' <term> ']' <op> <term>
+                  | <identifier> '[' <term> ']' IN <in-values>
+                  | <identifier> '.' <field> <op> <term>
+                  | <identifier> '.' <field> IN <in-values>
+
+    <op> ::= '<' | '<=' | '=' | '!=' | '>=' | '>'
+    <in-values> ::= ( <variable> | '(' ( <term> ( ',' <term> )* )? ')' )
+
+    <where-clause> ::= <relation> ( AND <relation> )*
+
+    <relation> ::= <identifier> '=' <term>
+                 | '(' <identifier> ( ',' <identifier> )* ')' '=' <term-tuple>
+                 | <identifier> IN '(' ( <term> ( ',' <term> )* )? ')'
+                 | <identifier> IN <variable>
+                 | '(' <identifier> ( ',' <identifier> )* ')' IN '(' ( <term-tuple> ( ',' <term-tuple> )* )? ')'
+                 | '(' <identifier> ( ',' <identifier> )* ')' IN <variable>
+
+    <option> ::= TIMESTAMP <integer>
+               | TTL <integer>
+
+*Sample:*
+
+::
+
+    UPDATE NerdMovies USING TTL 400
+       SET director   = 'Joss Whedon',
+           main_actor = 'Nathan Fillion',
+           year       = 2005
+     WHERE movie = 'Serenity';
+
+    UPDATE UserActions SET total = total + 2 WHERE user = B70DE1D0-9908-4AE3-BE34-5573E5B09F14 AND action = 'click';
+
+The ``UPDATE`` statement writes one or more columns for a given row in
+a table. The ``<where-clause>`` is used to select the row to update
+and must include all columns composing the ``PRIMARY KEY``. Other
+column values are specified through ``<assignment>`` after the
+``SET`` keyword.
+
+Note that unlike in SQL, ``UPDATE`` does not check the prior existence
+of the row by default (except through the use of ``<condition>``, see
+below): the row is created if none existed before, and updated
+otherwise. Furthermore, there are no means to know whether a creation or
+update occurred.
+
+It is however possible to use the conditions on some columns through
+``IF``, in which case the row will not be updated unless the conditions
+are met. But, please note that using ``IF`` conditions will incur a
+non-negligible performance cost (internally, Paxos will be used) so this
+should be used sparingly.
+
+In an ``UPDATE`` statement, all updates within the same partition key
+are applied atomically and in isolation.
+
+The ``c = c + 3`` form of ``<assignment>`` is used to
+increment/decrement counters. The identifier after the '=' sign **must**
+be the same as the one before the '=' sign (only increment/decrement
+is supported on counters, not the assignment of a specific value).
+
+The ``id = id + <collection-literal>`` and ``id[value1] = value2`` forms
+of ``<assignment>`` are for collections. Please refer to the `relevant
+section <#collections>`__ for more details.
+
+The ``id.field = <term>`` form of ``<assignment>`` is for setting the
+value of a single field of a non-frozen user-defined type.
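+
+As a sketch only (the ``user_profiles`` table and its columns are hypothetical), assuming a non-frozen ``set`` column
+``hobbies``, a ``map`` column ``settings`` and a non-frozen UDT column ``address`` with a ``city`` field, these forms
+look like::
+
+    UPDATE user_profiles
+       SET hobbies = hobbies + {'chess'},
+           settings['theme'] = 'dark',
+           address.city = 'Paris'
+     WHERE username = 'john doe';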
+
+``<options>``
+~~~~~~~~~~~~~
+
+The ``UPDATE`` and ``INSERT`` statements support the following options:
+
+-  ``TIMESTAMP``: sets the timestamp for the operation. If not
+   specified, the coordinator will use the current time (in
+   microseconds) at the start of statement execution as the timestamp.
+   This is usually a suitable default.
+-  ``TTL``: specifies an optional Time To Live (in seconds) for the
+   inserted values. If set, the inserted values are automatically
+   removed from the database after the specified time. Note that the TTL
+   concerns the inserted values, not the columns themselves. This means
+   that any subsequent update of the column will also reset the TTL (to
+   whatever TTL is specified in that update). By default, values never
+   expire. A TTL of 0 is equivalent to no TTL. If the table has a
+   default\_time\_to\_live, a TTL of 0 will remove the TTL for the
+   inserted or updated values.
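+
+For instance, reusing the ``users`` table from the ``BATCH`` sample below, both options could be combined as follows
+(the timestamp value is arbitrary)::
+
+    UPDATE users USING TIMESTAMP 1467331200000000 AND TTL 600
+       SET password = 's3cret'
+     WHERE userid = 'user2';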
+
+DELETE
+^^^^^^
+
+*Syntax:*
+
+::
+
+    <delete-statement> ::= DELETE ( <selection> ( ',' <selection> )* )?
+                           FROM <tablename>
+                           ( USING TIMESTAMP <integer> )?
+                           WHERE <where-clause>
+                           ( IF ( EXISTS | ( <condition> ( AND <condition> )* ) ) )?
+
+    <selection> ::= <identifier>
+                  | <identifier> '[' <term> ']'
+                  | <identifier> '.' <field>
+
+    <where-clause> ::= <relation> ( AND <relation> )*
+
+    <relation> ::= <identifier> <op> <term>
+                 | '(' <identifier> ( ',' <identifier> )* ')' <op> <term-tuple>
+                 | <identifier> IN '(' ( <term> ( ',' <term> )* )? ')'
+                 | <identifier> IN <variable>
+                 | '(' <identifier> ( ',' <identifier> )* ')' IN '(' ( <term-tuple> ( ',' <term-tuple> )* )? ')'
+                 | '(' <identifier> ( ',' <identifier> )* ')' IN <variable>
+
+    <op> ::= '=' | '<' | '>' | '<=' | '>='
+    <in-values> ::= ( <variable> | '(' ( <term> ( ',' <term> )* )? ')' )
+
+    <condition> ::= <identifier> ( <op> | '!=' ) <term>
+                  | <identifier> IN <in-values>
+                  | <identifier> '[' <term> ']' ( <op> | '!=' ) <term>
+                  | <identifier> '[' <term> ']' IN <in-values>
+                  | <identifier> '.' <field> ( <op> | '!=' ) <term>
+                  | <identifier> '.' <field> IN <in-values>
+
+*Sample:*
+
+::
+
+    DELETE FROM NerdMovies USING TIMESTAMP 1240003134 WHERE movie = 'Serenity';
+
+    DELETE phone FROM Users WHERE userid IN (C73DE1D3-AF08-40F3-B124-3FF3E5109F22, B70DE1D0-9908-4AE3-BE34-5573E5B09F14);
+
+The ``DELETE`` statement deletes columns and rows. If column names are
+provided directly after the ``DELETE`` keyword, only those columns are
+deleted from the row indicated by the ``<where-clause>``. The
+``id[value]`` syntax in ``<selection>`` is for non-frozen collections
+(please refer to the `collection section <#collections>`__ for more
+details). The ``id.field`` syntax is for the deletion of non-frozen
+user-defined types. Otherwise, whole rows are removed. The
+``<where-clause>`` specifies which rows are to be deleted. Multiple
+rows may be deleted with one statement by using an ``IN`` clause. A
+range of rows may be deleted using an inequality operator (such as
+``>=``).
+
+``DELETE`` supports the ``TIMESTAMP`` option with the same semantics as
+the ```UPDATE`` <#updateStmt>`__ statement.
+
+In a ``DELETE`` statement, all deletions within the same partition key
+are applied atomically and in isolation.
+
+A ``DELETE`` operation can be conditional through the use of an ``IF``
+clause, similar to ``UPDATE`` and ``INSERT`` statements. However, as
+with ``INSERT`` and ``UPDATE`` statements, this will incur a
+non-negligible performance cost (internally, Paxos will be used) and so
+should be used sparingly.
+
+BATCH
+^^^^^
+
+*Syntax:*
+
+::
+
+    <batch-statement> ::= BEGIN ( UNLOGGED | COUNTER )? BATCH
+                          ( USING <option> ( AND <option> )* )?
+                          <modification-statement> ( ';' <modification-statement> )*
+                          APPLY BATCH
+
+    <modification-statement> ::= <insert-statement>
+                               | <update-statement>
+                               | <delete-statement>
+
+    <option> ::= TIMESTAMP <integer>
+
+*Sample:*
+
+::
+
+    BEGIN BATCH
+        INSERT INTO users (userid, password, name) VALUES ('user2', 'ch@ngem3b', 'second user');
+        UPDATE users SET password = 'ps22dhds' WHERE userid = 'user3';
+        INSERT INTO users (userid, password) VALUES ('user4', 'ch@ngem3c');
+        DELETE name FROM users WHERE userid = 'user1';
+    APPLY BATCH;
+
+The ``BATCH`` statement groups multiple modification statements
+(insertions/updates and deletions) into a single statement. It serves
+several purposes:
+
+#. It saves network round-trips between the client and the server (and
+   sometimes between the server coordinator and the replicas) when
+   batching multiple updates.
+#. All updates in a ``BATCH`` belonging to a given partition key are
+   performed in isolation.
+#. By default, all operations in the batch are performed as ``LOGGED``,
+   to ensure all mutations eventually complete (or none will). See the
+   notes on ```UNLOGGED`` <#unloggedBatch>`__ for more details.
+
+Note that:
+
+-  ``BATCH`` statements may only contain ``UPDATE``, ``INSERT`` and
+   ``DELETE`` statements.
+-  Batches are *not* a full analogue for SQL transactions.
+-  If a timestamp is not specified for each operation, then all
+   operations will be applied with the same timestamp. Due to
+   Cassandra's conflict resolution procedure in the case of `timestamp
+   ties <http://wiki.apache.org/cassandra/FAQ#clocktie>`__, operations
+   may be applied in an order that is different from the order they are
+   listed in the ``BATCH`` statement. To force a particular operation
+   ordering, you must specify per-operation timestamps.
+
+``UNLOGGED``
+~~~~~~~~~~~~
+
+By default, Cassandra uses a batch log to ensure all operations in a
+batch eventually complete or none will (note however that operations are
+only isolated within a single partition).
+
+There is a performance penalty for batch atomicity when a batch spans
+multiple partitions. If you do not want to incur this penalty, you can
+tell Cassandra to skip the batchlog with the ``UNLOGGED`` option. If the
+``UNLOGGED`` option is used, a failed batch might leave the batch only
+partly applied.
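+
+For instance, a batch touching the ``users`` table from the sample above could skip the batchlog like this::
+
+    BEGIN UNLOGGED BATCH
+        INSERT INTO users (userid, password) VALUES ('user5', 'ch@ngem3d');
+        INSERT INTO users (userid, password) VALUES ('user6', 'ch@ngem3e');
+    APPLY BATCH;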
+
+``COUNTER``
+~~~~~~~~~~~
+
+Use the ``COUNTER`` option for batched counter updates. Unlike other
+updates in Cassandra, counter updates are not idempotent.
+
+``<option>``
+~~~~~~~~~~~~
+
+``BATCH`` supports the ``TIMESTAMP`` option, with semantics similar
+to the one described in the ```UPDATE`` <#updateOptions>`__ statement
+(the timestamp applies to all the statements inside the batch). However,
+if used, ``TIMESTAMP`` **must not** be used in the statements within the
+batch.

http://git-wip-us.apache.org/repos/asf/cassandra/blob/54f7335c/doc/source/cql/functions.rst
----------------------------------------------------------------------
diff --git a/doc/source/cql/functions.rst b/doc/source/cql/functions.rst
new file mode 100644
index 0000000..cf52ace
--- /dev/null
+++ b/doc/source/cql/functions.rst
@@ -0,0 +1,661 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+..
+..     http://www.apache.org/licenses/LICENSE-2.0
+..
+.. Unless required by applicable law or agreed to in writing, software
+.. distributed under the License is distributed on an "AS IS" BASIS,
+.. WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+.. See the License for the specific language governing permissions and
+.. limitations under the License.
+
+.. highlight:: sql
+
+.. _cql_functions:
+
+Functions
+---------
+
+CQL3 distinguishes between built-in functions (so called 'native
+functions') and `user-defined functions <#udfs>`__. CQL3 includes
+several native functions, described below:
+
+Scalar functions
+^^^^^^^^^^^^^^^^
+
+Native functions
+~~~~~~~~~~~~~~~~
+
+Cast
+````
+
+The ``cast`` function can be used to convert one native datatype to
+another.
+
+The following table describes the conversions supported by the ``cast``
+function. Cassandra will silently ignore any cast converting a datatype
+into its own datatype.
+
++-----------------+-------------------------------------------------------------------------------------------------------------------------+
+| from            | to                                                                                                                      |
++=================+=========================================================================================================================+
+| ``ascii``       | ``text``, ``varchar``                                                                                                   |
++-----------------+-------------------------------------------------------------------------------------------------------------------------+
+| ``bigint``      | ``tinyint``, ``smallint``, ``int``, ``float``, ``double``, ``decimal``, ``varint``, ``text``, ``varchar``               |
++-----------------+-------------------------------------------------------------------------------------------------------------------------+
+| ``boolean``     | ``text``, ``varchar``                                                                                                   |
++-----------------+-------------------------------------------------------------------------------------------------------------------------+
+| ``counter``     | ``tinyint``, ``smallint``, ``int``, ``bigint``, ``float``, ``double``, ``decimal``, ``varint``, ``text``, ``varchar``   |
++-----------------+-------------------------------------------------------------------------------------------------------------------------+
+| ``date``        | ``timestamp``                                                                                                           |
++-----------------+-------------------------------------------------------------------------------------------------------------------------+
+| ``decimal``     | ``tinyint``, ``smallint``, ``int``, ``bigint``, ``float``, ``double``, ``varint``, ``text``, ``varchar``                |
++-----------------+-------------------------------------------------------------------------------------------------------------------------+
+| ``double``      | ``tinyint``, ``smallint``, ``int``, ``bigint``, ``float``, ``decimal``, ``varint``, ``text``, ``varchar``               |
++-----------------+-------------------------------------------------------------------------------------------------------------------------+
+| ``float``       | ``tinyint``, ``smallint``, ``int``, ``bigint``, ``double``, ``decimal``, ``varint``, ``text``, ``varchar``              |
++-----------------+-------------------------------------------------------------------------------------------------------------------------+
+| ``inet``        | ``text``, ``varchar``                                                                                                   |
++-----------------+-------------------------------------------------------------------------------------------------------------------------+
+| ``int``         | ``tinyint``, ``smallint``, ``bigint``, ``float``, ``double``, ``decimal``, ``varint``, ``text``, ``varchar``            |
++-----------------+-------------------------------------------------------------------------------------------------------------------------+
+| ``smallint``    | ``tinyint``, ``int``, ``bigint``, ``float``, ``double``, ``decimal``, ``varint``, ``text``, ``varchar``                 |
++-----------------+-------------------------------------------------------------------------------------------------------------------------+
+| ``time``        | ``text``, ``varchar``                                                                                                   |
++-----------------+-------------------------------------------------------------------------------------------------------------------------+
+| ``timestamp``   | ``date``, ``text``, ``varchar``                                                                                         |
++-----------------+-------------------------------------------------------------------------------------------------------------------------+
+| ``timeuuid``    | ``timestamp``, ``date``, ``text``, ``varchar``                                                                          |
++-----------------+-------------------------------------------------------------------------------------------------------------------------+
+| ``tinyint``     | ``tinyint``, ``smallint``, ``int``, ``bigint``, ``float``, ``double``, ``decimal``, ``varint``, ``text``, ``varchar``   |
++-----------------+-------------------------------------------------------------------------------------------------------------------------+
+| ``uuid``        | ``text``, ``varchar``                                                                                                   |
++-----------------+-------------------------------------------------------------------------------------------------------------------------+
+| ``varint``      | ``tinyint``, ``smallint``, ``int``, ``bigint``, ``float``, ``double``, ``decimal``, ``text``, ``varchar``               |
++-----------------+-------------------------------------------------------------------------------------------------------------------------+
+
+The conversions rely strictly on Java's semantics. For example, the
+double value 1 will be converted to the text value '1.0'.
+
+::
+
+    SELECT avg(cast(count as double)) FROM myTable
+
+Token
+`````
+
+The ``token`` function computes the token for a given partition
+key. The exact signature of the token function depends on the table
+concerned and on the partitioner used by the cluster.
+
+The type of the arguments of the ``token`` function depends on the type of the
+partition key columns. The return type depends on the partitioner in use:
+
+-  For Murmur3Partitioner, the return type is ``bigint``.
+-  For RandomPartitioner, the return type is ``varint``.
+-  For ByteOrderedPartitioner, the return type is ``blob``.
+
+For instance, in a cluster using the default Murmur3Partitioner, if a
+table is defined by
+
+::
+
+    CREATE TABLE users (
+        userid text PRIMARY KEY,
+        username text,
+        ...
+    )
+
+then the ``token`` function will take a single argument of type ``text``
+(in that case, the partition key is ``userid`` (there are no clustering
+columns so the partition key is the same as the primary key)), and the
+return type will be ``bigint``.
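+
+For instance, with that ``users`` table, the token of each partition can be inspected with::
+
+    SELECT userid, token(userid) FROM users;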
+
+Uuid
+````
+
+The ``uuid`` function takes no parameters and generates a random type 4
+uuid suitable for use in INSERT or SET statements.
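+
+For instance, assuming a hypothetical ``events`` table whose ``id`` column is of type ``uuid``::
+
+    INSERT INTO events (id, payload) VALUES (uuid(), 'some payload');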
+
+Timeuuid functions
+``````````````````
+
+``now``
+#######
+
+The ``now`` function takes no arguments and generates, on the
+coordinator node, a new unique timeuuid (at the time where the statement
+using it is executed). Note that this method is useful for insertion but
+is largely non-sensical in ``WHERE`` clauses. For instance, a query of
+the form
+
+::
+
+    SELECT * FROM myTable WHERE t = now()
+
+will never return any result by design, since the value returned by
+``now()`` is guaranteed to be unique.
+
+``minTimeuuid`` and ``maxTimeuuid``
+###################################
+
+The ``minTimeuuid`` (resp. ``maxTimeuuid``) function takes a
+``timestamp`` value ``t`` (which can be `either a timestamp or a date
+string <#usingtimestamps>`__ ) and returns a *fake* ``timeuuid``
+corresponding to the *smallest* (resp. *biggest*) possible ``timeuuid``
+having timestamp ``t``. So for instance:
+
+::
+
+    SELECT * FROM myTable WHERE t > maxTimeuuid('2013-01-01 00:05+0000') AND t < minTimeuuid('2013-02-02 10:00+0000')
+
+will select all rows where the ``timeuuid`` column ``t`` is strictly
+older than '2013-01-01 00:05+0000' but strictly younger than '2013-02-02
+10:00+0000'. Please note that
+``t >= maxTimeuuid('2013-01-01 00:05+0000')`` would still *not* select a
+``timeuuid`` generated exactly at '2013-01-01 00:05+0000' and is
+essentially equivalent to ``t > maxTimeuuid('2013-01-01 00:05+0000')``.
+
+*Warning*: We called the values generated by ``minTimeuuid`` and
+``maxTimeuuid`` *fake* UUIDs because they do not respect the Time-Based
+UUID generation process specified by `RFC
+4122 <http://www.ietf.org/rfc/rfc4122.txt>`__. In particular, the values
+returned by these 2 methods will not be unique. This means you should
+only use those methods for querying (as in the example above). Inserting
+the result of those methods is almost certainly *a bad idea*.
+
+Time conversion functions
+`````````````````````````
+
+A number of functions are provided to "convert" a ``timeuuid``, a
+``timestamp`` or a ``date`` into another ``native`` type.
+
++-----------------------+-----------------+-------------------------------------------------------------------+
+| function name         | input type      | description                                                       |
++=======================+=================+===================================================================+
+| ``toDate``            | ``timeuuid``    | Converts the ``timeuuid`` argument into a ``date`` type           |
++-----------------------+-----------------+-------------------------------------------------------------------+
+| ``toDate``            | ``timestamp``   | Converts the ``timestamp`` argument into a ``date`` type          |
++-----------------------+-----------------+-------------------------------------------------------------------+
+| ``toTimestamp``       | ``timeuuid``    | Converts the ``timeuuid`` argument into a ``timestamp`` type      |
++-----------------------+-----------------+-------------------------------------------------------------------+
+| ``toTimestamp``       | ``date``        | Converts the ``date`` argument into a ``timestamp`` type          |
++-----------------------+-----------------+-------------------------------------------------------------------+
+| ``toUnixTimestamp``   | ``timeuuid``    | Converts the ``timeuuid`` argument into a ``bigInt`` raw value    |
++-----------------------+-----------------+-------------------------------------------------------------------+
+| ``toUnixTimestamp``   | ``timestamp``   | Converts the ``timestamp`` argument into a ``bigInt`` raw value   |
++-----------------------+-----------------+-------------------------------------------------------------------+
+| ``toUnixTimestamp``   | ``date``        | Converts the ``date`` argument into a ``bigInt`` raw value        |
++-----------------------+-----------------+-------------------------------------------------------------------+
+| ``dateOf``            | ``timeuuid``    | Similar to ``toTimestamp(timeuuid)`` (DEPRECATED)                 |
++-----------------------+-----------------+-------------------------------------------------------------------+
+| ``unixTimestampOf``   | ``timeuuid``    | Similar to ``toUnixTimestamp(timeuuid)`` (DEPRECATED)             |
++-----------------------+-----------------+-------------------------------------------------------------------+
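+
+For instance, assuming a hypothetical ``events`` table with a ``timeuuid`` column ``created_at``, these functions
+could be combined in a single query::
+
+    SELECT toDate(created_at) AS day,
+           toTimestamp(created_at) AS ts,
+           toUnixTimestamp(created_at) AS epoch_ms
+      FROM events;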
+
+Blob conversion functions
+`````````````````````````
+
+A number of functions are provided to "convert" the native types into
+binary data (``blob``). For every ``<native-type>`` ``type`` supported
+by CQL3 (a notable exception is ``blob``, for obvious reasons), the
+function ``typeAsBlob`` takes an argument of type ``type`` and returns it
+as a ``blob``. Conversely, the function ``blobAsType`` takes a ``blob``
+argument and converts it back to a value of type ``type``. So for
+instance, ``bigintAsBlob(3)`` is ``0x0000000000000003`` and
+``blobAsBigint(0x0000000000000003)`` is ``3``.
+
+User-defined functions
+~~~~~~~~~~~~~~~~~~~~~~
+
+User-defined functions allow execution of user-provided code in
+Cassandra. By default, Cassandra supports defining functions in *Java*
+and *JavaScript*. Support for other JSR 223 compliant scripting
+languages (such as Python, Ruby, and Scala) can be added by adding a JAR
+to the classpath.
+
+UDFs are part of the Cassandra schema. As such, they are automatically
+propagated to all nodes in the cluster.
+
+UDFs can be *overloaded* - i.e. multiple UDFs with different argument
+types but the same function name. Example:
+
+::
+
+    CREATE FUNCTION sample ( arg int ) ...;
+    CREATE FUNCTION sample ( arg text ) ...;
+
+User-defined functions are susceptible to all of the normal problems
+with the chosen programming language. Accordingly, implementations
+should be safe against null pointer exceptions, illegal arguments, or
+any other potential source of exceptions. An exception during function
+execution will result in the entire statement failing.
+
+It is valid to use *complex* types like collections, tuple types and
+user-defined types as argument and return types. Tuple types and
+user-defined types are handled by the conversion functions of the
+DataStax Java Driver. Please see the documentation of the Java Driver
+for details on handling tuple types and user-defined types.
+
+Arguments for functions can be literals or terms. Prepared statement
+placeholders can be used, too.
+
+Note that you can use the dollar-quoted string syntax (``$$ ... $$``) to enclose the UDF
+source code. For example:
+
+::
+
+    CREATE FUNCTION some_function ( arg int )
+        RETURNS NULL ON NULL INPUT
+        RETURNS int
+        LANGUAGE java
+        AS $$ return arg; $$;
+
+    SELECT some_function(column) FROM atable ...;
+    UPDATE atable SET col = some_function(?) ...;
+
+::
+
+    CREATE TYPE custom_type (txt text, i int);
+    CREATE FUNCTION fct_using_udt ( udtarg frozen<custom_type> )
+        RETURNS NULL ON NULL INPUT
+        RETURNS text
+        LANGUAGE java
+        AS $$ return udtarg.getString("txt"); $$;
+
+User-defined functions can be used in ```SELECT`` <#selectStmt>`__,
+```INSERT`` <#insertStmt>`__ and ```UPDATE`` <#updateStmt>`__
+statements.
+
+The implicitly available ``udfContext`` field (or binding for script
+UDFs) provides the necessary functionality to create new UDT and tuple
+values.
+
+::
+
+    CREATE TYPE custom_type (txt text, i int);
+    CREATE FUNCTION fct_using_udt ( somearg int )
+        RETURNS NULL ON NULL INPUT
+        RETURNS custom_type
+        LANGUAGE java
+        AS $$
+            UDTValue udt = udfContext.newReturnUDTValue();
+            udt.setString("txt", "some string");
+            udt.setInt("i", 42);
+            return udt;
+        $$;
+
+The definition of the ``UDFContext`` interface can be found in the
+Apache Cassandra source code for
+``org.apache.cassandra.cql3.functions.UDFContext``.
+
+::
+
+    public interface UDFContext
+    {
+        UDTValue newArgUDTValue(String argName);
+        UDTValue newArgUDTValue(int argNum);
+        UDTValue newReturnUDTValue();
+        UDTValue newUDTValue(String udtName);
+        TupleValue newArgTupleValue(String argName);
+        TupleValue newArgTupleValue(int argNum);
+        TupleValue newReturnTupleValue();
+        TupleValue newTupleValue(String cqlDefinition);
+    }
+
+Java UDFs already have some imports for common interfaces and classes
+defined. These imports are listed below. Please note that these
+convenience imports are not available for script UDFs.
+
+| bc(sample).
+| import java.nio.ByteBuffer;
+| import java.util.List;
+| import java.util.Map;
+| import java.util.Set;
+| import org.apache.cassandra.cql3.functions.UDFContext;
+| import com.datastax.driver.core.TypeCodec;
+| import com.datastax.driver.core.TupleValue;
+| import com.datastax.driver.core.UDTValue;
+
+See ```CREATE FUNCTION`` <#createFunctionStmt>`__ and
+```DROP FUNCTION`` <#dropFunctionStmt>`__.
+
+CREATE FUNCTION
+```````````````
+
+*Syntax:*
+
+::
+
+    <create-function-statement> ::= CREATE ( OR REPLACE )?
+                                    FUNCTION ( IF NOT EXISTS )?
+                                    ( <keyspace> '.' )? <function-name>
+                                    '(' <arg-name> <arg-type> ( ',' <arg-name> <arg-type> )* ')'
+                                    ( CALLED | RETURNS NULL ) ON NULL INPUT
+                                    RETURNS <type>
+                                    LANGUAGE <language>
+                                    AS <body>
+
+*Sample:*
+
+::
+
+    CREATE OR REPLACE FUNCTION somefunction
+        ( somearg int, anotherarg text, complexarg frozen<someUDT>, listarg list<bigint> )
+        RETURNS NULL ON NULL INPUT
+        RETURNS text
+        LANGUAGE java
+        AS $$
+            // some Java code
+        $$;
+    CREATE FUNCTION akeyspace.fname IF NOT EXISTS
+        ( someArg int )
+        CALLED ON NULL INPUT
+        RETURNS text
+        LANGUAGE java
+        AS $$
+            // some Java code
+        $$;
+
+``CREATE FUNCTION`` creates or replaces a user-defined function.
+
+Function Signature
+##################
+
+Signatures are used to distinguish individual functions. The signature
+consists of:
+
+#. The fully qualified function name - i.e *keyspace* plus
+   *function-name*
+#. The concatenated list of all argument types
+
+Note that keyspace names, function names and argument types are subject
+to the default naming conventions and case-sensitivity rules.
+
+``CREATE FUNCTION`` with the optional ``OR REPLACE`` keywords either
+creates a function or replaces an existing one with the same signature.
+A ``CREATE FUNCTION`` without ``OR REPLACE`` fails if a function with
+the same signature already exists.
+
+Behavior on invocation with ``null`` values must be defined for each
+function. There are two options:
+
+#. ``RETURNS NULL ON NULL INPUT`` declares that the function will always
+   return ``null`` if any of the input arguments is ``null``.
+#. ``CALLED ON NULL INPUT`` declares that the function will always be
+   executed.
+
+If the optional ``IF NOT EXISTS`` keywords are used, the function will
+only be created if another function with the same signature does not
+exist.
+
+``OR REPLACE`` and ``IF NOT EXISTS`` cannot be used together.
+
+Functions belong to a keyspace. If no keyspace is specified in
+``<function-name>``, the current keyspace is used (i.e. the keyspace
+specified using the ```USE`` <#useStmt>`__ statement). It is not
+possible to create a user-defined function in one of the system
+keyspaces.
+
+See the section on `user-defined functions <#udfs>`__ for more
+information.
+
+DROP FUNCTION
+`````````````
+
+*Syntax:*
+
+::
+
+    <drop-function-statement> ::= DROP FUNCTION ( IF EXISTS )?
+                                  ( <keyspace> '.' )? <function-name>
+                                  ( '(' <arg-type> ( ',' <arg-type> )* ')' )?
+
+*Sample:*
+
+::
+
+    DROP FUNCTION myfunction;
+    DROP FUNCTION mykeyspace.afunction;
+    DROP FUNCTION afunction ( int );
+    DROP FUNCTION afunction ( text );
+
+The ``DROP FUNCTION`` statement removes a function created using
+``CREATE FUNCTION``. You must specify the argument types
+(`signature <#functionSignature>`__ ) of the function to drop if there
+are multiple functions with the same name but a different signature
+(overloaded functions).
+
+``DROP FUNCTION`` with the optional ``IF EXISTS`` keywords drops a
+function if it exists.
+
+Aggregate functions
+^^^^^^^^^^^^^^^^^^^
+
+Aggregate functions work on a set of rows. They receive values for
+each row and return one value for the whole set.
+
+If ``normal`` columns, ``scalar functions``, ``UDT`` fields,
+``writetime`` or ``ttl`` are selected together with aggregate
+functions, the values returned for them will be the ones of the first
+row matching the query.
+
+CQL3 distinguishes between built-in aggregates (so called 'native
+aggregates') and `user-defined aggregates <#udas>`__. CQL3 includes
+several native aggregates, described below:
+
+Native aggregates
+~~~~~~~~~~~~~~~~~
+
+Count
+`````
+
+The ``count`` function can be used to count the rows returned by a
+query. Example:
+
+::
+
+    SELECT COUNT (*) FROM plays;
+    SELECT COUNT (1) FROM plays;
+
+It can also be used to count the non-null values of a given column.
+Example:
+
+::
+
+    SELECT COUNT (scores) FROM plays;
+
+Max and Min
+```````````
+
+The ``max`` and ``min`` functions can be used to compute the maximum and
+the minimum value returned by a query for a given column.
+
+::
+
+    SELECT MIN (players), MAX (players) FROM plays WHERE game = 'quake';
+
+Sum
+```
+
+The ``sum`` function can be used to sum up all the values returned by a
+query for a given column.
+
+::
+
+    SELECT SUM (players) FROM plays;
+
+Avg
+```
+
+The ``avg`` function can be used to compute the average of all the
+values returned by a query for a given column.
+
+::
+
+    SELECT AVG (players) FROM plays;
+
+User-Defined Aggregates
+~~~~~~~~~~~~~~~~~~~~~~~
+
+User-defined aggregates allow creation of custom aggregate functions
+using `UDFs <#udfs>`__. Common examples of aggregate functions are
+*count*, *min*, and *max*.
+
+Each aggregate requires an *initial state* (``INITCOND``, which defaults
+to ``null``) of type ``STYPE``. The first argument of the state function
+must have type ``STYPE``. The remaining arguments of the state function
+must match the types of the user-defined aggregate arguments. The state
+function is called once for each row, and the value returned by the
+state function becomes the new state. After all rows are processed, the
+optional ``FINALFUNC`` is executed with the last state value as its
+argument.
+
+``STYPE`` is mandatory in order to be able to distinguish possibly
+overloaded versions of the state and/or final function (since the
+overload can appear after creation of the aggregate).
+
+User-defined aggregates can be used in ```SELECT`` <#selectStmt>`__
+statements.
+
+A complete working example for user-defined aggregates (assuming that a
+keyspace has been selected using the ```USE`` <#useStmt>`__ statement):
+
+::
+
+    CREATE OR REPLACE FUNCTION averageState ( state tuple<int,bigint>, val int )
+        CALLED ON NULL INPUT
+        RETURNS tuple<int,bigint>
+        LANGUAGE java
+        AS '
+            if (val != null) {
+                state.setInt(0, state.getInt(0)+1);
+                state.setLong(1, state.getLong(1)+val.intValue());
+            }
+            return state;
+        ';
+
+    CREATE OR REPLACE FUNCTION averageFinal ( state tuple<int,bigint> )
+        CALLED ON NULL INPUT
+        RETURNS double
+        LANGUAGE java
+        AS '
+            double r = 0;
+            if (state.getInt(0) == 0) return null;
+            r = state.getLong(1);
+            r /= state.getInt(0);
+            return Double.valueOf(r);
+        ';
+
+    CREATE OR REPLACE AGGREGATE average ( int )
+        SFUNC averageState
+        STYPE tuple<int,bigint>
+        FINALFUNC averageFinal
+        INITCOND (0, 0);
+
+    CREATE TABLE atable (
+        pk int PRIMARY KEY,
+        val int);
+    INSERT INTO atable (pk, val) VALUES (1,1);
+    INSERT INTO atable (pk, val) VALUES (2,2);
+    INSERT INTO atable (pk, val) VALUES (3,3);
+    INSERT INTO atable (pk, val) VALUES (4,4);
+    SELECT average(val) FROM atable;
+
+See ```CREATE AGGREGATE`` <#createAggregateStmt>`__ and
+```DROP AGGREGATE`` <#dropAggregateStmt>`__.
+
+CREATE AGGREGATE
+````````````````
+
+*Syntax:*
+
+::
+
+    <create-aggregate-stmt> ::= CREATE ( OR REPLACE )?
+                                AGGREGATE ( IF NOT EXISTS )?
+                                ( <keyspace> '.' )? <aggregate-name>
+                                '(' <arg-type> ( ',' <arg-type> )* ')'
+                                SFUNC <state-functionname>
+                                STYPE <state-type>
+                                ( FINALFUNC <final-functionname> )?
+                                ( INITCOND <init-cond> )?
+
+*Sample:*
+
+::
+
+    CREATE AGGREGATE myaggregate ( val text )
+        SFUNC myaggregate_state
+        STYPE text
+        FINALFUNC myaggregate_final
+        INITCOND 'foo';
+
+See the section on `user-defined aggregates <#udas>`__ for a complete
+example.
+
+``CREATE AGGREGATE`` creates or replaces a user-defined aggregate.
+
+``CREATE AGGREGATE`` with the optional ``OR REPLACE`` keywords either
+creates an aggregate or replaces an existing one with the same
+signature. A ``CREATE AGGREGATE`` without ``OR REPLACE`` fails if an
+aggregate with the same signature already exists.
+
+``CREATE AGGREGATE`` with the optional ``IF NOT EXISTS`` keywords
+creates an aggregate only if it does not already exist.
+
+``OR REPLACE`` and ``IF NOT EXISTS`` cannot be used together.
+
+Aggregates belong to a keyspace. If no keyspace is specified in
+``<aggregate-name>``, the current keyspace is used (i.e. the keyspace
+specified using the ```USE`` <#useStmt>`__ statement). It is not
+possible to create a user-defined aggregate in one of the system
+keyspaces.
+
+Signatures for user-defined aggregates follow the `same
+rules <#functionSignature>`__ as for user-defined functions.
+
+``STYPE`` defines the type of the state value and must be specified.
+
+The optional ``INITCOND`` defines the initial state value for the
+aggregate. It defaults to ``null``. A non-\ ``null`` ``INITCOND`` must
+be specified for state functions that are declared with
+``RETURNS NULL ON NULL INPUT``.
+
+``SFUNC`` references an existing function to be used as the state
+modifying function. The type of the first argument of the state function
+must match ``STYPE``. The remaining argument types of the state function
+must match the argument types of the aggregate function. State is not
+updated for state functions declared with ``RETURNS NULL ON NULL INPUT``
+and called with ``null``.
+
+The optional ``FINALFUNC`` is called just before the aggregate result is
+returned. It must take only one argument with type ``STYPE``. The return
+type of the ``FINALFUNC`` may be a different type. A final function
+declared with ``RETURNS NULL ON NULL INPUT`` means that the aggregate's
+return value will be ``null`` if the last state is ``null``.
+
+If no ``FINALFUNC`` is defined, the overall return type of the aggregate
+function is ``STYPE``. If a ``FINALFUNC`` is defined, it is the return
+type of that function.
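+
+As an illustration (the ``totalState`` function and ``total`` aggregate below
+are only hypothetical, not part of the samples above), an aggregate declared
+without a ``FINALFUNC`` simply returns its last state, so its return type is
+``STYPE``::
+
+    CREATE OR REPLACE FUNCTION totalState ( state bigint, val int )
+        CALLED ON NULL INPUT
+        RETURNS bigint
+        LANGUAGE java
+        AS 'return Long.valueOf(state.longValue() + (val != null ? val.intValue() : 0));';
+
+    -- no FINALFUNC: the aggregate returns the state itself, i.e. a bigint
+    CREATE OR REPLACE AGGREGATE total ( int )
+        SFUNC totalState
+        STYPE bigint
+        INITCOND 0;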
+
+See the section on `user-defined aggregates <#udas>`__ for more
+information.
+
+DROP AGGREGATE
+``````````````
+
+*Syntax:*
+
+::
+
+    <drop-aggregate-stmt> ::= DROP AGGREGATE ( IF EXISTS )?
+                              ( <keyspace> '.' )? <aggregate-name>
+                              ( '(' <arg-type> ( ',' <arg-type> )* ')' )?
+
+*Sample:*
+
+::
+
+    DROP AGGREGATE myAggregate;
+    DROP AGGREGATE myKeyspace.anAggregate;
+    DROP AGGREGATE someAggregate ( int );
+    DROP AGGREGATE someAggregate ( text );
+
+The ``DROP AGGREGATE`` statement removes an aggregate created using
+``CREATE AGGREGATE``. You must specify the argument types of the
+aggregate to drop if there are multiple aggregates with the same name
+but a different signature (overloaded aggregates).
+
+``DROP AGGREGATE`` with the optional ``IF EXISTS`` keywords drops an
+aggregate if it exists, and does nothing if an aggregate with that
+signature does not exist.
+
+Signatures for user-defined aggregates follow the `same
+rules <#functionSignature>`__ as for user-defined functions.

http://git-wip-us.apache.org/repos/asf/cassandra/blob/54f7335c/doc/source/cql/index.rst
----------------------------------------------------------------------
diff --git a/doc/source/cql/index.rst b/doc/source/cql/index.rst
new file mode 100644
index 0000000..c736beb
--- /dev/null
+++ b/doc/source/cql/index.rst
@@ -0,0 +1,47 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+..
+..     http://www.apache.org/licenses/LICENSE-2.0
+..
+.. Unless required by applicable law or agreed to in writing, software
+.. distributed under the License is distributed on an "AS IS" BASIS,
+.. WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+.. See the License for the specific language governing permissions and
+.. limitations under the License.
+
+.. _UUID: https://en.wikipedia.org/wiki/Universally_unique_identifier
+
+The Cassandra Query Language (CQL)
+==================================
+
+This document describes the Cassandra Query Language (CQL) [#]_. Note that this document describes the latest version
+of the language. However, the `changes <#changes>`_ section provides the differences between the versions of CQL.
+
+CQL offers a model close to SQL in the sense that data is put in *tables* containing *rows* of *columns*. For
+that reason, when used in this document, these terms (tables, rows and columns) have the same definition as they have
+in SQL. But please note that, as such, they do **not** refer to the concepts of rows and columns found in the deprecated
+thrift API (and earlier versions 1 and 2 of CQL).
+
+.. toctree::
+   :maxdepth: 2
+
+   definitions
+   types
+   ddl
+   dml
+   indexes
+   mvs
+   security
+   functions
+   json
+   triggers
+   appendices
+   changes
+
+.. [#] Technically, this document describes CQL version 3, which is not backward compatible with CQL versions 1 and 2
+   (which have been deprecated and removed) and differs from them in numerous ways.

http://git-wip-us.apache.org/repos/asf/cassandra/blob/54f7335c/doc/source/cql/indexes.rst
----------------------------------------------------------------------
diff --git a/doc/source/cql/indexes.rst b/doc/source/cql/indexes.rst
new file mode 100644
index 0000000..5791022
--- /dev/null
+++ b/doc/source/cql/indexes.rst
@@ -0,0 +1,84 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+..
+..     http://www.apache.org/licenses/LICENSE-2.0
+..
+.. Unless required by applicable law or agreed to in writing, software
+.. distributed under the License is distributed on an "AS IS" BASIS,
+.. WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+.. See the License for the specific language governing permissions and
+.. limitations under the License.
+
+.. highlight:: sql
+
+Indexes
+-------
+
+CREATE INDEX
+^^^^^^^^^^^^
+
+*Syntax:*
+
+::
+
+    <create-index-stmt> ::= CREATE ( CUSTOM )? INDEX ( IF NOT EXISTS )? ( <indexname> )?
+                                ON <tablename> '(' <index-identifier> ')'
+                                ( USING <string> ( WITH OPTIONS = <map-literal> )? )?
+
+    <index-identifier> ::= <identifier>
+                         | keys( <identifier> )
+
+*Sample:*
+
+::
+
+    CREATE INDEX userIndex ON NerdMovies (user);
+    CREATE INDEX ON Mutants (abilityId);
+    CREATE INDEX ON users (keys(favs));
+    CREATE CUSTOM INDEX ON users (email) USING 'path.to.the.IndexClass';
+    CREATE CUSTOM INDEX ON users (email) USING 'path.to.the.IndexClass'
+        WITH OPTIONS = {'storage': '/mnt/ssd/indexes/'};
+
+The ``CREATE INDEX`` statement is used to create a new (automatic)
+secondary index for a given (existing) column in a given table. A name
+for the index itself can be specified before the ``ON`` keyword, if
+desired. If data already exists for the column, it will be indexed
+asynchronously. After the index is created, new data for the column is
+indexed automatically at insertion time.
+
+Attempting to create an already existing index will return an error
+unless the ``IF NOT EXISTS`` option is used. If it is used, the
+statement will be a no-op if the index already exists.
+
+Indexes on Map Keys
+~~~~~~~~~~~~~~~~~~~
+
+When creating an index on a `map column <#map>`__, you may index either
+the keys or the values. If the column identifier is placed within the
+``keys()`` function, the index will be on the map keys, allowing you to
+use ``CONTAINS KEY`` in ``WHERE`` clauses. Otherwise, the index will be
+on the map values.
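+
+For example (a sketch reusing the hypothetical ``users`` table with a
+``favs`` map column from the sample above)::
+
+    -- index the map keys: enables CONTAINS KEY in WHERE clauses
+    CREATE INDEX ON users (keys(favs));
+    SELECT * FROM users WHERE favs CONTAINS KEY 'movies';
+
+    -- alternatively, index the map values: enables CONTAINS in WHERE clauses
+    CREATE INDEX ON users (favs);
+    SELECT * FROM users WHERE favs CONTAINS 'Star Wars';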
+
+DROP INDEX
+^^^^^^^^^^
+
+*Syntax:*
+
+::
+
+    <drop-index-stmt> ::= DROP INDEX ( IF EXISTS )? ( <keyspace> '.' )? <index-name>
+
+*Sample:*
+
+::
+
+    DROP INDEX userIndex;
+
+    DROP INDEX userkeyspace.address_index;
+
+The ``DROP INDEX`` statement is used to drop an existing secondary
+index. The argument of the statement is the index name, which may
+optionally specify the keyspace of the index.
+
+If the index does not exist, the statement will return an error, unless
+``IF EXISTS`` is used, in which case the operation is a no-op.

http://git-wip-us.apache.org/repos/asf/cassandra/blob/54f7335c/doc/source/cql/json.rst
----------------------------------------------------------------------
diff --git a/doc/source/cql/json.rst b/doc/source/cql/json.rst
new file mode 100644
index 0000000..4f442b7
--- /dev/null
+++ b/doc/source/cql/json.rst
@@ -0,0 +1,146 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+..
+..     http://www.apache.org/licenses/LICENSE-2.0
+..
+.. Unless required by applicable law or agreed to in writing, software
+.. distributed under the License is distributed on an "AS IS" BASIS,
+.. WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+.. See the License for the specific language governing permissions and
+.. limitations under the License.
+
+.. highlight:: sql
+
+JSON Support
+------------
+
+Cassandra 2.2 introduces JSON support to ```SELECT`` <#selectStmt>`__
+and ```INSERT`` <#insertStmt>`__ statements. This support does not
+fundamentally alter the CQL API (for example, the schema is still
+enforced); it simply provides a convenient way to work with JSON
+documents.
+
+SELECT JSON
+^^^^^^^^^^^
+
+With ``SELECT`` statements, the new ``JSON`` keyword can be used to
+return each row as a single ``JSON`` encoded map. The remainder of the
+``SELECT`` statement behavior is the same.
+
+The result map keys are the same as the column names in a normal result
+set. For example, a statement like "``SELECT JSON a, ttl(b) FROM ...``"
+would result in a map with keys ``"a"`` and ``"ttl(b)"``. However, there
+is one notable exception: for symmetry with ``INSERT JSON`` behavior,
+case-sensitive column names with upper-case letters will be surrounded
+with double quotes. For example, "``SELECT JSON myColumn FROM ...``"
+would result in a map key ``"\"myColumn\""`` (note the escaped quotes).
+
+The map values will be ``JSON``-encoded representations (as described
+below) of the result set values.
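+
+For example (a sketch against a hypothetical ``mytable`` with columns ``a``
+and ``b``), each matching row is returned as a single JSON-encoded map::
+
+    SELECT JSON a, ttl(b) FROM mytable;
+    -- a possible row in the result set: {"a": 1, "ttl(b)": null}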
+
+INSERT JSON
+^^^^^^^^^^^
+
+With ``INSERT`` statements, the new ``JSON`` keyword can be used to
+enable inserting a ``JSON`` encoded map as a single row. The format of
+the ``JSON`` map should generally match that returned by a
+``SELECT JSON`` statement on the same table. In particular,
+case-sensitive column names should be surrounded with double quotes. For
+example, to insert into a table with two columns named "myKey" and
+"value", you would do the following::
+
+    INSERT INTO mytable JSON '{ "\"myKey\"": 0, "value": 0 }';
+
+Any columns which are omitted from the ``JSON`` map will be defaulted
+to a ``NULL`` value (which will result in a tombstone being created).
+
+JSON Encoding of Cassandra Data Types
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Where possible, Cassandra will represent and accept data types in their
+native ``JSON`` representation. Cassandra will also accept string
+representations matching the CQL literal format for all single-field
+types. For example, floats, ints, UUIDs, and dates can be represented by
+CQL literal strings. However, compound types, such as collections,
+tuples, and user-defined types must be represented by native ``JSON``
+collections (maps and lists) or a JSON-encoded string representation of
+the collection.
+
+The following table describes the encodings that Cassandra will accept
+in ``INSERT JSON`` values (and ``fromJson()`` arguments) as well as the
+format Cassandra will use when returning data for ``SELECT JSON``
+statements (and ``fromJson()``):
+
++-----------------+--------------------------+-----------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| type            | formats accepted         | return format   | notes                                                                                                                                                                                                         |
++=================+==========================+=================+===============================================================================================================================================================================================================+
+| ``ascii``       | string                   | string          | Uses JSON\u2019s ``\u`` character escape                                                                                                                                                                           |
++-----------------+--------------------------+-----------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| ``bigint``      | integer, string          | integer         | String must be valid 64 bit integer                                                                                                                                                                           |
++-----------------+--------------------------+-----------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| ``blob``        | string                   | string          | String should be 0x followed by an even number of hex digits                                                                                                                                                  |
++-----------------+--------------------------+-----------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| ``boolean``     | boolean, string          | boolean         | String must be \u201ctrue\u201d or \u201cfalse\u201d                                                                                                                                                                              |
++-----------------+--------------------------+-----------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| ``date``        | string                   | string          | Date in format ``YYYY-MM-DD``, timezone UTC                                                                                                                                                                   |
++-----------------+--------------------------+-----------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| ``decimal``     | integer, float, string   | float           | May exceed 32 or 64-bit IEEE-754 floating point precision in client-side decoder                                                                                                                              |
++-----------------+--------------------------+-----------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| ``double``      | integer, float, string   | float           | String must be valid integer or float                                                                                                                                                                         |
++-----------------+--------------------------+-----------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| ``float``       | integer, float, string   | float           | String must be valid integer or float                                                                                                                                                                         |
++-----------------+--------------------------+-----------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| ``inet``        | string                   | string          | IPv4 or IPv6 address                                                                                                                                                                                          |
++-----------------+--------------------------+-----------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| ``int``         | integer, string          | integer         | String must be valid 32 bit integer                                                                                                                                                                           |
++-----------------+--------------------------+-----------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| ``list``        | list, string             | list            | Uses JSON\u2019s native list representation                                                                                                                                                                        |
++-----------------+--------------------------+-----------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| ``map``         | map, string              | map             | Uses JSON\u2019s native map representation                                                                                                                                                                         |
++-----------------+--------------------------+-----------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| ``smallint``    | integer, string          | integer         | String must be valid 16 bit integer                                                                                                                                                                           |
++-----------------+--------------------------+-----------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| ``set``         | list, string             | list            | Uses JSON\u2019s native list representation                                                                                                                                                                        |
++-----------------+--------------------------+-----------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| ``text``        | string                   | string          | Uses JSON\u2019s ``\u`` character escape                                                                                                                                                                           |
++-----------------+--------------------------+-----------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| ``time``        | string                   | string          | Time of day in format ``HH-MM-SS[.fffffffff]``                                                                                                                                                                |
++-----------------+--------------------------+-----------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| ``timestamp``   | integer, string          | string          | A timestamp. String constants are allowed to input timestamps as dates, see `Working with dates <#usingdates>`__ below for more information. Timestamps with format ``YYYY-MM-DD HH:MM:SS.SSS`` are returned. |
++-----------------+--------------------------+-----------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| ``timeuuid``    | string                   | string          | Type 1 UUID. See `Constants <#constants>`__ for the UUID format                                                                                                                                               |
++-----------------+--------------------------+-----------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| ``tinyint``     | integer, string          | integer         | String must be valid 8 bit integer                                                                                                                                                                            |
++-----------------+--------------------------+-----------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| ``tuple``       | list, string             | list            | Uses JSON\u2019s native list representation                                                                                                                                                                        |
++-----------------+--------------------------+-----------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| ``UDT``         | map, string              | map             | Uses JSON\u2019s native map representation with field names as keys                                                                                                                                                |
++-----------------+--------------------------+-----------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| ``uuid``        | string                   | string          | See `Constants <#constants>`__ for the UUID format                                                                                                                                                            |
++-----------------+--------------------------+-----------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| ``varchar``     | string                   | string          | Uses JSON\u2019s ``\u`` character escape                                                                                                                                                                           |
++-----------------+--------------------------+-----------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| ``varint``      | integer, string          | integer         | Variable length; may overflow 32 or 64 bit integers in client-side decoder                                                                                                                                    |
++-----------------+--------------------------+-----------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+
+The fromJson() Function
+^^^^^^^^^^^^^^^^^^^^^^^
+
+The ``fromJson()`` function may be used similarly to ``INSERT JSON``,
+but for a single column value. It may only be used in the ``VALUES``
+clause of an ``INSERT`` statement or as one of the column values in an
+``UPDATE``, ``DELETE``, or ``SELECT`` statement. For example, it cannot
+be used in the selection clause of a ``SELECT`` statement.
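+
+For example (a sketch against a hypothetical ``mytable`` with an int ``pk``
+and a text ``val`` column)::
+
+    INSERT INTO mytable (pk, val) VALUES (1, fromJson('"some text value"'));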
+
+The toJson() Function
+^^^^^^^^^^^^^^^^^^^^^
+
+The ``toJson()`` function may be used similarly to ``SELECT JSON``, but
+for a single column value. It may only be used in the selection clause
+of a ``SELECT`` statement.
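+
+For example (again using the hypothetical ``mytable``)::
+
+    SELECT pk, toJson(val) FROM mytable;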

http://git-wip-us.apache.org/repos/asf/cassandra/blob/54f7335c/doc/source/cql/mvs.rst
----------------------------------------------------------------------
diff --git a/doc/source/cql/mvs.rst b/doc/source/cql/mvs.rst
new file mode 100644
index 0000000..41f601b
--- /dev/null
+++ b/doc/source/cql/mvs.rst
@@ -0,0 +1,95 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+..
+..     http://www.apache.org/licenses/LICENSE-2.0
+..
+.. Unless required by applicable law or agreed to in writing, software
+.. distributed under the License is distributed on an "AS IS" BASIS,
+.. WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+.. See the License for the specific language governing permissions and
+.. limitations under the License.
+
+.. highlight:: sql
+
+Materialized Views
+------------------
+
+CREATE MATERIALIZED VIEW
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+*Syntax:*
+
+::
+
+    <create-materialized-view-stmt> ::= CREATE MATERIALIZED VIEW ( IF NOT EXISTS )? <viewname> AS
+                                            SELECT ( '(' <identifier> ( ',' <identifier> )* ')' | '*' )
+                                            FROM <tablename>
+                                            ( WHERE <where-clause> )?
+                                            PRIMARY KEY '(' <identifier> ( ',' <identifier> )* ')'
+                                            ( WITH <option> ( AND <option> )* )?
+
+*Sample:*
+
+::
+
+    CREATE MATERIALIZED VIEW monkeySpecies_by_population AS
+        SELECT *
+        FROM monkeySpecies
+        WHERE population IS NOT NULL AND species IS NOT NULL
+        PRIMARY KEY (population, species)
+        WITH comment='Allow query by population instead of species';
+
+The ``CREATE MATERIALIZED VIEW`` statement creates a new materialized
+view. Each such view is a set of *rows* which corresponds to rows
+which are present in the underlying, or base, table specified in the
+``SELECT`` statement. A materialized view cannot be directly updated,
+but updates to the base table will cause corresponding updates in the
+view.
+
+Attempting to create an already existing materialized view will return
+an error unless the ``IF NOT EXISTS`` option is used. If it is used, the
+statement will be a no-op if the materialized view already exists.
+
+``WHERE`` Clause
+~~~~~~~~~~~~~~~~
+
+The ``<where-clause>`` is similar to the `where clause of a ``SELECT``
+statement <#selectWhere>`__, with a few differences. First, the where
+clause must contain an expression that disallows ``NULL`` values in
+columns in the view's primary key. If no other restriction is desired,
+this can be accomplished with an ``IS NOT NULL`` expression. Second,
+only columns which are in the base table's primary key may be restricted
+with expressions other than ``IS NOT NULL``. (Note that this second
+restriction may be lifted in the future.)
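+
+As an illustration (the ``plays`` base table below is only hypothetical), a
+base-table primary key column such as ``game`` may be restricted with more
+than ``IS NOT NULL``, while the other columns of the view's primary key each
+need an ``IS NOT NULL`` restriction::
+
+    CREATE TABLE plays (
+        game text,
+        player text,
+        score int,
+        PRIMARY KEY (game, player)
+    );
+
+    CREATE MATERIALIZED VIEW quake_high_scores AS
+        SELECT *
+        FROM plays
+        WHERE game = 'quake' AND player IS NOT NULL AND score IS NOT NULL
+        PRIMARY KEY (score, game, player);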
+
+ALTER MATERIALIZED VIEW
+^^^^^^^^^^^^^^^^^^^^^^^
+
+*Syntax:*
+
+::
+
+    <alter-materialized-view-stmt> ::= ALTER MATERIALIZED VIEW <viewname>
+                                           WITH <option> ( AND <option> )*
+
+The ``ALTER MATERIALIZED VIEW`` statement allows options to be updated;
+these options are the same as \ ``CREATE TABLE``\ 's options.
+
+DROP MATERIALIZED VIEW
+^^^^^^^^^^^^^^^^^^^^^^
+
+*Syntax:*
+
+::
+
+    <drop-materialized-view-stmt> ::= DROP MATERIALIZED VIEW ( IF EXISTS )? <viewname>
+
+*Sample:*
+
+::
+
+    DROP MATERIALIZED VIEW monkeySpecies_by_population;
+
+The ``DROP MATERIALIZED VIEW`` statement is used to drop an existing
+materialized view.
+
+If the materialized view does not exist, the statement will return an
+error, unless ``IF EXISTS`` is used, in which case the operation is a
+no-op.