Posted to commits@cassandra.apache.org by sl...@apache.org on 2014/02/03 14:56:55 UTC

svn commit: r1563901 - in /cassandra/site: publish/doc/cql3/CQL-1.2.html publish/download/index.html src/settings.py

Author: slebresne
Date: Mon Feb  3 13:56:55 2014
New Revision: 1563901

URL: http://svn.apache.org/r1563901
Log:
Update website for 1.2.14 release

Modified:
    cassandra/site/publish/doc/cql3/CQL-1.2.html
    cassandra/site/publish/download/index.html
    cassandra/site/src/settings.py

Modified: cassandra/site/publish/doc/cql3/CQL-1.2.html
URL: http://svn.apache.org/viewvc/cassandra/site/publish/doc/cql3/CQL-1.2.html?rev=1563901&r1=1563900&r2=1563901&view=diff
==============================================================================
--- cassandra/site/publish/doc/cql3/CQL-1.2.html (original)
+++ cassandra/site/publish/doc/cql3/CQL-1.2.html Mon Feb  3 13:56:55 2014
@@ -93,7 +93,7 @@ CREATE TABLE timeline (
     other text,
     PRIMARY KEY (k)
 )
-</pre></pre><p>Moreover, a table must define at least one column that is not part of the PRIMARY KEY as a row exists in Cassandra only if it contains at least one value for one such column.</p><h4 id="createTablepartitionClustering">Partition key and clustering</h4><p>In CQL, the order in which columns are defined for the <code>PRIMARY KEY</code> matters. The first column of the key is called the <i>partition key</i>. It has the property that all the rows sharing the same partition key (even across table in fact) are stored on the same physical node. Also, insertion/update/deletion on rows sharing the same partition key for a given table are performed <i>atomically</i> and in <i>isolation</i>. Note that it is possible to have a composite partition key, i.e. a partition key formed of multiple columns, using an extra set of parentheses to define which columns forms the partition key.</p><p>The remaining columns of the <code>PRIMARY KEY</code> definition, if any, are called __clusterin
 g columns. On a given physical node, rows for a given partition key are stored in the order induced by the clustering columns, making the retrieval of rows in that clustering order particularly efficient (see <a href="#selectStmt"><tt>SELECT</tt></a>).</p><h4 id="createTableOptions"><code>&lt;option></code></h4><p>The <code>CREATE TABLE</code> statement supports a number of options that controls the configuration of a new table. These options can be specified after the <code>WITH</code> keyword.</p><p>The first of these option is <code>COMPACT STORAGE</code>. This option is meanly targeted towards backward compatibility with some table definition created before CQL3.  But it also provides a slightly more compact layout of data on disk, though at the price of flexibility and extensibility, and for that reason is not recommended unless for the backward compatibility reason. The restriction for table with <code>COMPACT STORAGE</code> is that they support one and only one column outside
  of the ones part of the <code>PRIMARY KEY</code>. It also follows that columns cannot be added nor removed after creation. A table with <code>COMPACT STORAGE</code> must also define at least one <a href="createTablepartitionClustering">clustering key</a>.</p><p>Another option is <code>CLUSTERING ORDER</code>. It allows to define the ordering of rows on disk. It takes the list of the clustering key names with, for each of them, the on-disk order (Ascending or descending). Note that this option affects <a href="#selectOrderBy">what <code>ORDER BY</code> are allowed during <code>SELECT</code></a>.</p><p>Table creation supports the following other <code>&lt;property></code>:</p><table><tr><th>option                    </th><th>kind   </th><th>default   </th><th>description</th></tr><tr><td><code>comment</code>                    </td><td><em>simple</em> </td><td>none        </td><td>A free-form, human-readable comment.</td></tr><tr><td><code>read_repair_chance</code>         </td><td><
 em>simple</em> </td><td>0.1         </td><td>The probability with which to query extra nodes (e.g. more nodes than required by the consistency level) for the purpose of read repairs.</td></tr><tr><td><code>dclocal_read_repair_chance</code> </td><td><em>simple</em> </td><td>0           </td><td>The probability with which to query extra nodes (e.g. more nodes than required by the consistency level) belonging to the same data center than the read coordinator for the purpose of read repairs.</td></tr><tr><td><code>gc_grace_seconds</code>           </td><td><em>simple</em> </td><td>864000      </td><td>Time to wait before garbage collecting tombstones (deletion markers).</td></tr><tr><td><code>bloom_filter_fp_chance</code>     </td><td><em>simple</em> </td><td>0.00075     </td><td>The target probability of false positive of the sstable bloom filters. Said bloom filters will be sized to provide the provided probability (thus lowering this value impact the size of bloom filters in-memory a
 nd on-disk)</td></tr><tr><td><code>compaction</code>                 </td><td><em>map</em>    </td><td><em>see below</em> </td><td>The compaction otpions to use, see below.</td></tr><tr><td><code>compression</code>                </td><td><em>map</em>    </td><td><em>see below</em> </td><td>Compression options, see below. </td></tr><tr><td><code>replicate_on_write</code>         </td><td><em>simple</em> </td><td>true        </td><td>Whether to replicate data on write. This can only be set to false for tables with counters values. Disabling this is dangerous and can result in random lose of counters, don&#8217;t disable unless you are sure to know what you are doing</td></tr><tr><td><code>caching</code>                    </td><td><em>simple</em> </td><td>keys_only   </td><td>Whether to cache keys (&#8220;key cache&#8221;) and/or rows (&#8220;row cache&#8221;) for this table. Valid values are: <code>all</code>, <code>keys_only</code>, <code>rows_only</code> and <code>none</code>. </t
 d></tr></table><h4 id="compactionOptions"><code>compaction</code> options</h4><p>The <code>compaction</code> property must at least define the <code>'class'</code> sub-option, that defines the compaction strategy class to use. The default supported class are <code>'SizeTieredCompactionStrategy'</code> and <code>'LeveledCompactionStrategy'</code>. Custom strategy can be provided by specifying the full class name as a <a href="#constants">string constant</a>. The rest of the sub-options depends on the chosen class. The sub-options supported by the default classes are:</p><table><tr><th>option                        </th><th>supported compaction strategy </th><th>default </th><th>description </th></tr><tr><td><code>tombstone_threshold</code>           </td><td><em>all</em>                           </td><td>0.2       </td><td>A ratio such that if a sstable has more than this ratio of gcable tombstones over all contained columns, the sstable will be compacted (with no other sstables) fo
 r the purpose of purging those tombstones. </td></tr><tr><td><code>tombstone_compaction_interval</code> </td><td><em>all</em>                           </td><td>1 day     </td><td>The mininum time to wait after an sstable creation time before considering it for &#8220;tombstone compaction&#8221;, where &#8220;tombstone compaction&#8221; is the compaction triggered if the sstable has more gcable tombstones than <code>tombstone_threshold</code>. </td></tr><tr><td><code>min_sstable_size</code>              </td><td>SizeTieredCompactionStrategy    </td><td>50MB      </td><td>The size tiered strategy groups SSTables to compact in buckets. A bucket groups SSTables that differs from less than 50% in size.  However, for small sizes, this would result in a bucketing that is too fine grained. <code>min_sstable_size</code> defines a size threshold (in bytes) below which all SSTables belong to one unique bucket</td></tr><tr><td><code>min_threshold</code>                 </td><td>SizeTieredCompa
 ctionStrategy    </td><td>4         </td><td>Minimum number of SSTables needed to start a minor compaction.</td></tr><tr><td><code>max_threshold</code>                 </td><td>SizeTieredCompactionStrategy    </td><td>32        </td><td>Maximum number of SSTables processed by one minor compaction.</td></tr><tr><td><code>bucket_low</code>                    </td><td>SizeTieredCompactionStrategy    </td><td>0.5       </td><td>Size tiered consider sstables to be within the same bucket if their size is within [average_size * <code>bucket_low</code>, average_size * <code>bucket_high</code> ] (i.e the default groups sstable whose sizes diverges by at most 50%)</td></tr><tr><td><code>bucket_high</code>                   </td><td>SizeTieredCompactionStrategy    </td><td>1.5       </td><td>Size tiered consider sstables to be within the same bucket if their size is within [average_size * <code>bucket_low</code>, average_size * <code>bucket_high</code> ] (i.e the default groups sstable whose s
 izes diverges by at most 50%).</td></tr><tr><td><code>sstable_size_in_mb</code>            </td><td>LeveledCompactionStrategy       </td><td>5MB       </td><td>The target size (in MB) for sstables in the leveled strategy. Note that while sstable sizes should stay less or equal to <code>sstable_size_in_mb</code>, it is possible to exceptionally have a larger sstable as during compaction, data for a given partition key are never split into 2 sstables</td></tr></table><p>For the <code>compression</code> property, the following default sub-options are available:</p><table><tr><th>option              </th><th>default        </th><th>description </th></tr><tr><td><code>sstable_compression</code> </td><td>SnappyCompressor </td><td>The compression algorithm to use. Default compressor are: SnappyCompressor and DeflateCompressor. Use an empty string (<code>''</code>) to disable compression. Custom compressor can be provided by specifying the full class name as a <a href="#constants">string co
 nstant</a>.</td></tr><tr><td><code>chunk_length_kb</code>     </td><td>64KB             </td><td>On disk SSTables are compressed by block (to allow random reads). This defines the size (in KB) of said block. Bigger values may improve the compression rate, but increases the minimum size of data to be read from disk for a read </td></tr><tr><td><code>crc_check_chance</code>    </td><td>1.0              </td><td>When compression is enabled, each compressed block includes a checksum of that block for the purpose of detecting disk bitrot and avoiding the propagation of corruption to other replica. This option defines the probability with which those checksums are checked during read. By default they are always checked. Set to 0 to disable checksum checking and to 0.5 for instance to check them every other read</td></tr></table><h4 id="Otherconsiderations">Other considerations:</h4><ul><li>When <a href="#insertStmt/&quot;updating&quot;:#updateStmt">inserting</a> a given row, not all colum
 ns needs to be defined (except for those part of the key), and missing columns occupy no space on disk. Furthermore, adding new columns (see &lt;a href=#alterStmt><tt>ALTER TABLE</tt></a>) is a constant time operation. There is thus no need to try to anticipate future usage (or to cry when you haven&#8217;t) when creating a table.</li></ul><h3 id="alterTableStmt">ALTER TABLE</h3><p><i>Syntax:</i></p><pre class="syntax"><pre>&lt;alter-table-stmt> ::= ALTER (TABLE | COLUMNFAMILY) &lt;tablename> &lt;instruction>
+</pre></pre><p>Moreover, a table must define at least one column that is not part of the PRIMARY KEY as a row exists in Cassandra only if it contains at least one value for one such column.</p><h4 id="createTablepartitionClustering">Partition key and clustering columns</h4><p>In CQL, the order in which columns are defined for the <code>PRIMARY KEY</code> matters. The first column of the key is called the <i>partition key</i>. It has the property that all the rows sharing the same partition key (even across tables in fact) are stored on the same physical node. Also, insertion/update/deletion on rows sharing the same partition key for a given table are performed <i>atomically</i> and in <i>isolation</i>. Note that it is possible to have a composite partition key, i.e. a partition key formed of multiple columns, using an extra set of parentheses to define which columns form the partition key.</p><p>The remaining columns of the <code>PRIMARY KEY</code> definition, if any, are called <i>clustering columns</i>.
 On a given physical node, rows for a given partition key are stored in the order induced by the clustering columns, making the retrieval of rows in that clustering order particularly efficient (see <a href="#selectStmt"><tt>SELECT</tt></a>).</p><h4 id="createTableOptions"><code>&lt;option></code></h4><p>The <code>CREATE TABLE</code> statement supports a number of options that control the configuration of a new table. These options can be specified after the <code>WITH</code> keyword.</p><p>The first of these options is <code>COMPACT STORAGE</code>. This option is mainly targeted towards backward compatibility for definitions created before CQL3 (see <a href="http://www.datastax.com/dev/blog/thrift-to-cql3">www.datastax.com/dev/blog/thrift-to-cql3</a> for more details).  The option also provides a slightly more compact layout of data on disk but at the price of diminished flexibility and extensibility for the table.  Most notably, <code>COMPACT STORAGE</code> table
 s cannot have collections and a <code>COMPACT STORAGE</code> table with at least one clustering column supports exactly one (as in not 0 nor more than 1) column not part of the <code>PRIMARY KEY</code> definition (which imply in particular that you cannot add nor remove columns after creation). For those reasons, <code>COMPACT STORAGE</code> is not recommended outside of the backward compatibility reason evoked above.</p><p>Another option is <code>CLUSTERING ORDER</code>. It allows to define the ordering of rows on disk. It takes the list of the clustering column names with, for each of them, the on-disk order (Ascending or descending). Note that this option affects <a href="#selectOrderBy">what <code>ORDER BY</code> are allowed during <code>SELECT</code></a>.</p><p>Table creation supports the following other <code>&lt;property></code>:</p><table><tr><th>option                    </th><th>kind   </th><th>default   </th><th>description</th></tr><tr><td><code>comment</code>           
          </td><td><em>simple</em> </td><td>none        </td><td>A free-form, human-readable comment.</td></tr><tr><td><code>read_repair_chance</code>         </td><td><em>simple</em> </td><td>0.1         </td><td>The probability with which to query extra nodes (e.g. more nodes than required by the consistency level) for the purpose of read repairs.</td></tr><tr><td><code>dclocal_read_repair_chance</code> </td><td><em>simple</em> </td><td>0           </td><td>The probability with which to query extra nodes (e.g. more nodes than required by the consistency level) belonging to the same data center than the read coordinator for the purpose of read repairs.</td></tr><tr><td><code>gc_grace_seconds</code>           </td><td><em>simple</em> </td><td>864000      </td><td>Time to wait before garbage collecting tombstones (deletion markers).</td></tr><tr><td><code>bloom_filter_fp_chance</code>     </td><td><em>simple</em> </td><td>0.00075     </td><td>The target probability of false positive o
 f the sstable bloom filters. Said bloom filters will be sized to provide the provided probability (thus lowering this value impacts the size of bloom filters in-memory and on-disk)</td></tr><tr><td><code>compaction</code>                 </td><td><em>map</em>    </td><td><em>see below</em> </td><td>The compaction options to use, see below.</td></tr><tr><td><code>compression</code>                </td><td><em>map</em>    </td><td><em>see below</em> </td><td>Compression options, see below. </td></tr><tr><td><code>replicate_on_write</code>         </td><td><em>simple</em> </td><td>true        </td><td>Whether to replicate data on write. This can only be set to false for tables with counter values. Disabling this is dangerous and can result in random loss of counters, don&#8217;t disable unless you are sure you know what you are doing</td></tr><tr><td><code>caching</code>                    </td><td><em>simple</em> </td><td>keys_only   </td><td>Whether to cache keys (&#8220;key cache&#82
 21;) and/or rows (&#8220;row cache&#8221;) for this table. Valid values are: <code>all</code>, <code>keys_only</code>, <code>rows_only</code> and <code>none</code>. </td></tr></table><h4 id="compactionOptions"><code>compaction</code> options</h4><p>The <code>compaction</code> property must at least define the <code>'class'</code> sub-option, that defines the compaction strategy class to use. The default supported class are <code>'SizeTieredCompactionStrategy'</code> and <code>'LeveledCompactionStrategy'</code>. Custom strategy can be provided by specifying the full class name as a <a href="#constants">string constant</a>. The rest of the sub-options depends on the chosen class. The sub-options supported by the default classes are:</p><table><tr><th>option                        </th><th>supported compaction strategy </th><th>default </th><th>description </th></tr><tr><td><code>tombstone_threshold</code>           </td><td><em>all</em>                           </td><td>0.2       </t
 d><td>A ratio such that if an sstable has more than this ratio of gcable tombstones over all contained columns, the sstable will be compacted (with no other sstables) for the purpose of purging those tombstones. </td></tr><tr><td><code>tombstone_compaction_interval</code> </td><td><em>all</em>                           </td><td>1 day     </td><td>The minimum time to wait after an sstable creation time before considering it for &#8220;tombstone compaction&#8221;, where &#8220;tombstone compaction&#8221; is the compaction triggered if the sstable has more gcable tombstones than <code>tombstone_threshold</code>. </td></tr><tr><td><code>min_sstable_size</code>              </td><td>SizeTieredCompactionStrategy    </td><td>50MB      </td><td>The size tiered strategy groups SSTables to compact in buckets. A bucket groups SSTables that differ by less than 50% in size.  However, for small sizes, this would result in a bucketing that is too fine grained. <code>min_sstable_size</code> defin
 es a size threshold (in bytes) below which all SSTables belong to one unique bucket</td></tr><tr><td><code>min_threshold</code>                 </td><td>SizeTieredCompactionStrategy    </td><td>4         </td><td>Minimum number of SSTables needed to start a minor compaction.</td></tr><tr><td><code>max_threshold</code>                 </td><td>SizeTieredCompactionStrategy    </td><td>32        </td><td>Maximum number of SSTables processed by one minor compaction.</td></tr><tr><td><code>bucket_low</code>                    </td><td>SizeTieredCompactionStrategy    </td><td>0.5       </td><td>Size tiered consider sstables to be within the same bucket if their size is within [average_size * <code>bucket_low</code>, average_size * <code>bucket_high</code> ] (i.e the default groups sstable whose sizes diverges by at most 50%)</td></tr><tr><td><code>bucket_high</code>                   </td><td>SizeTieredCompactionStrategy    </td><td>1.5       </td><td>Size tiered consider sstables to be w
 ithin the same bucket if their size is within [average_size * <code>bucket_low</code>, average_size * <code>bucket_high</code> ] (i.e the default groups sstable whose sizes diverges by at most 50%).</td></tr><tr><td><code>sstable_size_in_mb</code>            </td><td>LeveledCompactionStrategy       </td><td>5MB       </td><td>The target size (in MB) for sstables in the leveled strategy. Note that while sstable sizes should stay less or equal to <code>sstable_size_in_mb</code>, it is possible to exceptionally have a larger sstable as during compaction, data for a given partition key are never split into 2 sstables</td></tr></table><p>For the <code>compression</code> property, the following default sub-options are available:</p><table><tr><th>option              </th><th>default        </th><th>description </th></tr><tr><td><code>sstable_compression</code> </td><td>SnappyCompressor </td><td>The compression algorithm to use. Default compressor are: SnappyCompressor and DeflateCompresso
 r. Use an empty string (<code>''</code>) to disable compression. Custom compressor can be provided by specifying the full class name as a <a href="#constants">string constant</a>.</td></tr><tr><td><code>chunk_length_kb</code>     </td><td>64KB             </td><td>On disk SSTables are compressed by block (to allow random reads). This defines the size (in KB) of said block. Bigger values may improve the compression rate, but increases the minimum size of data to be read from disk for a read </td></tr><tr><td><code>crc_check_chance</code>    </td><td>1.0              </td><td>When compression is enabled, each compressed block includes a checksum of that block for the purpose of detecting disk bitrot and avoiding the propagation of corruption to other replica. This option defines the probability with which those checksums are checked during read. By default they are always checked. Set to 0 to disable checksum checking and to 0.5 for instance to check them every other read</td></tr></t
 able><h4 id="Otherconsiderations">Other considerations:</h4><ul><li>When <a href="#insertStmt/&quot;updating&quot;:#updateStmt">inserting</a> a given row, not all columns needs to be defined (except for those part of the key), and missing columns occupy no space on disk. Furthermore, adding new columns (see &lt;a href=#alterStmt><tt>ALTER TABLE</tt></a>) is a constant time operation. There is thus no need to try to anticipate future usage (or to cry when you haven&#8217;t) when creating a table.</li></ul><h3 id="alterTableStmt">ALTER TABLE</h3><p><i>Syntax:</i></p><pre class="syntax"><pre>&lt;alter-table-stmt> ::= ALTER (TABLE | COLUMNFAMILY) &lt;tablename> &lt;instruction>
 
 &lt;instruction> ::= ALTER &lt;identifier> TYPE &lt;type>
                 | ADD   &lt;identifier> &lt;type>
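
For illustration, a small CQL sketch of the table options and ALTER TABLE instructions described above; the table name, columns and option values are hypothetical, chosen only to exercise the syntax:

    // Composite partition key (userid, year), one clustering column (posted_at),
    // reverse on-disk clustering order and explicit compaction options.
    CREATE TABLE timeline_by_year (
        userid text,
        year int,
        posted_at timeuuid,
        content text,
        PRIMARY KEY ((userid, year), posted_at)
    ) WITH CLUSTERING ORDER BY (posted_at DESC)
      AND compaction = { 'class' : 'LeveledCompactionStrategy', 'sstable_size_in_mb' : 160 }
      AND comment = 'one partition per user and year'

    // <instruction> examples matching the grammar above.
    ALTER TABLE timeline_by_year ADD country text
    ALTER TABLE timeline_by_year ALTER content TYPE blob
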
@@ -233,7 +233,7 @@ SELECT COUNT(*) FROM users;
 </pre></pre><p>But the following one is not, as it does not select a contiguous set of rows (and we suppose no secondary indexes are set):</p><pre class="sample"><pre>// Needs a blog_title to be set to select ranges of posted_at
 SELECT entry_title, content FROM posts WHERE userid='john doe' AND posted_at >= '2012-01-01' AND posted_at &lt; '2012-01-31'
 </pre></pre><p>When specifying relations, the <code>TOKEN</code> function can be used on the <code>PARTITION KEY</code> column to query. In that case, rows will be selected based on the token of their <code>PARTITION_KEY</code> rather than on the value. Note that the token of a key depends on the partitioner in use, and that in particular the RandomPartitioner won&#8217;t yield a meaningful order. Also note that ordering partitioners always order token values by bytes (so even if the partition key is of type int, <code>token(-1) > token(0)</code> in particular). Example:</p><pre class="sample"><pre>SELECT * FROM posts WHERE token(userid) > token('tom') AND token(userid) &lt; token('bob')
-</pre></pre><p>Moreover, the <code>IN</code> relation is only allowed on the last column of the partition key and on the last column of the full primary key.</p><h4 id="selectOrderBy"><code>&lt;order-by></code></h4><p>The <code>ORDER BY</code> option allows to select the order of the returned results. It takes as argument a list of column names along with the order for the column (<code>ASC</code> for ascendant and <code>DESC</code> for descendant, omitting the order being equivalent to <code>ASC</code>). Currently the possible orderings are limited (which depends on the table <a href="#createTableOptions"><code>CLUSTERING ORDER</code></a>):</p><ul><li>if the table has been defined without any specific <code>CLUSTERING ORDER</code>, then then allowed orderings are the order induced by the clustering key and the reverse of that one.</li><li>otherwise, the orderings allowed are the order of the <code>CLUSTERING ORDER</code> option and the reversed one.</li></ul><h4 id="selectLimit"><c
 ode>LIMIT</code></h4><p>The <code>LIMIT</code> option to a <code>SELECT</code> statement limits the number of rows returned by a query.</p><h4 id="selectAllowFiltering"><code>ALLOW FILTERING</code></h4><p>By default, CQL only allows select queries that don&#8217;t involve &#8220;filtering&#8221; server side, i.e. queries where we know that all (live) record read will be returned (maybe partly) in the result set. The reasoning is that those &#8220;non filtering&#8221; queries have predictable performance in the sense that they will execute in a time that is proportional to the amount of data <strong>returned</strong> by the query (which can be controlled through <code>LIMIT</code>).</p><p>The <code>ALLOW FILTERING</code> option allows to explicitely allow (some) queries that require filtering. Please note that a query using <code>ALLOW FILTERING</code> may thus have unpredictable performance (for the definition above), i.e. even a query that selects a handful of records <strong>may</
 strong> exhibit performance that depends on the total amount of data stored in the cluster.</p><p>For instance, considering the following table holding user profiles with their year of birth (with a secondary index on it) and country of residence:</p><pre class="sample"><pre>CREATE TABLE users (
+</pre></pre><p>Moreover, the <code>IN</code> relation is only allowed on the last column of the partition key and on the last column of the full primary key.</p><h4 id="selectOrderBy"><code>&lt;order-by></code></h4><p>The <code>ORDER BY</code> option allows to select the order of the returned results. It takes as argument a list of column names along with the order for the column (<code>ASC</code> for ascending and <code>DESC</code> for descending, omitting the order being equivalent to <code>ASC</code>). Currently the possible orderings are limited (which depends on the table <a href="#createTableOptions"><code>CLUSTERING ORDER</code></a>):</p><ul><li>if the table has been defined without any specific <code>CLUSTERING ORDER</code>, then the allowed orderings are the order induced by the clustering columns and the reverse of that one.</li><li>otherwise, the orderings allowed are the order of the <code>CLUSTERING ORDER</code> option and the reversed one.</li></ul><h4 id="selectLimit
 "><code>LIMIT</code></h4><p>The <code>LIMIT</code> option to a <code>SELECT</code> statement limits the number of rows returned by a query.</p><h4 id="selectAllowFiltering"><code>ALLOW FILTERING</code></h4><p>By default, CQL only allows select queries that don&#8217;t involve &#8220;filtering&#8221; server side, i.e. queries where we know that all (live) record read will be returned (maybe partly) in the result set. The reasoning is that those &#8220;non filtering&#8221; queries have predictable performance in the sense that they will execute in a time that is proportional to the amount of data <strong>returned</strong> by the query (which can be controlled through <code>LIMIT</code>).</p><p>The <code>ALLOW FILTERING</code> option allows to explicitely allow (some) queries that require filtering. Please note that a query using <code>ALLOW FILTERING</code> may thus have unpredictable performance (for the definition above), i.e. even a query that selects a handful of records <strong>m
 ay</strong> exhibit performance that depends on the total amount of data stored in the cluster.</p><p>For instance, considering the following table holding user profiles with their year of birth (with a secondary index on it) and country of residence:</p><pre class="sample"><pre>CREATE TABLE users (
     username text PRIMARY KEY,
     firstname text,
     lastname text,
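
A hedged sketch of the ORDER BY, LIMIT and ALLOW FILTERING behaviour described above, reusing the posts table from the earlier example (PRIMARY KEY (userid, blog_title, posted_at) is assumed) and assuming users carries an indexed birth_year column and a non-indexed country column:

    // Contiguous slice of one partition, returned in reverse clustering order.
    SELECT entry_title, content FROM posts
     WHERE userid = 'john doe'
     ORDER BY blog_title DESC, posted_at DESC
     LIMIT 10

    // Filtering on the non-indexed country column on top of the indexed birth_year
    // requires the explicit opt-in.
    SELECT firstname, lastname FROM users
     WHERE birth_year = 1981 AND country = 'FR'
     ALLOW FILTERING
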
@@ -270,7 +270,7 @@ SELECT firstname, lastname FROM users WH
 &lt;collection-type> ::= list '&lt;' &lt;native-type> '>'
                     | set  '&lt;' &lt;native-type> '>'
                     | map  '&lt;' &lt;native-type> ',' &lt;native-type> '>'
-</pre></pre><p>Note that the native types are keywords and as such are case-insensitive. They are however not reserved ones.</p><p>The following table gives additional informations on the native data types, and on which kind of <a href="#constants">constants</a> each type supports:</p><table><tr><th>type    </th><th>constants supported</th><th>description</th></tr><tr><td><code>ascii</code>    </td><td>  strings            </td><td>ASCII character string</td></tr><tr><td><code>bigint</code>   </td><td>  integers           </td><td>64-bit signed long</td></tr><tr><td><code>blob</code>     </td><td>  blobs              </td><td>Arbitrary bytes (no validation)</td></tr><tr><td><code>boolean</code>  </td><td>  booleans           </td><td>true or false</td></tr><tr><td><code>counter</code>  </td><td>  integers           </td><td>Counter column (64-bit signed value). See <a href="#counters">Counters</a> for details</td></tr><tr><td><code>decimal</code>  </td><td>  integers, floats   </td>
 <td>Variable-precision decimal</td></tr><tr><td><code>double</code>   </td><td>  integers           </td><td>64-bit IEEE-754 floating point</td></tr><tr><td><code>float</code>    </td><td>  integers, floats   </td><td>32-bit IEEE-754 floating point</td></tr><tr><td><code>inet</code>     </td><td>  strings            </td><td>An IP address. It can be either 4 bytes long (IPv4) or 16 bytes long (IPv6). There is no <code>inet</code> constant, IP address should be inputed as strings</td></tr><tr><td><code>int</code>      </td><td>  integers           </td><td>32-bit signed int</td></tr><tr><td><code>text</code>     </td><td>  strings            </td><td>UTF8 encoded string</td></tr><tr><td><code>timestamp</code></td><td>  integers, strings  </td><td>A timestamp. Strings constant are allow to input timestamps as dates, see <a href="#usingdates">Working with dates</a> below for more information.</td></tr><tr><td><code>timeuuid</code> </td><td>  uuids              </td><td>Type 1 UUID. Thi
 s is generally used as a &#8220;conflict-free&#8221; timestamp. Also see the <a href="#timeuuidFun">functions on Timeuuid</a></td></tr><tr><td><code>uuid</code>     </td><td>  uuids              </td><td>Type 1 or type 4 UUID</td></tr><tr><td><code>varchar</code>  </td><td>  strings            </td><td>UTF8 encoded string</td></tr><tr><td><code>varint</code>   </td><td>  integers           </td><td>Arbitrary-precision integer</td></tr></table><p>For more information on how to use the collection types, see the <a href="#collections">Working with collections</a> section below.</p><h3 id="usingdates">Working with dates</h3><p>Values of the <code>timestamp</code> type are encoded as 64-bit signed integers representing a number of milliseconds since the standard base time known as &#8220;the epoch&#8221;: January 1 1970 at 00:00:00 GMT.</p><p>Timestamp can be input in CQL as simple long integers, giving the number of milliseconds since the epoch, as defined above.</p><p>They can also be 
 input as string literals in any of the following ISO 8601 formats, each representing the time and date Mar 2, 2011, at 04:05:00 AM, GMT.:</p><ul><li><code>2011-02-03 04:05+0000</code></li><li><code>2011-02-03 04:05:00+0000</code></li><li><code>2011-02-03T04:05+0000</code></li><li><code>2011-02-03T04:05:00+0000</code></li></ul><p>The <code>+0000</code> above is an RFC 822 4-digit time zone specification; <code>+0000</code> refers to GMT. US Pacific Standard Time is <code>-0800</code>. The time zone may be omitted if desired&#8212; the date will be interpreted as being in the time zone under which the coordinating Cassandra node is configured.</p><ul><li><code>2011-02-03 04:05</code></li><li><code>2011-02-03 04:05:00</code></li><li><code>2011-02-03T04:05</code></li><li><code>2011-02-03T04:05:00</code></li></ul><p>There are clear difficulties inherent in relying on the time zone configuration being as expected, though, so it is recommended that the time zone always be specified for tim
 estamps when feasible.</p><p>The time of day may also be omitted, if the date is the only piece that matters:</p><ul><li><code>2011-02-03</code></li><li><code>2011-02-03+0000</code></li></ul><p>In that case, the time of day will default to 00:00:00, in the specified or default time zone.</p><h3 id="counters">Counters</h3><p>The <code>counter</code> type is used to define <em>counter columns</em>. A counter column is a column whose value is a 64-bit signed integer and on which 2 operations are supported: incrementation and decrementation (see <a href="#updateStmt"><code>UPDATE</code></a> for syntax).  Note the value of a counter cannot be set. A counter doesn&#8217;t exist until first incremented/decremented, and the first incrementation/decrementation is made as if the previous value was 0. Deletion of counter columns is supported but have some limitations (see the <a href="http://wiki.apache.org/cassandra/Counters">Cassandra Wiki</a> for more information).</p><p>The use of the coun
 ter type is limited in the following way:</p><ul><li>It cannot be used for column that is part of the <code>PRIMARY KEY</code> of a table.</li><li>A table that contains a counter can only contain counters. In other words, either all the columns of a table outside the <code>PRIMARY KEY</code> have the counter type, or none of them have it.</li></ul><h3 id="collections">Working with collections</h3><h4 id="map">Maps</h4><p>A <code>map</code> is a <a href="#types">typed</a> set of key-value pairs, where keys are unique. Furthermore, note that the map are internally sorted by their keys and will thus always be returned in that order. To create a column of type <code>map</code>, use the <code>map</code> keyword suffixed with comma-separated key and value types, enclosed in angle brackets.  For example:</p><pre class="sample"><pre>CREATE TABLE users (
+</pre></pre><p>Note that the native types are keywords and as such are case-insensitive. They are however not reserved ones.</p><p>The following table gives additional informations on the native data types, and on which kind of <a href="#constants">constants</a> each type supports:</p><table><tr><th>type    </th><th>constants supported</th><th>description</th></tr><tr><td><code>ascii</code>    </td><td>  strings            </td><td>ASCII character string</td></tr><tr><td><code>bigint</code>   </td><td>  integers           </td><td>64-bit signed long</td></tr><tr><td><code>blob</code>     </td><td>  blobs              </td><td>Arbitrary bytes (no validation)</td></tr><tr><td><code>boolean</code>  </td><td>  booleans           </td><td>true or false</td></tr><tr><td><code>counter</code>  </td><td>  integers           </td><td>Counter column (64-bit signed value). See <a href="#counters">Counters</a> for details</td></tr><tr><td><code>decimal</code>  </td><td>  integers, floats   </td>
 <td>Variable-precision decimal</td></tr><tr><td><code>double</code>   </td><td>  integers           </td><td>64-bit IEEE-754 floating point</td></tr><tr><td><code>float</code>    </td><td>  integers, floats   </td><td>32-bit IEEE-754 floating point</td></tr><tr><td><code>inet</code>     </td><td>  strings            </td><td>An IP address. It can be either 4 bytes long (IPv4) or 16 bytes long (IPv6). There is no <code>inet</code> constant; IP addresses should be input as strings</td></tr><tr><td><code>int</code>      </td><td>  integers           </td><td>32-bit signed int</td></tr><tr><td><code>text</code>     </td><td>  strings            </td><td>UTF8 encoded string</td></tr><tr><td><code>timestamp</code></td><td>  integers, strings  </td><td>A timestamp. String constants are allowed to input timestamps as dates, see <a href="#usingdates">Working with dates</a> below for more information.</td></tr><tr><td><code>timeuuid</code> </td><td>  uuids              </td><td>Type 1 UUID. Thi
 s is generally used as a &#8220;conflict-free&#8221; timestamp. Also see the <a href="#timeuuidFun">functions on Timeuuid</a></td></tr><tr><td><code>uuid</code>     </td><td>  uuids              </td><td>Type 1 or type 4 UUID</td></tr><tr><td><code>varchar</code>  </td><td>  strings            </td><td>UTF8 encoded string</td></tr><tr><td><code>varint</code>   </td><td>  integers           </td><td>Arbitrary-precision integer</td></tr></table><p>For more information on how to use the collection types, see the <a href="#collections">Working with collections</a> section below.</p><h3 id="usingdates">Working with dates</h3><p>Values of the <code>timestamp</code> type are encoded as 64-bit signed integers representing a number of milliseconds since the standard base time known as &#8220;the epoch&#8221;: January 1 1970 at 00:00:00 GMT.</p><p>Timestamp can be input in CQL as simple long integers, giving the number of milliseconds since the epoch, as defined above.</p><p>They can also be 
 input as string literals in any of the following ISO 8601 formats, each representing the time and date Feb 3, 2011, at 04:05:00 AM, GMT:</p><ul><li><code>2011-02-03 04:05+0000</code></li><li><code>2011-02-03 04:05:00+0000</code></li><li><code>2011-02-03T04:05+0000</code></li><li><code>2011-02-03T04:05:00+0000</code></li></ul><p>The <code>+0000</code> above is an RFC 822 4-digit time zone specification; <code>+0000</code> refers to GMT. US Pacific Standard Time is <code>-0800</code>. The time zone may be omitted if desired&#8212; the date will be interpreted as being in the time zone under which the coordinating Cassandra node is configured.</p><ul><li><code>2011-02-03 04:05</code></li><li><code>2011-02-03 04:05:00</code></li><li><code>2011-02-03T04:05</code></li><li><code>2011-02-03T04:05:00</code></li></ul><p>There are clear difficulties inherent in relying on the time zone configuration being as expected, though, so it is recommended that the time zone always be specified for tim
 estamps when feasible.</p><p>The time of day may also be omitted, if the date is the only piece that matters:</p><ul><li><code>2011-02-03</code></li><li><code>2011-02-03+0000</code></li></ul><p>In that case, the time of day will default to 00:00:00, in the specified or default time zone.</p><h3 id="counters">Counters</h3><p>The <code>counter</code> type is used to define <em>counter columns</em>. A counter column is a column whose value is a 64-bit signed integer and on which 2 operations are supported: incrementation and decrementation (see <a href="#updateStmt"><code>UPDATE</code></a> for syntax).  Note the value of a counter cannot be set. A counter doesn&#8217;t exist until first incremented/decremented, and the first incrementation/decrementation is made as if the previous value was 0. Deletion of counter columns is supported but have some limitations (see the <a href="http://wiki.apache.org/cassandra/Counters">Cassandra Wiki</a> for more information).</p><p>The use of the coun
 ter type is limited in the following way:</p><ul><li>It cannot be used for column that is part of the <code>PRIMARY KEY</code> of a table.</li><li>A table that contains a counter can only contain counters. In other words, either all the columns of a table outside the <code>PRIMARY KEY</code> have the counter type, or none of them have it.</li></ul><h3 id="collections">Working with collections</h3><h4 id="Noteworthycharacteristics">Noteworthy characteristics</h4><p>Collections are meant for storing/denormalizing relatively small amount of data. They work well for things like &#8220;the phone numbers of a given user&#8221;, &#8220;labels applied to an email&#8221;, etc. But when items are expected to grow unbounded (&#8220;all the messages sent by a given user&#8221;, &#8220;events registered by a sensor&#8221;, ...), then collections are not appropriate anymore and a specific table (with clustering columns) should be used. Concretely, collections have the following limitations:</p><u
 l><li>Collections are always read in their entirety (and reading one is not paged internally).</li><li>Collections cannot have more than 65535 elements. More precisely, while it may be possible to insert more than 65535 elements, it is not possible to read more than the 65535 first elements (see <a href="https://issues.apache.org/jira/browse/CASSANDRA-5428">CASSANDRA-5428</a> for details).</li><li>While insertion operations on sets and maps never incur a read-before-write internally, some operations on lists do (see the section on lists below for details). It is thus advised to prefer sets over lists when possible.</li></ul><p>Please note that while some of those limitations may or may not be loosen in the future, the general rule that collections are for denormalizing small amount of data is meant to stay.</p><h4 id="map">Maps</h4><p>A <code>map</code> is a <a href="#types">typed</a> set of key-value pairs, where keys are unique. Furthermore, note that the map are internally sorted
  by their keys and will thus always be returned in that order. To create a column of type <code>map</code>, use the <code>map</code> keyword suffixed with comma-separated key and value types, enclosed in angle brackets.  For example:</p><pre class="sample"><pre>CREATE TABLE users (
     id text PRIMARY KEY,
     given text,
     surname text,
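
Stepping back to the counter rules described above, a minimal sketch; the page_views table and its columns are hypothetical:

    // Every column outside the PRIMARY KEY must be a counter.
    CREATE TABLE page_views (
        url text PRIMARY KEY,
        views counter
    )

    // Counters are only ever incremented or decremented, never set directly;
    // the first update behaves as if the previous value was 0.
    UPDATE page_views SET views = views + 1 WHERE url = '/index.html'
    UPDATE page_views SET views = views - 2 WHERE url = '/index.html'
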
@@ -284,7 +284,7 @@ UPDATE users SET favs['author'] = 'Ed Po
 UPDATE users SET favs = favs +  { 'movie' : 'Cassablanca' } WHERE id = 'jsmith'
 </pre></pre><p>Note that TTLs are allowed for both <code>INSERT</code> and <code>UPDATE</code>, but in both cases the TTL set only applies to the newly inserted/updated <em>values</em>. In other words,</p><pre class="sample"><pre>// Updating (or inserting)
 UPDATE users USING TTL 10 SET favs['color'] = 'green' WHERE id = 'jsmith'
-</pre></pre><p>will only apply the TTL to the <code>{ 'color' : 'green' }</code> record, the rest of the map remaining unaffected.</p><p>Deleting a map record is done with:</p><pre class="sample"><pre>DELETE favs['author'] FROM plays WHERE id = 'jsmith'
+</pre></pre><p>will only apply the TTL to the <code>{ 'color' : 'green' }</code> record, the rest of the map remaining unaffected.</p><p>Deleting a map record is done with:</p><pre class="sample"><pre>DELETE favs['author'] FROM users WHERE id = 'jsmith'
 </pre></pre><h4 id="set">Sets</h4><p>A <code>set</code> is a <a href="#types">typed</a> collection of unique values. Sets are ordered by their values. To create a column of type <code>set</code>, use the <code>set</code> keyword suffixed with the value type enclosed in angle brackets.  For example:</p><pre class="sample"><pre>CREATE TABLE images (
     name text PRIMARY KEY,
     owner text,
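
As a brief illustration of the set updates described above, assuming the images table carries a column declared as tags set<text> (the column name is an assumption):

    // Adding and removing individual elements.
    UPDATE images SET tags = tags + { 'landscape', 'kitten' } WHERE name = 'cat.jpg'
    UPDATE images SET tags = tags - { 'landscape' } WHERE name = 'cat.jpg'

    // Removing the whole set.
    DELETE tags FROM images WHERE name = 'cat.jpg'
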
@@ -313,6 +313,6 @@ UPDATE plays SET scores = scores - [ 12,
     username text,
     ...
 )
-</pre></pre><p>then the <code>token</code> function will take a single argument of type <code>text</code> (in that case, the partition key is <code>userid</code> (there is no clustering key so the partition key is the same than the primary key)), and the return type will be <code>bigint</code>.</p><h3 id="timeuuidFun">Timeuuid functions</h3><h4 id="now"><code>now</code></h4><p>The <code>now</code> function takes no arguments and generates a new unique timeuuid (at the time where the statement using it is executed). Note that this method is useful for insertion but is largely non-sensical in <code>WHERE</code> clauses. For instance, a query of the form</p><pre class="sample"><pre>SELECT * FROM myTable WHERE t = now()
+</pre></pre><p>then the <code>token</code> function will take a single argument of type <code>text</code> (in that case, the partition key is <code>userid</code> (there is no clustering columns so the partition key is the same than the primary key)), and the return type will be <code>bigint</code>.</p><h3 id="timeuuidFun">Timeuuid functions</h3><h4 id="now"><code>now</code></h4><p>The <code>now</code> function takes no arguments and generates a new unique timeuuid (at the time where the statement using it is executed). Note that this method is useful for insertion but is largely non-sensical in <code>WHERE</code> clauses. For instance, a query of the form</p><pre class="sample"><pre>SELECT * FROM myTable WHERE t = now()
 </pre></pre><p>will never return any result by design, since the value returned by <code>now()</code> is guaranteed to be unique.</p><h4 id="minTimeuuidandmaxTimeuuid"><code>minTimeuuid</code> and <code>maxTimeuuid</code></h4><p>The <code>minTimeuuid</code> (resp. <code>maxTimeuuid</code>) function takes a <code>timestamp</code> value <code>t</code> (which can be <a href="#usingdates">either a timestamp or a date string</a>) and return a <em>fake</em> <code>timeuuid</code> corresponding to the <em>smallest</em> (resp. <em>biggest</em>) possible <code>timeuuid</code> having for timestamp <code>t</code>. So for instance:</p> <pre class="sample"><pre>SELECT * FROM myTable WHERE t > maxTimeuuid('2013-01-01 00:05+0000') AND t &lt; minTimeuuid('2013-02-02 10:00+0000')
 </pre></pre> <p>will select all rows where the <code>timeuuid</code> column <code>t</code> is strictly older than &#8216;2013-01-01 00:05+0000&#8217; but strictly younger than &#8216;2013-02-02 10:00+0000&#8217;.  Please note that <code>t >= maxTimeuuid('2013-01-01 00:05+0000')</code> would still <em>not</em> select a <code>timeuuid</code> generated exactly at &#8216;2013-01-01 00:05+0000&#8217; and is essentially equivalent to <code>t > maxTimeuuid('2013-01-01 00:05+0000')</code>.</p><p><em>Warning</em>: We called the values generated by <code>minTimeuuid</code> and <code>maxTimeuuid</code> <em>fake</em> UUIDs because they do not respect the Time-Based UUID generation process specified by the <a href="http://www.ietf.org/rfc/rfc4122.txt">RFC 4122</a>. In particular, the values returned by these 2 methods will not be unique. This means you should only use those methods for querying (as in the example above). Inserting the result of those methods is almost certainly <em>a bad idea</em>.<
 /p><h4 id="dateOfandunixTimestampOf"><code>dateOf</code> and <code>unixTimestampOf</code></h4><p>The <code>dateOf</code> and <code>unixTimestampOf</code> functions take a <code>timeuuid</code> argument and extract the embeded timestamp. However, while the <code>dateof</code> function return it with the <code>timestamp</code> type (that most client, including cqlsh, interpret as a date), the <code>unixTimestampOf</code> function returns it as a <code>bigint</code> raw value.</p><h3 id="blobFun">Blob conversion functions</h3><p>A number of functions are provided to &#8220;convert&#8221; the native types into binary data (<code>blob</code>). For every <code>&lt;native-type></code> <code>type</code> supported by CQL3 (a notable exceptions is <code>blob</code>, for obvious reasons), the function <code>typeAsBlob</code> takes a argument of type <code>type</code> and return it as a <code>blob</code>.  Conversely, the function <code>blobAsType</code> takes a 64-bit <code>blob</code> argumen
 t and convert it to a <code>bigint</code> value.  And so for instance, <code>bigintAsBlob(3)</code> is <code>0x0000000000000003</code> and <code>blobAsBigint(0x0000000000000003)</code> is <code>3</code>.</p><h2 id="appendixA">Appendix A: CQL Keywords</h2><p>CQL distinguishes between <em>reserved</em> and <em>non-reserved</em> keywords. Reserved keywords cannot be used as identifier, they are truly reserved for the language (but one can enclose a reserved keyword by double-quotes to use it as an identifier). Non-reserved keywords however only have a specific meaning in certain context but can used as identifer otherwise. The only <em>raison d'être</em> of these non-reserved keywords is convenience: some keyword are non-reserved when it was always easy for the parser to decide whether they were used as keywords or not.</p><table><tr><th>Keyword      </th><th>Reserved? </th></tr><tr><td><code>ADD</code>          </td><td>yes </td></tr><tr><td><code>ALL</code>          </td><td>no  <
 /td></tr><tr><td><code>ALTER</code>        </td><td>yes </td></tr><tr><td><code>AND</code>          </td><td>yes </td></tr><tr><td><code>ANY</code>          </td><td>yes </td></tr><tr><td><code>APPLY</code>        </td><td>yes </td></tr><tr><td><code>ASC</code>          </td><td>yes </td></tr><tr><td><code>ASCII</code>        </td><td>no  </td></tr><tr><td><code>AUTHORIZE</code>    </td><td>yes </td></tr><tr><td><code>BATCH</code>        </td><td>yes </td></tr><tr><td><code>BEGIN</code>        </td><td>yes </td></tr><tr><td><code>BIGINT</code>       </td><td>no  </td></tr><tr><td><code>BLOB</code>         </td><td>no  </td></tr><tr><td><code>BOOLEAN</code>      </td><td>no  </td></tr><tr><td><code>BY</code>           </td><td>yes </td></tr><tr><td><code>CLUSTERING</code>   </td><td>no  </td></tr><tr><td><code>COLUMNFAMILY</code> </td><td>yes </td></tr><tr><td><code>COMPACT</code>      </td><td>no  </td></tr><tr><td><code>CONSISTENCY</code>  </td><td>no  </td></tr><tr><td><code>COUNT
 </code>        </td><td>no  </td></tr><tr><td><code>COUNTER</code>      </td><td>no  </td></tr><tr><td><code>CREATE</code>       </td><td>yes </td></tr><tr><td><code>DECIMAL</code>      </td><td>no  </td></tr><tr><td><code>DELETE</code>       </td><td>yes </td></tr><tr><td><code>DESC</code>         </td><td>yes </td></tr><tr><td><code>DOUBLE</code>       </td><td>no  </td></tr><tr><td><code>DROP</code>         </td><td>yes </td></tr><tr><td><code>EACH_QUORUM</code>  </td><td>yes </td></tr><tr><td><code>FLOAT</code>        </td><td>no  </td></tr><tr><td><code>FROM</code>         </td><td>yes </td></tr><tr><td><code>GRANT</code>        </td><td>yes </td></tr><tr><td><code>IN</code>           </td><td>yes </td></tr><tr><td><code>INDEX</code>        </td><td>yes </td></tr><tr><td><code>CUSTOM</code>       </td><td>no  </td></tr><tr><td><code>INSERT</code>       </td><td>yes </td></tr><tr><td><code>INT</code>          </td><td>no  </td></tr><tr><td><code>INTO</code>         </td><td>yes 
 </td></tr><tr><td><code>KEY</code>          </td><td>no  </td></tr><tr><td><code>KEYSPACE</code>     </td><td>yes </td></tr><tr><td><code>LEVEL</code>        </td><td>no  </td></tr><tr><td><code>LIMIT</code>        </td><td>yes </td></tr><tr><td><code>LOCAL_ONE</code>    </td><td>yes </td></tr><tr><td><code>LOCAL_QUORUM</code> </td><td>yes </td></tr><tr><td><code>MODIFY</code>       </td><td>yes </td></tr><tr><td><code>NORECURSIVE</code>  </td><td>yes </td></tr><tr><td><code>NOSUPERUSER</code>  </td><td>no  </td></tr><tr><td><code>OF</code>           </td><td>yes </td></tr><tr><td><code>ON</code>           </td><td>yes </td></tr><tr><td><code>ONE</code>          </td><td>yes </td></tr><tr><td><code>ORDER</code>        </td><td>yes </td></tr><tr><td><code>PASSWORD</code>     </td><td>no  </td></tr><tr><td><code>PERMISSION</code>   </td><td>no  </td></tr><tr><td><code>PERMISSIONS</code>  </td><td>no  </td></tr><tr><td><code>PRIMARY</code>      </td><td>yes </td></tr><tr><td><code>QUOR
 UM</code>       </td><td>yes </td></tr><tr><td><code>REVOKE</code>       </td><td>yes </td></tr><tr><td><code>SCHEMA</code>       </td><td>yes </td></tr><tr><td><code>SELECT</code>       </td><td>yes </td></tr><tr><td><code>SET</code>          </td><td>yes </td></tr><tr><td><code>STORAGE</code>      </td><td>no  </td></tr><tr><td><code>SUPERUSER</code>    </td><td>no  </td></tr><tr><td><code>TABLE</code>        </td><td>yes </td></tr><tr><td><code>TEXT</code>         </td><td>no  </td></tr><tr><td><code>TIMESTAMP</code>    </td><td>no  </td></tr><tr><td><code>TIMEUUID</code>     </td><td>no  </td></tr><tr><td><code>THREE</code>        </td><td>yes </td></tr><tr><td><code>TOKEN</code>        </td><td>yes </td></tr><tr><td><code>TRUNCATE</code>     </td><td>yes </td></tr><tr><td><code>TTL</code>          </td><td>no  </td></tr><tr><td><code>TWO</code>          </td><td>yes </td></tr><tr><td><code>TYPE</code>         </td><td>no  </td></tr><tr><td><code>UPDATE</code>       </td><td>yes
  </td></tr><tr><td><code>USE</code>          </td><td>yes </td></tr><tr><td><code>USER</code>         </td><td>no  </td></tr><tr><td><code>USERS</code>        </td><td>no  </td></tr><tr><td><code>USING</code>        </td><td>yes </td></tr><tr><td><code>UUID</code>         </td><td>no  </td></tr><tr><td><code>VALUES</code>       </td><td>no  </td></tr><tr><td><code>VARCHAR</code>      </td><td>no  </td></tr><tr><td><code>VARINT</code>       </td><td>no  </td></tr><tr><td><code>WHERE</code>        </td><td>yes </td></tr><tr><td><code>WITH</code>         </td><td>yes </td></tr><tr><td><code>WRITETIME</code>    </td><td>no  </td></tr></table><h2 id="changes">Changes</h2><p>The following describes the addition/changes brought for each version of CQL.</p><h3 id="a3.0.5">3.0.5</h3><ul><li><code>SELECT</code>, <code>UPDATE</code>, and <code>DELETE</code> statements now allow empty <code>IN</code> relations (see <a href="https://issues.apache.org/jira/browse/CASSANDRA-5626">CASSANDRA-5626</a
 >).</li></ul><h3 id="a3.0.4">3.0.4</h3><ul><li>Updated the syntax for custom <a href="#createIndexStmt">secondary indexes</a>.</li><li>Non-equal condition on the partition key are now never supported, even for ordering partitioner as this was not correct (the order was <strong>not</strong> the one of the type of the partition key). Instead, the <code>token</code> method should always be used for range queries on the partition key (see <a href="#selectWhere">WHERE clauses</a>).</li></ul><h3 id="a3.0.3">3.0.3</h3><ul><li>Support for custom <a href="#createIndexStmt">secondary indexes</a> has been added.</li></ul><h3 id="a3.0.2">3.0.2</h3><ul><li>Type validation for the <a href="#constants">constants</a> has been fixed. For instance, the implementation used to allow <code>'2'</code> as a valid value for an <code>int</code> column (interpreting it has the equivalent of <code>2</code>), or <code>42</code> as a valid <code>blob</code> value (in which case <code>42</code> was interpreted a
 s an hexadecimal representation of the blob). This is no longer the case, type validation of constants is now more strict. See the <a href="#types">data types</a> section for details on which constant is allowed for which type.</li><li>The type validation fixed of the previous point has lead to the introduction of <a href="#constants">blobs constants</a> to allow inputing blobs. Do note that while inputing blobs as strings constant is still supported by this version (to allow smoother transition to blob constant), it is now deprecated (in particular the <a href="#types">data types</a> section does not list strings constants as valid blobs) and will be removed by a future version. If you were using strings as blobs, you should thus update your client code asap to switch blob constants.</li><li>A number of functions to convert native types to blobs have also been introduced. Furthermore the token function is now also allowed in select clauses. See the <a href="#functions">section on f
 unctions</a> for details.</li></ul><h3 id="a3.0.1">3.0.1</h3><ul><li><a href="#usingdates">Date strings</a> (and timestamps) are no longer accepted as valid <code>timeuuid</code> values. Doing so was a bug in the sense that date strings are not valid <code>timeuuid</code> values, and it was thus resulting in <a href="https://issues.apache.org/jira/browse/CASSANDRA-4936">confusing behaviors</a>.  However, the following new methods have been added to help working with <code>timeuuid</code>: <code>now</code>, <code>minTimeuuid</code>, <code>maxTimeuuid</code>, <code>dateOf</code> and <code>unixTimestampOf</code>. See the <a href="#usingtimeuuid">section dedicated to these methods</a> for more detail.</li><li><a href="#constants">Float constants</a> now support the exponent notation. In other words, <code>4.2E10</code> is now a valid floating point value.</li></ul><h2 id="Versioning">Versioning</h2><p>Versioning of the CQL language adheres to the <a href="http://semver.org">Semantic Versioning<
 /a> guidelines. Versions take the form X.Y.Z where X, Y, and Z are integer values representing major, minor, and patch level respectively. There is no correlation between Cassandra release versions and the CQL language version.</p><table><tr><th>version</th><th>description</th></tr><tr><td>Major     </td><td>The major version <em>must</em> be bumped when backward incompatible changes are introduced. This should rarely occur.</td></tr><tr><td>Minor     </td><td>Minor version increments occur when new, but backward compatible, functionality is introduced.</td></tr><tr><td>Patch     </td><td>The patch version is incremented when bugs are fixed.</td></tr></table></body></html>
\ No newline at end of file
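
A hedged sketch of the timeuuid and blob helper functions shown above; the events table (stream text, id timeuuid, payload blob, PRIMARY KEY (stream, id)) is hypothetical:

    // now() generates a fresh timeuuid, textAsBlob() converts a native value to a blob.
    INSERT INTO events (stream, id, payload) VALUES ('sensor-1', now(), textAsBlob('42.0'))

    // minTimeuuid/maxTimeuuid build fake boundary timeuuids for range queries;
    // dateOf/unixTimestampOf extract the embedded timestamp for display.
    SELECT dateOf(id), unixTimestampOf(id)
      FROM events
     WHERE stream = 'sensor-1'
       AND id > maxTimeuuid('2013-01-01 00:05+0000')
       AND id < minTimeuuid('2013-02-02 10:00+0000')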

Modified: cassandra/site/publish/download/index.html
URL: http://svn.apache.org/viewvc/cassandra/site/publish/download/index.html?rev=1563901&r1=1563900&r2=1563901&view=diff
==============================================================================
--- cassandra/site/publish/download/index.html (original)
+++ cassandra/site/publish/download/index.html Mon Feb  3 13:56:55 2014
@@ -102,16 +102,16 @@
   <p>
   Previous stable branches of Cassandra continue to see periodic maintenance
  for some time after a new major release is made. The latest release on the
-  1.2 branch is 1.2.13 (released on
-  2013-12-20).
+  1.2 branch is 1.2.14 (released on
+  2014-02-03).
   </p>
 
   <ul>
     <li>
-    <a class="filename" href="http://www.apache.org/dyn/closer.cgi?path=/cassandra/1.2.13/apache-cassandra-1.2.13-bin.tar.gz">apache-cassandra-1.2.13-bin.tar.gz</a>
-    [<a href="http://www.apache.org/dist/cassandra/1.2.13/apache-cassandra-1.2.13-bin.tar.gz.asc">PGP</a>]
-    [<a href="http://www.apache.org/dist/cassandra/1.2.13/apache-cassandra-1.2.13-bin.tar.gz.md5">MD5</a>]
-    [<a href="http://www.apache.org/dist/cassandra/1.2.13/apache-cassandra-1.2.13-bin.tar.gz.sha1">SHA1</a>]
+    <a class="filename" href="http://www.apache.org/dyn/closer.cgi?path=/cassandra/1.2.14/apache-cassandra-1.2.14-bin.tar.gz">apache-cassandra-1.2.14-bin.tar.gz</a>
+    [<a href="http://www.apache.org/dist/cassandra/1.2.14/apache-cassandra-1.2.14-bin.tar.gz.asc">PGP</a>]
+    [<a href="http://www.apache.org/dist/cassandra/1.2.14/apache-cassandra-1.2.14-bin.tar.gz.md5">MD5</a>]
+    [<a href="http://www.apache.org/dist/cassandra/1.2.14/apache-cassandra-1.2.14-bin.tar.gz.sha1">SHA1</a>]
     </li>
   </ul>
   
@@ -154,10 +154,10 @@
     </li>
   
     <li>
-    <a class="filename" href="http://www.apache.org/dyn/closer.cgi?path=/cassandra/1.2.13/apache-cassandra-1.2.13-src.tar.gz">apache-cassandra-1.2.13-src.tar.gz</a>
-    [<a href="http://www.apache.org/dist/cassandra/1.2.13/apache-cassandra-1.2.13-src.tar.gz.asc">PGP</a>]
-    [<a href="http://www.apache.org/dist/cassandra/1.2.13/apache-cassandra-1.2.13-src.tar.gz.md5">MD5</a>]
-    [<a href="http://www.apache.org/dist/cassandra/1.2.13/apache-cassandra-1.2.13-src.tar.gz.sha1">SHA1</a>]
+    <a class="filename" href="http://www.apache.org/dyn/closer.cgi?path=/cassandra/1.2.14/apache-cassandra-1.2.14-src.tar.gz">apache-cassandra-1.2.14-src.tar.gz</a>
+    [<a href="http://www.apache.org/dist/cassandra/1.2.14/apache-cassandra-1.2.14-src.tar.gz.asc">PGP</a>]
+    [<a href="http://www.apache.org/dist/cassandra/1.2.14/apache-cassandra-1.2.14-src.tar.gz.md5">MD5</a>]
+    [<a href="http://www.apache.org/dist/cassandra/1.2.14/apache-cassandra-1.2.14-src.tar.gz.sha1">SHA1</a>]
     </li>
   
   

Modified: cassandra/site/src/settings.py
URL: http://svn.apache.org/viewvc/cassandra/site/src/settings.py?rev=1563901&r1=1563900&r2=1563901&view=diff
==============================================================================
--- cassandra/site/src/settings.py (original)
+++ cassandra/site/src/settings.py Mon Feb  3 13:56:55 2014
@@ -92,8 +92,8 @@ SITE_POST_PROCESSORS = {
 }
 
 class CassandraDef(object):
-    oldstable_version = '1.2.13'
-    oldstable_release_date = '2013-12-20'
+    oldstable_version = '1.2.14'
+    oldstable_release_date = '2014-02-03'
     oldstable_exists = True
     veryoldstable_version = '1.1.12'
     veryoldstable_release_date = '2013-05-27'