Posted to commits@cassandra.apache.org by jb...@apache.org on 2012/04/20 17:59:43 UTC

git commit: add concurrent schema, cql3, and CFRR wide rows to NEWS. clarify that KeyRange.filter allows Hadoop to take advantage of C* indexes

Updated Branches:
  refs/heads/cassandra-1.1.0 3b697bbc4 -> 54aaa3350


add concurrent schema, cql3, and CFRR wide rows to NEWS.  clarify that KeyRange.filter allows Hadoop to take advantage of C* indexes


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/54aaa335
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/54aaa335
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/54aaa335

Branch: refs/heads/cassandra-1.1.0
Commit: 54aaa33506847a2a33705b05ff76a40949b5fe64
Parents: 3b697bb
Author: Jonathan Ellis <jb...@apache.org>
Authored: Fri Apr 20 10:59:01 2012 -0500
Committer: Jonathan Ellis <jb...@apache.org>
Committed: Fri Apr 20 10:59:22 2012 -0500

----------------------------------------------------------------------
 NEWS.txt |   41 +++++++++++++++++++++++++++++------------
 1 files changed, 29 insertions(+), 12 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/cassandra/blob/54aaa335/NEWS.txt
----------------------------------------------------------------------
diff --git a/NEWS.txt b/NEWS.txt
index 5132730..684182f 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -48,19 +48,23 @@ Upgrading
 
 Features
 --------
-    - Cassandra 1.1 adds row-level isolation.  Multi-column updates to
-      a single row have always been *atomic* (either all will be applied,
-      or none) thanks to the CommitLog, but until 1.1 they were not *isolated*
-      -- a reader may see mixed old and new values while the update happens.
+    - Concurrent schema updates are now supported, with any conflicts
+      automatically resolved.  This makes temporary columnfamilies and
+      other dynamic-schema patterns safe to use in applications.
+    - The CQL language has undergone a major revision, CQL3, the
+      highlights of which are covered at [1].  CQL3 is not
+      backwards-compatible with CQL2, so we've introduced a
+      set_cql_version Thrift method to specify which version you want.
+      (The default remains CQL2 at least until Cassandra 1.2.)  cqlsh
+      adds a --cql3 flag to enable this.
+      [1] http://www.datastax.com/dev/blog/schema-in-cassandra-1-1
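
A conceptual sketch (plain Python, not the actual Thrift API) of the per-connection version selection described above: CQL2 stays the default unless the client opts in, which is what the `--cql3` flag arranges in cqlsh.

```python
# Illustrative only -- mirrors the role of the set_cql_version Thrift
# method, not its real signature or transport.

class Connection:
    def __init__(self):
        # The default remains CQL2 at least until Cassandra 1.2.
        self.cql_version = "2.0.0"

    def set_cql_version(self, version):
        # Opt this connection in to a different CQL grammar.
        self.cql_version = version

conn = Connection()
assert conn.cql_version == "2.0.0"   # default: CQL2
conn.set_cql_version("3.0.0")        # what `cqlsh --cql3` requests
assert conn.cql_version == "3.0.0"
```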
+    - Row-level isolation: multi-column updates to a single row have
+      always been *atomic* (either all will be applied, or none)
+      thanks to the CommitLog, but until 1.1 they were not *isolated*
+      -- a reader may see mixed old and new values while the update
+      happens.
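
The atomic-versus-isolated distinction can be sketched in plain Python (a conceptual illustration, not Cassandra internals): applying columns one at a time lets a concurrent reader observe a mix of old and new values, while swapping in a whole new row means readers see either the old row or the new one, never a blend.

```python
# Conceptual illustration only -- not Cassandra's implementation.

def update_in_place(row, update):
    """Atomic but not isolated: columns land one at a time, so a
    reader between iterations could see a partial update."""
    for col, val in update.items():
        row[col] = val

def update_isolated(row_ref, update):
    """Isolated: build the new row, then swap the reference in a
    single step, so no reader ever sees mixed values."""
    new_row = dict(row_ref[0])
    new_row.update(update)
    row_ref[0] = new_row  # one reference assignment

row = {"name": "old", "city": "old"}
update_in_place(row, {"name": "new", "city": "new"})
assert row == {"name": "new", "city": "new"}

row_ref = [{"name": "old", "city": "old"}]
update_isolated(row_ref, {"name": "new", "city": "new"})
assert row_ref[0] == {"name": "new", "city": "new"}
```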
     - Finer-grained control over data directories, allowing a ColumnFamily to
-      be pinned to specfic media.
-    - Hadoop: a new BulkOutputFormat is included which will directly write
-      SSTables locally and then stream them into the cluster.
-      YOU SHOULD USE BulkOutputFormat BY DEFAULT.  ColumnFamilyOutputFormat
-      is still around in case for some strange reason you want results
-      trickling out over Thrift, but BulkOutputFormat is significantly
-      more efficient.
-    - Hadoop: KeyRange.filter is now supported with ColumnFamilyInputFormat
+      be pinned to a specific volume, e.g. one backed by SSD.
     - The bulk loader is no longer a fat client; it can be run from an
       existing machine in a cluster.
     - A new write survey mode has been added, similar to bootstrap (enabled via
@@ -74,6 +78,19 @@ Features
     - Compactions may now be aborted via JMX or nodetool.
     - The stress tool is not new in 1.1, but it is newly included in
       binary builds as well as the source tree.
+    - Hadoop: a new BulkOutputFormat is included which will directly write
+      SSTables locally and then stream them into the cluster.
+      YOU SHOULD USE BulkOutputFormat BY DEFAULT.  ColumnFamilyOutputFormat
+      is still around in case for some strange reason you want results
+      trickling out over Thrift, but BulkOutputFormat is significantly
+      more efficient.
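
The efficiency contrast can be sketched conceptually (plain Python, hypothetical names, not the Hadoop API): trickling sends one network call per record, while the bulk style writes everything locally and ships it once.

```python
# Illustrative contrast only; the path and callback names are made up.

def trickle(records, send):
    """One call per record -- the ColumnFamilyOutputFormat style."""
    for r in records:
        send(r)

def bulk(records, write_local, stream_file):
    """Write all records to a local file, then stream that file once
    into the cluster -- the BulkOutputFormat style."""
    path = write_local(records)
    stream_file(path)

sent = []
trickle(range(1000), sent.append)      # 1000 individual sends

streams = []
bulk(range(1000), lambda rs: "/tmp/out.db", streams.append)
assert len(sent) == 1000 and len(streams) == 1  # vs. a single stream
```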
+    - Hadoop: KeyRange.filter is now supported with ColumnFamilyInputFormat,
+      allowing index expressions to be evaluated server-side to reduce
+      the amount of data sent to Hadoop
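
The benefit of evaluating the filter server-side can be sketched as follows (conceptual Python, illustrative names, not the Thrift KeyRange API): the node applies the index expression before returning rows, so only matching rows cross the wire to the Hadoop workers.

```python
# Conceptual sketch of server-side index filtering; names are
# illustrative, not Cassandra's.

rows = {
    "k1": {"state": "TX", "value": 1},
    "k2": {"state": "CA", "value": 2},
    "k3": {"state": "TX", "value": 3},
}

def scan_with_filter(rows, predicate):
    """Apply the filter on the server, so non-matching rows are
    never sent to the Hadoop job at all."""
    return {k: v for k, v in rows.items() if predicate(v)}

matched = scan_with_filter(rows, lambda row: row["state"] == "TX")
assert set(matched) == {"k1", "k3"}   # k2 never leaves the node
```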
+    - Hadoop: ColumnFamilyRecordReader has a wide-row mode, enabled via
+      a boolean parameter to setInputColumnFamily, that pages through
+      data column-at-a-time instead of row-at-a-time
+
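
The wide-row mode above can be sketched conceptually (plain Python, not the ColumnFamilyRecordReader implementation): paging through a row's columns a slice at a time bounds memory by the page size rather than by the row's total width.

```python
# Conceptual sketch of column-at-a-time paging; illustrative only.

def page_columns(row, page_size):
    """Yield successive slices of a row's columns, so a very wide
    row never has to be materialized all at once."""
    cols = sorted(row)
    for i in range(0, len(cols), page_size):
        yield [(c, row[c]) for c in cols[i:i + page_size]]

wide_row = {f"col{i:03d}": i for i in range(10)}
pages = list(page_columns(wide_row, 4))
assert len(pages) == 3                # 4 + 4 + 2 columns
assert pages[0][0] == ("col000", 0)
```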
 
 
 1.0.8