Posted to commits@cassandra.apache.org by jb...@apache.org on 2012/05/05 01:10:28 UTC

[6/10] git commit: merge from 1.0

merge from 1.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/885ab7cf
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/885ab7cf
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/885ab7cf

Branch: refs/heads/cassandra-1.1
Commit: 885ab7cf85fd2d074d06e9444f3f9d63b2bf2702
Parents: 44e7a08 b2ca7f8
Author: Jonathan Ellis <jb...@apache.org>
Authored: Fri May 4 18:06:40 2012 -0500
Committer: Jonathan Ellis <jb...@apache.org>
Committed: Fri May 4 18:06:40 2012 -0500

----------------------------------------------------------------------
 NEWS.txt                                           |    7 +++++++
 .../cassandra/io/sstable/DescriptorTest.java       |    2 --
 2 files changed, 7 insertions(+), 2 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/cassandra/blob/885ab7cf/NEWS.txt
----------------------------------------------------------------------
diff --cc NEWS.txt
index 8c65101,42bea7c..b87f05c
--- a/NEWS.txt
+++ b/NEWS.txt
@@@ -8,105 -8,20 +8,112 @@@ upgrade, just in case you need to roll 
  (Cassandra version X + 1 will always be able to read data files created
  by version X, but the inverse is not necessarily the case.)
  
+ 1.0.10
+ ======
+ 
+ Upgrading
+ ---------
+     - Nothing specific to 1.0.10
+ 
  
 -1.0.9
 +1.1.1
  =====
  
 +Features
 +--------
 +    - Continuous commitlog archiving and point-in-time recovery; see
 +      conf/commitlog_archiving.properties (sketched after this diff)
 +    - Incremental repair by token range, exposed over JMX
 +
 +
 +1.1
 +===
 +
  Upgrading
  ---------
 -    - Nothing specific to 1.0.9
 +    - Compression is enabled by default on newly created ColumnFamilies
 +      (and unchanged for ColumnFamilies created prior to upgrading).
 +    - If you are running a multi-datacenter setup, you should upgrade to
 +      the latest 1.0.x (or 0.8.x) release before upgrading.  Versions
 +      0.8.8 and 1.0.3-1.0.5 generate cross-dc forwarding that is incompatible
 +      with 1.1.
 +    - EACH_QUORUM ConsistencyLevel is only supported for writes and will now
 +      throw an InvalidRequestException when used for reads, as sketched after
 +      this diff.  (Previous versions silently performed a LOCAL_QUORUM read.)
 +    - ANY ConsistencyLevel is only supported for writes and will now
 +      throw an InvalidRequestException when used for reads.  (Previous
 +      versions would silently perform a ONE read for range queries;
 +      single-row and multiget reads already rejected ANY.)
 +    - The largest mutation batch accepted by the commitlog is now 128MB.  
 +      (In practice, batches larger than ~10MB always caused poor
 +      performance due to load volatility and GC promotion failures.)
 +      Larger batches will continue to be accepted but will not be
 +      durable.  Consider setting durable_writes=false if you really
 +      want to use such large batches.
 +    - Make sure the global settings key_cache_{size_in_mb, save_period} and
 +      row_cache_{size_in_mb, save_period} in conf/cassandra.yaml are used
 +      instead of the per-ColumnFamily options.
 +    - JMX methods no longer return custom Cassandra objects.  Any such methods
 +      will now return standard Maps, Lists, etc.
 +    - Hadoop input and output details are now separated.  If you were using
 +      methods such as getRpcPort, use getInputRpcPort or getOutputRpcPort as
 +      appropriate; a job-setup sketch follows this diff.
 +    - CQL changes:
 +      + Prior to 1.1, you could use KEY as the primary key name in some
 +        select statements, even if the PK was actually given a different
 +        name.  In 1.1+ you must use the defined PK name.
 +    - The sliced_buffer_size_in_kb option has been removed from the
 +      cassandra.yaml config file (this option was a no-op since 1.0).
 +
 +Features
 +--------
 +    - Concurrent schema updates are now supported, with any conflicts
 +      automatically resolved.  This makes temporary columnfamilies and
 +      other uses of dynamic schema practical to use in applications.
 +    - The CQL language has undergone a major revision, CQL3, the
 +      highlights of which are covered at [1].  CQL3 is not
 +      backwards-compatible with CQL2, so we've introduced a
 +      set_cql_version Thrift method to specify which version you want.
 +      (The default remains CQL2 at least until Cassandra 1.2.)  cqlsh
 +      adds a --cql3 flag to enable this.
 +      [1] http://www.datastax.com/dev/blog/schema-in-cassandra-1-1
 +    - Row-level isolation: multi-column updates to a single row have
 +      always been *atomic* (either all will be applied, or none)
 +      thanks to the CommitLog, but until 1.1 they were not *isolated*
 +      -- a reader could see a mix of old and new values while the update
 +      was being applied.
 +    - Finer-grained control over data directories, allowing a ColumnFamily to
 +      be pinned to a specific volume, e.g. one backed by SSD.
 +    - The bulk loader is no longer a fat client; it can be run from an
 +      existing machine in a cluster.
 +    - A new write survey mode has been added (enabled via
 +      -Dcassandra.write_survey=true); it behaves like bootstrap, but the node
 +      does not automatically join the cluster.  This is useful for testing
 +      compaction strategy changes with live traffic, without cluster impact.
 +    - Key and row caches are now global, similar to the global memtable
 +      threshold. Manual tuning of cache sizes per-columnfamily is no longer
 +      required.
 +    - Off-heap caches no longer require JNA, and will work out of the box
 +      on Windows as well as Unix platforms.
 +    - Streaming is now multithreaded.
 +    - Compactions may now be aborted via JMX or nodetool.
 +    - The stress tool is not new in 1.1, but it is newly included in
 +      binary builds as well as the source tree.
 +    - Hadoop: a new BulkOutputFormat is included which will directly write
 +      SSTables locally and then stream them into the cluster.
 +      YOU SHOULD USE BulkOutputFormat BY DEFAULT.  ColumnFamilyOutputFormat
 +      is still around in case, for some strange reason, you want results
 +      trickling out over Thrift, but BulkOutputFormat is significantly
 +      more efficient.
 +    - Hadoop: KeyRange.filter is now supported with ColumnFamilyInputFormat,
 +      allowing index expressions to be evaluated server-side to reduce
 +      the amount of data sent to Hadoop.
 +    - Hadoop: ColumnFamilyRecordReader has a wide-row mode, enabled via
 +      a boolean parameter to setInputColumnFamily, that pages through
 +      data column-at-a-time instead of row-at-a-time.
 +    - Pig: can use the wide-row Hadoop support by setting PIG_WIDEROW_INPUT
 +      to true.  This will produce each row's columns in a bag.
 +
  
  
  1.0.8

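----------------------------------------------------------------------

The continuous commitlog archiving noted above is configured entirely
through conf/commitlog_archiving.properties.  A minimal sketch, assuming
the key names shipped with that file; the /backup path and the timestamp
are placeholders, and %path, %name, %from, and %to are substituted by
Cassandra:

    # Run for each commitlog segment as it is closed; %path is the
    # segment's full path, %name its file name.
    archive_command=/bin/ln %path /backup/%name

    # Run at startup for each archived segment to restore; %from is the
    # archived file, %to the destination in the live commitlog directory.
    restore_command=/bin/cp -f %from %to

    # Directory scanned for archived segments on restart.
    restore_directories=/backup

    # Replay stops at this timestamp (yyyy:MM:dd HH:mm:ss) for
    # point-in-time recovery.
    restore_point_in_time=2012:05:04 18:06:40

As shipped, the values are blank, which leaves archiving disabled.
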
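Both read-consistency changes above and the CQL3 opt-in are visible from
a bare Thrift client.  A minimal sketch, assuming the default Thrift port
9160; the Keyspace1/Standard1 names are hypothetical and the "3.0.0"
version string is an assumption:

    import java.nio.ByteBuffer;

    import org.apache.cassandra.thrift.*;
    import org.apache.thrift.protocol.TBinaryProtocol;
    import org.apache.thrift.transport.TFramedTransport;
    import org.apache.thrift.transport.TSocket;

    public class UpgradeNotesSketch
    {
        public static void main(String[] args) throws Exception
        {
            TFramedTransport transport = new TFramedTransport(new TSocket("localhost", 9160));
            Cassandra.Client client = new Cassandra.Client(new TBinaryProtocol(transport));
            transport.open();
            client.set_keyspace("Keyspace1");

            // Opt this connection in to CQL3; the default remains CQL2.
            client.set_cql_version("3.0.0");

            // As of 1.1, a read at EACH_QUORUM (or ANY) is rejected up
            // front instead of being silently downgraded.
            ColumnPath path = new ColumnPath("Standard1");
            path.setColumn(ByteBuffer.wrap("name".getBytes("UTF-8")));
            try
            {
                client.get(ByteBuffer.wrap("key1".getBytes("UTF-8")), path, ConsistencyLevel.EACH_QUORUM);
            }
            catch (InvalidRequestException e)
            {
                System.out.println("read rejected as expected: " + e.getWhy());
            }
            transport.close();
        }
    }

cqlsh users get the same CQL3 opt-in with the --cql3 flag.
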
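The Hadoop items above (separate input/output configuration, the
wide-row boolean on setInputColumnFamily, and BulkOutputFormat) come
together at job-setup time roughly as below.  A sketch under the
assumption that the remaining ConfigHelper setters were split the same
way as getInputRpcPort/getOutputRpcPort; hosts, ports, keyspace, and
partitioner values are placeholders:

    import org.apache.cassandra.hadoop.BulkOutputFormat;
    import org.apache.cassandra.hadoop.ColumnFamilyInputFormat;
    import org.apache.cassandra.hadoop.ConfigHelper;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class JobSetupSketch
    {
        public static void main(String[] args) throws Exception
        {
            Job job = new Job(new Configuration(), "1.1-upgrade-sketch");
            Configuration conf = job.getConfiguration();

            // Input and output are configured separately in 1.1.
            ConfigHelper.setInputRpcPort(conf, "9160");
            ConfigHelper.setInputInitialAddress(conf, "127.0.0.1");
            ConfigHelper.setInputPartitioner(conf, "org.apache.cassandra.dht.RandomPartitioner");
            // The trailing boolean enables wide-row mode: the record reader
            // pages through a row column-at-a-time instead of row-at-a-time.
            ConfigHelper.setInputColumnFamily(conf, "Keyspace1", "Standard1", true);

            ConfigHelper.setOutputRpcPort(conf, "9160");
            ConfigHelper.setOutputInitialAddress(conf, "127.0.0.1");
            ConfigHelper.setOutputPartitioner(conf, "org.apache.cassandra.dht.RandomPartitioner");
            ConfigHelper.setOutputColumnFamily(conf, "Keyspace1", "Standard1");

            job.setInputFormatClass(ColumnFamilyInputFormat.class);
            // BulkOutputFormat writes SSTables locally and streams them into
            // the cluster, rather than trickling mutations out over Thrift.
            job.setOutputFormatClass(BulkOutputFormat.class);
        }
    }
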
http://git-wip-us.apache.org/repos/asf/cassandra/blob/885ab7cf/test/unit/org/apache/cassandra/io/sstable/DescriptorTest.java
----------------------------------------------------------------------
diff --cc test/unit/org/apache/cassandra/io/sstable/DescriptorTest.java
index 525ecde,604ce24..5b6dc75
--- a/test/unit/org/apache/cassandra/io/sstable/DescriptorTest.java
+++ b/test/unit/org/apache/cassandra/io/sstable/DescriptorTest.java
@@@ -38,17 -41,27 +38,15 @@@ public class DescriptorTes
      public void testVersion()
      {
          // letter only
 -        Descriptor desc = Descriptor.fromFilename(new File("Keyspace1"), "Standard1-h-1-Data.db").left;
 +        Descriptor desc = Descriptor.fromFilename("Keyspace1-Standard1-h-1-Data.db");
          assert "h".equals(desc.version);
-         assert desc.tracksMaxTimestamp;
  
          // multiple letters
 -        desc = Descriptor.fromFilename(new File("Keyspace1"), "Standard1-ha-1-Data.db").left;
 +        desc = Descriptor.fromFilename("Keyspace1-Standard1-ha-1-Data.db");
          assert "ha".equals(desc.version);
-         assert desc.tracksMaxTimestamp;
  
          // hypothetical two-letter g version
 -        desc = Descriptor.fromFilename(new File("Keyspace1"), "Standard1-gz-1-Data.db").left;
 +        desc = Descriptor.fromFilename("Keyspace1-Standard1-gz-1-Data.db");
          assert "gz".equals(desc.version);
          assert !desc.tracksMaxTimestamp;
      }
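
----------------------------------------------------------------------

The String overload exercised by this test parses the standard
<keyspace>-<columnfamily>-<version>-<generation>-<component> file name.
A minimal usage sketch, assuming Descriptor's public ksname, cfname, and
generation fields of this vintage:

    import org.apache.cassandra.io.sstable.Descriptor;

    public class DescriptorSketch
    {
        public static void main(String[] args)
        {
            Descriptor desc = Descriptor.fromFilename("Keyspace1-Standard1-h-1-Data.db");
            System.out.println(desc.ksname);     // Keyspace1
            System.out.println(desc.cfname);     // Standard1
            System.out.println(desc.version);    // h (letter-only version tag)
            System.out.println(desc.generation); // 1
        }
    }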