Posted to commits@cassandra.apache.org by Apache Wiki <wi...@apache.org> on 2009/12/10 06:12:24 UTC

[Cassandra Wiki] Trivial Update of "Operations" by JonathanEllis

Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for change notification.

The "Operations" page has been changed by JonathanEllis.
The comment on this change is: move all headings to the next size smaller.
http://wiki.apache.org/cassandra/Operations?action=diff&rev1=13&rev2=14

--------------------------------------------------

  The following applies to Cassandra 0.5, which is currently in '''beta'''.
  
- = Hardware =
+ == Hardware ==
  
  See [[CassandraHardware]]
  
- = Ring management =
+ == Ring management ==
  Each Cassandra server [node] is assigned a unique Token that determines what keys it is the primary replica for.  If you sort all nodes' Tokens, the Range of keys each is responsible for is (!PreviousToken, !MyToken], that is, from the previous token (exclusive) to the node's token (inclusive).  The machine with the lowest Token gets both all keys less than that token, and all keys greater than the largest Token; this is called a "wrapping Range."
  
  (Note that there is nothing special about being the "primary" replica, in the sense of being a point of failure.)
  
  When the !RandomPartitioner is used, Tokens are integers from 0 to 2**127.  Keys are converted to this range by MD5 hashing for comparison with Tokens.  (Thus, keys are always convertible to Tokens, but the reverse is not always true.)
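  
  Conceptually, the key-to-Token mapping and primary-replica lookup look something like the following minimal Python sketch (the node addresses, ring layout, and the exact reduction of the MD5 hash to the Token range are made up for illustration; the real logic is in the Java partitioner code):
  
  {{{
  import hashlib
  from bisect import bisect_left
  
  # Hypothetical 4-node ring: node -> Token, with Tokens in [0, 2**127).
  ring = {
      "10.0.0.1": 0,
      "10.0.0.2": 1 * (2**127 // 4),
      "10.0.0.3": 2 * (2**127 // 4),
      "10.0.0.4": 3 * (2**127 // 4),
  }
  tokens = sorted(ring.values())
  node_for_token = {t: n for n, t in ring.items()}
  
  def key_to_token(key):
      # RandomPartitioner compares keys to Tokens via their MD5 hash
      # (simplified here to fit the 0 .. 2**127 range).
      return int(hashlib.md5(key.encode()).hexdigest(), 16) % (2**127)
  
  def primary_replica(key):
      t = key_to_token(key)
      # Each node owns (PreviousToken, MyToken]; anything past the largest
      # Token wraps around to the node with the lowest Token.
      i = bisect_left(tokens, t)
      return node_for_token[tokens[i] if i < len(tokens) else tokens[0]]
  
  print(primary_replica("some_row_key"))
  }}}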
  
- == Token selection ==
+ === Token selection ===
  Using a strong hash function means !RandomPartitioner keys will, on average, be evenly spread across the Token space, but you can still have imbalances if your Tokens do not divide up the range evenly.  You should therefore specify an !InitialToken for each of your first nodes as `i * (2**127 / N)` for i = 1 .. N.
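  
  For example, a quick Python calculation of evenly spaced !InitialToken values for a hypothetical 4-node cluster:
  
  {{{
  # Evenly spaced InitialTokens: i * (2**127 / N) for i = 1 .. N.
  N = 4  # hypothetical number of nodes
  for i in range(1, N + 1):
      print("node %d: InitialToken = %d" % (i, i * (2**127 // N)))
  }}}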
  
  With order-preserving partitioners, your key distribution will be application-dependent.  You should still take your best guess at specifying initial tokens (guided by sampling actual data, if possible), but you will be more dependent on active load balancing (see below) and/or adding new nodes to hot spots.
  
  Once data is placed on the cluster, the partitioner may not be changed without wiping and starting over.
  
- == Replication ==
+ === Replication ===
  A Cassandra cluster always divides up the key space into ranges delimited by Tokens as described above, but the placement of additional replicas is customizable via !IReplicaPlacementStrategy in the configuration file.  The standard strategies are
  
   * !RackUnawareStrategy: replicas are always placed on the next (in increasing Token order) N-1 nodes along the ring
@@ -38, +38 @@

  
  Reducing the replication factor is easy; it only requires running cleanup afterwards to remove the extra replicas.
   
- == Network topology ==
+ === Network topology ===
  
  Besides datacenters, you can also tell Cassandra which nodes are in the same rack within a datacenter.  Cassandra will use this to route both reads and data movement for Range changes to the nearest replicas.  This is configured by a user-pluggable !EndpointSnitch class in the configuration file.
  
@@ -46, +46 @@

  
  There is an example of a custom Snitch implementation in https://svn.apache.org/repos/asf/incubator/cassandra/trunk/contrib/property_snitch/.
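  
  The snitch itself is a Java class, but the idea can be sketched in a few lines of Python: given datacenter/rack assignments (entirely hypothetical below), rank replicas so that same-rack nodes come first, then same-datacenter nodes, then everything else.
  
  {{{
  # Hypothetical topology: node -> (datacenter, rack).  A real deployment gets
  # this from its EndpointSnitch implementation, not a hard-coded dict.
  topology = {
      "10.0.1.1": ("DC1", "rack1"),
      "10.0.1.2": ("DC1", "rack1"),
      "10.0.2.1": ("DC1", "rack2"),
      "10.1.1.1": ("DC2", "rack1"),
  }
  
  def proximity(local, remote):
      # 0 = same rack, 1 = same datacenter, 2 = remote datacenter.
      ldc, lrack = topology[local]
      rdc, rrack = topology[remote]
      if (ldc, lrack) == (rdc, rrack):
          return 0
      return 1 if ldc == rdc else 2
  
  def sort_by_proximity(local, replicas):
      # Reads and Range-change data movement prefer the "nearest" replicas.
      return sorted(replicas, key=lambda r: proximity(local, r))
  
  print(sort_by_proximity("10.0.1.2", ["10.1.1.1", "10.0.2.1", "10.0.1.1"]))
  }}}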
  
- = Range changes =
+ == Range changes ==
  
- == Bootstrap ==
+ === Bootstrap ===
  Adding new nodes is called "bootstrapping."
  
  To bootstrap a node, turn !AutoBootstrap on in the configuration file, and start the node.
@@ -68, +68 @@

  
  Again, no data is removed automatically, so if you want to put the node back into service and you no longer need the data on it, that data should be removed manually.
  
- == Moving nodes ==
+ === Moving nodes ===
  `nodeprobe move`: move the target node to a given Token.  Moving is essentially a convenience over decommission + bootstrap.
  
- == Load balancing ==
+ === Load balancing ===
  `nodeprobe loadbalance`: also essentially a convenience over decommission + bootstrap, except that instead of being told where on the ring to move, the target node chooses its new location based on the same heuristic as Token selection on bootstrap.
  
- = Consistency =
+ == Consistency ==
  Cassandra allows clients to specify the desired consistency level on reads and writes.  (See [[API]].)  If R + W > N, where R, W, and N are respectively the read replica count, the write replica count, and the replication factor, all client reads will see the most recent write.  Otherwise, readers '''may''' see older versions, typically for periods of a few ms; this is called "eventual consistency."  See http://www.allthingsdistributed.com/2008/12/eventually_consistent.html and http://queue.acm.org/detail.cfm?id=1466448 for more.
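  
  A minimal Python illustration of the R + W > N rule (the numbers are hypothetical; quorum with N = 3 means R = W = 2):
  
  {{{
  def overlapping(r, w, n):
      # R + W > N means every read replica set intersects every write replica
      # set, so at least one replica consulted on read has the newest write.
      return r + w > n
  
  N = 3                            # replication factor
  print(overlapping(2, 2, N))      # quorum reads + quorum writes -> True
  print(overlapping(1, 1, N))      # ONE reads + ONE writes -> False (eventual)
  }}}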
  
- == Repairing missing or inconsistent data ==
+ === Repairing missing or inconsistent data ===
  Cassandra repairs data in two ways:
  
   1. Read Repair: every time a read is performed, Cassandra compares the versions at each replica (in the background, if a low consistency level was requested by the reader, to minimize latency), and the newest version is sent to any out-of-date replicas (sketched below).
   1. Anti-Entropy: when `nodeprobe repair` is run, Cassandra performs a major compaction, computes a Merkle Tree of the data on that node, and compares it with the versions on other replicas, to catch any out-of-sync data that hasn't been read recently.  This is intended to be run infrequently (e.g., weekly), since major compaction is relatively expensive.
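  
  A toy Python sketch of the read-repair idea in item 1 (the replica addresses, values, and timestamps are made up; this is a conceptual model, not the actual Java implementation):
  
  {{{
  # Toy model of read repair: each replica returns (value, timestamp) for a
  # column; the newest timestamp wins and stale replicas are updated.
  replicas = {
      "10.0.0.1": ("green", 1260420000),
      "10.0.0.2": ("blue",  1260421000),   # newest write
      "10.0.0.3": ("green", 1260420000),
  }
  
  def read_repair(replicas):
      newest_value, newest_ts = max(replicas.values(), key=lambda vt: vt[1])
      stale = [node for node, (_, ts) in replicas.items() if ts < newest_ts]
      # In Cassandra this push happens in the background when the client asked
      # for a low consistency level, so the read itself is not slowed down.
      for node in stale:
          replicas[node] = (newest_value, newest_ts)
      return newest_value, stale
  
  value, repaired = read_repair(replicas)
  print(value, repaired)   # blue ['10.0.0.1', '10.0.0.3']
  }}}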
  
- == Handling failure ==
+ === Handling failure ===
  If a node goes down and comes back up, the ordinary repair mechanisms will be adequate to deal with any inconsistent data.  If a node goes down entirely, you should be aware of the following as well:
  
   1. Remove the old node from the ring first, or bring up a replacement node with the same IP and Token as the old; otherwise, the old node will stay part of the ring in a "down" state, which will degrade your replication factor for the affected Range.
@@ -92, +92 @@

   1. Removing the old node, then bootstrapping the new one, may be more performant than using Anti-Entropy.  Testing needed.
    * Even brute-force rsyncing of data from the relevant replicas and running cleanup on the replacement node may be more performant.
  
- = Backing up data =
+ == Backing up data ==
  Cassandra can snapshot data while online using `nodeprobe snapshot`.  You can then back up those snapshots using any desired system, although leaving them where they are is probably the option that makes the most sense on large clusters.
  
  Currently, only flushed data is snapshotted (not data that only exists in the commitlog).  Run `nodeprobe flush` first and wait for that to complete, to make sure you get '''all''' data in the snapshot.
  
  To revert to a snapshot, shut down the node, clear out the old commitlog and sstables, and move the sstables from the snapshot location to the live data directory.
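  
  A sketch of that revert procedure in Python; every path below is hypothetical, so substitute the commitlog and data directories from your own configuration file and the name of your snapshot, and only run something like this while the node is shut down.
  
  {{{
  import shutil
  from pathlib import Path
  
  # Hypothetical locations -- use the directories from your configuration file
  # and the snapshot you actually want to restore.
  commitlog_dir = Path("/var/lib/cassandra/commitlog")
  data_dir = Path("/var/lib/cassandra/data/Keyspace1")
  snapshot_dir = data_dir / "snapshots" / "my-snapshot"
  
  # 1. With the node shut down, clear out the old commitlog and live sstables.
  for f in commitlog_dir.iterdir():
      if f.is_file():
          f.unlink()
  for f in data_dir.iterdir():
      if f.is_file():
          f.unlink()
  
  # 2. Move the snapshotted sstables into the live data directory.
  for f in snapshot_dir.iterdir():
      shutil.move(str(f), str(data_dir / f.name))
  }}}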
  
- == Import / export ==
+ === Import / export ===
  Cassandra can also export data as JSON with `bin/sstable2json`, and import it with `bin/json2sstable`.  Eric to document. :)
  
- = Monitoring =
+ == Monitoring ==
  Cassandra exposes internal metrics as JMX data.  This is a common standard in the JVM world; OpenNMS, Nagios, and Munin at least offer some level of JMX support.
  
  Running `nodeprobe cfstats` can provide an overview of each Column Family, including important metrics to graph for your cluster.  For those who prefer not to deal with JMX clients directly, there is a JMX-to-REST bridge available at http://code.google.com/p/polarrose-jmx-rest-bridge/