Posted to commits@cassandra.apache.org by jb...@apache.org on 2009/08/11 19:13:06 UTC

svn commit: r803208 - /incubator/cassandra/trunk/BUGS.txt

Author: jbellis
Date: Tue Aug 11 17:13:06 2009
New Revision: 803208

URL: http://svn.apache.org/viewvc?rev=803208&view=rev
Log:
update BUGS.txt for 0.4
patch by jbellis; reviewed by Eric Evans and Michael Greene for CASSANDRA-354

Modified:
    incubator/cassandra/trunk/BUGS.txt

Modified: incubator/cassandra/trunk/BUGS.txt
URL: http://svn.apache.org/viewvc/incubator/cassandra/trunk/BUGS.txt?rev=803208&r1=803207&r2=803208&view=diff
==============================================================================
--- incubator/cassandra/trunk/BUGS.txt (original)
+++ incubator/cassandra/trunk/BUGS.txt Tue Aug 11 17:13:06 2009
@@ -1,39 +1,6 @@
-Here are the known issues you should be most concerned about:
-
- 1. With enough (and large enough) keys in a ColumnFamily, Cassandra will
-    run out of memory trying to perform compactions (data file merges).
-    The memory required is roughly (S + 16 + M) * N, where S is the size
-    of the key in bytes (usually 2 bytes per character), N is the number
-    of keys, and M is the per-key map overhead (which can be guesstimated
-    at around 32 bytes).
-
-    So, if you have 10-character keys and 1GB of headroom in your heap
-    space for compaction, you can expect to store about 17M keys
-    before running into problems.
-
-    See https://issues.apache.org/jira/browse/CASSANDRA-208
-
- 2. Because fixing #1 requires a data file format change, 0.4 will not
-    be binary-compatible with 0.3 data files.  A client-side upgrade
-    can be done relatively easily with the following algorithm:
-
-        for key in old_client.get_key_range(everything):
-            columns = old_client.get_slice or get_slice_super(key, all columns)
-            new_client.batch_insert or batch_insert_super(key, columns)
-
-    The inner loop can be trivially parallelized for speed.
-
- 3. Commitlog does not fsync before reporting a write successful.
-    Using blocking writes mitigates this to some degree, since all
-    nodes that were part of the write quorum would have to fail
-    before sync for data to be lost.
-
-    See https://issues.apache.org/jira/browse/CASSANDRA-182
-
-
-Additionally, row size (that is, all the data associated with a single
+Row size (that is, all the data associated with a single
 key in a given ColumnFamily) is limited by available memory, because
 compaction deserializes each row before merging.
 
 See https://issues.apache.org/jira/browse/CASSANDRA-16
-    
+and http://wiki.apache.org/cassandra/CassandraLimitations
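
For reference, the client-side upgrade loop described in the removed item 2
above could be fleshed out along the following lines in Python.  This is only
a sketch: old_client and new_client stand in for whatever Thrift client
wrappers are pointed at the 0.3 and 0.4 clusters, the method names follow the
pseudocode above, and the exact signatures (keyspace, column parent, slice
range, consistency level) are assumptions that vary by client library.

    # Sketch only.  old_client / new_client are hypothetical wrappers around
    # the 0.3 and 0.4 Thrift interfaces; real calls take extra arguments
    # (keyspace, column parent, slice range, consistency level).
    def migrate(old_client, new_client, column_family):
        # Walk every key present in the old (0.3) cluster.
        for key in old_client.get_key_range(column_family):
            # Read all columns for that key from the old cluster ...
            columns = old_client.get_slice(column_family, key)
            # ... and write them into the new (0.4) cluster in one batch.
            new_client.batch_insert(column_family, key, columns)

As the removed text notes, the read/write pair in the loop body is
independent per key, so it can be spread across a pool of workers to speed
the migration up.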