Posted to user@cassandra.apache.org by Hugo <hu...@unitedgames.com> on 2010/07/19 21:14:17 UTC

Question on Eventual Consistency

Hi,

Being fairly new to Cassandra I have a question on eventual
consistency. I'm currently performing experiments with a single-node
Cassandra system and a single client. In some of my tests I perform an
update to an existing subcolumn in a row and subsequently read it back
from the same thread. More often than not I get back the value I've
written (and expected), but sometimes I get back the old value of the
subcolumn. Is this a bug or does it fall under eventual consistency?

I'm using Hector 0.6.0-14 on Cassandra 0.6.3 on a single-disk,
dual-core Windows machine with a Sun 1.6 JVM. All reads and writes are
quorum (the default), but I don't think this matters in my setup.

Groets, Hugo.

Re: Question on Eventual Consistency

Posted by "F. Hugo Zwaal" <hu...@unitedgames.com>.
It's the previous value. I've checked.

Groets, Hugo.

On 20 Jul 2010, at 00:19, Aaron Morton <aa...@thelastpickle.com> wrote:

> When the test fails, what value does the verify array have? Is it
> null or a previous value?
>
> Aaron
>
> On 20 Jul 2010, at 08:22 AM, Hugo <hu...@unitedgames.com> wrote:
>
>> See my test case attached below. In my setup it usually fails  
>> around the 800th try...
>>
>> import java.util.ArrayList;
>> import java.util.Arrays;
>> import java.util.HashMap;
>> import java.util.List;
>> import java.util.Map;
>> import java.util.Random;
>>
>> import me.prettyprint.cassandra.service.CassandraClient;
>> import me.prettyprint.cassandra.service.CassandraClientPool;
>> import me.prettyprint.cassandra.service.CassandraClientPoolFactory;
>> import me.prettyprint.cassandra.service.Keyspace;
>>
>> import org.apache.cassandra.thrift.Column;
>> import org.apache.cassandra.thrift.ColumnOrSuperColumn;
>> import org.apache.cassandra.thrift.ColumnParent;
>> import org.apache.cassandra.thrift.Mutation;
>> import org.apache.cassandra.thrift.SlicePredicate;
>> import org.apache.cassandra.thrift.SuperColumn;
>> import org.junit.Assert;
>> import org.junit.Test;
>>
>> public final class ConsistencyTest
>> {
>>     private static String HOST = "localhost";
>>     private static int PORT = 9160;
>>     private static String KEYSPACE = "Keyspace1";
>>     private static String FAMILY = "Super1";
>>     private static String ROW_KEY = "key";
>>     private static byte[] SUPER_COLUMN = "super".getBytes();
>>     private static byte[] SUB_COLUMN = "sub".getBytes();
>>
>>     private void write(CassandraClientPool aPool, byte[] aValue)
>>     throws Exception
>>     {
>>         CassandraClient client = aPool.borrowClient(HOST, PORT);
>>         final Keyspace keyspace = client.getKeyspace(KEYSPACE);
>>
>>         final List<Column> columnList = new ArrayList<Column>();
>>         columnList.add(new Column(SUB_COLUMN, aValue,  
>> keyspace.createTimestamp()));
>>
>>         final SuperColumn superColumn = new SuperColumn 
>> (SUPER_COLUMN, columnList);
>>         final ColumnOrSuperColumn cosc = new ColumnOrSuperColumn();
>>         cosc.setSuper_column(superColumn);
>>
>>         final Mutation mutation = new Mutation();
>>         mutation.setColumn_or_supercolumn(cosc);
>>
>>         final List<Mutation> mutations = new ArrayList<Mutation>();
>>         mutations.add(mutation);
>>
>>         final Map<String,List<Mutation>> familyBatch =
>>             new HashMap<String,List<Mutation>>();
>>         familyBatch.put(FAMILY, mutations);
>>
>>         final Map<String,Map<String,List<Mutation>>> batch =
>>             new HashMap<String,Map<String,List<Mutation>>>();
>>         batch.put(ROW_KEY, familyBatch);
>>
>>         try
>>         {
>>             keyspace.batchMutate(batch);
>>             client = keyspace.getClient();
>>         }
>>         finally
>>         {
>>             aPool.releaseClient(client);
>>         }
>>     }
>>
>>     private byte[] read(CassandraClientPool aPool)
>>     throws Exception
>>     {
>>         CassandraClient client = aPool.borrowClient(HOST, PORT);
>>         final Keyspace keyspace = client.getKeyspace(KEYSPACE);
>>
>>         final List<byte[]> columnNames = new ArrayList<byte[]>();
>>         columnNames.add(SUPER_COLUMN);
>>
>>         final SlicePredicate predicate = new SlicePredicate();
>>         predicate.setColumn_names(columnNames);
>>
>>         final List<SuperColumn> result;
>>         try
>>         {
>>             result = keyspace.getSuperSlice(ROW_KEY, new  
>> ColumnParent(FAMILY), predicate);
>>             client = keyspace.getClient();
>>         }
>>         finally
>>         {
>>             aPool.releaseClient(client);
>>         }
>>
>>         // never mind the inefficiency
>>         for (SuperColumn superColumn : result)
>>         {
>>             for (Column column : superColumn.getColumns())
>>             {
>>                 if (Arrays.equals(superColumn.getName(),  
>> SUPER_COLUMN)
>>                     && Arrays.equals(column.getName(), SUB_COLUMN))
>>                 {
>>                     return column.getValue();
>>                 }
>>             }
>>         }
>>         return null;
>>     }
>>
>>     @Test
>>     public void testConsistency()
>>     throws Exception
>>     {
>>         final CassandraClientPool pool =  
>> CassandraClientPoolFactory.INSTANCE.get();
>>
>>         for (int i = 0; (i < 1000); ++i)
>>         {
>>             final byte[] value = new byte[1];
>>             new Random().nextBytes(value);
>>
>>             write(pool, value);
>>             final byte[] verify = read(pool);
>>
>>             Assert.assertArrayEquals("failed on attempt " + (i +  
>> 1), value, verify);
>>         }
>>     }
>> }
>>
>> On 7/19/2010 9:26 PM, Ran Tavory wrote:
>>>
>>> if your test case is correct then it sounds like a bug to me. With  
>>> one node, unless you're writing with CL=0 you should get full  
>>> consistency.
>>>
>>> On Mon, Jul 19, 2010 at 10:14 PM, Hugo <hu...@unitedgames.com> wrote:
>>> Hi,
>>>
>>> Being fairly new to Cassandra I have a question on eventual
>>> consistency. I'm currently performing experiments with a single-node
>>> Cassandra system and a single client. In some of my tests I perform
>>> an update to an existing subcolumn in a row and subsequently read it
>>> back from the same thread. More often than not I get back the value
>>> I've written (and expected), but sometimes I get back the old value
>>> of the subcolumn. Is this a bug or does it fall under eventual
>>> consistency?
>>>
>>> I'm using Hector 0.6.0-14 on Cassandra 0.6.3 on a single-disk,
>>> dual-core Windows machine with a Sun 1.6 JVM. All reads and writes
>>> are quorum (the default), but I don't think this matters in my setup.
>>>
>>> Groets, Hugo.
>>>
>>

SV: SV: What is consuming the heap?

Posted by Thorvaldsson Justus <ju...@svenskaspel.se>.
There is some more information here about memory usage.
http://wiki.apache.org/cassandra/StorageConfiguration
/J
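For reference, the KeysCached/RowsCached knobs discussed in this thread are
per-ColumnFamily attributes in the 0.6-era storage-conf.xml that page
documents. A minimal sketch, with a placeholder CF name and comparator:

<ColumnFamily Name="Standard1"
              CompareWith="BytesType"
              KeysCached="0"
              RowsCached="0"/>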

From: 王一锋 [mailto:wangyifeng@aspire-tech.com]
Sent: 20 July 2010 08:56
To: user
Subject: Re: SV: What is consuming the heap?


No, I don't think so, because I'm not using supercolumns and the size of a column will not exceed 1 MB.

2010-07-20
________________________________

________________________________
From: Thorvaldsson Justus
Sent: 2010-07-20 14:52:22
To: 'user@cassandra.apache.org'
Cc:
Subject: SV: What is consuming the heap?
A supercolumn/column must fit into node memory.
Could that be it?
/Justus
From: 王一锋 [mailto:wangyifeng@aspire-tech.com]
Sent: 20 July 2010 08:48
To: user
Subject: What is consuming the heap?

In my cluster, I have set both KeysCached and RowsCached of my column family on all nodes to "0",
but a few nodes still crashed because of OutOfMemory
(from the gc.log, a full gc wasn't able to free up any memory space).

What else can be consuming the heap?

The heap size is 10G and the data load per node was around 300G; 16-core CPU, 1T HDD.

2010-07-20
________________________________

Re: SV: What is consuming the heap?

Posted by 王一锋 <wa...@aspire-tech.com>.
No, I don't think so, because I'm not using supercolumns and the size of a column will not exceed 1 MB.

2010-07-20

From: Thorvaldsson Justus
Sent: 2010-07-20 14:52:22
To: 'user@cassandra.apache.org'
Cc:
Subject: SV: What is consuming the heap?
 
A supercolumn/column must fit into node memory.
Could that be it?
/Justus
From: 王一锋 [mailto:wangyifeng@aspire-tech.com]
Sent: 20 July 2010 08:48
To: user
Subject: What is consuming the heap?
 
In my cluster, I have set both KeysCached and RowsCached of my column family on all nodes to "0",
but a few nodes still crashed because of OutOfMemory
(from the gc.log, a full gc wasn't able to free up any memory space).

What else can be consuming the heap?

The heap size is 10G and the data load per node was around 300G; 16-core CPU, 1T HDD.
 
2010-07-20 

SV: What is consuming the heap?

Posted by Thorvaldsson Justus <ju...@svenskaspel.se>.
A supercolumn/column must fit into node memory.
Could that be it?
/Justus
From: 王一锋 [mailto:wangyifeng@aspire-tech.com]
Sent: 20 July 2010 08:48
To: user
Subject: What is consuming the heap?

In my cluster, I have set both KeysCached and RowsCached of my column family on all nodes to "0",
but a few nodes still crashed because of OutOfMemory
(from the gc.log, a full gc wasn't able to free up any memory space).

What else can be consuming the heap?

The heap size is 10G and the data load per node was around 300G; 16-core CPU, 1T HDD.

2010-07-20
________________________________

Re: Re: Re: What is consuming the heap?

Posted by Benjamin Black <b...@b3k.us>.
Have you changed the default Memtable settings?  Are you running on
nodes with a single 1TB drive?  Are you monitoring your I/O load on
the nodes?
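
For reference, the default Memtable settings in question are global knobs in
storage-conf.xml; a sketch of the 0.6-era elements (names and values quoted
from memory - verify them against your own config):

<MemtableThroughputInMB>64</MemtableThroughputInMB>
<MemtableOperationsInMillions>0.3</MemtableOperationsInMillions>
<MemtableFlushAfterMinutes>60</MemtableFlushAfterMinutes>

Raising these means each column family holds more data in its memtables
before flushing, and that comes straight out of the heap.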

On Thu, Jul 22, 2010 at 6:40 PM, 王一锋 <wa...@aspire-tech.com> wrote:
> The version we are using is 0.6.1
>
> 2010-07-23
> ________________________________
>
> ________________________________
> From: 王一锋
> Sent: 2010-07-23 09:38:15
> To: user
> Cc:
> Subject: Re: Re: Re: What is consuming the heap?
> Yes, we are doing a lot of inserts.
>
> But how can CASSANDRA-1042 cause an OutOfMemory?
> And we are using multigetSlice(). We are not doing any get_range_slice() at
> all.
>
> 2010-07-23
> ________________________________
>
> ________________________________
> From: Jonathan Ellis
> Sent: 2010-07-21 21:17:21
> To: user
> Cc:
> Subject: Re: Re: What is consuming the heap?
> On Tue, Jul 20, 2010 at 11:33 PM, Peter Schuller
> <pe...@infidyne.com> wrote:
>>>  INFO [GC inspection] 2010-07-21 01:01:49,661 GCInspector.java (line 110) GC for ConcurrentMarkSweep: 11748 ms, 413673472 reclaimed leaving 9779542600 used; max is 10873667584
>>> ERROR [Thread-35] 2010-07-21 01:02:10,941 CassandraDaemon.java (line 78) Fatal exception in thread Thread[Thread-35,5,main]
>>> java.lang.OutOfMemoryError: Java heap space
>>>  INFO [GC inspection] 2010-07-21 01:02:10,958 GCInspector.java (line 110) GC for ConcurrentMarkSweep: 10043 ms, 259576 reclaimed leaving 10172790816 used; max is 10873667584
>>
>> So that confirms a "legitimate" out-of-memory condition in the sense
>> that CMS is reclaiming extremely little and the live set after a
>> concurrent mark/sweep is indeed around the 10 gig.
> Are you doing a lot of inserts?  You might be hitting
> https://issues.apache.org/jira/browse/CASSANDRA-1042
> --
> Jonathan Ellis
> Project Chair, Apache Cassandra
> co-founder of Riptano, the source for professional Cassandra support
> http://riptano.com

Re: Re: Re: What is consuming the heap?

Posted by 王一锋 <wa...@aspire-tech.com>.
The version we are using is 0.6.1

2010-07-23

From: 王一锋
Sent: 2010-07-23 09:38:15
To: user
Cc:
Subject: Re: Re: Re: What is consuming the heap?
 
Yes, we are doing a lot of inserts.

But how can CASSANDRA-1042 cause an OutOfMemory?
And we are using multigetSlice(). We are not doing any get_range_slice() at all.

2010-07-23

From: Jonathan Ellis
Sent: 2010-07-21 21:17:21
To: user
Cc:
Subject: Re: Re: What is consuming the heap?
On Tue, Jul 20, 2010 at 11:33 PM, Peter Schuller
<pe...@infidyne.com> wrote:
>>  INFO [GC inspection] 2010-07-21 01:01:49,661 GCInspector.java (line 110) GC for ConcurrentMarkSweep: 11748 ms, 413673472 reclaimed leaving 9779542600 used; max is 10873667584
>> ERROR [Thread-35] 2010-07-21 01:02:10,941 CassandraDaemon.java (line 78) Fatal exception in thread Thread[Thread-35,5,main]
>> java.lang.OutOfMemoryError: Java heap space
>>  INFO [GC inspection] 2010-07-21 01:02:10,958 GCInspector.java (line 110) GC for ConcurrentMarkSweep: 10043 ms, 259576 reclaimed leaving 10172790816 used; max is 10873667584
>
> So that confirms a "legitimate" out-of-memory condition in the sense
> that CMS is reclaiming extremely little and the live set after a
> concurrent mark/sweep is indeed around the 10 gig.
Are you doing a lot of inserts?  You might be hitting
https://issues.apache.org/jira/browse/CASSANDRA-1042
-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of Riptano, the source for professional Cassandra support
http://riptano.com

Re: Re: Re: What is consuming the heap?

Posted by 王一锋 <wa...@aspire-tech.com>.
Yes, we are doing a lot of inserts.

But how can CASSANDRA-1042 cause an OutOfMemory?
And we are using multigetSlice(). We are not doing any get_range_slice() at all.

2010-07-23

From: Jonathan Ellis
Sent: 2010-07-21 21:17:21
To: user
Cc:
Subject: Re: Re: What is consuming the heap?
 
On Tue, Jul 20, 2010 at 11:33 PM, Peter Schuller
<pe...@infidyne.com> wrote:
>>  INFO [GC inspection] 2010-07-21 01:01:49,661 GCInspector.java (line 110) GC for ConcurrentMarkSweep: 11748 ms, 413673472 reclaimed leaving 9779542600 used; max is 10873667584
>> ERROR [Thread-35] 2010-07-21 01:02:10,941 CassandraDaemon.java (line 78) Fatal exception in thread Thread[Thread-35,5,main]
>> java.lang.OutOfMemoryError: Java heap space
>>  INFO [GC inspection] 2010-07-21 01:02:10,958 GCInspector.java (line 110) GC for ConcurrentMarkSweep: 10043 ms, 259576 reclaimed leaving 10172790816 used; max is 10873667584
>
> So that confirms a "legitimate" out-of-memory condition in the sense
> that CMS is reclaiming extremely little and the live set after a
> concurrent mark/sweep is indeed around the 10 gig.
Are you doing a lot of inserts?  You might be hitting
https://issues.apache.org/jira/browse/CASSANDRA-1042
-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of Riptano, the source for professional Cassandra support
http://riptano.com

Re: Re: What is consuming the heap?

Posted by Jonathan Ellis <jb...@gmail.com>.
On Tue, Jul 20, 2010 at 11:33 PM, Peter Schuller
<pe...@infidyne.com> wrote:
>>  INFO [GC inspection] 2010-07-21 01:01:49,661 GCInspector.java (line 110) GC for ConcurrentMarkSweep: 11748 ms, 413673472 reclaimed leaving 9779542600 used; max is 10873667584
>> ERROR [Thread-35] 2010-07-21 01:02:10,941 CassandraDaemon.java (line 78) Fatal exception in thread Thread[Thread-35,5,main]
>> java.lang.OutOfMemoryError: Java heap space
>>  INFO [GC inspection] 2010-07-21 01:02:10,958 GCInspector.java (line 110) GC for ConcurrentMarkSweep: 10043 ms, 259576 reclaimed leaving 10172790816 used; max is 10873667584
>
> So that confirms a "legitimate" out-of-memory condition in the sense
> that CMS is reclaiming extremely little and the live set after a
> concurrent mark/sweep is indeed around the 10 gig.

Are you doing a lot of inserts?  You might be hitting
https://issues.apache.org/jira/browse/CASSANDRA-1042

-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of Riptano, the source for professional Cassandra support
http://riptano.com

Re: Re: What is consuming the heap?

Posted by Peter Schuller <pe...@infidyne.com>.
>  INFO [GC inspection] 2010-07-21 01:01:49,661 GCInspector.java (line 110) GC for ConcurrentMarkSweep: 11748 ms, 413673472 reclaimed leaving 9779542600 used; max is 10873667584
> ERROR [Thread-35] 2010-07-21 01:02:10,941 CassandraDaemon.java (line 78) Fatal exception in thread Thread[Thread-35,5,main]
> java.lang.OutOfMemoryError: Java heap space
>  INFO [GC inspection] 2010-07-21 01:02:10,958 GCInspector.java (line 110) GC for ConcurrentMarkSweep: 10043 ms, 259576 reclaimed leaving 10172790816 used; max is 10873667584

So that confirms a "legitimate" out-of-memory condition in the sense
that CMS is reclaiming extremely little and the live set after a
concurrent mark/sweep is indeed around the 10 gig.
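
As a rough sanity check, those two GCInspector lines reduce to numbers; a
minimal sketch of the arithmetic, with the figures copied from the log above:

public final class GcMath
{
    public static void main(String[] args)
    {
        final long max = 10873667584L;   // "max is" from the GCInspector line
        final long used = 10172790816L;  // heap used after the second CMS cycle
        final long reclaimed = 259576L;  // bytes that cycle reclaimed
        // the live set is ~93.6% of the maximum heap
        System.out.printf("live set: %.1f%% of max heap%n", 100.0 * used / max);
        // the cycle freed roughly 0.003% of what was in use - effectively nothing
        System.out.printf("reclaimed: %.4f%% of pre-GC heap%n",
                          100.0 * reclaimed / (used + reclaimed));
    }
}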


-- 
/ Peter Schuller

Re: Re: What is consuming the heap?

Posted by 王一锋 <wa...@aspire-tech.com>.
I can only find these in the system.log

 INFO [GC inspection] 2010-07-21 01:01:49,661 GCInspector.java (line 110) GC for ConcurrentMarkSweep: 11748 ms, 413673472 reclaimed leaving 9779542600 used; max is 10873667584
ERROR [Thread-35] 2010-07-21 01:02:10,941 CassandraDaemon.java (line 78) Fatal exception in thread Thread[Thread-35,5,main]
java.lang.OutOfMemoryError: Java heap space
 INFO [GC inspection] 2010-07-21 01:02:10,958 GCInspector.java (line 110) GC for ConcurrentMarkSweep: 10043 ms, 259576 reclaimed leaving 10172790816 used; max is 10873667584




2010-07-21

From: Jonathan Ellis
Sent: 2010-07-20 19:26:11
To: user
Cc:
Subject: Re: What is consuming the heap?
 
you should post the full stack trace.
2010/7/20 王一锋 <wa...@aspire-tech.com>:
> In my cluster, I have set both KeysCached and RowsCached of my column family
> on all nodes to "0",
> but a few nodes still crashed because of OutOfMemory
> (from the gc.log, a full gc wasn't able to free up any memory space).
>
> What else can be consuming the heap?
>
> The heap size is 10G and the data load per node was around 300G; 16-core CPU,
> 1T HDD.
>
> 2010-07-20
> ________________________________
-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of Riptano, the source for professional Cassandra support
http://riptano.com

Re: What is consuming the heap?

Posted by Jonathan Ellis <jb...@gmail.com>.
you should post the full stack trace.

2010/7/20 王一锋 <wa...@aspire-tech.com>:
> In my cluster, I have set both KeysCached and RowsCached of my column family
> on all nodes to "0",
> but a few nodes still crashed because of OutOfMemory
> (from the gc.log, a full gc wasn't able to free up any memory space).
>
> What else can be consuming the heap?
>
> The heap size is 10G and the data load per node was around 300G; 16-core CPU,
> 1T HDD.
>
> 2010-07-20
> ________________________________



-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of Riptano, the source for professional Cassandra support
http://riptano.com

Re: Re: Re: What is consuming the heap?

Posted by 王一锋 <wa...@aspire-tech.com>.
No, I'm using QUORUM for both writes and reads.
Replication factor is 3.

2010-07-21

From: Dathan Pattishall
Sent: 2010-07-21 12:51:32
To: user
Cc:
Subject: Re: Re: What is consuming the heap?
 
By any chance, are you using ConsistencyLevel::ZERO on writes?





On Tue, Jul 20, 2010 at 9:41 PM, 王一锋 <wa...@aspire-tech.com> wrote:

So the bloom filters reside in memory completely?

We do have a lot of small values, hundreds of millions of columns in a columnfamily.

I count the total size of *-Filter.db files in my keyspace; it's 436,747,815 bytes.

I guess this means it won't consume a major part of the 10 GB heap space


2010-07-21

From: Peter Schuller
Sent: 2010-07-20 21:45:08
To: user
Cc:
Subject: Re: What is consuming the heap?
> heap size is 10G and the load of data per node was around 300G, 16-core CPU,
Are the 300 GB made up of *really* small values? Per-sstable bloom
filters do consume memory, but you'd have to have a *lot* of *really*
small values for a 300 GB database to cause bloom filters to be a
significant part of a 10 GB heap.
-- 
/ Peter Schuller

Re: Re: What is consuming the heap?

Posted by Dathan Pattishall <da...@gmail.com>.
By any chance, are you using ConsistencyLevel::ZERO on writes?
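
For context, the consistency level is the last argument of the 0.6 Thrift
calls; a minimal sketch of what a ZERO write looks like (the keyspace, column
family and key names are placeholders, and the signature is assumed from the
0.6-era Thrift API):

import org.apache.cassandra.thrift.Cassandra;
import org.apache.cassandra.thrift.ColumnPath;
import org.apache.cassandra.thrift.ConsistencyLevel;

public final class ZeroWrite
{
    // 'client' is assumed to be an already-connected Cassandra.Client
    static void fireAndForget(Cassandra.Client client, byte[] aValue)
    throws Exception
    {
        final ColumnPath path = new ColumnPath("Standard1");
        path.setColumn("col".getBytes());
        // ZERO returns before any node has acknowledged the write,
        // so a subsequent read may legitimately miss it
        client.insert("Keyspace1", "key", path, aValue,
                      System.currentTimeMillis(), ConsistencyLevel.ZERO);
    }
}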




On Tue, Jul 20, 2010 at 9:41 PM, 王一锋 <wa...@aspire-tech.com> wrote:

>  So the bloom filters reside in memory completely?
>
> We do have a lot of small values, hundreds of millions of columns in a
> columnfamily.
>
> I count the total size of *-Filter.db files in my keyspace; it's
> 436,747,815 bytes.
>
> I guess this means it won't consume a major part of the 10 GB heap space
>
>
> 2010-07-21
> ------------------------------
>
> ------------------------------
> *From:* Peter Schuller
> *Sent:* 2010-07-20 21:45:08
> *To:* user
> *Cc:*
> *Subject:* Re: What is consuming the heap?
>
> > heap size is 10G and the load of data per node was around 300G, 16-core CPU,
>  Are the 300 GB made up of *really* small values? Per-sstable bloom
> filters do consume memory, but you'd have to have a *lot* of *really*
> small values for a 300 GB database to cause bloom filters to be a
> significant part of a 10 GB heap.
>  --
> / Peter Schuller
>

Re: Re: Re: What is consuming the heap?

Posted by 王一锋 <wa...@aspire-tech.com>.
Yes, I'm running with default settings otherwise.
For KeysCached I've tried '0' (not cached), '1' (fully cached) and a fixed value of 500000; RowsCached was left at the default every time.
So I don't think the problem is the cache.
Concurrent reads were 32, writes 64.
I also tried 320 and 640.

The read/write ratio is about 2/1.

How much memory does a compaction need?
Another 2 nodes went down last night. They were doing a compaction before they went down, judging from the timestamps of the *tmp* files in the data folder.

Stack trace for node 1
 INFO [GC inspection] 2010-07-23 04:13:24,517 GCInspector.java (line 110) GC for ConcurrentMarkSweep: 31275 ms, 29578704 reclaimed leaving 10713006792 used; max is 10873667584
ERROR [MESSAGE-DESERIALIZER-POOL:1] 2010-07-23 04:14:30,656 DebuggableThreadPoolExecutor.java (line 94) Error in executor futuretask
java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java heap space
        at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
        at java.util.concurrent.FutureTask.get(FutureTask.java:83)
        at org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor.afterExecute(DebuggableThreadPoolExecutor.java:86)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:888)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:619)
Caused by: java.lang.OutOfMemoryError: Java heap space
        at org.apache.cassandra.net.MessageSerializer.deserialize(Message.java:138)
        at org.apache.cassandra.net.MessageDeserializationTask.run(MessageDeserializationTask.java:45)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        ... 2 more

Stack trace for node 2
 INFO [COMMIT-LOG-WRITER] 2010-07-23 01:41:06,550 CommitLogSegment.java (line 50) Creating new commitlog segment /opt/crawler/cassandra/sysdata/commitlog/CommitLog-1279820466550.log
 INFO [Timer-1] 2010-07-23 01:41:09,027 Gossiper.java (line 179) InetAddress /183.62.134.31 is now dead.
 INFO [ROW-MUTATION-STAGE:45] 2010-07-23 01:41:09,279 ColumnFamilyStore.java (line 357) source_page has reached its threshold; switching in a fresh Memtable at CommitLogContext(file='/opt/crawler/cassandra/sysdata/commitlog/CommitLog-1279820466550.log', position=9413)
 INFO [ROW-MUTATION-STAGE:45] 2010-07-23 01:41:09,322 ColumnFamilyStore.java (line 609) Enqueuing flush of Memtable(source_page)@1343553539
 INFO [FLUSH-WRITER-POOL:1] 2010-07-23 01:41:09,323 Memtable.java (line 148) Writing Memtable(source_page)@1343553539
 INFO [GMFD:1] 2010-07-23 01:41:09,349 Gossiper.java (line 568) InetAddress /183.62.134.30 is now UP
 INFO [GMFD:1] 2010-07-23 01:41:09,349 Gossiper.java (line 568) InetAddress /183.62.134.31 is now UP
 INFO [GMFD:1] 2010-07-23 01:41:09,350 Gossiper.java (line 568) InetAddress /183.62.134.28 is now UP
 INFO [GMFD:1] 2010-07-23 01:41:09,350 Gossiper.java (line 568) InetAddress /183.62.134.26 is now UP
 INFO [GMFD:1] 2010-07-23 01:41:09,350 Gossiper.java (line 568) InetAddress /183.62.134.27 is now UP
 INFO [GMFD:1] 2010-07-23 01:41:09,350 Gossiper.java (line 568) InetAddress /183.62.134.24 is now UP
 INFO [GMFD:1] 2010-07-23 01:41:09,350 Gossiper.java (line 568) InetAddress /183.62.134.25 is now UP
 INFO [GMFD:1] 2010-07-23 01:41:09,350 Gossiper.java (line 568) InetAddress /183.62.134.22 is now UP
 INFO [GMFD:1] 2010-07-23 01:41:09,350 Gossiper.java (line 568) InetAddress /183.62.134.23 is now UP
 INFO [GMFD:1] 2010-07-23 01:41:09,351 Gossiper.java (line 568) InetAddress /183.62.134.33 is now UP
 INFO [GMFD:1] 2010-07-23 01:41:09,351 Gossiper.java (line 568) InetAddress /183.62.134.32 is now UP
 INFO [GMFD:1] 2010-07-23 01:41:09,351 Gossiper.java (line 568) InetAddress /183.62.134.34 is now UP
 INFO [GC inspection] 2010-07-23 01:41:24,192 GCInspector.java (line 110) GC for ConcurrentMarkSweep: 12908 ms, 413977296 reclaimed leaving 9524655928 used; max is 10873667584
 INFO [Timer-1] 2010-07-23 01:41:50,867 Gossiper.java (line 179) InetAddress /183.62.134.34 is now dead.
 INFO [Timer-1] 2010-07-23 01:41:50,871 Gossiper.java (line 179) InetAddress /183.62.134.33 is now dead.
 INFO [Timer-1] 2010-07-23 01:41:50,871 Gossiper.java (line 179) InetAddress /183.62.134.32 is now dead.
 INFO [Timer-1] 2010-07-23 01:41:50,871 Gossiper.java (line 179) InetAddress /183.62.134.31 is now dead.
 INFO [Timer-1] 2010-07-23 01:41:50,872 Gossiper.java (line 179) InetAddress /183.62.134.30 is now dead.
 INFO [Timer-1] 2010-07-23 01:41:50,872 Gossiper.java (line 179) InetAddress /183.62.134.28 is now dead.
 INFO [Timer-1] 2010-07-23 01:41:50,872 Gossiper.java (line 179) InetAddress /183.62.134.27 is now dead.
 INFO [Timer-1] 2010-07-23 01:41:50,872 Gossiper.java (line 179) InetAddress /183.62.134.26 is now dead.
 INFO [Timer-1] 2010-07-23 01:41:50,873 Gossiper.java (line 179) InetAddress /183.62.134.25 is now dead.
 INFO [Timer-1] 2010-07-23 01:41:50,873 Gossiper.java (line 179) InetAddress /183.62.134.24 is now dead.
 INFO [Timer-1] 2010-07-23 01:41:50,873 Gossiper.java (line 179) InetAddress /183.62.134.23 is now dead.
 INFO [Timer-1] 2010-07-23 01:41:50,873 Gossiper.java (line 179) InetAddress /183.62.134.22 is now dead.
 INFO [GC inspection] 2010-07-23 01:41:50,875 GCInspector.java (line 110) GC for ConcurrentMarkSweep: 11964 ms, 226808 reclaimed leaving 10303521344 used; max is 10873667584
ERROR [Thread-21] 2010-07-23 01:41:50,890 CassandraDaemon.java (line 78) Fatal exception in thread Thread[Thread-21,5,main]
java.lang.OutOfMemoryError: Java heap space
        at org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:71)



2010-07-23

From: Peter Schuller
Sent: 2010-07-21 14:35:36
To: user
Cc:
Subject: Re: Re: What is consuming the heap?
 
> So the bloom filters reside in memory completely?
Yes. The point of bloom filters in cassandra is to act as a fast way
to determine whether sstables need to be consulted. This check
involves random access into the bloom filter. It needs to be in memory
for this to be effective.
But due to the nature of bloom filters you don't need a lot of memory
per key in the database, so it scales pretty well.
> I count the total size of *-Filter.db files in my keyspace; it's
> 436,747,815 bytes.
>
> I guess this means it won't consume a major part of the 10 GB heap space
Right, doesn't sound like bloom filters are the cause.
Are you running with default settings otherwise - cache sizes, flush
thresholds, etc.?
-- 
/ Peter Schuller

Re: Re: What is consuming the heap?

Posted by Peter Schuller <pe...@infidyne.com>.
> So the bloom filters reside in memory completely?

Yes. The point of bloom filters in cassandra is to act as a fast way
to determine whether sstables need to be consulted. This check
involves random access into the bloom filter. It needs to be in memory
for this to be effective.

But due to the nature of bloom filters you don't need a lot of memory
per key in the database, so it scales pretty well.
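
To put rough numbers on that, the *-Filter.db total reported earlier in this
thread converts to an approximate key count. A minimal sketch - the
bits-per-key figure is an assumption (0.6-era sstable filters were on the
order of 15 bits per key), not an exact constant:

public final class BloomFilterEstimate
{
    public static void main(String[] args)
    {
        final long filterBytes = 436747815L; // total *-Filter.db size reported
        final double bitsPerKey = 15.0;      // assumed 0.6-era target
        final double keys = filterBytes * 8.0 / bitsPerKey;
        // ~233 million keys; the filters themselves occupy ~0.4 GB resident,
        // a real but minor slice of a 10 GB heap
        System.out.printf("~%.0f million keys covered%n", keys / 1e6);
    }
}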

> I count the total size of *-Filter.db files in my keyspace; it's
> 436,747,815 bytes.
>
> I guess this means it won't consume a major part of the 10 GB heap space

Right, doesn't sound like bloom filters are the cause.

Are you running with default settings otherwise - cache sizes, flush
thresholds, etc.?

-- 
/ Peter Schuller

Re: Re: What is consuming the heap?

Posted by 王一锋 <wa...@aspire-tech.com>.
So the bloom filters reside in memory completely?

We do have a lot of small values, hundreds of millions of columns in a columnfamily.

I count the total size of *-Filter.db files in my keyspace; it's 436,747,815 bytes.

I guess this means it won't consume a major part of the 10 GB heap space


2010-07-21

From: Peter Schuller
Sent: 2010-07-20 21:45:08
To: user
Cc:
Subject: Re: What is consuming the heap?
 
> heap size is 10G and the load of data per node was around 300G, 16-core CPU,
Are the 300 GB made up of *really* small values? Per-sstable bloom
filters do consume memory, but you'd have to have a *lot* of *really*
small values for a 300 GB database to cause bloom filters to be a
significant part of a 10 GB heap.
-- 
/ Peter Schuller

Re: What is consuming the heap?

Posted by Peter Schuller <pe...@infidyne.com>.
> heap size is 10G and the load of data per node was around 300G, 16-core CPU,

Are the 300 GB made up of *really* small values? Per-sstable bloom
filters do consume memory, but you'd have to have a *lot* of *really*
small values for a 300 GB database to cause bloom filters to be a
significant part of a 10 GB heap.

-- 
/ Peter Schuller

What is consuming the heap?

Posted by 王一锋 <wa...@aspire-tech.com>.
In my cluster, I have set both KeysCached and RowsCached of my column family on all nodes to "0",
but a few nodes still crashed because of OutOfMemory
(from the gc.log, a full gc wasn't able to free up any memory space).

What else can be consuming the heap?

The heap size is 10G and the data load per node was around 300G; 16-core CPU, 1T HDD.

2010-07-20 

Re: Question on Eventual Consistency

Posted by Aaron Morton <aa...@thelastpickle.com>.
When the test fails, what value does the verify array have? Is it null or a previous value?

Aaron

On 20 Jul 2010, at 08:22 AM, Hugo <hu...@unitedgames.com> wrote:

> See my test case attached below. In my setup it usually fails around the 800th try...
>
> import java.util.ArrayList;
> import java.util.Arrays;
> import java.util.HashMap;
> import java.util.List;
> import java.util.Map;
> import java.util.Random;
>
> import me.prettyprint.cassandra.service.CassandraClient;
> import me.prettyprint.cassandra.service.CassandraClientPool;
> import me.prettyprint.cassandra.service.CassandraClientPoolFactory;
> import me.prettyprint.cassandra.service.Keyspace;
>
> import org.apache.cassandra.thrift.Column;
> import org.apache.cassandra.thrift.ColumnOrSuperColumn;
> import org.apache.cassandra.thrift.ColumnParent;
> import org.apache.cassandra.thrift.Mutation;
> import org.apache.cassandra.thrift.SlicePredicate;
> import org.apache.cassandra.thrift.SuperColumn;
> import org.junit.Assert;
> import org.junit.Test;
>
> public final class ConsistencyTest
> {
>     private static String HOST = "localhost";
>     private static int PORT = 9160;
>     private static String KEYSPACE = "Keyspace1";
>     private static String FAMILY = "Super1";
>     private static String ROW_KEY = "key";
>     private static byte[] SUPER_COLUMN = "super".getBytes();
>     private static byte[] SUB_COLUMN = "sub".getBytes();
>    
>     private void write(CassandraClientPool aPool, byte[] aValue)
>     throws Exception
>     {
>         CassandraClient client = aPool.borrowClient(HOST, PORT);
>         final Keyspace keyspace = client.getKeyspace(KEYSPACE);
>        
>         final List<Column> columnList = new ArrayList<Column>();
>         columnList.add(new Column(SUB_COLUMN, aValue, keyspace.createTimestamp()));
>        
>         final SuperColumn superColumn = new SuperColumn(SUPER_COLUMN, columnList);
>         final ColumnOrSuperColumn cosc = new ColumnOrSuperColumn();
>         cosc.setSuper_column(superColumn);
>        
>         final Mutation mutation = new Mutation();
>         mutation.setColumn_or_supercolumn(cosc);
>        
>         final List<Mutation> mutations = new ArrayList<Mutation>();
>         mutations.add(mutation);
>        
>         final Map<String,List<Mutation>> familyBatch =
>             new HashMap<String,List<Mutation>>();
>         familyBatch.put(FAMILY, mutations);
>        
>         final Map<String,Map<String,List<Mutation>>> batch =
>             new HashMap<String,Map<String,List<Mutation>>>();
>         batch.put(ROW_KEY, familyBatch);
>        
>         try
>         {
>             keyspace.batchMutate(batch);
>             client = keyspace.getClient();
>         }
>         finally
>         {
>             aPool.releaseClient(client);
>         }
>     }
>
>     private byte[] read(CassandraClientPool aPool)
>     throws Exception
>     {
>         CassandraClient client = aPool.borrowClient(HOST, PORT);
>         final Keyspace keyspace = client.getKeyspace(KEYSPACE);
>        
>         final List<byte[]> columnNames = new ArrayList<byte[]>();
>         columnNames.add(SUPER_COLUMN);
>        
>         final SlicePredicate predicate = new SlicePredicate();
>         predicate.setColumn_names(columnNames);
>        
>         final List<SuperColumn> result;
>         try
>         {
>             result = keyspace.getSuperSlice(ROW_KEY, new ColumnParent(FAMILY), predicate);
>             client = keyspace.getClient();
>         }
>         finally
>         {
>             aPool.releaseClient(client);
>         }
>        
>         // never mind the inefficiency
>         for (SuperColumn superColumn : result)
>         {
>             for (Column column : superColumn.getColumns())
>             {
>                 if (Arrays.equals(superColumn.getName(), SUPER_COLUMN)
>                     && Arrays.equals(column.getName(), SUB_COLUMN))
>                 {
>                     return column.getValue();
>                 }
>             }
>         }
>         return null;
>     }
>    
>     @Test
>     public void testConsistency()
>     throws Exception
>     {
>         final CassandraClientPool pool = CassandraClientPoolFactory.INSTANCE.get();
>        
>         for (int i = 0; (i < 1000); ++i)
>         {
>             final byte[] value = new byte[1];
>             new Random().nextBytes(value);
>            
>             write(pool, value);
>             final byte[] verify = read(pool);
>            
>             Assert.assertArrayEquals("failed on attempt " + (i + 1), value, verify);
>         }
>     }
> }
>
> On 7/19/2010 9:26 PM, Ran Tavory wrote:
>> if your test case is correct then it sounds like a bug to me. With one node, unless you're writing with CL=0 you should get full consistency.
>>
>> On Mon, Jul 19, 2010 at 10:14 PM, Hugo <hu...@unitedgames.com> wrote:
>>
>>     Hi,
>>
>>     Being fairly new to Cassandra I have a question on eventual consistency. I'm currently performing experiments with a single-node Cassandra system and a single client. In some of my tests I perform an update to an existing subcolumn in a row and subsequently read it back from the same thread. More often than not I get back the value I've written (and expected), but sometimes I get back the old value of the subcolumn. Is this a bug or does it fall under eventual consistency?
>>
>>     I'm using Hector 0.6.0-14 on Cassandra 0.6.3 on a single-disk, dual-core Windows machine with a Sun 1.6 JVM. All reads and writes are quorum (the default), but I don't think this matters in my setup.
>>
>>     Groets, Hugo.
>>
>>

Re: Question on Eventual Consistency

Posted by Hugo <hu...@unitedgames.com>.
See my test case attached below. In my setup it usually fails around the 
800th try...

import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

import me.prettyprint.cassandra.service.CassandraClient;
import me.prettyprint.cassandra.service.CassandraClientPool;
import me.prettyprint.cassandra.service.CassandraClientPoolFactory;
import me.prettyprint.cassandra.service.Keyspace;

import org.apache.cassandra.thrift.Column;
import org.apache.cassandra.thrift.ColumnOrSuperColumn;
import org.apache.cassandra.thrift.ColumnParent;
import org.apache.cassandra.thrift.Mutation;
import org.apache.cassandra.thrift.SlicePredicate;
import org.apache.cassandra.thrift.SuperColumn;
import org.junit.Assert;
import org.junit.Test;

public final class ConsistencyTest
{
     private static String HOST = "localhost";
     private static int PORT = 9160;
     private static String KEYSPACE = "Keyspace1";
     private static String FAMILY = "Super1";
     private static String ROW_KEY = "key";
     private static byte[] SUPER_COLUMN = "super".getBytes();
     private static byte[] SUB_COLUMN = "sub".getBytes();

     private void write(CassandraClientPool aPool, byte[] aValue)
     throws Exception
     {
         CassandraClient client = aPool.borrowClient(HOST, PORT);
         final Keyspace keyspace = client.getKeyspace(KEYSPACE);

         final List<Column> columnList = new ArrayList<Column>();
          columnList.add(new Column(SUB_COLUMN, aValue, keyspace.createTimestamp()));

          final SuperColumn superColumn = new SuperColumn(SUPER_COLUMN, columnList);
         final ColumnOrSuperColumn cosc = new ColumnOrSuperColumn();
         cosc.setSuper_column(superColumn);

         final Mutation mutation = new Mutation();
         mutation.setColumn_or_supercolumn(cosc);

         final List<Mutation> mutations = new ArrayList<Mutation>();
         mutations.add(mutation);

         final Map<String,List<Mutation>> familyBatch =
             new HashMap<String,List<Mutation>>();
         familyBatch.put(FAMILY, mutations);

         final Map<String,Map<String,List<Mutation>>> batch =
             new HashMap<String,Map<String,List<Mutation>>>();
         batch.put(ROW_KEY, familyBatch);

         try
         {
             keyspace.batchMutate(batch);
             client = keyspace.getClient();
         }
         finally
         {
             aPool.releaseClient(client);
         }
     }

     private byte[] read(CassandraClientPool aPool)
     throws Exception
     {
         CassandraClient client = aPool.borrowClient(HOST, PORT);
         final Keyspace keyspace = client.getKeyspace(KEYSPACE);

         final List<byte[]> columnNames = new ArrayList<byte[]>();
         columnNames.add(SUPER_COLUMN);

         final SlicePredicate predicate = new SlicePredicate();
         predicate.setColumn_names(columnNames);

         final List<SuperColumn> result;
         try
         {
              result = keyspace.getSuperSlice(ROW_KEY, new ColumnParent(FAMILY), predicate);
             client = keyspace.getClient();
         }
         finally
         {
             aPool.releaseClient(client);
         }

         // never mind the inefficiency
         for (SuperColumn superColumn : result)
         {
             for (Column column : superColumn.getColumns())
             {
                 if (Arrays.equals(superColumn.getName(), SUPER_COLUMN)
                     && Arrays.equals(column.getName(), SUB_COLUMN))
                 {
                     return column.getValue();
                 }
             }
         }
         return null;
     }

     @Test
     public void testConsistency()
     throws Exception
     {
          final CassandraClientPool pool = CassandraClientPoolFactory.INSTANCE.get();

         for (int i = 0; (i < 1000); ++i)
         {
             final byte[] value = new byte[1];
             new Random().nextBytes(value);

             write(pool, value);
             final byte[] verify = read(pool);

              Assert.assertArrayEquals("failed on attempt " + (i + 1), value, verify);
         }
     }
}

On 7/19/2010 9:26 PM, Ran Tavory wrote:
> if your test case is correct then it sounds like a bug to me. With one 
> node, unless you're writing with CL=0 you should get full consistency.
>
> On Mon, Jul 19, 2010 at 10:14 PM, Hugo <hugo@unitedgames.com> wrote:
>
>     Hi,
>
>     Being fairly new to Cassandra I have a question on eventual
>     consistency. I'm currently performing experiments with a
>     single-node Cassandra system and a single client. In some of my
>     tests I perform an update to an existing subcolumn in a row and
>     subsequently read it back from the same thread. More often than
>     not I get back the value I've written (and expected), but
>     sometimes I get back the old value of the subcolumn. Is this a
>     bug or does it fall under eventual consistency?
>
>     I'm using Hector 0.6.0-14 on Cassandra 0.6.3 on a single-disk,
>     dual-core Windows machine with a Sun 1.6 JVM. All reads and
>     writes are quorum (the default), but I don't think this matters in
>     my setup.
>
>     Groets, Hugo.
>
>

Re: Question on Eventual Consistency

Posted by Ran Tavory <ra...@gmail.com>.
On Mon, Jul 19, 2010 at 10:43 PM, Peter Schuller <peter.schuller@infidyne.com> wrote:

> > I'm using CL=QUORUM (=Hector default) for both reads and writes. Most of
> > the time, the test passes, but sometimes it fails because I get back the
> > old value. Since the test is single-threaded, I guess it is a bug. I'll
> > try to reduce the test to something smaller that can be used for
> > troubleshooting.
>
> I have never used or looked at the source of Hector; but is it at all
> possible that Hector is making the write asynchronous by putting it on
> a queue of some kind, serviced by a pool of workers?
>
no

>
> To be clear, this is *pure* speculation and may be completely out of
> the question. It's an attempt to think up a hypothesis other than a
> Cassandra bug to explain what you're seeing.
>
sorry... there could be other bugs, but queues and asyncs aren't
involved...

>
> > By the way, is it documented somewhere under what circumstances one can
> > expect inconsistencies and when not?
>
> Not sure if consistency is dealt with in more depth somewhere, but one
> point talking about consistency levels is:
>
>   http://wiki.apache.org/cassandra/API
>
> You may also be interested in the Dynamo paper for background:
>
>   http://www.allthingsdistributed.com/2007/10/amazons_dynamo.html
>
> Unless I have seriously misunderstood something, you're definitely
> expected to get the consistency you are after with QUORUM - under the
> assumption that you use QUORUM for both reads and writes of the data
> in question, as you say you do.
>
> If you further need durability (so that you don't lose said
> consistency in the event of Cassandra nodes going down in an
> uncontrolled fashion), you'll want to turn on batch-wise commit log
> sync rather than periodic sync in Cassandra. Expect that to imply a
> potentially significant performance penalty though, depending
> primarily on what your commit log is stored on.
>
> --
> / Peter Schuller
>

Re: Question on Eventual Consistency

Posted by Peter Schuller <pe...@infidyne.com>.
> I'm using CL=QUORUM (=Hector default) for both reads and writes. Most of the
> time, the test passes, but sometimes it fails because I get back the old
> value. Since the test is single-threaded, I guess it is a bug. I'll try to
> reduce the test to something smaller that can be used for troubleshooting.

I have never used or looked at the source of Hector; but is it at all
possible that Hector is making the write asynchronous by putting it on
a queue of some kind, serviced by a pool of workers?

To be clear, this is *pure* speculation and may be completely out of
the question. It's an attempt to think up a hypothesis other than a
Cassandra bug to explain what you're seeing.

> By the way, is it documented somewhere under what circumstances one can
> expect inconsistencies and when not?

Not sure if consistency is dealt with in more depth somewhere, but one
point talking about consistency levels is:

   http://wiki.apache.org/cassandra/API

You may also be interested in the Dynamo paper for background:

   http://www.allthingsdistributed.com/2007/10/amazons_dynamo.html

Unless I have seriously misunderstood something, you're definitely
expected to get the consistency you are after with QUORUM - under the
assumption that you use QUORUM for both reads and writes of the data
in question, as you say you do.
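
Concretely, a quorum is floor(RF/2) + 1 replicas, so quorum reads and quorum
writes always intersect in at least one replica; a minimal sketch of the
arithmetic:

public final class QuorumMath
{
    // quorum size for a given replication factor
    static int quorum(int rf)
    {
        return rf / 2 + 1;
    }

    public static void main(String[] args)
    {
        for (int rf : new int[] { 1, 2, 3, 5 })
        {
            final int q = quorum(rf);
            // reads and writes overlap whenever R + W > RF;
            // for quorum/quorum this holds at every RF, including RF=1
            System.out.printf("RF=%d quorum=%d overlap=%b%n", rf, q, q + q > rf);
        }
    }
}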

If you further need durability (so that you don't lose said
consistency in the event of Cassandra nodes going down in an
uncontrolled fashion), you'll want to turn on batch-wise commit log
sync rather than periodic sync in Cassandra. Expect that to imply a
potentially significant performance penalty though, depending
primarily on what your commit log is stored on.
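
In 0.6 that switch lives in storage-conf.xml; a minimal sketch (the
one-millisecond batch window is illustrative, not a recommendation):

<CommitLogSync>batch</CommitLogSync>
<!-- writes arriving within this window are grouped into a single fsync -->
<CommitLogSyncBatchWindowInMS>1</CommitLogSyncBatchWindowInMS>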

-- 
/ Peter Schuller

Re: Question on Eventual Consistency

Posted by Hugo <hu...@unitedgames.com>.
I'm using CL=QUORUM (=Hector default) for both reads and writes. Most of
the time, the test passes, but sometimes it fails because I get back
the old value. Since the test is single-threaded, I guess it is a bug. 
I'll try to reduce the test to something smaller that can be used for 
troubleshooting.

By the way, is it documented somewhere under what circumstances one can 
expect inconsistencies and when not?

On 7/19/2010 9:26 PM, Ran Tavory wrote:
> if your test case is correct then it sounds like a bug to me. With one 
> node, unless you're writing with CL=0 you should get full consistency.
>
> On Mon, Jul 19, 2010 at 10:14 PM, Hugo <hugo@unitedgames.com> wrote:
>
>     Hi,
>
>     Being fairly new to Cassandra I have a question on eventual
>     consistency. I'm currently performing experiments with a
>     single-node Cassandra system and a single client. In some of my
>     tests I perform an update to an existing subcolumn in a row and
>     subsequently read it back from the same thread. More often than
>     not I get back the value I've written (and expected), but
>     sometimes I get back the old value of the subcolumn. Is this a
>     bug or does it fall under eventual consistency?
>
>     I'm using Hector 0.6.0-14 on Cassandra 0.6.3 on a single-disk,
>     dual-core Windows machine with a Sun 1.6 JVM. All reads and
>     writes are quorum (the default), but I don't think this matters in
>     my setup.
>
>     Groets, Hugo.
>
>

Re: Question on Eventual Consistency

Posted by Ran Tavory <ra...@gmail.com>.
if your test case is correct then it sounds like a bug to me. With one node,
unless you're writing with CL=0 you should get full consistency.

On Mon, Jul 19, 2010 at 10:14 PM, Hugo <hu...@unitedgames.com> wrote:

> Hi,
>
> Being fairly new to Cassandra I have a question on eventual
> consistency. I'm currently performing experiments with a single-node
> Cassandra system and a single client. In some of my tests I perform an
> update to an existing subcolumn in a row and subsequently read it back from
> the same thread. More often than not I get back the value I've written (and
> expected), but sometimes I get back the old value of the subcolumn. Is this
> a bug or does it fall under eventual consistency?
>
> I'm using Hector 0.6.0-14 on Cassandra 0.6.3 on a single-disk, dual-core
> Windows machine with a Sun 1.6 JVM. All reads and writes are quorum (the
> default), but I don't think this matters in my setup.
>
> Groets, Hugo.
>