Posted to user@cassandra.apache.org by kapil nayar <ka...@gmail.com> on 2011/09/09 18:38:18 UTC

Cassandra as in-memory cache

Hi,

Can we configure some column families (or keyspaces) in Cassandra to act as
a pure in-memory cache?

The feature should keep the memtables always in memory (never flushed to the
disk as SSTables).
The memtable flush threshold settings for time/memory/operations could be set
to their maximum values to achieve this.
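
For illustration, a rough sketch of what that could look like with the
cassandra-cli of that era. The keyspace and column family names are made up,
and the attribute names and units are from memory, so check them against the
output of "help update column family;" for your version (later releases
dropped the per-CF settings in favour of a global memtable_total_space_in_mb):

    use CacheKS;
    update column family CacheData
      with memtable_flush_after = 1440
      and memtable_throughput = 2048
      and memtable_operations = 100;

If memory serves, memtable_flush_after is in minutes, memtable_throughput in
MB and memtable_operations in millions of operations, so values like these
simply push each flush trigger far beyond what a cache-sized column family
would normally hit.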

However, it seems an uneven distribution of keys across the nodes in the
cluster could lead to a Java out-of-memory error. To prevent this, can we
overflow some entries to disk?

Thanks,
Kapil

Re: Cassandra as in-memory cache

Posted by Adrian Cockcroft <ad...@gmail.com>.
You should be using the off-heap row cache option. That way you avoid GC
overhead, and the rows are stored in a compact serialized form, which means
you get more cache entries in RAM. The trade-off is slightly more CPU for
deserialization and so on.
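
For reference, the off-heap (serializing) cache is chosen per column family.
Something along these lines should do it in the 0.8-era cassandra-cli, with a
made-up column family name and attribute names that are worth double-checking
against your version's help:

    update column family CacheData
      with rows_cached = 500000
      and row_cache_provider = 'SerializingCacheProvider';

If I remember right, the serializing provider also needs the JNA jar on the
classpath so it can allocate the cache memory outside the Java heap.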

Adrian

On Sunday, September 11, 2011, aaron morton <aa...@thelastpickle.com> wrote:
> If the row cache is enabled and a row is in the cache, the read path will
> not touch the SSTables at all. Depending on the workload I would then look
> at setting *low* memtable flush settings to leave as much memory as possible
> for the row cache.
>
> Then set the row cache save settings per CF to ensure the cache is warmed
> when the node starts.
>
> The write path will still use the commit log (the WAL), so you may want to
> disable it using the durable_writes setting on the keyspace.
>
> Hope that helps.
>
> -----------------
> Aaron Morton
> Freelance Cassandra Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 10/09/2011, at 4:38 AM, kapil nayar wrote:
>
>> Hi,
>>
>> Can we configure some column families (or keyspaces) in Cassandra to act
>> as a pure in-memory cache?
>>
>> The feature should keep the memtables always in memory (never flushed to
>> the disk as SSTables).
>> The memtable flush threshold settings for time/memory/operations could be
>> set to their maximum values to achieve this.
>>
>> However, it seems an uneven distribution of keys across the nodes in the
>> cluster could lead to a Java out-of-memory error. To prevent this, can we
>> overflow some entries to disk?
>>
>> Thanks,
>> Kapil
>
>

Re: Cassandra as in-memory cache

Posted by aaron morton <aa...@thelastpickle.com>.
If the row cache is enabled and a row is in the cache, the read path will not touch the SSTables at all. Depending on the workload I would then look at setting *low* memtable flush settings to leave as much memory as possible for the row cache.

Then set the row cache save settings per CF to ensure the cache is warmed when the node starts. 
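
As a rough sketch (column family name invented, attribute names and units as
far as I recall them: rows_cached takes a count or a fraction, and
row_cache_save_period is in seconds, memtable_throughput in MB):

    update column family CacheData
      with rows_cached = 1000000
      and row_cache_save_period = 3600
      and memtable_throughput = 64;

That gives a large row cache whose keys are snapshotted every hour so the
cache can be rebuilt on restart, while the small memtable threshold leaves
more of the heap free for the cache itself.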

The write path will still use the commit log (the WAL), so you may want to disable it using the durable_writes setting on the keyspace.
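
Something along the lines of (keyspace name made up; exact syntax from memory,
so check "help update keyspace;" for your version):

    update keyspace CacheKS with durable_writes = false;

Worth remembering that with durable writes off, anything still sitting in a
memtable is lost if the node goes down, which is presumably acceptable for a
pure cache.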

Hope that helps. 

-----------------
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com

On 10/09/2011, at 4:38 AM, kapil nayar wrote:

> Hi,
> 
> Can we configure some column families (or keyspaces) in Cassandra to act as a pure in-memory cache?
> 
> The feature should keep the memtables always in memory (never flushed to the disk as SSTables).
> The memtable flush threshold settings for time/memory/operations could be set to their maximum values to achieve this.
> 
> However, it seems an uneven distribution of keys across the nodes in the cluster could lead to a Java out-of-memory error. To prevent this, can we overflow some entries to disk?
> 
> Thanks,
> Kapil


Re: Cassandra as in-memory cache

Posted by Hernán Quevedo <al...@gmail.com>.
Hi, all.

I'm new at this and haven't been able to install Cassandra on Debian 6.
After uncompressing the tar and creating the var/log and var/lib
directories, the command bin/cassandra -f results in the message "exec:
357 -ea not found", preventing Cassandra from running the process the
README file says it is supposed to start.

Any help would be much appreciated.

Thanks!

On 9/9/11, kapil nayar <ka...@gmail.com> wrote:
> Hi,
>
> Can we configure some column families (or keyspaces) in Cassandra to act as
> a pure in-memory cache?
>
> The feature should keep the memtables always in memory (never flushed to the
> disk as SSTables).
> The memtable flush threshold settings for time/memory/operations could be
> set to their maximum values to achieve this.
>
> However, it seems an uneven distribution of keys across the nodes in the
> cluster could lead to a Java out-of-memory error. To prevent this, can we
> overflow some entries to disk?
>
> Thanks,
> Kapil
>

-- 
Sent from my mobile device

It is the will of the Gods.