Posted to user@cassandra.apache.org by Noble Paul നോബിള്‍ नोब्ळ् <no...@gmail.com> on 2012/06/15 09:13:47 UTC

Limited row cache size

Hi,
I configured my server with row_cache_size_in_mb: 1920.

When I started the server and checked JMX, it showed the capacity was
set to 1024 MB.

I investigated further and found that the version of
concurrentlinkedhashmap in use is 1.2, which caps the capacity at 1 GB.

So in Cassandra 1.1 the maximum row cache size I can use is 1 GB.


Digging deeper, I realized that throughout the API chain the cache
size is passed around as an int, so even if I wrote my own
CacheProvider the maximum size would be Integer.MAX_VALUE bytes,
i.e. about 2 GB.

Unless Cassandra upgrades concurrentlinkedhashmap to 1.3 and changes
the signatures to take a long for the size, we can't have a big
cache, and in my opinion 1 GB is a really small cache.

So even if I have bigger machines, I can't really make use of them.
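
As a minimal sketch (my own illustration, not Cassandra's actual code,
and the 1 << 30 cap is my assumption about concurrentlinkedhashmap 1.2),
this is the clamping I believe explains the 1024 MB reading in JMX:

    public class CacheClampDemo {
        public static void main(String[] args) {
            long requestedBytes = 1920L * 1024 * 1024;   // 2,013,265,920 bytes, the configured 1920 MB
            long assumedLibraryCap = 1L << 30;           // assumed 1 GB cap in concurrentlinkedhashmap 1.2
            long effectiveBytes = Math.min(requestedBytes, assumedLibraryCap);
            // prints "1024 MB", matching what JMX reports
            System.out.println(effectiveBytes / (1024 * 1024) + " MB");
        }
    }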



-- 
-----------------------------------------------------
Noble Paul

Re: Limited row cache size

Posted by Noble Paul നോബിള്‍ नोब्ळ् <no...@gmail.com>.
Sorry, I meant the 1.1.1 build.

On Mon, Jun 25, 2012 at 10:40 AM, Noble Paul നോബിള്‍ नोब्ळ् <no...@gmail.com> wrote:
> I was using the DataStax build. Do they also have a 1.1 build?



-- 
-----------------------------------------------------
Noble Paul

Re: Limited row cache size

Posted by Noble Paul നോബിള്‍ नोब्ळ् <no...@gmail.com>.
I was using the DataStax build. Do they also have a 1.1 build?

On Mon, Jun 18, 2012 at 9:05 AM, aaron morton <aa...@thelastpickle.com> wrote:
> cassandra 1.1.1 ships with concurrentlinkedhashmap-lru-1.3.jar



-- 
-----------------------------------------------------
Noble Paul

Re: Limited row cache size

Posted by aaron morton <aa...@thelastpickle.com>.
Cassandra 1.1.1 ships with concurrentlinkedhashmap-lru-1.3.jar.

row_cache_size_in_mb starts life as an int, but the byte size is
stored as a long:
https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/CacheService.java#L143
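
For what it's worth, here is a minimal sketch of that conversion (my
own names, not the actual CacheService code): the yaml value arrives
as an int count of megabytes and is widened to a long before the
multiply, so the byte count doesn't overflow an int.

    public class CacheSizeConversion {
        public static void main(String[] args) {
            int rowCacheSizeInMb = 1920;  // int value read from cassandra.yaml
            // multiplying by 1024L promotes the arithmetic to long,
            // so the byte count is never truncated to 32 bits
            long capacityInBytes = rowCacheSizeInMb * 1024L * 1024L;
            System.out.println(capacityInBytes); // 2013265920
        }
    }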

Cheers


-----------------
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
