Posted to user@cassandra.apache.org by Hiroyuki Yamada <mo...@gmail.com> on 2016/03/05 03:04:51 UTC

How can I make Cassandra stable in a 2GB RAM node environment ?

Hi,

I'm working on some POCs for Cassandra in a single-node environment with 2GB
of RAM, and some issues have come up, so let me ask here.

I tried to insert about 200 million records (about 11GB in size) into the
node. The insertion from the application program seemed to complete,
but something (probably compaction?) kept running after the insertion, and
later Cassandra itself was killed by the OOM killer.

I've tried tuning the configuration, including the heap size, compaction
memory settings and bloom filter settings,
to make C* work nicely in this low-memory environment,
but so far nothing has worked. (That is, I still get an OOM
eventually.)

I know running C* in such a low-memory environment is not recommended,
but I am wondering what I can do (which configurations to change) to make it
a little more stable in such an environment.
(I understand the following configuration is very tight and not
recommended, but I just want to make it work for now.)

Could anyone give me some help?


Hardware and software :
    - EC2 instance (t2.small: 1vCPU, 2GB RAM)
    - Cassandra 2.2.5
    - JDK 8 (8u73)

Cassandra configurations (what I changed from the defaults):
    - LeveledCompactionStrategy
    - custom configuration settings in cassandra-env.sh
        - MAX_HEAP_SIZE: 640MB
        - HEAP_NEWSIZE: 128MB
    - custom configuration settings in cassandra.yaml
        - commitlog_segment_size_in_mb: 4
        - commitlog_total_space_in_mb: 512
        - sstable_preemptive_open_interval_in_mb: 16
        - file_cache_size_in_mb: 40
        - memtable_heap_space_in_mb: 40
        - key_cache_size_in_mb: 0
    - bloom filter is disabled
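For reference, the two heap settings above are plain shell variables in
conf/cassandra-env.sh; a minimal sketch of the override (variable names as in
the 2.2-era script):

```shell
# conf/cassandra-env.sh (Cassandra 2.2): pin the JVM heap explicitly
# instead of letting the script auto-size it from system memory.
MAX_HEAP_SIZE="640M"
HEAP_NEWSIZE="128M"
```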


=== debug.log around when Cassandra was killed by OOM killer ===
DEBUG [NonPeriodicTasks:1] 2016-03-04 00:36:02,378
FileCacheService.java:177 - Invalidating cache for
/var/lib/cassandra/data/test/user-adc91d20e15011e586c53fd5b957bea8/tmplink-la-15626-big-Data.db
DEBUG [NonPeriodicTasks:1] 2016-03-04 00:36:09,903
FileCacheService.java:177 - Invalidating cache for
/var/lib/cassandra/data/test/user-adc91d20e15011e586c53fd5b957bea8/tmplink-la-15622-big-Data.db
DEBUG [NonPeriodicTasks:1] 2016-03-04 00:36:14,360
FileCacheService.java:177 - Invalidating cache for
/var/lib/cassandra/data/test/user-adc91d20e15011e586c53fd5b957bea8/tmplink-la-15626-big-Data.db
DEBUG [NonPeriodicTasks:1] 2016-03-04 00:36:20,004
FileCacheService.java:177 - Invalidating cache for
/var/lib/cassandra/data/test/user-adc91d20e15011e586c53fd5b957bea8/tmplink-la-15622-big-Data.db
======

=== /var/log/message ===
Mar  4 00:36:22 ip-10-0-0-11 kernel: Killed process 8919 (java)
total-vm:32407840kB, anon-rss:1535020kB, file-rss:123096kB
======


Best regards,
Hiro

Re: How can I make Cassandra stable in a 2GB RAM node environment ?

Posted by Alain RODRIGUEZ <ar...@gmail.com>.
Hi, I am not sure I understood your message correctly but I will try to
answer it.

> but, I think, in Cassandra's case it seems to be a matter of how much data
> we store relative to how much memory we have.


If you are saying you can use poor commodity servers (scaling vertically
poorly) and just add nodes (horizontal scaling) when the cluster is not
powerful enough, you need to know that a minimum of vertical scaling is
needed for good performance and stability. Still, with some tuning,
you can probably reach a stable state with t2.mediums if there are enough
of them to handle the load.

> with the default configuration except for LeveledCompactionStrategy


LeveledCompactionStrategy is heavier to maintain than STCS. In such an
environment, read latency is probably not your main concern, and using
STCS could give better results as it is far lighter in terms of compactions
(it depends on your use case, though).

> I also used a 4GB RAM machine (t2.medium)

With 4GB of RAM you probably want to use 1 GB of heap. What version of
Cassandra are you using?
You might also need to tune bloom filters, index_interval, memtable size
and type, and a few other things to reduce the memory footprint.
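The "1 GB of heap for 4GB of RAM" figure matches the default heap heuristic
in the 2.x-era cassandra-env.sh script, which can be sketched roughly as
follows (a sketch of that formula, not the script itself):

```python
def default_max_heap_mb(system_memory_mb: int) -> int:
    """Approximation of the default heap sizing in Cassandra 2.x's
    cassandra-env.sh: max(min(1/2 * RAM, 1024 MB), min(1/4 * RAM, 8192 MB))."""
    half = min(system_memory_mb // 2, 1024)
    quarter = min(system_memory_mb // 4, 8192)
    return max(half, quarter)

print(default_max_heap_mb(4096))   # t2.medium -> 1024, i.e. ~1 GB heap
print(default_max_heap_mb(2048))   # t2.small  -> 1024
```

Note that on a 2GB node the heuristic would still pick a 1 GB heap, which is
half of physical RAM; that is one reason the OP's manual 640MB cap is a
reasonable starting point there.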

About compaction, use only half of the cores as concurrent compactors (one
core here) and see if this improves stability while compaction can still keep
up, or keep 2 and reduce its speed by lowering the compaction throughput.

Use nodetool {tpstats, compactionstats, cfstats, cfhistograms} to monitor
things and see what to tune.
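Concretely, on 2.2 those knobs live in cassandra.yaml (concurrent_compactors,
compaction_throughput_mb_per_sec) or can be changed live via nodetool. A
sketch of the commands against a running node (the values are illustrative,
and <keyspace>/<table> are placeholders):

```shell
# Throttle compaction throughput (MB/s) on a live node, no restart needed:
nodetool setcompactionthroughput 8

# Then watch the compaction backlog and thread pools to see if it keeps up:
nodetool compactionstats
nodetool tpstats
nodetool cfstats <keyspace>
nodetool cfhistograms <keyspace> <table>
```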

As mentioned earlier, using such low-spec machines is fine if you know how to
tune Cassandra and can afford some research / tuning time...

Alain
-----------------------
Alain Rodriguez - alain@thelastpickle.com
France

The Last Pickle - Apache Cassandra Consulting
http://www.thelastpickle.com


Re: How can I make Cassandra stable in a 2GB RAM node environment ?

Posted by Hiroyuki Yamada <mo...@gmail.com>.
Thank you all for responding to and discussing my question.

I basically agree with you all,
but, I think, in Cassandra's case it seems to be a matter of how much data
we store relative to how much memory we have.

Following Jack's (and DataStax's) suggestion,
I also used a 4GB RAM machine (t2.medium) with 1 billion records (about 100GB
in size), with the default configuration except for LeveledCompactionStrategy.
After the insertion from the application program completed, compaction
probably kept running,
and again Cassandra was later killed by the OOM killer.

Insertion from the application side has finished, so the issue probably comes
from compaction happening in the background.
Is there any recommended compaction configuration to make Cassandra
stable with a large dataset (more than 100GB) in a rather low-memory (4GB)
environment?

I think it would be the same if I tried the experiment with 8GB of memory
and a larger data set (maybe more than 2 billion records).
(If that is not correct, please explain why.)


Best regards,
Hiro


Re: How can I make Cassandra stable in a 2GB RAM node environment ?

Posted by Robert Coli <rc...@eventbrite.com>.
On Thu, Mar 10, 2016 at 3:27 AM, Alain RODRIGUEZ <ar...@gmail.com> wrote:

> So, like Jack, I generally do not recommend it unless you know what
> you are doing and don't mind facing those issues.
>

Certainly a spectrum of views here, but everyone (including OP) seems to
agree with the above. :D

=Rob

Re: How can I make Cassandra stable in a 2GB RAM node environment ?

Posted by Alain RODRIGUEZ <ar...@gmail.com>.
+1 for Rob's comment.

I would add that I learned a lot from running Cassandra on t1.micro AWS
machines (800 MB RAM), then small, medium, large, ..., i2.2xlarge. I had to
tweak every single parameter in cassandra.yaml and cassandra-env.sh, so I
learned a lot about the internals; I had to! Even though I am glad I had this
chance to learn, I must say production wasn't that stable (latency was not
predictable, and a compaction was a big event to handle...).

So, like Jack, I generally do not recommend it unless you know what you
are doing and don't mind facing those issues.

There are also people running Cassandra on Raspberry Pi, so yes, it is doable,
and it is really up to you =).

Good luck if you go this way.

C*heers
-----------------------
Alain Rodriguez - alain@thelastpickle.com
France

The Last Pickle - Apache Cassandra Consulting
http://www.thelastpickle.com


Re: How can I make Cassandra stable in a 2GB RAM node environment ?

Posted by Jack Krupansky <ja...@gmail.com>.
Thanks, Rob, but... I'll continue to do my best to strongly (vehemently, or
is there an even stronger word for me to use?!) discourage use of Cassandra
in under 4/8 GB of memory. Hey, I just want people to be happy, and trying
to run Cassandra in under 8 GB (or 4 GB for dev) is just... asking for
trouble, unhappiness, even despair. Hey, if somebody is smart enough to
figure out how to do it on their own, then great, they are set and don't
need our help, but personally I would declare it as out of bounds/off
limits. But if anybody else here wants to support/encourage it, they are
free to do so and I won't get in their way other than to state my own view.

By "support", I primarily mean what the (open source) code does out of the
box without superhuman effort (BTW, all of the guys at Open Source
Connections ARE superhuman!!) as well as the support and memory of the
community here on this list.

Doc? If anybody thinks there is a better source of documentation for open
source Cassandra than the DataStax docs, please point me to it. Until then,
I'll stick with the DataStax docs.

That said, it might be interesting to have a no-memory/low-memory mode for
Cassandra which trades off performance for storage capacity. But... that
would be an enhancement, not something that is "supported" out of the box
today. What use cases would this satisfy? I mean, who is it that can get
away with sacrificing performance these days?

-- Jack Krupansky


Re: How can I make Cassandra stable in a 2GB RAM node environment ?

Posted by Ben Bromhead <be...@instaclustr.com>.
+1 for
http://opensourceconnections.com/blog/2013/08/31/building-the-perfect-cassandra-test-environment/


We also run Cassandra on t2.mediums for our developer clusters. You can
force Cassandra to do most "memory" things by hitting the disk instead
(on-disk compaction passes, flushing immediately to disk) and by throttling
client connections. In fact, on the t2 series, memory is not the biggest
concern; the CPU credit issue is.
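A sketch of the kind of cassandra.yaml knobs this advice points at (2.2-era
setting names; the values here are illustrative assumptions, not
recommendations):

```yaml
# Keep memtables small and flush to disk early instead of holding data in heap
memtable_heap_space_in_mb: 64
memtable_cleanup_threshold: 0.2
# Throttle client connections so a burst cannot balloon memory usage
native_transport_max_concurrent_connections: 64
native_transport_max_concurrent_connections_per_ip: 8
```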

Ben Bromhead
CTO | Instaclustr <https://www.instaclustr.com/>
+1 650 284 9692
Managed Cassandra / Spark on AWS, Azure and Softlayer

Re: How can I make Cassandra stable in a 2GB RAM node environment ?

Posted by Robert Coli <rc...@eventbrite.com>.
On Fri, Mar 4, 2016 at 8:27 PM, Jack Krupansky <ja...@gmail.com>
wrote:

> Please review the minimum hardware requirements as clearly documented:
>
> http://docs.datastax.com/en/cassandra/3.x/cassandra/planning/planPlanningHardware.html
>

That is a document for DataStax Cassandra, not Apache Cassandra. It's
wonderful that DataStax provides docs, but DataStax Cassandra is a superset
of Apache Cassandra. Presuming that the requirements of one are exactly
equivalent to the requirements of the other is not necessarily reasonable.

Please adjust your hardware usage to at least meet the clearly documented
> minimum requirements. If you continue to encounter problems once you have
> corrected your configuration error, please resubmit the details with
> updated hardware configuration details.
>

Disagree. OP specifically stated that they knew this was not a recommended
practice. It does not seem unlikely that they are constrained to use this
hardware for reasons outside of their control.


> Just to be clear, development on less than 4 GB is not supported and
> production on less than 8 GB is not supported. Those are not suggestions or
> guidelines or recommendations, they are absolute requirements.
>

What does "supported" mean here? That Datastax will not provide support if
you do not follow the above recommendations? Because it certainly is
"supported" in the sense of "it can be made to work" ... ?

The premise of a minimum RAM level seems meaningless without context. How
much data are you serving from your 2GB RAM node? What is the rate of
client requests?

To be clear, I don't recommend trying to run production Cassandra with
under 8GB of RAM on your node, but "absolute requirement" is a serious
overstatement.

http://opensourceconnections.com/blog/2013/08/31/building-the-perfect-cassandra-test-environment/

Has some good discussion of how to run Cassandra in a low memory
environment. Maybe someone should tell John that his 64MB of JVM heap for a
test node is 62x too small to be "supported"? :D

=Rob

Re: How can I make Cassandra stable in a 2GB RAM node environment ?

Posted by Jack Krupansky <ja...@gmail.com>.
Please review the minimum hardware requirements as clearly documented:
http://docs.datastax.com/en/cassandra/3.x/cassandra/planning/planPlanningHardware.html

Please adjust your hardware usage to at least meet the clearly documented
minimum requirements. If you continue to encounter problems once you have
corrected your configuration error, please resubmit the details with
updated hardware configuration details.

Just to be clear, development on less than 4 GB is not supported and
production on less than 8 GB is not supported. Those are not suggestions or
guidelines or recommendations, they are absolute requirements.

-- Jack Krupansky
