Posted to user@cassandra.apache.org by G man <gm...@gmail.com> on 2013/06/24 03:14:39 UTC
Cassandra 1.0.9 Performance
Hi All,
We are running a 1.0.9 cluster with 3 nodes (RF=3) serving a load of
approximately 600GB, and since I am fairly new to Cassandra, I'd like to
compare notes with other people running a cluster of similar size (perhaps
not in the amount of data, but in the number of nodes).
Does anyone have CPU/memory/network graphs (e.g. Cacti) over the last 1-2
months they are willing to share of their Cassandra database nodes?
Just trying to compare our patterns with others to see if they are "normal".
Thanks in advance.
G
Re: Cassandra 1.0.9 Performance
Posted by aaron morton <aa...@thelastpickle.com>.
> serving a load of approximately 600GB
Is that 600GB in the cluster or 600GB per node?
In pre-1.2 days we recommend around 300GB to 500GB per node with spinning disks and 1GbE networking. It's a soft rule of thumb, not a hard rule. Above that size, repair and replacing a failed node can take a long time.
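To see which case you're in, `nodetool ring` (or `nodetool info` on each node) reports the on-disk "Load" per node. The arithmetic behind the distinction can be sketched as follows; the helper name is mine, and the 600GB figure is simply the number from the question, whose meaning (cluster-wide vs per-node) is exactly what's being asked:

```python
# Sketch: average per-node load for a Cassandra cluster.
# nodetool's "Load" is post-replication on-disk size, so the summed
# cluster-wide load divided by the node count gives the per-node average.

def per_node_load_gb(total_cluster_load_gb, node_count):
    """Average on-disk load per node, given the summed nodetool 'Load'."""
    return total_cluster_load_gb / node_count

# If 600GB is the summed load across the 3 nodes:
print(per_node_load_gb(600, 3))  # 200.0 -- comfortably under the 300-500GB guideline
```

If instead 600GB is already the per-node load, it sits above the soft ceiling mentioned above, which is where repair and node replacement start to hurt on spinning disks.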
> Does anyone have CPU/memory/network graphs (e.g. Cacti) over the last 1-2 months they are willing to share of their Cassandra database nodes?
If you can share yours and any specific concerns you may have we may be able to help.
Cheers
-----------------
Aaron Morton
Freelance Cassandra Consultant
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 24/06/2013, at 1:14 PM, G man <gm...@gmail.com> wrote:
> Hi All,
>
> We are running a 1.0.9 cluster with 3 nodes (RF=3) serving a load of approximately 600GB, and since I am fairly new to Cassandra, I'd like to compare notes with other people running a cluster of similar size (perhaps not in the amount of data, but in the number of nodes).
>
> Does anyone have CPU/memory/network graphs (e.g. Cacti) over the last 1-2 months they are willing to share of their Cassandra database nodes?
>
> Just trying to compare our patterns with others to see if they are "normal".
>
> Thanks in advance.
> G