Posted to user@cassandra.apache.org by Jonathan Ellis <jb...@gmail.com> on 2010/04/19 22:41:10 UTC

Re: why read operation use so much of memory?

(Moving to users@ list.)

Like any Java server, Cassandra will use as much memory in its heap as
you allow it to.  You can request a GC from jconsole to see what its
approximate "real" working set is.
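In 0.6 the heap cap lives in conf/cassandra.in.sh, so if top shows more
resident memory than you expect, check what -Xmx you shipped with.  A
minimal fragment (the sizes here are illustrative, not a recommendation;
size the heap for your own box):

```sh
# conf/cassandra.in.sh -- pin min and max heap so the JVM's footprint
# in "top" stays close to the limit you actually chose
JVM_OPTS="$JVM_OPTS -Xms4G -Xmx4G"
```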

http://wiki.apache.org/cassandra/SSTableMemtable explains why reads
are slower than writes.  You can tune this by using the key cache, row
cache, or by using range queries instead of requesting rows one at a
time.
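To make the key-cache point concrete, here is a toy LRU sketch in plain
Python (this is not Cassandra code; the names and offsets are
illustrative): once the hot key set fits in the cache, repeated reads
stop paying for the on-disk index lookup.

```python
from collections import OrderedDict

class KeyCache:
    """Toy LRU cache standing in for Cassandra's key cache: it maps row
    keys to SSTable offsets so repeated reads skip the index seek."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()
        self.hits = self.misses = 0

    def lookup(self, key, load_from_disk):
        if key in self.entries:
            self.hits += 1
            self.entries.move_to_end(key)      # mark as recently used
            return self.entries[key]
        self.misses += 1
        offset = load_from_disk(key)           # simulated index seek
        self.entries[key] = offset
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)   # evict least recently used
        return offset

# Simulated workload: ten passes over a 100-key hot set that fits in cache
cache = KeyCache(capacity=100)
for _ in range(10):
    for k in range(100):
        cache.lookup(k, load_from_disk=lambda key: key * 4096)

print(cache.hits, cache.misses)  # 900 900, i.e. only the first pass seeks
```

Only the first pass misses; the other nine are served from memory, which
is the effect the key cache has on a skewed read workload.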

contrib/py_stress is a better starting place for a benchmark than
rolling your own, btw.  We see about 8000 reads/s with that on a
4-core server.
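If you do roll your own anyway, measure the way py_stress does: many
concurrent readers and wall-clock throughput, not single-threaded
latency.  A minimal sketch with a stubbed-out client (read_row is a
placeholder; swap it for a real Thrift/pycassa get against a live
cluster):

```python
import threading, time

def read_row(key):
    # Stand-in for a real client read; returns a fake 1 KB row so the
    # harness itself is runnable without a cluster.
    return b"x" * 1024

def worker(n_ops, counts, idx):
    for i in range(n_ops):
        read_row(i)
    counts[idx] = n_ops

n_threads, ops_per_thread = 4, 5000
counts = [0] * n_threads
start = time.time()
threads = [threading.Thread(target=worker, args=(ops_per_thread, counts, i))
           for i in range(n_threads)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = max(time.time() - start, 1e-9)  # guard against a zero reading
total = sum(counts)
print("%d reads in %.3fs -> %.0f reads/s" % (total, elapsed, total / elapsed))
```

The per-thread loop is what makes the number meaningful: one client
issuing serial reads mostly measures round-trip latency, not what the
cluster can sustain.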

On Sun, Apr 18, 2010 at 8:40 PM, Bingbing Liu <ru...@gmail.com> wrote:
> Hi all,
>
> I have a cluster of 5 nodes; each node has a 4-core CPU and 8 GB of memory.
>
> I am using Cassandra 0.6-beta3 for testing.
>
> First, I inserted 6,000,000 rows of 1 KB each, and the write speed was very impressive.
>
> But then, when I read them back one row at a time from two clients simultaneously, one of the clients was very slow and took a long time.
>
> I found that on each node the Cassandra process occupies about 7 GB of memory (per the "top" command), which puzzled me.
>
> Why does the read operation use so much memory? Maybe I missed something?
>
> Thx.
>
>
> 2010-04-18
>
>
>
> Bingbing Liu
>

Re: why read operation use so much of memory?

Posted by Brandon Williams <dr...@gmail.com>.
On Mon, Apr 19, 2010 at 10:28 PM, dir dir <si...@gmail.com> wrote:

> Hi Jonathan,
>
> I see this page (http://wiki.apache.org/cassandra/SSTableMemtable) does
> not exist yet.
>
>
I think he meant: http://wiki.apache.org/cassandra/MemtableSSTable

-Brandon

Re: why read operation use so much of memory?

Posted by dir dir <si...@gmail.com>.
Hi Jonathan,

I see this page (http://wiki.apache.org/cassandra/SSTableMemtable) does not
exist yet.

thanks.

Dir.

On Tue, Apr 20, 2010 at 3:41 AM, Jonathan Ellis <jb...@gmail.com> wrote:

> (Moving to users@ list.)
>
> http://wiki.apache.org/cassandra/SSTableMemtable explains why reads
> are slower than writes.  You can tune this by using the key cache, row
> cache, or by using range queries instead of requesting rows one at a
> time.
>
> [...]