Posted to user@cassandra.apache.org by Chris Jansen <ch...@cognitomobile.com> on 2010/09/22 09:27:00 UTC

Running out of heap

Hi all,

 

I have written a test application that does a write, read, and delete on
one of the sample column families that ship with Cassandra, and for some
reason, when I leave it running for an extended period of time, I see
Cassandra crash with out-of-heap exceptions. I don't understand why this
should be, as I am deleting the data almost as soon as I have read it.

 

I am also seeing the data files for Keyspace1 grow, again for no
apparent reason since I delete the data as soon as I read it, until the
disk eventually fills up completely.

 

How can this be? Am I using Cassandra in the wrong way, or is this a bug?

 

Any help or advice would be greatly appreciated.

 

Thanks in advance,

 

Chris

 

 

PS: To give a better idea of what I am doing, I've included some of the
source from my Java test app. Typically I have 20 threads running in
parallel, each performing this operation:

 

    while (true)
    {
        long startTime = System.currentTimeMillis();
        key = UUID.randomUUID().toString();
        long timestamp = System.currentTimeMillis();
        ColumnPath colPathFdl = new ColumnPath(columnFamily);
        colPathFdl.setColumn(("345345345354" + key).getBytes(UTF8));

        boolean broken = true;

        while (broken)
        {
            try
            {
                client.insert(keyspace, key, colPathFdl,
                        getBytesFromFile(new File("/opt/java/apache-cassandra/conf/storage-conf.xml")),
                        timestamp, ConsistencyLevel.QUORUM);
                broken = false;
            }
            catch (Exception e)
            {
                System.out.println("Cannot write: " + key + " RETRYING");
                broken = true;
                e.printStackTrace();
            }
        }

        try
        {
            Column col = client1.get(keyspace, key, colPathFdl, ConsistencyLevel.QUORUM).getColumn();
            System.out.println(key + " column name: " + new String(col.name, UTF8));
            //System.out.println("column value: " + new String(col.value, UTF8));
            System.out.println(key + " column timestamp: " + new Date(col.timestamp));
        }
        catch (Exception e)
        {
            System.out.println("Cannot read: " + key);
            e.printStackTrace();
        }

        try
        {
            System.out.println(key + " delete column:: " + key);
            client.remove(keyspace, key, colPathFdl, timestamp, ConsistencyLevel.QUORUM);
        }
        catch (Exception e)
        {
            System.out.println("Cannot delete: " + key);
            e.printStackTrace();
        }

        long stopTime = System.currentTimeMillis();
        long timeTaken = stopTime - startTime;
        System.err.println(Thread.currentThread().getName() + " " + key + " Last operation took " + timeTaken + "ms");
    }

 




RE: Running out of heap

Posted by Chris Jansen <ch...@cognitomobile.com>.
Thanks Dan. I've reduced GCGraceSeconds to a number of hours for my
testing, and Cassandra is now removing the old records.

 

The link Leo provided also helped a lot; I've been able to tune the
garbage collector to better suit the rapid creation and removal of data.
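
Purely as an illustrative sketch (the thread does not show the exact
settings used), GC tuning of this kind usually means adjusting the
CMS-related JVM_OPTS that the 0.6-era bin/cassandra.in.sh already sets,
along these lines -- the values below are assumptions, not the settings
from this test:

    # Sketch only: assumed heap size and CMS options, not the values used here.
    JVM_OPTS=" \
            -Xms1G \
            -Xmx1G \
            -XX:+UseParNewGC \
            -XX:+UseConcMarkSweepGC \
            -XX:+CMSParallelRemarkEnabled \
            -XX:SurvivorRatio=8 \
            -XX:MaxTenuringThreshold=1 \
            -XX:CMSInitiatingOccupancyFraction=75 \
            -XX:+UseCMSInitiatingOccupancyOnly"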

 

Thanks again,

 

Chris

 


Re: Running out of heap

Posted by Dan Washusen <da...@reactive.org>.
A key point in that FAQ entry is that deleted data isn't actually purged
until after the configured GCGraceSeconds has elapsed (the default is 10
days, I believe).

This FAQ entry
(http://wiki.apache.org/cassandra/FAQ#slows_down_after_lotso_inserts)
covers your scenario and suggests either increasing the JVM's memory
allocation or lowering the thresholds at which Cassandra flushes its
memtables...
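
For illustration only (this sketch is not from the thread), the knobs
being discussed live in conf/storage-conf.xml in the 0.6-era releases.
The element names and values below are assumptions meant to show where
the settings sit; check them against the storage-conf.xml that ships
with your version:

    <!-- Sketch only: 0.6-era storage-conf.xml fragment, illustrative values. -->
    <Storage>
      <!-- Tombstones become eligible for removal by compaction only after
           this many seconds; the default is 864000 (10 days). -->
      <GCGraceSeconds>864000</GCGraceSeconds>

      <!-- A memtable is flushed to an SSTable once it reaches this size... -->
      <MemtableThroughputInMB>64</MemtableThroughputInMB>
      <!-- ...or this many column operations (in millions), whichever is hit first. -->
      <MemtableOperationsInMillions>0.3</MemtableOperationsInMillions>
    </Storage>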


RE: Running out of heap

Posted by Chris Jansen <ch...@cognitomobile.com>.
Hi Dan,

 

I do see compaction happen. I keep a close eye on disk usage, and what I
see is the usage grow and then shrink, but despite the periodic
compaction the overall result is slow but steady growth.

 

Regards,

 

Chris

 


Re: Running out of heap

Posted by Dan Washusen <da...@reactive.org>.
http://wiki.apache.org/cassandra/FAQ#i_deleted_what_gives

That help?


RE: Running out of heap

Posted by Chris Jansen <ch...@cognitomobile.com>.
Thanks Leo, I'll have a read.

 

Regards,

 

Chris

 


Re: Running out of heap

Posted by "Matthias L. Jugel" <le...@thinkberg.com>.
We had similar problems. It may help to read this: 
http://blog.mikiobraun.de/ (Tuning GC for Cassandra)

Regards,

Leo.
