Posted to user@cassandra.apache.org by Jean Tremblay <je...@zen-innovations.com> on 2016/01/14 17:56:10 UTC

Cassandra 3.1.1 with respect to HeapSpace

Hi,

I have a small Cassandra cluster with 5 nodes, each having 16 GB of RAM.
I use Cassandra 3.1.1.
I use the following setup for the memory:
  MAX_HEAP_SIZE="6G"
  HEAP_NEWSIZE="496M"

I have been loading a lot of data into this cluster over the last 24 hours. The system behaved, I think, very nicely: it was loading very fast and giving excellent read times. There were no error messages until this one:


ERROR [SharedPool-Worker-35] 2016-01-14 17:05:23,602 JVMStabilityInspector.java:139 - JVM state determined to be unstable.  Exiting forcefully due to:
java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57) ~[na:1.8.0_65]
at java.nio.ByteBuffer.allocate(ByteBuffer.java:335) ~[na:1.8.0_65]
at org.apache.cassandra.io.util.DataOutputBuffer.reallocate(DataOutputBuffer.java:126) ~[apache-cassandra-3.1.1.jar:3.1.1]
at org.apache.cassandra.io.util.DataOutputBuffer.doFlush(DataOutputBuffer.java:86) ~[apache-cassandra-3.1.1.jar:3.1.1]
at org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:132) ~[apache-cassandra-3.1.1.jar:3.1.1]
at org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:151) ~[apache-cassandra-3.1.1.jar:3.1.1]
at org.apache.cassandra.utils.ByteBufferUtil.writeWithVIntLength(ByteBufferUtil.java:297) ~[apache-cassandra-3.1.1.jar:3.1.1]
at org.apache.cassandra.db.marshal.AbstractType.writeValue(AbstractType.java:374) ~[apache-cassandra-3.1.1.jar:3.1.1]
at org.apache.cassandra.db.rows.BufferCell$Serializer.serialize(BufferCell.java:263) ~[apache-cassandra-3.1.1.jar:3.1.1]
at org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:183) ~[apache-cassandra-3.1.1.jar:3.1.1]
at org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:108) ~[apache-cassandra-3.1.1.jar:3.1.1]
at org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:96) ~[apache-cassandra-3.1.1.jar:3.1.1]
at org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:132) ~[apache-cassandra-3.1.1.jar:3.1.1]
at org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:87) ~[apache-cassandra-3.1.1.jar:3.1.1]
at org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:77) ~[apache-cassandra-3.1.1.jar:3.1.1]
at org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:298) ~[apache-cassandra-3.1.1.jar:3.1.1]
at org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:136) ~[apache-cassandra-3.1.1.jar:3.1.1]
at org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:128) ~[apache-cassandra-3.1.1.jar:3.1.1]
at org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:123) ~[apache-cassandra-3.1.1.jar:3.1.1]
at org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65) ~[apache-cassandra-3.1.1.jar:3.1.1]
at org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:289) ~[apache-cassandra-3.1.1.jar:3.1.1]
at org.apache.cassandra.db.ReadCommandVerbHandler.doVerb(ReadCommandVerbHandler.java:47) ~[apache-cassandra-3.1.1.jar:3.1.1]
at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:67) ~[apache-cassandra-3.1.1.jar:3.1.1]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_65]
at org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164) ~[apache-cassandra-3.1.1.jar:3.1.1]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) [apache-cassandra-3.1.1.jar:3.1.1]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_65]

Four nodes out of five crashed with this error message. Now when I try to restart the first node, I get the following error:

ERROR [main] 2016-01-14 17:15:59,617 JVMStabilityInspector.java:81 - Exiting due to error while processing commit log during initialization.
org.apache.cassandra.db.commitlog.CommitLogReplayer$CommitLogReplayException: Unexpected error deserializing mutation; saved to /tmp/mutation7465380878750576105dat.  This may be caused by replaying a mutation against a table with the same name but incompatible schema.  Exception follows: org.apache.cassandra.serializers.MarshalException: Not enough bytes to read a map
at org.apache.cassandra.db.commitlog.CommitLogReplayer.handleReplayError(CommitLogReplayer.java:633) [apache-cassandra-3.1.1.jar:3.1.1]
at org.apache.cassandra.db.commitlog.CommitLogReplayer.replayMutation(CommitLogReplayer.java:556) [apache-cassandra-3.1.1.jar:3.1.1]
at org.apache.cassandra.db.commitlog.CommitLogReplayer.replaySyncSection(CommitLogReplayer.java:509) [apache-cassandra-3.1.1.jar:3.1.1]
at org.apache.cassandra.db.commitlog.CommitLogReplayer.recover(CommitLogReplayer.java:404) [apache-cassandra-3.1.1.jar:3.1.1]
at org.apache.cassandra.db.commitlog.CommitLogReplayer.recover(CommitLogReplayer.java:151) [apache-cassandra-3.1.1.jar:3.1.1]
at org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:189) [apache-cassandra-3.1.1.jar:3.1.1]
at org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:169) [apache-cassandra-3.1.1.jar:3.1.1]
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:283) [apache-cassandra-3.1.1.jar:3.1.1]
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:549) [apache-cassandra-3.1.1.jar:3.1.1]
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:677) [apache-cassandra-3.1.1.jar:3.1.1]

I can no longer start my nodes.

How can I restart my cluster?
Is this problem known?
Is there a Cassandra 3 version that behaves better with respect to this problem?
Would there be a better memory configuration for my nodes? Currently I use MAX_HEAP_SIZE="6G" HEAP_NEWSIZE="496M" on a 16 GB RAM node.


Thank you very much for your advice.

Kind regards

Jean

Re: Cassandra 3.1.1 with respect to HeapSpace

Posted by Jean Tremblay <je...@zen-innovations.com>.
Thank you Sebastián!


Re: Cassandra 3.1.1 with respect to HeapSpace

Posted by Sebastian Estevez <se...@datastax.com>.
The recommended (and default when available) heap size for Cassandra is 8 GB, and for new-gen size it's 100 MB per core.
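For example, that guidance translates to roughly this in conf/cassandra-env.sh (a sketch assuming a hypothetical 8-core node; scale the new-gen size to your actual core count):

    # conf/cassandra-env.sh -- sketch for an 8-core node
    MAX_HEAP_SIZE="8G"      # recommended heap
    HEAP_NEWSIZE="800M"     # 100 MB per core x 8 cores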

Your mileage may vary based on workload, hardware, etc.

There are also some alternative JVM tuning schools of thought. See CASSANDRA-8150 (large heap) and CASSANDRA-7486 (G1GC).
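For the G1 route, a minimal sketch (standard HotSpot flags; read CASSANDRA-7486 before adopting them, and note that with G1 you would leave HEAP_NEWSIZE unset so the collector can size the young generation itself):

    # in conf/cassandra-env.sh, in place of the CMS settings
    JVM_OPTS="$JVM_OPTS -XX:+UseG1GC"
    JVM_OPTS="$JVM_OPTS -XX:MaxGCPauseMillis=500"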



All the best,


Sebastián Estévez

Solutions Architect | 954 905 8615 | sebastian.estevez@datastax.com



Re: Cassandra 3.1.1 with respect to HeapSpace

Posted by Jean Tremblay <je...@zen-innovations.com>.
Thank you Sebastián for your useful advice. I managed to restart the nodes, but I needed to delete all the commit logs, not only the last one specified. Nevertheless, I'm back in business.
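In case it helps someone else, the recovery amounted to roughly this on each node (a sketch; paths assume the default commitlog_directory of /var/lib/cassandra/commitlog from cassandra.yaml, and I moved the segments aside rather than deleting them outright):

    # with the node stopped:
    mkdir -p /var/tmp/commitlog-backup
    mv /var/lib/cassandra/commitlog/CommitLog-* /var/tmp/commitlog-backup/
    # then start the node; mutations that were only in these segments
    # (not yet flushed to sstables) are lost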

Would there be a better memory configuration to select for my nodes in a C* 3 cluster? Currently I use MAX_HEAP_SIZE="6G" HEAP_NEWSIZE="496M" for a 16 GB RAM node.

Thanks for your help.

Jean


Re: Cassandra 3.1.1 with respect to HeapSpace

Posted by Sebastian Estevez <se...@datastax.com>.
Try starting the other nodes. You may have to delete or mv the commitlog segment referenced in the error message for the node to come up, since apparently it is corrupted.
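For example (a sketch; the segment file name is hypothetical and the default commitlog_directory is assumed):

    # with the node stopped, move the corrupted segment aside
    mv /var/lib/cassandra/commitlog/CommitLog-6-1452791447101.log /var/tmp/
    # then start the node; unflushed mutations in that segment are lost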

All the best,


Sebastián Estévez


Re: Cassandra 3.1.1 with respect to HeapSpace

Posted by Jean Tremblay <je...@zen-innovations.com>.
How can I restart?
It blocks with the commit log replay error listed above.
Are my memory settings good for my configuration?


Re: Cassandra 3.1.1 with respect to HeapSpace

Posted by Jake Luciani <ja...@gmail.com>.
Yes, you can restart without data loss.

Can you please include info about how much data you have loaded per node, and perhaps what your schema looks like?
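(For gathering those numbers, something like this should do; "mykeyspace" is a placeholder for your own keyspace name:)

    nodetool status                           # load per node, up/down state
    nodetool cfstats mykeyspace               # per-table on-disk sizes
    cqlsh -e "DESCRIBE SCHEMA" > schema.cql   # dump the schema to attach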

Thanks

On Thu, Jan 14, 2016 at 12:24 PM, Jean Tremblay <jean.tremblay@zen-innovations.com> wrote:

>
> Ok, I will open a ticket.
>
> How could I restart my cluster without losing everything?
> Would there be a better memory configuration to select for my nodes?
> Currently I use MAX_HEAP_SIZE="6G" HEAP_NEWSIZE="496M" for a 16 GB RAM node.
>
> Thanks
>
> Jean
>
> On 14 Jan 2016, at 18:19, Tyler Hobbs <ty...@datastax.com> wrote:
>
> I don't think that's a known issue. Can you open a ticket at
> https://issues.apache.org/jira/browse/CASSANDRA and attach your schema
> along with the commitlog files and the mutation that was saved to /tmp?


-- 
http://twitter.com/tjake

Re: Cassandra 3.1.1 with respect to HeapSpace

Posted by Jean Tremblay <je...@zen-innovations.com>.
Ok, I will open a ticket.

How could I restart my cluster without losing everything?
Would there be a better memory configuration to select for my nodes? Currently I use MAX_HEAP_SIZE="6G" HEAP_NEWSIZE="496M" for a 16 GB RAM node.

Thanks

Jean

On 14 Jan 2016, at 18:19, Tyler Hobbs <ty...@datastax.com> wrote:

I don't think that's a known issue.  Can you open a ticket at https://issues.apache.org/jira/browse/CASSANDRA and attach your schema along with the commitlog files and the mutation that was saved to /tmp?




--
Tyler Hobbs
DataStax <http://datastax.com/>

Re: Cassandra 3.1.1 with respect to HeapSpace

Posted by Tyler Hobbs <ty...@datastax.com>.
I don't think that's a known issue.  Can you open a ticket at
https://issues.apache.org/jira/browse/CASSANDRA and attach your schema
along with the commitlog files and the mutation that was saved to /tmp?
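
A sketch of gathering those artifacts (assuming package-default paths; adjust for your install, and run cqlsh against a node that is still up):

  cqlsh <node-address> -e "DESCRIBE SCHEMA" > schema.cql     # full schema dump
  tar czf replay-failure.tar.gz \
      schema.cql \
      /var/lib/cassandra/commitlog \
      /tmp/mutation*dat                                      # the saved mutation file(s)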




-- 
Tyler Hobbs
DataStax <http://datastax.com/>