Posted to user@cassandra.apache.org by Aiman Parvaiz <ai...@flipagram.com> on 2015/06/09 06:48:53 UTC

C* 2.0.15 - java.lang.NegativeArraySizeException

Hi everyone
I am running C* 2.0.9 and decided to do a rolling upgrade. I added a C*
2.0.15 node to the existing cluster and saw this twice:

Jun  9 02:27:20 prod-cass23.localdomain cassandra: 2015-06-09 02:27:20,658
INFO CompactionExecutor:4 CompactionTask.runMayThrow - Compacting
[SSTableReader(path='/var/lib/cassandra/data/system/schema_columns/system-schema_columns-jb-37-Data.db'),
SSTableReader(path='/var/lib/cassandra/data/system/schema_columns/system-schema_columns-jb-40-Data.db'),
SSTableReader(path='/var/lib/cassandra/data/system/schema_columns/system-schema_columns-jb-42-Data.db'),
SSTableReader(path='/var/lib/cassandra/data/system/schema_columns/system-schema_columns-jb-38-Data.db'),
SSTableReader(path='/var/lib/cassandra/data/system/schema_columns/system-schema_columns-jb-39-Data.db'),
SSTableReader(path='/var/lib/cassandra/data/system/schema_columns/system-schema_columns-jb-44-Data.db')]



Jun  9 02:27:20 prod-cass23.localdomain cassandra: 2015-06-09 02:27:20,669
ERROR CompactionExecutor:4 CassandraDaemon.uncaughtException - Exception in
thread Thread[CompactionExecutor:4,1,main]
Jun  9 02:27:20 prod-cass23.localdomain
*java.lang.NegativeArraySizeException*
Jun  9 02:27:20 prod-cass23.localdomain     at
org.apache.cassandra.utils.EstimatedHistogram$EstimatedHistogramSerializer.deserialize(EstimatedHistogram.java:335)
Jun  9 02:27:20 prod-cass23.localdomain     at
org.apache.cassandra.io.sstable.SSTableMetadata$SSTableMetadataSerializer.deserialize(SSTableMetadata.java:462)
Jun  9 02:27:20 prod-cass23.localdomain     at
org.apache.cassandra.io.sstable.SSTableMetadata$SSTableMetadataSerializer.deserialize(SSTableMetadata.java:448)
Jun  9 02:27:20 prod-cass23.localdomain     at
org.apache.cassandra.io.sstable.SSTableMetadata$SSTableMetadataSerializer.deserialize(SSTableMetadata.java:432)
Jun  9 02:27:20 prod-cass23.localdomain     at
org.apache.cassandra.io.sstable.SSTableReader.getAncestors(SSTableReader.java:1366)
Jun  9 02:27:20 prod-cass23.localdomain     at
org.apache.cassandra.io.sstable.SSTableMetadata.createCollector(SSTableMetadata.java:134)
Jun  9 02:27:20 prod-cass23.localdomain     at
org.apache.cassandra.db.compaction.CompactionTask.createCompactionWriter(CompactionTask.java:316)
Jun  9 02:27:20 prod-cass23.localdomain     at
org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:162)
Jun  9 02:27:20 prod-cass23.localdomain     at
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
Jun  9 02:27:20 prod-cass23.localdomain     at
org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
Jun  9 02:27:20 prod-cass23.localdomain     at
org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
Jun  9 02:27:20 prod-cass23.localdomain     at
org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:198)
Jun  9 02:27:20 prod-cass23.localdomain     at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
Jun  9 02:27:20 prod-cass23.localdomain     at
java.util.concurrent.FutureTask.run(FutureTask.java:262)
Jun  9 02:27:20 prod-cass23.localdomain     at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
Jun  9 02:27:20 prod-cass23.localdomain     at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
Jun  9 02:27:20 prod-cass23.localdomain     at
java.lang.Thread.run(Thread.java:745)
Jun  9 02:27:47 prod-cass23.localdomain cassandra: 2015-06-09 02:27:47,725
INFO main StorageService.setMode - JOINING: Starting to bootstrap...

As you can see, this happened the first time even before the node started
joining. Stack trace from the second occasion:

Jun  9 02:32:15 prod-cass23.localdomain cassandra: 2015-06-09 02:32:15,097
ERROR CompactionExecutor:6 CassandraDaemon.uncaughtException - Exception in
thread Thread[CompactionExecutor:6,1,main]
Jun  9 02:32:15 prod-cass23.localdomain java.lang.NegativeArraySizeException
Jun  9 02:32:15 prod-cass23.localdomain     at
org.apache.cassandra.utils.EstimatedHistogram$EstimatedHistogramSerializer.deserialize(EstimatedHistogram.java:335)
Jun  9 02:32:15 prod-cass23.localdomain     at
org.apache.cassandra.io.sstable.SSTableMetadata$SSTableMetadataSerializer.deserialize(SSTableMetadata.java:462)
Jun  9 02:32:15 prod-cass23.localdomain     at
org.apache.cassandra.io.sstable.SSTableMetadata$SSTableMetadataSerializer.deserialize(SSTableMetadata.java:448)
Jun  9 02:32:15 prod-cass23.localdomain     at
org.apache.cassandra.io.sstable.SSTableMetadata$SSTableMetadataSerializer.deserialize(SSTableMetadata.java:432)
Jun  9 02:32:15 prod-cass23.localdomain     at
org.apache.cassandra.io.sstable.SSTableReader.getAncestors(SSTableReader.java:1366)
Jun  9 02:32:15 prod-cass23.localdomain     at
org.apache.cassandra.io.sstable.SSTableMetadata.createCollector(SSTableMetadata.java:134)
Jun  9 02:32:15 prod-cass23.localdomain     at
org.apache.cassandra.db.compaction.CompactionTask.createCompactionWriter(CompactionTask.java:316)
Jun  9 02:32:15 prod-cass23.localdomain     at
org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:162)
Jun  9 02:32:15 prod-cass23.localdomain     at
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
Jun  9 02:32:15 prod-cass23.localdomain     at
org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
Jun  9 02:32:15 prod-cass23.localdomain     at
org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
Jun  9 02:32:15 prod-cass23.localdomain     at
org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:198)
Jun  9 02:32:15 prod-cass23.localdomain     at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
Jun  9 02:32:15 prod-cass23.localdomain     at
java.util.concurrent.FutureTask.run(FutureTask.java:262)
Jun  9 02:32:15 prod-cass23.localdomain     at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
Jun  9 02:32:15 prod-cass23.localdomain     at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
Jun  9 02:32:15 prod-cass23.localdomain     at
java.lang.Thread.run(Thread.java:745)

The node is not misbehaving as such, and I am not seeing any behavior out
of the ordinary as of now. Please advise about this error and, if possible,
why it occurred in the first place. Any help is appreciated.
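
My understanding is that the exception just means something tried to
allocate an array with a negative length while deserializing the sstable
metadata. Here is a minimal, illustrative Java sketch of how a
histogram-style deserializer could end up there when the count it reads
from disk is zero, negative, or in an unexpected format (this is not
Cassandra's actual EstimatedHistogram code; the class and method names
below are made up):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class HistogramDeserializeSketch {

    // Reads a bucket count, then allocates arrays sized from it, the way a
    // histogram serializer typically does. A zero or negative count makes
    // the allocation throw NegativeArraySizeException before any data is
    // read.
    static long[] deserializeBuckets(DataInputStream in) throws IOException {
        int size = in.readInt();              // bucket count as stored on disk
        long[] offsets = new long[size - 1];  // blows up when size <= 0
        long[] buckets = new long[size];
        for (int i = 0; i < size; i++) {
            if (i > 0) {
                offsets[i - 1] = in.readLong();
            }
            buckets[i] = in.readLong();
        }
        return buckets;
    }

    public static void main(String[] args) throws IOException {
        // Simulate metadata whose leading int is not a valid bucket count.
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        new DataOutputStream(bytes).writeInt(-3);
        try {
            deserializeBuckets(new DataInputStream(
                    new ByteArrayInputStream(bytes.toByteArray())));
        } catch (NegativeArraySizeException e) {
            System.out.println("deserialize failed: " + e);
        }
    }
}

Since the stack trace goes through SSTableMetadata, that suggests the
problem is in the metadata (statistics) component of those system sstables
rather than in the data files themselves.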

Thanks

RE: C* 2.0.15 - java.lang.NegativeArraySizeException

Posted by SE...@homedepot.com.
Right. I have had very few problems running mixed versions for normal operations (as long as the versions are “close”). During upgrades, I turn off repairs. Adding/replacing nodes is very infrequent for me, so not much of a consideration. We upgrade as quickly as we can, however, to protect against issues. Our clusters are not huge, though.

NOTE: the above applies to version 1.0.8, 1.2.x and 2.0.x. The 1.1.x versions are painful and problematic.

Sean Durity

From: Robert Coli [mailto:rcoli@eventbrite.com]
Sent: Tuesday, June 09, 2015 6:19 PM
To: user@cassandra.apache.org
Subject: Re: C* 2.0.15 - java.lang.NegativeArraySizeException

On Tue, Jun 9, 2015 at 2:35 PM, Aiman Parvaiz <ai...@flipagram.com> wrote:
Thanks Sean, in this scenario too I would end up running two versions of Cassandra, since I am planning a rolling upgrade and hence zero downtime. Upgrading in place one node at a time would lead to running two versions; please let me know if I am missing something here.

Running a cluster with nodes of two different versions during a rolling upgrade, for the duration of the upgrade, is (mostly) supported [1].

Modifying cluster topology (adding or removing or replacing nodes) during such an upgrade is not.

It is a fair statement that with very large clusters and very slow upgradesstables, the ability of any operator to operate in this manner approaches not-possible. I don't know how people with truly huge clusters deal with this race.

=Rob
 [1] Some operations are not supported or possible with some combinations of versions; for example, one cannot repair in some cases.


Re: C* 2.0.15 - java.lang.NegativeArraySizeException

Posted by Robert Coli <rc...@eventbrite.com>.
On Tue, Jun 9, 2015 at 2:35 PM, Aiman Parvaiz <ai...@flipagram.com> wrote:

> Thanks Sean, in this scenario too I would end up running two versions of
> Cassandra, since I am planning a rolling upgrade and hence zero downtime.
> Upgrading in place one node at a time would lead to running two versions;
> please let me know if I am missing something here.
>

Running a cluster with nodes of two different versions during a rolling
upgrade, for the duration of the upgrade, is (mostly) supported [1].

Modifying cluster topology (adding or removing or replacing nodes) during
such an upgrade is not.

It is a fair statement that with very large clusters and very slow
upgradesstables, the ability of any operator to operate in this manner
approaches not-possible. I don't know how people with truly huge clusters
deal with this race.

=Rob
 [1] Some operations are not supported or possible with some combinations of
versions; for example, one cannot repair in some cases.

Re: C* 2.0.15 - java.lang.NegativeArraySizeException

Posted by Aiman Parvaiz <ai...@flipagram.com>.
Thanks Sean, in this scenario too I would end up running two versions of
Cassandra, since I am planning a rolling upgrade and hence zero downtime.
Upgrading in place one node at a time would lead to running two versions;
please let me know if I am missing something here.

On Tue, Jun 9, 2015 at 2:00 PM, <SE...@homedepot.com> wrote:

>  In my experience, you don’t want to do streaming operations (repairs or
> bootstraps) with mixed Cassandra versions. Upgrade the ring to the new
> version, and then add nodes (or add the nodes at the current version, and
> then upgrade).
>
> Sean Durity



-- 
Lead Systems Architect
10351 Santa Monica Blvd, Suite 3310
Los Angeles CA 90025

RE: C* 2.0.15 - java.lang.NegativeArraySizeException

Posted by SE...@homedepot.com.
In my experience, you don’t want to do streaming operations (repairs or bootstraps) with mixed Cassandra versions. Upgrade the ring to the new version, and then add nodes (or add the nodes at the current version, and then upgrade).
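
A cheap guard before any streaming operation is to confirm that every node
reports the same release_version. A rough sketch using the DataStax Java
driver, reading system.local and system.peers (the contact point, class
name, and driver version are assumptions here; running nodetool version on
each host gives you the same answer from the shell):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

import java.util.HashSet;
import java.util.Set;

public class ClusterVersionCheck {
    public static void main(String[] args) {
        // Contact point is a placeholder; point it at any live node.
        Cluster cluster = Cluster.builder().addContactPoint("10.0.0.1").build();
        try {
            Session session = cluster.connect();
            Set<String> versions = new HashSet<String>();
            // The node we are connected to reports its own version here...
            for (Row row : session.execute("SELECT release_version FROM system.local")) {
                versions.add(row.getString("release_version"));
            }
            // ...and what it knows about every other node here.
            for (Row row : session.execute("SELECT release_version FROM system.peers")) {
                versions.add(row.getString("release_version"));
            }
            System.out.println(versions.size() == 1
                    ? "Single version across the ring: " + versions
                    : "Mixed versions, defer repairs/bootstraps: " + versions);
        } finally {
            cluster.close();
        }
    }
}

If the set has more than one entry, finish the upgrade (and
upgradesstables) before adding nodes or repairing.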


Sean Durity

From: Aiman Parvaiz [mailto:aiman@flipagram.com]
Sent: Tuesday, June 09, 2015 1:29 PM
To: user@cassandra.apache.org
Subject: Re: C* 2.0.15 - java.lang.NegativeArraySizeException

Quick update: I saw the same error on another new node; again, the node isn't really misbehaving so far.

Thanks


Re: C* 2.0.15 - java.lang.NegativeArraySizeException

Posted by Aiman Parvaiz <ai...@flipagram.com>.
Quick update: I saw the same error on another new node; again, the node
isn't really misbehaving so far.

Thanks




-- 
Lead Systems Architect
10351 Santa Monica Blvd, Suite 3310
Los Angeles CA 90025