Posted to commits@cassandra.apache.org by "HB (JIRA)" <ji...@apache.org> on 2011/02/18 15:56:39 UTC

[jira] Created: (CASSANDRA-2195) java.lang.RuntimeException: java.lang.NegativeArraySizeException

java.lang.RuntimeException: java.lang.NegativeArraySizeException
----------------------------------------------------------------

                 Key: CASSANDRA-2195
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-2195
             Project: Cassandra
          Issue Type: Bug
          Components: Core
    Affects Versions: 0.7.2
         Environment: Debian Lenny, Pelops-based servlet doing lots of List<Column> columns = selector.getColumnsFromRow(columnFamily, key, false, ConsistencyLevel.ONE); and mutator.writeColumns(columnFamily, key, mutator.newColumnList(...); mutator.execute(ConsistencyLevel.ANY); operations.
            Reporter: HB


When putting my 0.7.2 node under load, I get a large number of these:

ERROR 15:33:25,075 Fatal exception in thread Thread[MutationStage:290,5,main]
java.lang.RuntimeException: java.lang.NegativeArraySizeException
        at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:34)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)
Caused by: java.lang.NegativeArraySizeException
        at org.apache.cassandra.utils.BloomFilterSerializer.deserialize(BloomFilterSerializer.java:49)
        at org.apache.cassandra.utils.BloomFilterSerializer.deserialize(BloomFilterSerializer.java:30)
        at org.apache.cassandra.io.sstable.IndexHelper.defreezeBloomFilter(IndexHelper.java:108)
        at org.apache.cassandra.db.columniterator.SSTableNamesIterator.read(SSTableNamesIterator.java:106)
        at org.apache.cassandra.db.columniterator.SSTableNamesIterator.<init>(SSTableNamesIterator.java:71)
        at org.apache.cassandra.db.filter.NamesQueryFilter.getSSTableColumnIterator(NamesQueryFilter.java:59)
        at org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:80)
        at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1275)
        at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1167)
        at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1095)
        at org.apache.cassandra.db.Table.readCurrentIndexedColumns(Table.java:510)
        at org.apache.cassandra.db.Table.apply(Table.java:445)
        at org.apache.cassandra.db.RowMutation.apply(RowMutation.java:190)
        at org.apache.cassandra.service.StorageProxy$2.runMayThrow(StorageProxy.java:283)
        at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
        ... 3 more

On the recommendation of driftx I forced a compaction, which finished. After a restart, the -Compacted files were removed and the node seemed to start up; querying some random rows seemed to go alright, but after a few minutes I started getting the above messages again.
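For context on why this particular exception appears during deserialization: a NegativeArraySizeException typically means a length prefix read from the stream decoded as a negative number, so the array allocation itself fails before any data is read. The following is a minimal, hypothetical sketch of that failure mode (it is not Cassandra's actual BloomFilterSerializer code):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Hypothetical sketch: deserializing a length-prefixed structure throws
// NegativeArraySizeException when the stream is corrupt (or was written by an
// incompatible version) and the int length prefix decodes as a negative value.
public class FilterDeserializeSketch {
    static long[] deserialize(DataInputStream in) throws IOException {
        int count = in.readInt();       // corrupt data may yield a negative value
        long[] words = new long[count]; // throws NegativeArraySizeException if count < 0
        for (int i = 0; i < count; i++) {
            words[i] = in.readLong();
        }
        return words;
    }

    public static void main(String[] args) throws IOException {
        // Well-formed stream: count = 2, followed by two longs.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeInt(2);
        out.writeLong(1L);
        out.writeLong(2L);
        long[] ok = deserialize(new DataInputStream(new ByteArrayInputStream(buf.toByteArray())));
        System.out.println("ok length = " + ok.length);

        // Corrupt stream whose length prefix decodes as -1.
        ByteArrayOutputStream bad = new ByteArrayOutputStream();
        new DataOutputStream(bad).writeInt(-1);
        try {
            deserialize(new DataInputStream(new ByteArrayInputStream(bad.toByteArray())));
        } catch (NegativeArraySizeException e) {
            System.out.println("caught NegativeArraySizeException");
        }
    }
}
```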

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira

        

[jira] Commented: (CASSANDRA-2195) java.lang.RuntimeException: java.lang.NegativeArraySizeException

Posted by "HB (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/CASSANDRA-2195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12997876#comment-12997876 ] 

HB commented on CASSANDRA-2195:
-------------------------------

I'm sorry to say that I had to wipe the node and recommission it, as my backup node got stuck in a major compaction with no space left to finish it. This means the dataset that was causing the pain is gone. On the bright side, I've been running the node under load for a while now and it seems to be doing alright, which at least means my code is working reasonably (though with the small number of keys involved, since it's an empty node, it's obviously not a very reliable test).

> java.lang.RuntimeException: java.lang.NegativeArraySizeException
> ----------------------------------------------------------------
>
>                 Key: CASSANDRA-2195
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-2195
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>    Affects Versions: 0.7.2
>         Environment: Debian Lenny, Pelops-based servlet doing lots of List<Column> columns = selector.getColumnsFromRow(columnFamily, key, false, ConsistencyLevel.ONE); and mutator.writeColumns(columnFamily, key, mutator.newColumnList(...); mutator.execute(ConsistencyLevel.ANY); operations.
>            Reporter: HB
>            Assignee: Stu Hood
>            Priority: Blocker
>             Fix For: 0.7.3
>
>
> When putting my 0.7.2 node under load, I get a large amount of these: 
> ERROR 15:33:25,075 Fatal exception in thread Thread[MutationStage:290,5,main]
> java.lang.RuntimeException: java.lang.NegativeArraySizeException
>         at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:34)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>         at java.lang.Thread.run(Thread.java:662)
> Caused by: java.lang.NegativeArraySizeException
>         at org.apache.cassandra.utils.BloomFilterSerializer.deserialize(BloomFilterSerializer.java:49)
>         at org.apache.cassandra.utils.BloomFilterSerializer.deserialize(BloomFilterSerializer.java:30)
>         at org.apache.cassandra.io.sstable.IndexHelper.defreezeBloomFilter(IndexHelper.java:108)
>         at org.apache.cassandra.db.columniterator.SSTableNamesIterator.read(SSTableNamesIterator.java:106)
>         at org.apache.cassandra.db.columniterator.SSTableNamesIterator.<init>(SSTableNamesIterator.java:71)
>         at org.apache.cassandra.db.filter.NamesQueryFilter.getSSTableColumnIterator(NamesQueryFilter.java:59)
>         at org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:80)
>         at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1275)
>         at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1167)
>         at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1095)
>         at org.apache.cassandra.db.Table.readCurrentIndexedColumns(Table.java:510)
>         at org.apache.cassandra.db.Table.apply(Table.java:445)
>         at org.apache.cassandra.db.RowMutation.apply(RowMutation.java:190)
>         at org.apache.cassandra.service.StorageProxy$2.runMayThrow(StorageProxy.java:283)
>         at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
>         ... 3 more
> On the recommendation of driftx I forced a compaction, which finished. After a restart, the -Compacted files were removed and the node seemed to start up; querying some random rows seemed to go alright, but after a few minutes I started getting the above messages again. I'm grabbing single rows, not slices.


[jira] Commented: (CASSANDRA-2195) java.lang.RuntimeException: java.lang.NegativeArraySizeException

Posted by "Jonathan Ellis (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/CASSANDRA-2195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12996514#comment-12996514 ] 

Jonathan Ellis commented on CASSANDRA-2195:
-------------------------------------------

Also, to verify that there are no problems with the row data itself, can you run sstable2json on your sstable files? (I suspect there are none, since compaction worked, but just to rule it out.) sstable2json will throw very noisy errors and abort if it runs into a problem, so if it completes with the most recent output looking like JSON, then the data is clean.


[jira] Commented: (CASSANDRA-2195) java.lang.RuntimeException: java.lang.NegativeArraySizeException

Posted by "HB (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/CASSANDRA-2195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12997721#comment-12997721 ] 

HB commented on CASSANDRA-2195:
-------------------------------

OK, here's an update: I tried to run sstable2json on a few of my .db files, and this is what happens: sstable2json keeps opening file handles until it eventually exits with:
Exception in thread "main" java.io.IOError: java.io.FileNotFoundException: /var/lib/cassandra/data/<ks>/Search-1265-Index.db (Too many open files)
        at org.apache.cassandra.io.util.BufferedSegmentedFile.getSegment(BufferedSegmentedFile.java:68)
        at org.apache.cassandra.io.util.SegmentedFile$SegmentIterator.next(SegmentedFile.java:130)
        at org.apache.cassandra.io.util.SegmentedFile$SegmentIterator.next(SegmentedFile.java:109)
        at org.apache.cassandra.io.sstable.SSTableReader.getPosition(SSTableReader.java:472)
        at org.apache.cassandra.io.sstable.SSTableReader.getFileDataInput(SSTableReader.java:563)
        at org.apache.cassandra.db.columniterator.SSTableSliceIterator.<init>(SSTableSliceIterator.java:49)
        at org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:68)
        at org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:80)
        at org.apache.cassandra.tools.SSTableExport.serializeRow(SSTableExport.java:175)
        at org.apache.cassandra.tools.SSTableExport.export(SSTableExport.java:353)
        at org.apache.cassandra.tools.SSTableExport.export(SSTableExport.java:375)
        at org.apache.cassandra.tools.SSTableExport.export(SSTableExport.java:388)
        at org.apache.cassandra.tools.SSTableExport.main(SSTableExport.java:446)
Caused by: java.io.FileNotFoundException: /var/lib/cassandra/data/<ks>/Search-1265-Index.db (Too many open files)
        at java.io.RandomAccessFile.open(Native Method)
        at java.io.RandomAccessFile.<init>(RandomAccessFile.java:212)
        at org.apache.cassandra.io.util.BufferedRandomAccessFile.<init>(BufferedRandomAccessFile.java:116)
        at org.apache.cassandra.io.util.BufferedRandomAccessFile.<init>(BufferedRandomAccessFile.java:111)
        at org.apache.cassandra.io.util.BufferedRandomAccessFile.<init>(BufferedRandomAccessFile.java:96)
        at org.apache.cassandra.io.util.BufferedSegmentedFile.getSegment(BufferedSegmentedFile.java:62)
        ... 12 more
Checking with lsof -n and doing a line count shows it gets to nearly 65535 handles before it gives up, which is the hard limit shown by ulimit -H -a for open files. Sometimes it manages to write some actual JSON (in some cases 40-50 MB), which at first glance looks OK, though I did notice the keys are encoded in hex, whereas they used to be plain text; dehexing them does show the expected values. I should also note that I can successfully run sstable2json on an older version of the dataset, taken when it was still running on 0.6.
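For anyone reproducing this, descriptor usage can also be watched from inside the JVM rather than via lsof. A small sketch, with the caveat that /proc/self/fd is Linux-specific (on Debian Lenny it is available):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

// Linux-specific sketch: each entry under /proc/self/fd is one open file
// descriptor of the current process, so counting entries approximates what
// `lsof -n -p <pid> | wc -l` reports for the same process.
public class FdCount {
    static long openFds() throws IOException {
        try (Stream<Path> fds = Files.list(Paths.get("/proc/self/fd"))) {
            return fds.count();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("open fds: " + openFds());
    }
}
```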

I think I failed to mention that this dataset used to be owned by a 0.6.x instance: I moved the files over to a different server, converted the original config file, and made 0.7 load it. I also updated the CF by adding an extra UTF8Type column with index_type: KEYS. Also, in addition to Search-xxx-* I now have a number of Search-f-xxxx-* and Search.64617465-f-xx.* files which I didn't use to have; is this OK?

Something definitely seems to be wrong with my sstables. Since this is a test node, I can afford to lose this dataset, but of course I'd like to find out what went wrong so it doesn't happen again (to me or others), so I hope there's something helpful you can extract from this information.
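On the hex-encoded keys: sstable2json prints row keys as hex-encoded bytes, and they can be decoded back to readable text if the keys were originally plain UTF-8. A minimal sketch; the sample value "64617465" is taken from the index filename mentioned above (Search.64617465-...), and decodes to "date":

```java
// Sketch of decoding a hex-encoded row key back to text, assuming the key
// bytes were originally plain UTF-8 (an assumption; binary keys would not
// decode to anything readable).
public class HexKeyDecode {
    static String dehex(String hex) {
        byte[] bytes = new byte[hex.length() / 2];
        for (int i = 0; i < bytes.length; i++) {
            bytes[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
        }
        return new String(bytes, java.nio.charset.StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(dehex("64617465")); // prints "date"
    }
}
```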



[jira] Commented: (CASSANDRA-2195) java.lang.RuntimeException: java.lang.NegativeArraySizeException

Posted by "Jonathan Ellis (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/CASSANDRA-2195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12996537#comment-12996537 ] 

Jonathan Ellis commented on CASSANDRA-2195:
-------------------------------------------

Another thing to try: turn off mmap'd I/O (set disk_access_mode: standard in cassandra.yaml) to see if it's another bug in the ByteBuffer layer like CASSANDRA-2165.
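As a minimal cassandra.yaml fragment (surrounding settings omitted; "standard" switches reads from memory-mapped files to buffered I/O):

```yaml
# cassandra.yaml: disable mmap'd I/O for data and index files.
# Other accepted values include auto, mmap, and mmap_index_only.
disk_access_mode: standard
```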


[jira] Resolved: (CASSANDRA-2195) java.lang.RuntimeException: java.lang.NegativeArraySizeException

Posted by "Jonathan Ellis (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/CASSANDRA-2195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Ellis resolved CASSANDRA-2195.
---------------------------------------

       Resolution: Duplicate
    Fix Version/s:     (was: 0.7.3)
         Assignee:     (was: Stu Hood)

Looks like CASSANDRA-2216 is the culprit, all right. The fix there is committed, and CASSANDRA-2217 is open to provide a tool that rebuilds sstables with current-version bloom filters.


[jira] Commented: (CASSANDRA-2195) java.lang.RuntimeException: java.lang.NegativeArraySizeException

Posted by "Jonathan Ellis (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/CASSANDRA-2195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12996507#comment-12996507 ] 

Jonathan Ellis commented on CASSANDRA-2195:
-------------------------------------------

I take it "after a few minutes" includes additional writes?

How big is your cluster?  Can you reproduce on a single-node cluster?

How can we narrow this down? Can you reproduce it with appropriate settings of contrib/stress, for instance? If not, can you give us a stripped-down sample of your application that reproduces the problem?


[jira] Commented: (CASSANDRA-2195) java.lang.RuntimeException: java.lang.NegativeArraySizeException

Posted by "HB (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/CASSANDRA-2195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12997754#comment-12997754 ] 

HB commented on CASSANDRA-2195:
-------------------------------

I should note that disk access mode was set to standard already.


[jira] Commented: (CASSANDRA-2195) java.lang.RuntimeException: java.lang.NegativeArraySizeException

Posted by "Sylvain Lebresne (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/CASSANDRA-2195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12997763#comment-12997763 ] 

Sylvain Lebresne commented on CASSANDRA-2195:
---------------------------------------------

HB, since it is a test node, would you mind applying the patch attached to CASSANDRA-2216, forcing a compaction again, and checking whether you can still reproduce?

As for your json2sstable problem, it is just due to too many open files. I'm not sure it is justified that the tool opens so many files (maybe json2sstable leaks file descriptors), but in any case this is not related to a potential corruption problem, and if needed you can probably make it work by increasing the allowed number of open files using ulimit. Right now, though, my money is on CASSANDRA-2216.
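The ulimit workaround mentioned above can be sketched as follows; the 4096 value is only an example, and the limit actually needed depends on how many sstable files json2sstable ends up opening:

```shell
# Show the current per-process soft limit on open file descriptors
# (json2sstable was hitting this limit).
ulimit -n

# Raise the soft limit for this shell session before re-running the tool.
# This must stay at or below the hard limit shown by `ulimit -Hn`;
# raising the hard limit itself typically needs root (e.g. via
# /etc/security/limits.conf on Debian).
ulimit -n 4096

# Verify the new limit, then re-run json2sstable from this same shell.
ulimit -n
```

Note that the limit only applies to processes started from the shell where it was raised, so json2sstable has to be launched from that session.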


[jira] Updated: (CASSANDRA-2195) java.lang.RuntimeException: java.lang.NegativeArraySizeException

Posted by "Jonathan Ellis (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/CASSANDRA-2195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Ellis updated CASSANDRA-2195:
--------------------------------------

         Priority: Blocker  (was: Major)
    Fix Version/s: 0.7.3


[jira] Assigned: (CASSANDRA-2195) java.lang.RuntimeException: java.lang.NegativeArraySizeException

Posted by "Stu Hood (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/CASSANDRA-2195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Stu Hood reassigned CASSANDRA-2195:
-----------------------------------

    Assignee: Stu Hood


[jira] Commented: (CASSANDRA-2195) java.lang.RuntimeException: java.lang.NegativeArraySizeException

Posted by "HB (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/CASSANDRA-2195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12996529#comment-12996529 ] 

HB commented on CASSANDRA-2195:
-------------------------------

Thanks for your reply, Jonathan. This is a single-node cluster (recently upgraded from 0.6). I have a dev version of our web app; when I test locally with a small number of requests everything is OK, but if I switch to our live app it starts throwing exceptions fairly quickly. There are also no preceding errors, just that exception for at least a large number of mutations. What our app does is try to get a row and, if it doesn't exist, write the row afterwards (we're using Cassandra for simple caching): about 30 of these per page load and probably a few dozen loads per second, so a few hundred reads and writes under load. I will try your suggestions on Monday, and I'll also test with just reads and just writes to see whether I can narrow it down to one of those as opposed to a combination. I'll post my findings on Monday, along with the relevant parts of the app (the Environment field of this issue has most of it, though; it's really quite simple: a simple key/value insert and a simple read, using the Pelops library), since our shop is closed over the weekend.
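The access pattern described above (read a row and, on a miss, write it) can be sketched generically. This is not the actual app code: the Pelops calls from the Environment field (selector.getColumnsFromRow, mutator.writeColumns) are replaced here with a hypothetical in-memory map stand-in, so the shape of the logic is runnable without a cluster:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Generic sketch of the read-then-write caching pattern described above.
// In the real app the map operations below would be Pelops calls against
// Cassandra; the map is only a stand-in for the column family.
public class Cache {
    private static final Map<String, String> columnFamily = new ConcurrentHashMap<>();

    // Try to read the row; on a miss, store the freshly computed value and
    // return it. computeIfAbsent makes the check-then-write atomic here;
    // against Cassandra the read and the write are two separate calls, so
    // two clients can both miss and both write (harmless for a cache).
    public static String getOrStore(String key, String value) {
        return columnFamily.computeIfAbsent(key, k -> value);
    }

    public static void main(String[] args) {
        System.out.println(getOrStore("page:42", "rendered-html")); // miss: stores and returns the value
        System.out.println(getOrStore("page:42", "other"));         // hit: still returns "rendered-html"
    }
}
```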
