Posted to commits@cassandra.apache.org by "Daniel Kador (JIRA)" <ji...@apache.org> on 2013/12/21 21:46:10 UTC

[jira] [Comment Edited] (CASSANDRA-4206) AssertionError: originally calculated column size of 629444349 but now it is 588008950

    [ https://issues.apache.org/jira/browse/CASSANDRA-4206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13854930#comment-13854930 ] 

Daniel Kador edited comment on CASSANDRA-4206 at 12/21/13 8:44 PM:
-------------------------------------------------------------------

We're seeing the AssertionError in LazilyCompactedRow a lot too, on 1.2.8.  It's not a hinted handoff problem for us.  We just have a large CF (close to 1TB on disk) that has had most of its row keys deleted at this point.  There are lots of pending compactions on many nodes, and these errors seem to stymie progress.  We're hoping to get those compactions to finish so we can reclaim disk space...

Has anybody figured out a decent workaround?  Should we try disabling multithreaded_compaction?  Looks like folks are still seeing the errors with that off (it's on for us).
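
For reference, the setting lives in cassandra.yaml; assuming stock 1.2 defaults it ships disabled, so disabling it just restores the default:

{noformat}
# cassandra.yaml -- parallelizes a single compaction task across cores;
# false is the 1.2.x default, so this line restores stock behaviour
multithreaded_compaction: false
{noformat}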

How stupid would running Cassandra with assertions off be?
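
If anyone does weigh that option: Cassandra turns assertions on via a single -ea line in conf/cassandra-env.sh, so running without them is a one-line change plus a restart. The obvious caveat is that this particular assertion guards a serialization-size invariant, so silencing it could trade a crash for a silently malformed SSTable:

{noformat}
# conf/cassandra-env.sh -- assertions are on by default; commenting this
# out runs the JVM with assertions disabled after a restart
JVM_OPTS="$JVM_OPTS -ea"
{noformat}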

Alternatively, has anybody who's had this problem attempted to upgrade to 2.0 and had the problem fixed?



> AssertionError: originally calculated column size of 629444349 but now it is 588008950
> --------------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-4206
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-4206
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>    Affects Versions: 1.0.9
>         Environment: Debian Squeeze Linux, kernel 2.6.32, sun-java6-bin 6.26-0squeeze1
>            Reporter: Patrik Modesto
>
> I have a 4-node cluster of Cassandra 1.0.9. There is an rfTest3 keyspace with RF=3 and one CF with two secondary indexes. I'm importing data into this CF using a Hadoop MapReduce job; each row has fewer than 10 columns. From JMX:
> MaxRowSize:  1597
> MeanRowSize: 369
> And there are some tens of millions of rows.
> It's a write-heavy workload and there is big pressure on each node; quite a few mutations are dropped on each node. After ~12 hours of inserting I see these assertion exceptions on 3 out of 4 nodes:
> {noformat}
> ERROR 06:25:40,124 Fatal exception in thread Thread[HintedHandoff:1,1,main]
> java.lang.RuntimeException: java.util.concurrent.ExecutionException:
> java.lang.AssertionError: originally calculated column size of 629444349 but now it is 588008950
>        at org.apache.cassandra.db.HintedHandOffManager.deliverHintsToEndpointInternal(HintedHandOffManager.java:388)
>        at org.apache.cassandra.db.HintedHandOffManager.deliverHintsToEndpoint(HintedHandOffManager.java:256)
>        at org.apache.cassandra.db.HintedHandOffManager.access$300(HintedHandOffManager.java:84)
>        at org.apache.cassandra.db.HintedHandOffManager$3.runMayThrow(HintedHandOffManager.java:437)
>        at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>        at java.lang.Thread.run(Thread.java:662)
> Caused by: java.util.concurrent.ExecutionException:
> java.lang.AssertionError: originally calculated column size of
> 629444349 but now it is 588008950
>        at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
>        at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>        at org.apache.cassandra.db.HintedHandOffManager.deliverHintsToEndpointInternal(HintedHandOffManager.java:384)
>        ... 7 more
> Caused by: java.lang.AssertionError: originally calculated column size
> of 629444349 but now it is 588008950
>        at org.apache.cassandra.db.compaction.LazilyCompactedRow.write(LazilyCompactedRow.java:124)
>        at org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:160)
>        at org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:161)
>        at org.apache.cassandra.db.compaction.CompactionManager$7.call(CompactionManager.java:380)
>        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>        ... 3 more
> {noformat}
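> For context on what that assertion guards: an over-limit row is compacted in two serialization passes, and the size computed in the first pass is recorded before the second pass writes the bytes for real, so the two passes must agree exactly. A minimal sketch of the invariant, with made-up names (not Cassandra's actual LazilyCompactedRow code):
> {noformat}
> import java.io.IOException;
>
> interface DataSink { void write(byte[] b) throws IOException; }
>
> interface RowSerializer {
>     /** Serializes the merged row, returning the number of bytes written. */
>     long serialize(DataSink out) throws IOException;
> }
>
> final class TwoPassRowWriter {
>     long write(RowSerializer row, DataSink sstable) throws IOException {
>         DataSink devNull = b -> {};                // pass 1: size-only dry run
>         long expectedSize = row.serialize(devNull);
>         long actualSize = row.serialize(sstable);  // pass 2: the real write
>         // If the merged row does not serialize identically twice (for
>         // example because the inputs changed between passes), the sizes
>         // disagree and this fires with exactly the message seen above.
>         assert actualSize == expectedSize :
>             "originally calculated column size of " + expectedSize
>             + " but now it is " + actualSize;
>         return actualSize;
>     }
> }
> {noformat}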
> Few lines regarding Hints from the output.log:
> {noformat}
>  INFO 06:21:26,202 Compacting large row system/HintsColumnFamily:70000000000000000000000000000000 (1712834057 bytes) incrementally
>  INFO 06:22:52,610 Compacting large row system/HintsColumnFamily:10000000000000000000000000000000 (2616073981 bytes) incrementally
>  INFO 06:22:59,111 flushing high-traffic column family CFS(Keyspace='system', ColumnFamily='HintsColumnFamily') (estimated 305147360 bytes)
>  INFO 06:22:59,813 Enqueuing flush of Memtable-HintsColumnFamily@833933926(3814342/305147360 serialized/live bytes, 7452 ops)
>  INFO 06:22:59,814 Writing Memtable-HintsColumnFamily@833933926(3814342/305147360 serialized/live bytes, 7452 ops)
> {noformat}
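> The "incrementally" in those lines is the telling detail: only rows over the in-memory compaction limit take the two-pass LazilyCompactedRow path from the stack trace, which is why multi-gigabyte hint rows like these are the ones that can hit the assertion. Assuming era-typical defaults, the threshold is a cassandra.yaml knob:
> {noformat}
> # cassandra.yaml -- rows larger than this are compacted incrementally
> # (the LazilyCompactedRow path); 64 was the stock default at the time
> in_memory_compaction_limit_in_mb: 64
> {noformat}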
> I think the problem may be somehow connected to an IntegerType secondary index. I had a different problem with a CF with two secondary indexes, the first UTF8Type, the second IntegerType. After a few hours of inserting data in the afternoon and a midnight repair+compact, the next day I couldn't find any row using the IntegerType secondary index. The output was like this:
> {noformat}
> [default@rfTest3] get IndexTest where col1 = '3230727:http://zaskolak.cz/download.php';
> -------------------
> RowKey: 3230727:8383582:http://zaskolak.cz/download.php
> => (column=col1, value=3230727:http://zaskolak.cz/download.php, timestamp=1335348630332000)
> => (column=col2, value=8383582, timestamp=1335348630332000)
> -------------------
> RowKey: 3230727:8383583:http://zaskolak.cz/download.php
> => (column=col1, value=3230727:http://zaskolak.cz/download.php, timestamp=1335348449078000)
> => (column=col2, value=8383583, timestamp=1335348449078000)
> -------------------
> RowKey: 3230727:8383579:http://zaskolak.cz/download.php
> => (column=col1, value=3230727:http://zaskolak.cz/download.php, timestamp=1335348778577000)
> => (column=col2, value=8383579, timestamp=1335348778577000)
> 3 Rows Returned.
> Elapsed time: 292 msec(s).
> [default@rfTest3] get IndexTest where col2 = 8383583;
> 0 Row Returned.
> Elapsed time: 7 msec(s).
> {noformat}
> You can see there really is an 8383583 in col2 in one of the listed rows, but the search by secondary index returns nothing.
> The AssertionError also happened only on the CF with the IntegerType secondary index. There were also secondary indexes of the UTF8Type and
> LongType types. This is the first time I've tried secondary indexes of a type other than UTF8Type.
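> For whoever digs into the IntegerType angle: IntegerType stores values as minimal-length two's-complement big-endian bytes (what java.math.BigInteger.toByteArray() produces), and an index lookup must match those bytes exactly, so any disagreement about the encoding loses rows silently. A small illustration of that encoding, assuming BigInteger semantics:
> {noformat}
> import java.math.BigInteger;
>
> public class IntegerTypeEncoding {
>     // IntegerType-style value encoding: minimal-length two's-complement,
>     // big-endian -- the exact byte[] an index lookup has to match.
>     static byte[] encode(long value) {
>         return BigInteger.valueOf(value).toByteArray();
>     }
>
>     public static void main(String[] args) {
>         // 8383583 == 0x7fec5f -> three bytes: 7f ec 5f
>         for (byte b : encode(8383583L)) {
>             System.out.printf("%02x ", b);
>         }
>         System.out.println();
>     }
> }
> {noformat}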
> Regards,
> Patrik


