Posted to commits@cassandra.apache.org by "Eric Lubow (JIRA)" <ji...@apache.org> on 2014/01/01 17:53:12 UTC

[jira] [Commented] (CASSANDRA-4206) AssertionError: originally calculated column size of 629444349 but now it is 588008950

    [ https://issues.apache.org/jira/browse/CASSANDRA-4206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13859902#comment-13859902 ] 

Eric Lubow commented on CASSANDRA-4206:
---------------------------------------

We are seeing this as well with 1.2.11.  As was mentioned above, knowing which CF is affected would be very useful here.  It seems to be happening to hints.  The only other out-of-the-ordinary activity we are seeing is thousands of hint SSTables being transferred at a time (a quick way to eyeball the hints CF is noted after the trace below).  Here is the Java error:

{quote}
ERROR [HintedHandoff:6] 2014-01-01 16:45:19,914 CassandraDaemon.java (line 191) Exception in thread Thread[HintedHandoff:6,1,main]
java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.AssertionError: originally calculated column size of 1028119265 but now it is 1028119453
	at org.apache.cassandra.db.HintedHandOffManager.doDeliverHintsToEndpoint(HintedHandOffManager.java:436)
	at org.apache.cassandra.db.HintedHandOffManager.deliverHintsToEndpoint(HintedHandOffManager.java:282)
	at org.apache.cassandra.db.HintedHandOffManager.access$300(HintedHandOffManager.java:90)
	at org.apache.cassandra.db.HintedHandOffManager$4.run(HintedHandOffManager.java:502)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:724)
Caused by: java.util.concurrent.ExecutionException: java.lang.AssertionError: originally calculated column size of 1028119265 but now it is 1028119453
	at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
	at java.util.concurrent.FutureTask.get(FutureTask.java:111)
	at org.apache.cassandra.db.HintedHandOffManager.doDeliverHintsToEndpoint(HintedHandOffManager.java:432)
	... 6 more
Caused by: java.lang.AssertionError: originally calculated column size of 1028119265 but now it is 1028119453
	at org.apache.cassandra.db.compaction.LazilyCompactedRow.write(LazilyCompactedRow.java:135)
	at org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:160)
	at org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:162)
	at org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
	at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
	at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:58)
	at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60)
	at org.apache.cassandra.db.compaction.CompactionManager$7.runMayThrow(CompactionManager.java:442)
	at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
	at java.util.concurrent.FutureTask.run(FutureTask.java:166)
	... 3 more
{quote}
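
In case it helps others correlate with hint volume: we have been eyeballing the hints CF with plain nodetool cfstats (an illustrative invocation, nothing special):

{noformat}
# SSTable count and sizes for the system hints CF; 1.2 cfstats labels the block "Column Family: HintsColumnFamily"
nodetool -h <node> cfstats | grep -A 20 'Column Family: HintsColumnFamily'
{noformat}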

Multithreaded compaction is set to false on all nodes in our cluster, and we don't have any pending compactions anywhere.  We are just seeing this error a lot in the logs.  It seems to happen more frequently during bootstraps or repairs that have a lot of work to do.
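
For reference, the two things we verified per node look roughly like this (illustrative; the yaml key and the nodetool command are the stock 1.2 ones):

{noformat}
# cassandra.yaml -- parallel compaction disabled
multithreaded_compaction: false

# check for a compaction backlog
nodetool -h <node> compactionstats
{noformat}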

> AssertionError: originally calculated column size of 629444349 but now it is 588008950
> --------------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-4206
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-4206
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>    Affects Versions: 1.0.9
>         Environment: Debian Squeeze Linux, kernel 2.6.32, sun-java6-bin 6.26-0squeeze1
>            Reporter: Patrik Modesto
>
> I have a 4-node cluster of Cassandra 1.0.9. There is an rfTest3 keyspace with RF=3 and one CF with two secondary indexes. I'm importing data into this CF using a Hadoop MapReduce job; each row has fewer than 10 columns. From JMX:
> MaxRowSize:  1597
> MeanRowSize: 369
> And there are some tens of millions of rows.
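> (For reference, the same row-size numbers can also be read without a JMX client; an illustrative check, not output from my cluster:)
> {noformat}
> # "Compacted row maximum/mean size" in cfstats correspond to the MaxRowSize/MeanRowSize MBean attributes
> nodetool -h <node> cfstats | grep 'Compacted row'
> {noformat}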
> It's write-heavy usage and there is big pressure on each node; there are quite a few dropped mutations on each node. After ~12 hours of inserting I see these assertion exceptions on 3 out of 4 nodes:
> {noformat}
> ERROR 06:25:40,124 Fatal exception in thread Thread[HintedHandoff:1,1,main]
> java.lang.RuntimeException: java.util.concurrent.ExecutionException:
> java.lang.AssertionError: originally calculated column size of 629444349 but now it is 588008950
>        at org.apache.cassandra.db.HintedHandOffManager.deliverHintsToEndpointInternal(HintedHandOffManager.java:388)
>        at org.apache.cassandra.db.HintedHandOffManager.deliverHintsToEndpoint(HintedHandOffManager.java:256)
>        at org.apache.cassandra.db.HintedHandOffManager.access$300(HintedHandOffManager.java:84)
>        at org.apache.cassandra.db.HintedHandOffManager$3.runMayThrow(HintedHandOffManager.java:437)
>        at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>        at java.lang.Thread.run(Thread.java:662)
> Caused by: java.util.concurrent.ExecutionException:
> java.lang.AssertionError: originally calculated column size of
> 629444349 but now it is 588008950
>        at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
>        at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>        at org.apache.cassandra.db.HintedHandOffManager.deliverHintsToEndpointInternal(HintedHandOffManager.java:384)
>        ... 7 more
> Caused by: java.lang.AssertionError: originally calculated column size
> of 629444349 but now it is 588008950
>        at org.apache.cassandra.db.compaction.LazilyCompactedRow.write(LazilyCompactedRow.java:124)
>        at org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:160)
>        at org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:161)
>        at org.apache.cassandra.db.compaction.CompactionManager$7.call(CompactionManager.java:380)
>        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>        ... 3 more
> {noformat}
> A few lines regarding hints from the output.log:
> {noformat}
>  INFO 06:21:26,202 Compacting large row system/HintsColumnFamily:70000000000000000000000000000000 (1712834057 bytes) incrementally
>  INFO 06:22:52,610 Compacting large row system/HintsColumnFamily:10000000000000000000000000000000 (2616073981 bytes) incrementally
>  INFO 06:22:59,111 flushing high-traffic column family CFS(Keyspace='system', ColumnFamily='HintsColumnFamily') (estimated 305147360 bytes)
>  INFO 06:22:59,813 Enqueuing flush of Memtable-HintsColumnFamily@833933926(3814342/305147360 serialized/live bytes, 7452 ops)
>  INFO 06:22:59,814 Writing Memtable-HintsColumnFamily@833933926(3814342/305147360 serialized/live bytes, 7452 ops)
> {noformat}
> I think the problem may be somehow connected to an IntegerType secondary index. I had a different problem with a CF with two secondary indexes, the first UTF8Type, the second IntegerType. After a few hours of inserting data in the afternoon and a repair+compact at midnight, the next day I couldn't find any row using the IntegerType secondary index. The output looked like this:
> {noformat}
> [default@rfTest3] get IndexTest where col1 = '3230727:http://zaskolak.cz/download.php';
> -------------------
> RowKey: 3230727:8383582:http://zaskolak.cz/download.php
> => (column=col1, value=3230727:http://zaskolak.cz/download.php, timestamp=1335348630332000)
> => (column=col2, value=8383582, timestamp=1335348630332000)
> -------------------
> RowKey: 3230727:8383583:http://zaskolak.cz/download.php
> => (column=col1, value=3230727:http://zaskolak.cz/download.php, timestamp=1335348449078000)
> => (column=col2, value=8383583, timestamp=1335348449078000)
> -------------------
> RowKey: 3230727:8383579:http://zaskolak.cz/download.php
> => (column=col1, value=3230727:http://zaskolak.cz/download.php, timestamp=1335348778577000)
> => (column=col2, value=8383579, timestamp=1335348778577000)
> 3 Rows Returned.
> Elapsed time: 292 msec(s).
> [default@rfTest3] get IndexTest where col2 = 8383583;
> 0 Row Returned.
> Elapsed time: 7 msec(s).
> {noformat}
> You can see there really is an 8383583 in col2 in one of the listed rows, but the search by secondary index returns nothing.
> The AssertionError also happened only on the CF with the IntegerType secondary index. There were also secondary indexes of UTF8Type and
> LongType. This is the first time I've tried secondary indexes of a type other than UTF8Type.
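> To make the setup concrete, the shape of that CF is roughly the following (a cassandra-cli sketch for illustration, not my exact schema):
> {noformat}
> create column family IndexTest
>   with comparator = UTF8Type
>   and key_validation_class = UTF8Type
>   and column_metadata = [
>     {column_name: col1, validation_class: UTF8Type, index_type: KEYS},
>     {column_name: col2, validation_class: IntegerType, index_type: KEYS}
>   ];
> {noformat}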
> Regards,
> Patrik


