Posted to commits@cassandra.apache.org by "Shotaro Kamio (JIRA)" <ji...@apache.org> on 2011/03/08 05:26:59 UTC

[jira] Commented: (CASSANDRA-2282) ReadCallback AssertionError: resolver.getMessageCount() <= endpoints.size()

    [ https://issues.apache.org/jira/browse/CASSANDRA-2282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13003804#comment-13003804 ] 

Shotaro Kamio commented on CASSANDRA-2282:
------------------------------------------

In our case, the same AssertionError occurs on a multi-node cluster with replication factor = 3 (0.7.3 release version).
Feeding data into Cassandra looks OK (consistency level = QUORUM), although an UnavailableException was received via Hector 0.7.0-28 several times. It warns that there may not be enough replicas to satisfy the consistency level (see the stack trace below). It might be related to this problem, but I am not sure.
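
For reference: with RF=3, QUORUM requires floor(3/2) + 1 = 2 live replicas, and the coordinator rejects a request with UnavailableException up front when fewer are alive. A minimal sketch of that check (class and method names here are illustrative assumptions, not the actual Cassandra source):

    // Sketch of the coordinator-side availability check for QUORUM.
    // Illustrative only; names are assumptions, not org.apache.cassandra code.
    final class QuorumAvailability {
        static int blockFor(int replicationFactor) {
            return replicationFactor / 2 + 1;   // majority: 2 of RF=3
        }

        static boolean isAvailable(int liveReplicas, int replicationFactor) {
            // The coordinator throws UnavailableException before sending any
            // replica messages when this returns false.
            return liveReplicas >= blockFor(replicationFactor);
        }
    }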

When querying data, the AssertionError occurs in Cassandra and the client gets a timeout (TimedOutException).
Our client issues several query types against different column families; the timeout occurs most often on secondary index queries.
The error is logged only on the host the client connects to via Thrift (according to the timestamps in the logs).

Another observation comes from retrieving data via the CLI.
A query like "list Standard1 limit 10" returns results normally, but Cassandra logs the AssertionError on that host; the other nodes do not log it.
(When the query returns "null", which I guess means there are not enough replicas, this exception is very likely to be logged.)
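
The assertion in the subject guards a simple invariant: a read coordinator should never record more replica responses than the number of endpoints it messaged. A simplified sketch of that invariant (this is not the real org.apache.cassandra.service.ReadCallback; it only illustrates how a duplicate or misrouted response would trip the assert):

    // Simplified sketch of the invariant behind the AssertionError.
    import java.util.List;
    import java.util.concurrent.atomic.AtomicInteger;

    final class ReadCallbackSketch {
        private final List<String> endpoints;               // replicas we messaged
        private final AtomicInteger received = new AtomicInteger();

        ReadCallbackSketch(List<String> endpoints) {
            this.endpoints = endpoints;
        }

        void response(String message) {
            int count = received.incrementAndGet();
            // If the same response is delivered twice, or a stale callback is
            // reused for a new request, count exceeds endpoints.size() and
            // this assertion fires, as in the stack traces on this ticket.
            assert count <= endpoints.size() : count + " > " + endpoints.size();
        }
    }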

-------------------
* UnavailableException stack trace received via Hector 0.7.0-28 when feeding data into Cassandra:

me.prettyprint.hector.api.exceptions.HUnavailableException: : May not be enough replicas present to handle consistency level.
        at me.prettyprint.cassandra.service.ExceptionsTranslatorImpl.translate(ExceptionsTranslatorImpl.java:52)
        at me.prettyprint.cassandra.service.KeyspaceServiceImpl$1.execute(KeyspaceServiceImpl.java:95)
        at me.prettyprint.cassandra.service.KeyspaceServiceImpl$1.execute(KeyspaceServiceImpl.java:88)
        at me.prettyprint.cassandra.service.Operation.executeAndSetResult(Operation.java:101)
        at me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:221)
        at me.prettyprint.cassandra.service.KeyspaceServiceImpl.operateWithFailover(KeyspaceServiceImpl.java:129)
        at me.prettyprint.cassandra.service.KeyspaceServiceImpl.batchMutate(KeyspaceServiceImpl.java:100)
        at me.prettyprint.cassandra.service.KeyspaceServiceImpl.batchMutate(KeyspaceServiceImpl.java:106)
        at me.prettyprint.cassandra.model.MutatorImpl$2.doInKeyspace(MutatorImpl.java:203)
        at me.prettyprint.cassandra.model.MutatorImpl$2.doInKeyspace(MutatorImpl.java:200)
        at me.prettyprint.cassandra.model.KeyspaceOperationCallback.doInKeyspaceAndMeasure(KeyspaceOperationCallback.java:20)
        at me.prettyprint.cassandra.model.ExecutingKeyspace.doExecute(ExecutingKeyspace.java:85)
        at me.prettyprint.cassandra.model.MutatorImpl.execute(MutatorImpl.java:200)
        at jp.co.rakuten.gsp.cassandra_connector.feeder.CassandraFeeder.batchInsert(CassandraFeeder.java:506)
        at jp.co.rakuten.gsp.purchase_history.cassandra_connector.PHCassandraFeeder.consume(PHCassandraFeeder.java:240)
        at jp.co.rakuten.gsp.cassandra_connector.feeder.CassandraFeeder.process(CassandraFeeder.java:330)
        at jp.co.rakuten.gsp.cassandra_connector.feeder.Feeder.run(Feeder.java:164)
        at java.lang.Thread.run(Thread.java:662)
Caused by: UnavailableException()
        at org.apache.cassandra.thrift.Cassandra$batch_mutate_result.read(Cassandra.java:16485)
        at org.apache.cassandra.thrift.Cassandra$Client.recv_batch_mutate(Cassandra.java:916)
        at org.apache.cassandra.thrift.Cassandra$Client.batch_mutate(Cassandra.java:890)
        at me.prettyprint.cassandra.service.KeyspaceServiceImpl$1.execute(KeyspaceServiceImpl.java:93)
        ... 16 more



> ReadCallback AssertionError: resolver.getMessageCount() <= endpoints.size()
> ---------------------------------------------------------------------------
>
>                 Key: CASSANDRA-2282
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-2282
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>    Affects Versions: 0.7.3
>            Reporter: Tyler Hobbs
>
> In a three-node cluster with RF=2, when trying to page through all rows with get_range_slices() at CL.ONE, I get timeouts on the client side.  Looking at the Cassandra logs, all of the nodes show the following AssertionError repeatedly:
> {noformat}
> ERROR [RequestResponseStage:2] 2011-03-07 19:10:27,527 DebuggableThreadPoolExecutor.java (line 103) Error in ThreadPoolExecutor
> java.lang.AssertionError
>         at org.apache.cassandra.service.ReadCallback.response(ReadCallback.java:127)
>         at org.apache.cassandra.net.ResponseVerbHandler.doVerb(ResponseVerbHandler.java:49)
>         at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:72)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>         at java.lang.Thread.run(Thread.java:636)
> ERROR [RequestResponseStage:2] 2011-03-07 19:10:27,529 AbstractCassandraDaemon.java (line 114) Fatal exception in thread Thread[RequestResponseStage:2,5,main]
> java.lang.AssertionError
>         at org.apache.cassandra.service.ReadCallback.response(ReadCallback.java:127)
>         at org.apache.cassandra.net.ResponseVerbHandler.doVerb(ResponseVerbHandler.java:49)
>         at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:72)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>         at java.lang.Thread.run(Thread.java:636)
> {noformat}
> The nodes are all running 0.7.3.  The cluster was at size 3 before any data was inserted, and everything else appears perfectly healthy.
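
For reference, the paging pattern described in the report looks roughly like this against the 0.7 Thrift bindings (a hedged sketch: "client" is assumed to be an already-opened Cassandra.Client, and connection setup, authentication, and retries are omitted):

    // Sketch of paging all rows with get_range_slices at CL.ONE (0.7 Thrift API).
    import java.nio.ByteBuffer;
    import java.util.List;
    import org.apache.cassandra.thrift.*;

    final class RangePager {
        static void pageAllRows(Cassandra.Client client) throws Exception {
            ColumnParent parent = new ColumnParent("Standard1");
            SlicePredicate predicate = new SlicePredicate();
            predicate.setSlice_range(new SliceRange(
                    ByteBuffer.allocate(0), ByteBuffer.allocate(0), false, 100));

            ByteBuffer start = ByteBuffer.allocate(0);
            while (true) {
                KeyRange range = new KeyRange(100);
                range.setStart_key(start);
                range.setEnd_key(ByteBuffer.allocate(0));
                List<KeySlice> page = client.get_range_slices(
                        parent, predicate, range, ConsistencyLevel.ONE);
                if (page.isEmpty())
                    break;
                // Process rows here. Every page after the first repeats the
                // previous page's last key as its first row; skip it.
                if (page.size() < 100)
                    break;          // short page: no more rows
                start = page.get(page.size() - 1).key;
            }
        }
    }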
