Posted to commits@cassandra.apache.org by "Nikos Mitsis (JIRA)" <ji...@apache.org> on 2018/09/17 08:30:00 UTC

[jira] [Commented] (CASSANDRA-14672) After deleting data in 3.11.3, reads fail with "open marker and close marker have different deletion times"

    [ https://issues.apache.org/jira/browse/CASSANDRA-14672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16617214#comment-16617214 ] 

Nikos Mitsis commented on CASSANDRA-14672:
------------------------------------------

We did some testing on a replica environment. Specifically, we added some printf statements to check the timestamps of the open and close markers that Cassandra complains about.

The replication factor is 3; nodes 6, 8 and 9 contain the data. Nodes 8 and 9 throw the exception; node 6, when queried with consistency ONE, does not fail and returns proper data (see below for why).

On node 8 and node 9 the following timestamps are found:

Printf:

{noformat}
==> openMarkerDeletionTime: null
==> openMarkerDeletionTime: deletedAt=1537103654634113, localDeletion=1537103654
==> deletionTime: deletedAt=1530205388555918, localDeletion=1530205388
{noformat}

Exception:
{noformat}
WARN [ReadStage-1] 2018-09-16 14:40:44,252 AbstractLocalAwareExecutorService.java:167 - Uncaught exception on thread Thread[ReadStage-1,5,main]: {}
java.lang.IllegalStateException: ==> UnfilteredRowIterator for pvpms_mevents.measurement_events_dbl has an illegal RT bounds sequence: open marker and close marker have different deletion times [deletedAt=1537103654634113, localDeletion=1537103654 deletedAt=1530205388555918, localDeletion=1530205388]
{noformat}

Converted timestamps:
{noformat}
openDeletionTime: Sun Sep 16 13:14:14 UTC 2018
closeDeletionTime: Thu Jun 28 17:03:08 UTC 2018
{noformat}

The DELETE command from the issue description was run on Sep 16.
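For reference, the conversion of the raw marker values is straightforward: deletedAt is a write timestamp in microseconds since the epoch, localDeletion is a wall-clock time in seconds. A minimal sketch of the conversion we used:

```python
from datetime import datetime, timezone

def to_utc(deleted_at_us):
    """Convert a deletedAt value (microseconds since epoch) to a UTC datetime."""
    sec, us = divmod(deleted_at_us, 1_000_000)  # integer split avoids float precision loss
    return datetime.fromtimestamp(sec, tz=timezone.utc).replace(microsecond=us)

print(to_utc(1537103654634113))  # open marker:  2018-09-16 13:14:14.634113+00:00
print(to_utc(1530205388555918))  # close marker: 2018-06-28 17:03:08.555918+00:00
```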

On node 6:

Printf:
{noformat}
==> openMarkerDeletionTime: null
==> openMarkerDeletionTime: deletedAt=1537103654634113, localDeletion=1537103654
==> deletionTime: deletedAt=1537103654634113, localDeletion=1537103654
{noformat}

Converted timestamps:
{noformat}
Sun Sep 16 13:14:14 UTC 2018
{noformat}

No Exception!

We did a JSON dump of the affected data from nodes 8 & 9 and found a range_tombstone_bound with a start/end timestamp of "2018-06-28T17:03:08Z" that contains data from July and August (see rows below). On node 6 the same data are not within a tombstone marker (it's the same JSON without the range_tombstone_bound).

{noformat}
   "partition" : {
      {
        "type" : "range_tombstone_bound",
        "start" : {
          "type" : "exclusive",
          "clustering" : [ "2018-06-27 04:55:00.000Z" ],
          "deletion_info" : { "marked_deleted" : "2018-06-28T17:03:08.555918Z", "local_delete_time" : "2018-06-28T17:03:08Z" }
        }
      },
      {
        "type" : "row",
        "position" : 83860313,
        "clustering" : [ "2018-06-27 05:00:00.000Z" ],
        "liveness_info" : { "tstamp" : "2018-06-28T19:45:30.803293Z" },
        "cells" : [
          { "name" : "event_reception_time", "value" : "2018-06-28 19:45:30.784Z" },
          { "name" : "quality", "value" : 100.0 },
          { "name" : "value", "value" : 408307.66 }
        ]
      },
…
      {
        "type" : "row",
        "position" : 83953463,
        "clustering" : [ "2018-07-19 03:45:00.000Z" ],
        "liveness_info" : { "tstamp" : "2018-07-19T03:46:29.195118Z" },
        "cells" : [
          { "name" : "event_reception_time", "value" : "2018-07-19 03:46:29.193Z" },
          { "name" : "quality", "value" : 100.0 },
          { "name" : "value", "value" : 593846.06 }
        ]
      },
…
      {
        "type" : "row",
        "position" : 84054985,
        "clustering" : [ "2018-08-11 04:00:00.000Z" ],
        "liveness_info" : { "tstamp" : "2018-08-11T04:01:15.708470Z" },
        "cells" : [
          { "name" : "event_reception_time", "value" : "2018-08-11 04:01:15.703Z" },
          { "name" : "quality", "value" : 100.0 },
          { "name" : "value", "value" : 372654.53 }
        ]
      },
      {
        "type" : "range_tombstone_bound",
        "end" : {
          "type" : "inclusive",
          "deletion_info" : { "marked_deleted" : "2018-06-28T17:03:08.555918Z", "local_delete_time" : "2018-06-28T17:03:08Z" }
        }
      }
{noformat}
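The invariant that trips here can be sketched as follows. This is a hypothetical simplification of what RTBoundValidator enforces, not Cassandra's actual code: every close bound must carry the same deletion time as the open bound it closes, and no bound may be left open at the end of the partition.

```python
def validate_rt_bounds(markers):
    """markers: list of ('open'|'close', (deleted_at, local_deletion)) tuples."""
    open_dt = None
    for kind, dt in markers:
        if kind == 'open':
            open_dt = dt
        else:  # close marker must match the open marker's deletion time
            if open_dt is None or open_dt != dt:
                raise ValueError(
                    "open marker and close marker have different deletion times "
                    f"[{open_dt} {dt}]")
            open_dt = None
    if open_dt is not None:
        raise ValueError("expected all RTs to be closed, but the last one is open")

# Nodes 8 & 9: open bound from the Sep 16 DELETE, close bound from Jun 28
bad = [('open',  (1537103654634113, 1537103654)),
       ('close', (1530205388555918, 1530205388))]
```

With the Sep 16 open bound paired against the Jun 28 close bound seen on nodes 8 and 9, this check fails exactly as in the logged exception; with node 6's matching pair it passes.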

In the meantime we have downgraded to Cassandra 3.11.2. Shouldn't these inconsistencies at least be handled with more graceful assertions?
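For context, the DELETE statement in the quoted report below restricts the full partition key (measurement_source_id, year) with IN lists plus a clustering-column lower bound, so it produces one open-ended range tombstone per affected partition. An illustrative sketch (the names and structure here are ours, not Cassandra internals):

```python
from itertools import product

# Two of the nine source ids from the DELETE statement, for brevity
source_ids = [
    "9df798a2-6337-11e8-b52b-42010afa015a",
    "a08b8042-6337-11e8-b52b-42010afa015a",
    # ... the remaining ids from the DELETE statement
]
years = [2018]

def range_tombstones(source_ids, years, start):
    """One tombstone per (measurement_source_id, year) partition."""
    return [
        {"partition": (sid, year),
         "start": (start, "inclusive"),
         "end": None}  # open-ended: covers every later measurement_time
        for sid, year in product(source_ids, years)
    ]

rts = range_tombstones(source_ids, years, "2018-07-19 04:00:00")
print(len(rts))  # 2 in this trimmed example, one per partition
```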


> After deleting data in 3.11.3, reads fail with "open marker and close marker have different deletion times"
> -----------------------------------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-14672
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-14672
>             Project: Cassandra
>          Issue Type: Bug
>         Environment: CentOS 7, GCE, 9 nodes, 4TB disk/~2TB full each, level compaction, timeseries data
>            Reporter: Spiros Ioannou
>            Priority: Blocker
>
> We had 3.11.0, then we upgraded to 3.11.3 last week. We routinely perform deletions such as the one described below. After upgrading we ran the following deletion query:
>  
> {code:java}
> DELETE FROM measurement_events_dbl WHERE measurement_source_id IN ( 9df798a2-6337-11e8-b52b-42010afa015a,  9df7717e-6337-11e8-b52b-42010afa015a, a08b8042-6337-11e8-b52b-42010afa015a, a08e52cc-6337-11e8-b52b-42010afa015a, a08e6654-6337-11e8-b52b-42010afa015a, a08e6104-6337-11e8-b52b-42010afa015a, a08e6c76-6337-11e8-b52b-42010afa015a, a08e5a9c-6337-11e8-b52b-42010afa015a, a08bcc50-6337-11e8-b52b-42010afa015a) AND year IN (2018) AND measurement_time >= '2018-07-19 04:00:00'{code}
>  
> Immediately after that, trying to read the last value produces an error:
> {code:java}
> select * FROM measurement_events_dbl WHERE measurement_source_id = a08b8042-6337-11e8-b52b-42010afa015a AND year IN (2018) order by measurement_time desc limit 1;
> ReadFailure: Error from server: code=1300 [Replica(s) failed to execute read] message="Operation failed - received 0 responses and 2 failures" info={'failures': 2, 'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}{code}
>  
> And the following exception: 
> {noformat}
> WARN [ReadStage-4] 2018-08-29 06:59:53,505 AbstractLocalAwareExecutorService.java:167 - Uncaught exception on thread Thread[ReadStage-4,5,main]: {}
> java.lang.RuntimeException: java.lang.IllegalStateException: UnfilteredRowIterator for pvpms_mevents.measurement_events_dbl has an illegal RT bounds sequence: open marker and close marker have different deletion times
>  at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2601) ~[apache-cassandra-3.11.3.jar:3.11.3]
>  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_181]
>  at org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162) ~[apache-cassandra-3.11.3.jar:3.11.3]
>  at org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:134) [apache-cassandra-3.11.3.jar:3.11.3]
>  at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) [apache-cassandra-3.11.3.jar:3.11.3]
>  at java.lang.Thread.run(Thread.java:748) [na:1.8.0_181]
> Caused by: java.lang.IllegalStateException: UnfilteredRowIterator for pvpms_mevents.measurement_events_dbl has an illegal RT bounds sequence: open marker and close marker have different deletion times
>  at org.apache.cassandra.db.transform.RTBoundValidator$RowsTransformation.ise(RTBoundValidator.java:103) ~[apache-cassandra-3.11.3.jar:3.11.3]
>  at org.apache.cassandra.db.transform.RTBoundValidator$RowsTransformation.applyToMarker(RTBoundValidator.java:81) ~[apache-cassandra-3.11.3.jar:3.11.3]
>  at org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:148) ~[apache-cassandra-3.11.3.jar:3.11.3]
>  at org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:136) ~[apache-cassandra-3.11.3.jar:3.11.3]
>  at org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:92) ~[apache-cassandra-3.11.3.jar:3.11.3]
>  at org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:79) ~[apache-cassandra-3.11.3.jar:3.11.3]
>  at org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:308) ~[apache-cassandra-3.11.3.jar:3.11.3]
>  at org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:187) ~[apache-cassandra-3.11.3.jar:3.11.3]
>  at org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:180) ~[apache-cassandra-3.11.3.jar:3.11.3]
>  at org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:176) ~[apache-cassandra-3.11.3.jar:3.11.3]
>  at org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:76) ~[apache-cassandra-3.11.3.jar:3.11.3]
>  at org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:352) ~[apache-cassandra-3.11.3.jar:3.11.3]
>  at org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1889) ~[apache-cassandra-3.11.3.jar:3.11.3]
>  at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2597) ~[apache-cassandra-3.11.3.jar:3.11.3]
>  ... 5 common frames omitted
>  Suppressed: java.lang.IllegalStateException: UnfilteredRowIterator for pvpms_mevents.measurement_events_dbl has an illegal RT bounds sequence: expected all RTs to be closed, but the last one is open
>  at org.apache.cassandra.db.transform.RTBoundValidator$RowsTransformation.ise(RTBoundValidator.java:103) ~[apache-cassandra-3.11.3.jar:3.11.3]
>  at org.apache.cassandra.db.transform.RTBoundValidator$RowsTransformation.onPartitionClose(RTBoundValidator.java:96) ~[apache-cassandra-3.11.3.jar:3.11.3]
>  at org.apache.cassandra.db.transform.BaseRows.runOnClose(BaseRows.java:91) ~[apache-cassandra-3.11.3.jar:3.11.3]
>  at org.apache.cassandra.db.transform.BaseIterator.close(BaseIterator.java:86) ~[apache-cassandra-3.11.3.jar:3.11.3]
>  at org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:309) ~[apache-cassandra-3.11.3.jar:3.11.3]
>  ... 12 common frames omitted
>  
> {noformat}
>  
> We have 9 nodes ~2TB each, leveled compaction, repairs run daily in sequence.
> Table definition is:
> {noformat}
> CREATE TABLE pvpms_mevents.measurement_events_dbl (
>  measurement_source_id uuid,
>  year int,
>  measurement_time timestamp,
>  event_reception_time timestamp,
>  quality double,
>  value double,
>  PRIMARY KEY ((measurement_source_id, year), measurement_time)
> ) WITH CLUSTERING ORDER BY (measurement_time ASC)
>  AND bloom_filter_fp_chance = 0.1
>  AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
>  AND comment = ''
>  AND compaction = {'class': 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'}
>  AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
>  AND crc_check_chance = 1.0
>  AND dclocal_read_repair_chance = 0.1
>  AND default_time_to_live = 0
>  AND gc_grace_seconds = 864000
>  AND max_index_interval = 2048
>  AND memtable_flush_period_in_ms = 0
>  AND min_index_interval = 128
>  AND read_repair_chance = 0.0
>  AND speculative_retry = '99PERCENTILE';{noformat}
>  
> We host these nodes on GCE and recreated all of them from disk snapshots, and we reproduced the error: after re-running the DELETE with all nodes up and no other queries running, it appeared immediately.
>  
> So far we have tried re-running repairs on all nodes and running nodetool garbagecollect, without success.
> We have downgraded to 3.11.2 for now, with no issues so far.
> This may be related to CASSANDRA-14515.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
