Posted to commits@cassandra.apache.org by "David Capwell (Jira)" <ji...@apache.org> on 2020/03/10 01:41:00 UTC

[jira] [Created] (CASSANDRA-15627) sstable not in the corresponding level in the leveled manifest

David Capwell created CASSANDRA-15627:
-----------------------------------------

             Summary: sstable not in the corresponding level in the leveled manifest
                 Key: CASSANDRA-15627
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-15627
             Project: Cassandra
          Issue Type: Bug
          Components: Local/Compaction, Local/Compaction/LCS
            Reporter: David Capwell


I get the following warning log when running smoke tests:

bq. Live sstable /cassandra/d1/data/ks/table-cce7c54b5abf3f369bb7659a74e9e963/mf-71-big-Data.db from level 0 is not on corresponding level in the leveled manifest. This is not a problem per se, but may indicate an orphaned sstable due to a failed compaction not cleaned up properly.

There are no other warning logs and no error logs, so nothing from compaction indicates that a failure occurred.
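For context, this warning comes from a sanity check that compares each live sstable's own level against the level the leveled manifest actually lists it under. A minimal sketch of that kind of check, using simplified stand-in types rather than Cassandra's actual internal classes:

```java
import java.util.*;

// Illustrative sketch only -- SSTable and the manifest layout here are
// simplified stand-ins for Cassandra's internal LCS classes.
public class ManifestCheck {
    record SSTable(String name, int level) {}

    // manifest: level -> names of sstables the manifest places on that level
    static List<String> findMisplaced(List<SSTable> live,
                                      Map<Integer, Set<String>> manifest) {
        List<String> warnings = new ArrayList<>();
        for (SSTable t : live) {
            Set<String> onLevel = manifest.getOrDefault(t.level(), Set.of());
            if (!onLevel.contains(t.name())) {
                // Mirrors the spirit of the logged warning: the sstable claims
                // level N, but the manifest does not list it on that level.
                warnings.add(t.name() + " from level " + t.level()
                        + " is not on corresponding level in the manifest");
            }
        }
        return warnings;
    }

    public static void main(String[] args) {
        Map<Integer, Set<String>> manifest = Map.of(
                0, Set.of("mf-70-big"),
                1, Set.of("mf-65-big"));
        List<SSTable> live = List.of(
                new SSTable("mf-70-big", 0),
                new SSTable("mf-71-big", 0)); // orphan: absent from manifest
        System.out.println(findMisplaced(live, manifest));
    }
}
```

An sstable can end up in this state if a compaction produced it but failed before the manifest was updated, which is exactly the "orphaned sstable" case the log message hedges about.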

Schema

{code}
CREATE TABLE ks.table (
    pk1 ascii,
    pk2 bigint,
    ck1 ascii,
    ck2 ascii,
    ck3 ascii,
    v1 int,
    v2 ascii,
    PRIMARY KEY ((pk1, pk2), ck1, ck2, ck3)
) WITH comment = 'test table'
  AND gc_grace_seconds = 1
  AND memtable_flush_period_in_ms = 100
  AND compression = {'class': 'LZ4Compressor'}
  AND compaction = {'class': 'LeveledCompactionStrategy', 'only_purge_repaired_tombstones': true}
  AND CLUSTERING ORDER BY (ck1 DESC, ck2 ASC, ck3 DESC);
{code}

Test procedure:
* run simulated queries for 30 minutes
* run incremental repair in a loop (as soon as one completes, start the next)
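The repair side of that loop can be sketched as a small shell driver. The `nodetool repair ks table` invocation, the `REPAIR_CMD`/`DURATION` variables, and the 30-minute budget are assumptions drawn from the description above, not the actual test harness:

```shell
#!/bin/bash
# Hypothetical driver for the repair half of the smoke test: as soon as one
# incremental repair completes, start the next, for up to 30 minutes.
# REPAIR_CMD and DURATION are assumptions; adjust for your cluster.
REPAIR_CMD="${REPAIR_CMD:-nodetool repair ks table}"
DURATION="${DURATION:-1800}"   # seconds; 30 minutes per the test description
end=$((SECONDS + DURATION))
runs=0
while [ "$SECONDS" -lt "$end" ]; do
  $REPAIR_CMD || break         # stop the loop if a repair fails
  runs=$((runs + 1))
done
echo "completed $runs repair runs"
```

Running repairs back-to-back like this maximizes the chance that a repair-driven anticompaction overlaps an in-flight LCS compaction, which is the window in which an orphaned sstable could plausibly be left behind.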



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@cassandra.apache.org
For additional commands, e-mail: commits-help@cassandra.apache.org