Posted to commits@cassandra.apache.org by ma...@apache.org on 2021/04/13 08:53:58 UTC

[cassandra] branch trunk updated (d9e5dd2 -> 1d24c7c)

This is an automated email from the ASF dual-hosted git repository.

marcuse pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/cassandra.git.


    from d9e5dd2  Fix mixed cluster GROUP BY queries
     new 7152d40  Add autocomplete, errors for provide_overlapping_tombstones
     new 1d24c7c  Merge branch 'cassandra-3.11' into trunk

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.


Summary of changes:
 CHANGES.txt                                        |  1 +
 pylib/cqlshlib/cql3handling.py                     |  4 +++-
 pylib/cqlshlib/test/test_cqlsh_completion.py       |  8 ++++---
 .../apache/cassandra/schema/CompactionParams.java  | 21 +++++++++++++++--
 .../db/compaction/CompactionsCQLTest.java          | 26 ++++++++++++++++++++--
 5 files changed, 52 insertions(+), 8 deletions(-)



[cassandra] 01/01: Merge branch 'cassandra-3.11' into trunk

Posted by ma...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

marcuse pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/cassandra.git

commit 1d24c7cf497da3013f71f62b493ed346b8c81388
Merge: d9e5dd2 7152d40
Author: Marcus Eriksson <ma...@apache.org>
AuthorDate: Tue Apr 13 10:52:21 2021 +0200

    Merge branch 'cassandra-3.11' into trunk

 CHANGES.txt                                        |  1 +
 pylib/cqlshlib/cql3handling.py                     |  4 +++-
 pylib/cqlshlib/test/test_cqlsh_completion.py       |  8 ++++---
 .../apache/cassandra/schema/CompactionParams.java  | 21 +++++++++++++++--
 .../db/compaction/CompactionsCQLTest.java          | 26 ++++++++++++++++++++--
 5 files changed, 52 insertions(+), 8 deletions(-)

diff --cc CHANGES.txt
index 9662d90,ca69c82..15b8dfb
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,62 -1,9 +1,63 @@@
 -3.11.11
 +4.0-rc1
 + * Fix mixed cluster GROUP BY queries (CASSANDRA-16582)
 + * Upgrade jflex to 1.8.2 (CASSANDRA-16576)
 + * Binary releases no longer bundle the apidocs (javadoc) (CASSANDRA-15561)
 + * Fix Streaming Repair metrics (CASSANDRA-16190)
 + * Scheduled (delayed) schema pull tasks should not run after MIGRATION stage shutdown during decommission (CASSANDRA-16495)
 + * When behind a firewall trunk is not buildable, need to allow overriding URLs (CASSANDRA-16563)
 + * Make sure sstables with moved starts are removed correctly in LeveledGenerations (CASSANDRA-16552)
 + * Fix race between secondary index building and active compactions tracking (CASSANDRA-16554)
 + * Migrate dependency handling from maven-ant-tasks to resolver-ant-tasks, removing lib/ directory from version control (CASSANDRA-16391)
 + * Fix 4.0 node sending a repair prepare message to a 3.x node breaking the connection (CASSANDRA-16542)
 + * Removed synchronized modifier from StreamSession#onChannelClose to prevent deadlocking on flush (CASSANDRA-15892)
 + * Throw IOE in AbstractType.writeValue if value has wrong fixed length (CASSANDRA-16500)
 + * Execute background refreshing of auth caches on a dedicated executor (CASSANDRA-15177)
 + * Update bundled java and python drivers to 3.11.0 and 3.25.0 respectively (CASSANDRA-13951)
 + * Add io.netty.tryReflectionSetAccessible=true to j11 server options in order to enable netty to use Unsafe direct byte buffer construction (CASSANDRA-16493)
 + * Make cassandra-stress -node support host:port notation (CASSANDRA-16529)
 + * Better handle legacy gossip application states during (and after) upgrades (CASSANDRA-16525)
 + * Mark StreamingMetrics.ActiveOutboundStreams as deprecated (CASSANDRA-11174)
 + * Increase the cqlsh version number (CASSANDRA-16509)
 + * Fix the CQL generated for the views.where_clause column when some identifiers require quoting (CASSANDRA-16479)
 + * Send FAILED_SESSION_MSG on shutdown and on in-progress repairs during startup (CASSANDRA-16425)
 + * Reinstate removed ApplicationState padding (CASSANDRA-16484)
 + * Expose data dirs to ColumnFamilyStoreMBean (CASSANDRA-16335)
 + * Add possibility to copy SSTables in SSTableImporter instead of moving them (CASSANDRA-16407)
 + * Fix DESCRIBE statement for CUSTOM indices with options (CASSANDRA-16482)
 + * Fix cassandra-stress JMX connection (CASSANDRA-16473)
 + * Avoid processing redundant application states on endpoint changes (CASSANDRA-16381)
 + * Prevent parent repair sessions leak (CASSANDRA-16446)
 + * Fix timestamp issue in SinglePartitionSliceCommandTest testPartitionDeletionRowDeletionTie (CASSANDRA-16443)
 + * Promote protocol V5 out of beta (CASSANDRA-14973)
 + * Fix incorrect encoding for strings that can be UTF8 (CASSANDRA-16429)
 + * Fix node unable to join when RF > N in multi-DC with added warning (CASSANDRA-16296)
 + * Add an option to nodetool tablestats to check sstable location correctness (CASSANDRA-16344) 
 + * Unable to ALTER KEYSPACE while decommissioned/assassinated nodes are in gossip (CASSANDRA-16422)
 + * Metrics backward compatibility restored after CASSANDRA-15066 (CASSANDRA-16083)
 + * Reduce new reserved keywords introduced since 3.0 (CASSANDRA-16439)
 + * Improve system tables handling in case of disk failures (CASSANDRA-14793)
 + * Add access and datacenters to unreserved keywords (CASSANDRA-16398)
 + * Fix nodetool ring, status output when DNS resolution or port printing are in use (CASSANDRA-16283)
 + * Upgrade Jacoco to 0.8.6 (for Java 11 support) (CASSANDRA-16365)
 + * Move excessive repair debug loggings to trace level (CASSANDRA-16406)
 + * Restore validation of each message's protocol version (CASSANDRA-16374)
 + * Upgrade netty and chronicle-queue dependencies to get Auditing and native library loading working on arm64 architectures (CASSANDRA-16384,CASSANDRA-16392)
 + * Release StreamingTombstoneHistogramBuilder spool when switching writers (CASSANDRA-14834)
 + * Correct memtable on-heap size calculations to match actual use (CASSANDRA-16318)
 + * Fix client notifications in CQL protocol v5 (CASSANDRA-16353)
 + * Too defensive check when picking sstables for preview repair (CASSANDRA-16284)
 + * Ensure pre-negotiation native protocol responses have correct stream id (CASSANDRA-16376)
 + * Fix check for -Xlog in cassandra-env.sh (CASSANDRA-16279)
 + * SSLFactory should initialize SSLContext before setting protocols (CASSANDRA-16362)
 + * Restore sasi dependencies jflex, snowball-stemmer, and concurrent-trees, in the cassandra-all pom (CASSANDRA-16303)
 + * Fix DecimalDeserializer#toString OOM (CASSANDRA-14925)
 +Merged from 3.11:
+  * Add autocomplete and error messages for provide_overlapping_tombstones (CASSANDRA-16350)
 - * Add StorageServiceMBean.getKeyspaceReplicationInfo(keyspaceName) (CASSANDRA-16447)
   * Upgrade jackson-databind to 2.9.10.8 (CASSANDRA-16462)
 + * Fix digest computation for queries with fetched but non queried columns (CASSANDRA-15962)
 + * Reduce amount of allocations during batch statement execution (CASSANDRA-16201)
 + * Update jflex-1.6.0.jar to match upstream (CASSANDRA-16393)
  Merged from 3.0:
 - * Scheduled (delayed) schema pull tasks should not run after MIGRATION stage shutdown during decommission (CASSANDRA-16495)
   * Ignore trailing zeros in hint files (CASSANDRA-16523)
   * Refuse DROP COMPACT STORAGE if some 2.x sstables are in use (CASSANDRA-15897)
   * Fix ColumnFilter::toString not returning a valid CQL fragment (CASSANDRA-16483)
diff --cc pylib/cqlshlib/cql3handling.py
index b2403a7,33793d2..68484f5
--- a/pylib/cqlshlib/cql3handling.py
+++ b/pylib/cqlshlib/cql3handling.py
@@@ -553,7 -537,9 +553,9 @@@ def cf_prop_val_mapval_completer(ctxt, 
      key = dequote_value(ctxt.get_binding('propmapkey')[-1])
      if opt == 'compaction':
          if key == 'class':
 -            return map(escape_value, CqlRuleSet.available_compaction_classes)
 +            return list(map(escape_value, CqlRuleSet.available_compaction_classes))
+         if key == 'provide_overlapping_tombstones':
+             return [Hint('<NONE|ROW|CELL>')]
          return [Hint('<option_value>')]
      elif opt == 'compression':
          if key == 'sstable_compression':
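
For context, the completer branch above is what lets cqlsh offer the three
legal values when tab-completing provide_overlapping_tombstones inside a
compaction map. A minimal sketch of the kind of statement being completed --
the keyspace/table name is illustrative, not from this commit:

    // Sketch only: the CQL that the new Hint('<NONE|ROW|CELL>') helps
    // complete; ks.tbl is an illustrative name.
    public class CompletionContextSketch
    {
        public static void main(String[] args)
        {
            String alter = "ALTER TABLE ks.tbl WITH compaction = {"
                         + " 'class': 'SizeTieredCompactionStrategy',"
                         + " 'provide_overlapping_tombstones': 'ROW' }"; // or 'NONE' / 'CELL'
            System.out.println(alter);
        }
    }
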
diff --cc test/unit/org/apache/cassandra/db/compaction/CompactionsCQLTest.java
index a37437f,fe15085..c6017d4
--- a/test/unit/org/apache/cassandra/db/compaction/CompactionsCQLTest.java
+++ b/test/unit/org/apache/cassandra/db/compaction/CompactionsCQLTest.java
@@@ -32,29 -25,7 +32,28 @@@ import org.junit.Test
  
  import org.apache.cassandra.cql3.CQLTester;
  import org.apache.cassandra.cql3.UntypedResultSet;
 +import org.apache.cassandra.config.Config;
 +import org.apache.cassandra.config.DatabaseDescriptor;
 +import org.apache.cassandra.db.ColumnFamilyStore;
 +import org.apache.cassandra.db.DeletionTime;
 +import org.apache.cassandra.db.Directories;
 +import org.apache.cassandra.db.Mutation;
 +import org.apache.cassandra.db.RangeTombstone;
- import org.apache.cassandra.db.RowIndexEntry;
 +import org.apache.cassandra.db.RowUpdateBuilder;
 +import org.apache.cassandra.db.Slice;
 +import org.apache.cassandra.db.compaction.writers.CompactionAwareWriter;
- import org.apache.cassandra.db.compaction.writers.MajorLeveledCompactionWriter;
 +import org.apache.cassandra.db.compaction.writers.MaxSSTableSizeWriter;
 +import org.apache.cassandra.db.lifecycle.LifecycleTransaction;
 +import org.apache.cassandra.db.partitions.PartitionUpdate;
 +import org.apache.cassandra.db.rows.Cell;
 +import org.apache.cassandra.db.rows.Row;
 +import org.apache.cassandra.db.rows.Unfiltered;
 +import org.apache.cassandra.db.rows.UnfilteredRowIterator;
 +import org.apache.cassandra.io.sstable.CorruptSSTableException;
 +import org.apache.cassandra.io.sstable.ISSTableScanner;
 +import org.apache.cassandra.io.sstable.format.SSTableReader;
 +import org.apache.cassandra.serializers.MarshalException;
+ import org.apache.cassandra.schema.CompactionParams;
  
  import static org.junit.Assert.assertEquals;
  import static org.junit.Assert.assertFalse;
@@@ -273,447 -229,29 +272,470 @@@ public class CompactionsCQLTest extend
          getCurrentColumnFamilyStore().setCompactionParameters(localOptions);
      }
  
 +    @Test
 +    public void testPerCFSNeverPurgeTombstonesCell() throws Throwable
 +    {
 +        testPerCFSNeverPurgeTombstonesHelper(true);
 +    }
 +
 +    @Test
 +    public void testPerCFSNeverPurgeTombstones() throws Throwable
 +    {
 +        testPerCFSNeverPurgeTombstonesHelper(false);
 +    }
 +
 +    @Test
 +    public void testCompactionInvalidRTs() throws Throwable
 +    {
 +        // set the corruptedTombstoneStrategy to exception since these tests require it - if someone changed the default
 +        // in test/conf/cassandra.yaml they would start failing
 +        DatabaseDescriptor.setCorruptedTombstoneStrategy(Config.CorruptedTombstoneStrategy.exception);
 +        prepare();
 +        // write a range tombstone with negative local deletion time (LDTs are not set by user and should not be negative):
 +        RangeTombstone rt = new RangeTombstone(Slice.ALL, new DeletionTime(System.currentTimeMillis(), -1));
 +        RowUpdateBuilder rub = new RowUpdateBuilder(getCurrentColumnFamilyStore().metadata(), System.currentTimeMillis() * 1000, 22).clustering(33).addRangeTombstone(rt);
 +        rub.build().apply();
 +        getCurrentColumnFamilyStore().forceBlockingFlush();
 +        compactAndValidate();
 +        readAndValidate(true);
 +        readAndValidate(false);
 +    }
 +
 +    @Test
 +    public void testCompactionInvalidTombstone() throws Throwable
 +    {
 +        DatabaseDescriptor.setCorruptedTombstoneStrategy(Config.CorruptedTombstoneStrategy.exception);
 +        prepare();
 +        // write a standard tombstone with negative local deletion time (LDTs are not set by user and should not be negative):
 +        RowUpdateBuilder rub = new RowUpdateBuilder(getCurrentColumnFamilyStore().metadata(), -1, System.currentTimeMillis() * 1000, 22).clustering(33).delete("b");
 +        rub.build().apply();
 +        getCurrentColumnFamilyStore().forceBlockingFlush();
 +        compactAndValidate();
 +        readAndValidate(true);
 +        readAndValidate(false);
 +    }
 +
 +    @Test
 +    public void testCompactionInvalidPartitionDeletion() throws Throwable
 +    {
 +        DatabaseDescriptor.setCorruptedTombstoneStrategy(Config.CorruptedTombstoneStrategy.exception);
 +        prepare();
 +        // write a partition deletion with negative local deletion time (LDTs are not set by user and should not be negative):
 +        PartitionUpdate pu = PartitionUpdate.simpleBuilder(getCurrentColumnFamilyStore().metadata(), 22).nowInSec(-1).delete().build();
 +        new Mutation(pu).apply();
 +        getCurrentColumnFamilyStore().forceBlockingFlush();
 +        compactAndValidate();
 +        readAndValidate(true);
 +        readAndValidate(false);
 +    }
 +
 +    @Test
 +    public void testCompactionInvalidRowDeletion() throws Throwable
 +    {
 +        DatabaseDescriptor.setCorruptedTombstoneStrategy(Config.CorruptedTombstoneStrategy.exception);
 +        prepare();
 +        // write a row deletion with negative local deletion time (LDTs are not set by user and should not be negative):
 +        RowUpdateBuilder.deleteRowAt(getCurrentColumnFamilyStore().metadata(), System.currentTimeMillis() * 1000, -1, 22, 33).apply();
 +        getCurrentColumnFamilyStore().forceBlockingFlush();
 +        compactAndValidate();
 +        readAndValidate(true);
 +        readAndValidate(false);
 +    }
 +
 +    private void prepare() throws Throwable
 +    {
 +        createTable("CREATE TABLE %s (id int, id2 int, b text, primary key (id, id2))");
 +        for (int i = 0; i < 2; i++)
 +            execute("INSERT INTO %s (id, id2, b) VALUES (?, ?, ?)", i, i, String.valueOf(i));
 +    }
 +
 +    @Test
 +    public void testIndexedReaderRowDeletion() throws Throwable
 +    {
 +        // write enough data to make sure we use an IndexedReader when doing a read, and make sure it fails when reading a corrupt row deletion
 +        DatabaseDescriptor.setCorruptedTombstoneStrategy(Config.CorruptedTombstoneStrategy.exception);
 +        int maxSizePre = DatabaseDescriptor.getColumnIndexSizeInKB();
 +        DatabaseDescriptor.setColumnIndexSize(1024);
 +        prepareWide();
 +        RowUpdateBuilder.deleteRowAt(getCurrentColumnFamilyStore().metadata(), System.currentTimeMillis() * 1000, -1, 22, 33).apply();
 +        getCurrentColumnFamilyStore().forceBlockingFlush();
 +        readAndValidate(true);
 +        readAndValidate(false);
 +        DatabaseDescriptor.setColumnIndexSize(maxSizePre);
 +    }
 +
 +    @Test
 +    public void testIndexedReaderTombstone() throws Throwable
 +    {
 +        // write enough data to make sure we use an IndexedReader when doing a read, and make sure it fails when reading a corrupt standard tombstone
 +        DatabaseDescriptor.setCorruptedTombstoneStrategy(Config.CorruptedTombstoneStrategy.exception);
 +        int maxSizePre = DatabaseDescriptor.getColumnIndexSizeInKB();
 +        DatabaseDescriptor.setColumnIndexSize(1024);
 +        prepareWide();
 +        RowUpdateBuilder rub = new RowUpdateBuilder(getCurrentColumnFamilyStore().metadata(), -1, System.currentTimeMillis() * 1000, 22).clustering(33).delete("b");
 +        rub.build().apply();
 +        getCurrentColumnFamilyStore().forceBlockingFlush();
 +        readAndValidate(true);
 +        readAndValidate(false);
 +        DatabaseDescriptor.setColumnIndexSize(maxSizePre);
 +    }
 +
 +    @Test
 +    public void testIndexedReaderRT() throws Throwable
 +    {
 +        // write enough data to make sure we use an IndexedReader when doing a read, and make sure it fails when reading a corrupt range tombstone
 +        DatabaseDescriptor.setCorruptedTombstoneStrategy(Config.CorruptedTombstoneStrategy.exception);
 +        final int maxSizePreKB = DatabaseDescriptor.getColumnIndexSizeInKB();
 +        DatabaseDescriptor.setColumnIndexSize(1024);
 +        prepareWide();
 +        RangeTombstone rt = new RangeTombstone(Slice.ALL, new DeletionTime(System.currentTimeMillis(), -1));
 +        RowUpdateBuilder rub = new RowUpdateBuilder(getCurrentColumnFamilyStore().metadata(), System.currentTimeMillis() * 1000, 22).clustering(33).addRangeTombstone(rt);
 +        rub.build().apply();
 +        getCurrentColumnFamilyStore().forceBlockingFlush();
 +        readAndValidate(true);
 +        readAndValidate(false);
 +        DatabaseDescriptor.setColumnIndexSize(maxSizePreKB);
 +    }
 +
 +
 +    @Test
 +    public void testLCSThresholdParams() throws Throwable
 +    {
 +        createTable("create table %s (id int, id2 int, t blob, primary key (id, id2)) with compaction = {'class':'LeveledCompactionStrategy', 'sstable_size_in_mb':'1', 'max_threshold':'60'}");
 +        ColumnFamilyStore cfs = getCurrentColumnFamilyStore();
 +        cfs.disableAutoCompaction();
 +        byte [] b = new byte[100 * 1024];
 +        new Random().nextBytes(b);
 +        ByteBuffer value = ByteBuffer.wrap(b);
 +        for (int i = 0; i < 50; i++)
 +        {
 +            for (int j = 0; j < 10; j++)
 +            {
 +                execute("insert into %s (id, id2, t) values (?, ?, ?)", i, j, value);
 +            }
 +            cfs.forceBlockingFlush();
 +        }
 +        assertEquals(50, cfs.getLiveSSTables().size());
 +        LeveledCompactionStrategy lcs = (LeveledCompactionStrategy) cfs.getCompactionStrategyManager().getUnrepairedUnsafe().first();
 +        AbstractCompactionTask act = lcs.getNextBackgroundTask(0);
 +        // we should be compacting all 50 sstables:
 +        assertEquals(50, act.transaction.originals().size());
 +        act.execute(ActiveCompactionsTracker.NOOP);
 +    }
 +
 +    @Test
 +    public void testSTCSinL0() throws Throwable
 +    {
 +        createTable("create table %s (id int, id2 int, t blob, primary key (id, id2)) with compaction = {'class':'LeveledCompactionStrategy', 'sstable_size_in_mb':'1', 'max_threshold':'60'}");
 +        ColumnFamilyStore cfs = getCurrentColumnFamilyStore();
 +        cfs.disableAutoCompaction();
 +        execute("insert into %s (id, id2, t) values (?, ?, ?)", 1,1,"L1");
 +        cfs.forceBlockingFlush();
 +        cfs.forceMajorCompaction();
 +        SSTableReader l1sstable = cfs.getLiveSSTables().iterator().next();
 +        assertEquals(1, l1sstable.getSSTableLevel());
 +        // now we have a single L1 sstable, create many L0 ones:
 +        byte [] b = new byte[100 * 1024];
 +        new Random().nextBytes(b);
 +        ByteBuffer value = ByteBuffer.wrap(b);
 +        for (int i = 0; i < 50; i++)
 +        {
 +            for (int j = 0; j < 10; j++)
 +            {
 +                execute("insert into %s (id, id2, t) values (?, ?, ?)", i, j, value);
 +            }
 +            cfs.forceBlockingFlush();
 +        }
 +        assertEquals(51, cfs.getLiveSSTables().size());
 +
 +        // mark the L1 sstable as compacting to make sure we trigger STCS in L0:
 +        LifecycleTransaction txn = cfs.getTracker().tryModify(l1sstable, OperationType.COMPACTION);
 +        LeveledCompactionStrategy lcs = (LeveledCompactionStrategy) cfs.getCompactionStrategyManager().getUnrepairedUnsafe().first();
 +        AbstractCompactionTask act = lcs.getNextBackgroundTask(0);
 +        // note that max_threshold is 60 (more than the amount of L0 sstables), but MAX_COMPACTING_L0 is 32, which means we will trigger STCS with at most max_threshold sstables
 +        assertEquals(50, act.transaction.originals().size());
 +        assertEquals(0, ((LeveledCompactionTask)act).getLevel());
 +        assertTrue(act.transaction.originals().stream().allMatch(s -> s.getSSTableLevel() == 0));
 +        txn.abort(); // unmark the l1 sstable compacting
 +        act.execute(ActiveCompactionsTracker.NOOP);
 +    }
 +
 +    @Test
 +    public void testAbortNotifications() throws Throwable
 +    {
 +        createTable("create table %s (id int primary key, x blob) with compaction = {'class':'LeveledCompactionStrategy', 'sstable_size_in_mb':1}");
 +        Random r = new Random();
 +        byte [] b = new byte[100 * 1024];
 +        for (int i = 0; i < 1000; i++)
 +        {
 +            r.nextBytes(b);
 +            execute("insert into %s (id, x) values (?, ?)", i, ByteBuffer.wrap(b));
 +        }
 +        getCurrentColumnFamilyStore().forceBlockingFlush();
 +        getCurrentColumnFamilyStore().disableAutoCompaction();
 +        for (int i = 0; i < 1000; i++)
 +        {
 +            r.nextBytes(b);
 +            execute("insert into %s (id, x) values (?, ?)", i, ByteBuffer.wrap(b));
 +        }
 +        getCurrentColumnFamilyStore().forceBlockingFlush();
 +
 +        LeveledCompactionStrategy lcs = (LeveledCompactionStrategy) getCurrentColumnFamilyStore().getCompactionStrategyManager().getUnrepairedUnsafe().first();
 +        LeveledCompactionTask lcsTask;
 +        while (true)
 +        {
 +            lcsTask = (LeveledCompactionTask) lcs.getNextBackgroundTask(0);
 +            if (lcsTask != null)
 +            {
 +                lcsTask.execute(CompactionManager.instance.active);
 +                break;
 +            }
 +            Thread.sleep(1000);
 +        }
 +        // now all sstables are non-overlapping in L1 - we need them to be in L2:
 +        for (SSTableReader sstable : getCurrentColumnFamilyStore().getLiveSSTables())
 +        {
 +            lcs.removeSSTable(sstable);
 +            sstable.mutateLevelAndReload(2);
 +            lcs.addSSTable(sstable);
 +        }
 +
 +        for (int i = 0; i < 1000; i++)
 +        {
 +            r.nextBytes(b);
 +            execute("insert into %s (id, x) values (?, ?)", i, ByteBuffer.wrap(b));
 +        }
 +        getCurrentColumnFamilyStore().forceBlockingFlush();
 +        // now we have a bunch of sstables in L2 and one in L0 - bump the L0 one to L1:
 +        for (SSTableReader sstable : getCurrentColumnFamilyStore().getLiveSSTables())
 +        {
 +            if (sstable.getSSTableLevel() == 0)
 +            {
 +                lcs.removeSSTable(sstable);
 +                sstable.mutateLevelAndReload(1);
 +                lcs.addSSTable(sstable);
 +            }
 +        }
 +        // at this point we have a single sstable in L1, and a bunch of sstables in L2 - a background compaction should
 +        // trigger an L1 -> L2 compaction which we abort after creating 5 sstables - this notifies LCS that MOVED_START
 +        // sstables have been removed.
 +        try
 +        {
 +            AbstractCompactionTask task = new NotifyingCompactionTask((LeveledCompactionTask) lcs.getNextBackgroundTask(0));
 +            task.execute(CompactionManager.instance.active);
 +            fail("task should throw exception");
 +        }
 +        catch (Exception ignored)
 +        {
 +            // ignored
 +        }
 +
 +        lcsTask = (LeveledCompactionTask) lcs.getNextBackgroundTask(0);
 +        try
 +        {
 +            assertNotNull(lcsTask);
 +        }
 +        finally
 +        {
 +            if (lcsTask != null)
 +                lcsTask.transaction.abort();
 +        }
 +    }
 +
 +    private static class NotifyingCompactionTask extends LeveledCompactionTask
 +    {
 +        public NotifyingCompactionTask(LeveledCompactionTask task)
 +        {
 +            super(task.cfs, task.transaction, task.getLevel(), task.gcBefore, task.getLevel(), false);
 +        }
 +
 +        @Override
 +        public CompactionAwareWriter getCompactionAwareWriter(ColumnFamilyStore cfs,
 +                                                              Directories directories,
 +                                                              LifecycleTransaction txn,
 +                                                              Set<SSTableReader> nonExpiredSSTables)
 +        {
 +            return new MaxSSTableSizeWriter(cfs, directories, txn, nonExpiredSSTables, 1 << 20, 1)
 +            {
 +                int switchCount = 0;
 +                public void switchCompactionLocation(Directories.DataDirectory directory)
 +                {
 +                    switchCount++;
 +                    if (switchCount > 5)
 +                        throw new RuntimeException("Throw after a few sstables have had their starts moved");
 +                    super.switchCompactionLocation(directory);
 +                }
 +            };
 +        }
 +    }
 +
 +    private void prepareWide() throws Throwable
 +    {
 +        createTable("CREATE TABLE %s (id int, id2 int, b text, primary key (id, id2))");
 +        for (int i = 0; i < 100; i++)
 +            execute("INSERT INTO %s (id, id2, b) VALUES (?, ?, ?)", 22, i, StringUtils.repeat("ABCDEFG", 10));
 +    }
 +
 +    private void compactAndValidate()
 +    {
 +        boolean gotException = false;
 +        try
 +        {
 +            getCurrentColumnFamilyStore().forceMajorCompaction();
 +        }
 +        catch(Throwable t)
 +        {
 +            gotException = true;
 +            Throwable cause = t;
 +            while (cause != null && !(cause instanceof MarshalException))
 +                cause = cause.getCause();
 +            assertNotNull(cause);
 +            MarshalException me = (MarshalException) cause;
 +            assertTrue(me.getMessage().contains(getCurrentColumnFamilyStore().metadata.keyspace+"."+getCurrentColumnFamilyStore().metadata.name));
 +            assertTrue(me.getMessage().contains("Key 22"));
 +        }
 +        assertTrue(gotException);
 +        assertSuspectAndReset(getCurrentColumnFamilyStore().getLiveSSTables());
 +    }
 +
 +    private void readAndValidate(boolean asc) throws Throwable
 +    {
 +        execute("select * from %s where id = 0 order by id2 "+(asc ? "ASC" : "DESC"));
 +
 +        boolean gotException = false;
 +        try
 +        {
 +            for (UntypedResultSet.Row r : execute("select * from %s")) {}
 +        }
 +        catch (Throwable t)
 +        {
 +            assertTrue(t instanceof CorruptSSTableException);
 +            gotException = true;
 +            Throwable cause = t;
 +            while (cause != null && !(cause instanceof MarshalException))
 +                cause = cause.getCause();
 +            assertNotNull(cause);
 +            MarshalException me = (MarshalException) cause;
 +            assertTrue(me.getMessage().contains("Key 22"));
 +        }
 +        assertSuspectAndReset(getCurrentColumnFamilyStore().getLiveSSTables());
 +        assertTrue(gotException);
 +        gotException = false;
 +        try
 +        {
 +            execute("select * from %s where id = 22 order by id2 "+(asc ? "ASC" : "DESC"));
 +        }
 +        catch (Throwable t)
 +        {
 +            assertTrue(t instanceof CorruptSSTableException);
 +            gotException = true;
 +            Throwable cause = t;
 +            while (cause != null && !(cause instanceof MarshalException))
 +                cause = cause.getCause();
 +            assertNotNull(cause);
 +            MarshalException me = (MarshalException) cause;
 +            assertTrue(me.getMessage().contains("Key 22"));
 +        }
 +        assertTrue(gotException);
 +        assertSuspectAndReset(getCurrentColumnFamilyStore().getLiveSSTables());
 +    }
 +
 +    public void testPerCFSNeverPurgeTombstonesHelper(boolean deletedCell) throws Throwable
 +    {
 +        createTable("CREATE TABLE %s (id int primary key, b text) with gc_grace_seconds = 0");
 +        for (int i = 0; i < 100; i++)
 +        {
 +            execute("INSERT INTO %s (id, b) VALUES (?, ?)", i, String.valueOf(i));
 +        }
 +        getCurrentColumnFamilyStore().forceBlockingFlush();
 +
 +        assertTombstones(getCurrentColumnFamilyStore().getLiveSSTables().iterator().next(), false);
 +        if (deletedCell)
 +            execute("UPDATE %s SET b=null WHERE id = ?", 50);
 +        else
 +            execute("DELETE FROM %s WHERE id = ?", 50);
 +        getCurrentColumnFamilyStore().setNeverPurgeTombstones(false);
 +        getCurrentColumnFamilyStore().forceBlockingFlush();
 +        Thread.sleep(2000); // wait for gcgs to pass
 +        getCurrentColumnFamilyStore().forceMajorCompaction();
 +        assertTombstones(getCurrentColumnFamilyStore().getLiveSSTables().iterator().next(), false);
 +        if (deletedCell)
 +            execute("UPDATE %s SET b=null WHERE id = ?", 44);
 +        else
 +            execute("DELETE FROM %s WHERE id = ?", 44);
 +        getCurrentColumnFamilyStore().setNeverPurgeTombstones(true);
 +        getCurrentColumnFamilyStore().forceBlockingFlush();
 +        Thread.sleep(1100);
 +        getCurrentColumnFamilyStore().forceMajorCompaction();
 +        assertTombstones(getCurrentColumnFamilyStore().getLiveSSTables().iterator().next(), true);
 +        // disable it again and make sure the tombstone is gone:
 +        getCurrentColumnFamilyStore().setNeverPurgeTombstones(false);
 +        getCurrentColumnFamilyStore().forceMajorCompaction();
 +        assertTombstones(getCurrentColumnFamilyStore().getLiveSSTables().iterator().next(), false);
 +        getCurrentColumnFamilyStore().truncateBlocking();
 +    }
 +
 +    private void assertSuspectAndReset(Collection<SSTableReader> sstables)
 +    {
 +        assertFalse(sstables.isEmpty());
 +        for (SSTableReader sstable : sstables)
 +        {
 +            assertTrue(sstable.isMarkedSuspect());
 +            sstable.unmarkSuspect();
 +        }
 +    }
 +
 +    private void assertTombstones(SSTableReader sstable, boolean expectTS)
 +    {
 +        boolean foundTombstone = false;
 +        try(ISSTableScanner scanner = sstable.getScanner())
 +        {
 +            while (scanner.hasNext())
 +            {
 +                try (UnfilteredRowIterator iter = scanner.next())
 +                {
 +                    if (!iter.partitionLevelDeletion().isLive())
 +                        foundTombstone = true;
 +                    while (iter.hasNext())
 +                    {
 +                        Unfiltered unfiltered = iter.next();
 +                        assertTrue(unfiltered instanceof Row);
 +                        for (Cell<?> c : ((Row)unfiltered).cells())
 +                        {
 +                            if (c.isTombstone())
 +                                foundTombstone = true;
 +                        }
 +
 +                    }
 +                }
 +            }
 +        }
 +        assertEquals(expectTS, foundTombstone);
 +    }
 +
+      @Test(expected = IllegalArgumentException.class)
+      public void testBadProvidesTombstoneOption()
+      {
+          createTable("CREATE TABLE %s (id text PRIMARY KEY)");
+          Map<String, String> localOptions = new HashMap<>();
+          localOptions.put("class","SizeTieredCompactionStrategy");
+          localOptions.put("provide_overlapping_tombstones","IllegalValue");
+ 
+          getCurrentColumnFamilyStore().setCompactionParameters(localOptions);
+      }
+      @Test
+      public void testProvidesTombstoneOptionVerification()
+      {
+          createTable("CREATE TABLE %s (id text PRIMARY KEY)");
+          Map<String, String> localOptions = new HashMap<>();
+          localOptions.put("class","SizeTieredCompactionStrategy");
+          localOptions.put("provide_overlapping_tombstones","row");
+ 
+          getCurrentColumnFamilyStore().setCompactionParameters(localOptions);
+          assertEquals(CompactionParams.TombstoneOption.ROW, getCurrentColumnFamilyStore().getCompactionStrategyManager().getCompactionParams().tombstoneOption());
+      }
+ 
+ 
      public boolean verifyStrategies(CompactionStrategyManager manager, Class<? extends AbstractCompactionStrategy> expected)
      {
          boolean found = false;

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@cassandra.apache.org
For additional commands, e-mail: commits-help@cassandra.apache.org