Posted to commits@cassandra.apache.org by br...@apache.org on 2021/10/11 17:08:50 UTC

[cassandra] branch cassandra-4.0 updated (c4a07ae -> e57a8dd)

This is an automated email from the ASF dual-hosted git repository.

brandonwilliams pushed a change to branch cassandra-4.0
in repository https://gitbox.apache.org/repos/asf/cassandra.git.


    from c4a07ae  Merge branch 'cassandra-3.11' into cassandra-4.0
     new 0e12b8d  Don't take snapshots when truncating system tables
     new 9d28beb  Merge branch 'cassandra-3.0' into cassandra-3.11
     new e57a8dd  Merge branch 'cassandra-3.11' into cassandra-4.0

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 CHANGES.txt                                             |  1 +
 src/java/org/apache/cassandra/db/ColumnFamilyStore.java | 17 ++++++++++++++---
 src/java/org/apache/cassandra/db/SystemKeyspace.java    | 12 ++++++------
 3 files changed, 21 insertions(+), 9 deletions(-)
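The heart of this change (CASSANDRA-16839) is a truncate path that can bypass the auto-snapshot step when the table being truncated is a system table. A minimal sketch of that guard decision, using simplified names rather than Cassandra's actual API surface:

```java
// Sketch of the snapshot-guard decision in the patched truncate path.
// Names are illustrative; the real logic lives in ColumnFamilyStore.
public class TruncateSketch
{
    // Mirrors the patched condition: snapshot only when the caller did
    // not ask to skip it AND auto-snapshot is enabled globally.
    public static boolean shouldSnapshot(boolean noSnapshot, boolean autoSnapshotEnabled)
    {
        return !noSnapshot && autoSnapshotEnabled;
    }

    public static void main(String[] args)
    {
        // System-table truncation passes noSnapshot=true, so no snapshot
        // is taken even when auto-snapshot is enabled.
        System.out.println(shouldSnapshot(true, true));   // false
        System.out.println(shouldSnapshot(false, true));  // true
    }
}
```

Callers that previously used `truncateBlocking()` on system tables switch to the no-snapshot variant, as the SystemKeyspace hunk below shows.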

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@cassandra.apache.org
For additional commands, e-mail: commits-help@cassandra.apache.org


[cassandra] 01/01: Merge branch 'cassandra-3.11' into cassandra-4.0

Posted by br...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

brandonwilliams pushed a commit to branch cassandra-4.0
in repository https://gitbox.apache.org/repos/asf/cassandra.git

commit e57a8ddbfa1c4d277cd043f621231e0a2a168f89
Merge: c4a07ae 9d28beb
Author: Brandon Williams <br...@apache.org>
AuthorDate: Mon Oct 11 12:05:17 2021 -0500

    Merge branch 'cassandra-3.11' into cassandra-4.0

 CHANGES.txt                                             |  1 +
 src/java/org/apache/cassandra/db/ColumnFamilyStore.java | 17 ++++++++++++++---
 src/java/org/apache/cassandra/db/SystemKeyspace.java    | 12 ++++++------
 3 files changed, 21 insertions(+), 9 deletions(-)

diff --cc CHANGES.txt
index f6d3610,d43066e..acd339c
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,22 -1,16 +1,23 @@@
 -3.11.12
 +4.0.2
 + * Correct the internode message timestamp if sending node has wrapped (CASSANDRA-16997)
 + * Avoid race causing us to return null in RangesAtEndpoint (CASSANDRA-16965)
 + * Avoid rewriting all sstables during cleanup when transient replication is enabled (CASSANDRA-16966)
 + * Prevent CQLSH from failure on Python 3.10 (CASSANDRA-16987)
 + * Avoid trying to acquire 0 permits from the rate limiter when taking snapshot (CASSANDRA-16872)
 + * Upgrade Caffeine to 2.5.6 (CASSANDRA-15153)
 + * Include SASI components to snapshots (CASSANDRA-15134)
 + * Fix missed wait latencies in the output of `nodetool tpstats -F` (CASSANDRA-16938)
 + * Remove all the state pollution between tests in SSTableReaderTest (CASSANDRA-16888)
 + * Delay auth setup until after gossip has settled to avoid unavailables on startup (CASSANDRA-16783)
 + * Fix clustering order logic in CREATE MATERIALIZED VIEW (CASSANDRA-16898)
 + * org.apache.cassandra.db.rows.ArrayCell#unsharedHeapSizeExcludingData includes data twice (CASSANDRA-16900)
 + * Exclude Jackson 1.x transitive dependency of hadoop* provided dependencies (CASSANDRA-16854)
 +Merged from 3.11:
   * Add key validation to sstablescrub (CASSANDRA-16969)
   * Update Jackson from 2.9.10 to 2.12.5 (CASSANDRA-16851)
 - * Include SASI components to snapshots (CASSANDRA-15134)
   * Make assassinate more resilient to missing tokens (CASSANDRA-16847)
 - * Exclude Jackson 1.x transitive dependency of hadoop* provided dependencies (CASSANDRA-16854)
 - * Validate SASI tokenizer options before adding index to schema (CASSANDRA-15135)
 - * Fixup scrub output when no data post-scrub and clear up old use of row, which really means partition (CASSANDRA-16835)
 - * Fix ant-junit dependency issue (CASSANDRA-16827)
 - * Reduce thread contention in CommitLogSegment and HintsBuffer (CASSANDRA-16072)
 - * Avoid sending CDC column if not enabled (CASSANDRA-16770)
  Merged from 3.0:
+  * Don't take snapshots when truncating system tables (CASSANDRA-16839)
   * Make -Dtest.methods consistently optional in all Ant test targets (CASSANDRA-17014)
   * Immediately apply stream throughput, considering negative values as unthrottled (CASSANDRA-16959)
   * Do not release new SSTables in offline transactions (CASSANDRA-16975)
diff --cc src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 4855d7c,4a74a2f..c127069
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@@ -2271,19 -2277,11 +2282,19 @@@ public class ColumnFamilyStore implemen
                  now = Math.max(now, sstable.maxDataAge);
          truncatedAt = now;
  
 -        Runnable truncateRunnable = () -> {
 -            logger.debug("Discarding sstable data for truncated CF + indexes");
 -            data.notifyTruncated(truncatedAt);
 +        Runnable truncateRunnable = new Runnable()
 +        {
 +            public void run()
 +            {
 +                logger.info("Truncating {}.{} with truncatedAt={}", keyspace.getName(), getTableName(), truncatedAt);
 +                // since truncation can happen at different times on different nodes, we need to make sure
 +                // that any repairs are aborted, otherwise we might clear the data on one node and then
 +                // stream in data that is actually supposed to have been deleted
 +                ActiveRepairService.instance.abort((prs) -> prs.getTableIds().contains(metadata.id),
 +                                                   "Stopping parent sessions {} due to truncation of tableId="+metadata.id);
 +                data.notifyTruncated(truncatedAt);
  
-             if (DatabaseDescriptor.isAutoSnapshot())
+             if (!noSnapshot && DatabaseDescriptor.isAutoSnapshot())
                  snapshot(Keyspace.getTimestampedSnapshotNameWithPrefix(name, SNAPSHOT_TRUNCATE_PREFIX));
  
              discardSSTables(truncatedAt);
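The `ActiveRepairService.instance.abort(...)` call in the hunk above selects which parent repair sessions to stop by testing whether each session's table ids include the truncated table. A standalone sketch of that predicate-based selection, with a hypothetical `Session` record standing in for Cassandra's parent repair session type:

```java
import java.util.List;
import java.util.Set;
import java.util.UUID;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class RepairAbortSketch
{
    // Hypothetical stand-in for a parent repair session and the ids of
    // the tables it is repairing.
    record Session(UUID id, Set<String> tableIds) {}

    // Select the sessions to abort: exactly those repairing the
    // truncated table, leaving unrelated sessions untouched.
    static List<Session> sessionsToAbort(List<Session> active, String truncatedTableId)
    {
        Predicate<Session> touchesTable = s -> s.tableIds().contains(truncatedTableId);
        return active.stream().filter(touchesTable).collect(Collectors.toList());
    }
}
```

Filtering with a predicate rather than aborting all sessions keeps repairs of other tables running while still preventing aborted-truncation data from being streamed back in.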
diff --cc src/java/org/apache/cassandra/db/SystemKeyspace.java
index cbb7084,ec26a69..34973cb
--- a/src/java/org/apache/cassandra/db/SystemKeyspace.java
+++ b/src/java/org/apache/cassandra/db/SystemKeyspace.java
@@@ -1350,70 -1342,57 +1350,70 @@@ public final class SystemKeyspac
      }
  
      /**
 -     * @return A multimap from keyspace to table for all tables with entries in size estimates
 +     * Truncates the size_estimates and table_estimates tables.
       */
 -
 -    public static synchronized SetMultimap<String, String> getTablesWithSizeEstimates()
 +    public static void clearAllEstimates()
      {
-         for (TableMetadata table : Arrays.asList(LegacySizeEstimates, TableEstimates))
 -        SetMultimap<String, String> keyspaceTableMap = HashMultimap.create();
 -        String cql = String.format("SELECT keyspace_name, table_name FROM %s.%s", SchemaConstants.SYSTEM_KEYSPACE_NAME, SIZE_ESTIMATES);
 -        UntypedResultSet rs = executeInternal(cql);
 -        for (UntypedResultSet.Row row : rs)
++        for (String table : Arrays.asList(LEGACY_SIZE_ESTIMATES, TABLE_ESTIMATES))
          {
-             String cql = String.format("TRUNCATE TABLE " + table.toString());
-             executeInternal(cql);
 -            keyspaceTableMap.put(row.getString("keyspace_name"), row.getString("table_name"));
++            ColumnFamilyStore cfs = Keyspace.open(SchemaConstants.SYSTEM_KEYSPACE_NAME).getColumnFamilyStore(table);
++            cfs.truncateBlockingWithoutSnapshot();
          }
 +    }
  
 -        return keyspaceTableMap;
 +    public static synchronized void updateAvailableRanges(String keyspace, Collection<Range<Token>> completedFullRanges, Collection<Range<Token>> completedTransientRanges)
 +    {
 +        String cql = "UPDATE system.%s SET full_ranges = full_ranges + ?, transient_ranges = transient_ranges + ? WHERE keyspace_name = ?";
 +        executeInternal(format(cql, AVAILABLE_RANGES_V2),
 +                        completedFullRanges.stream().map(SystemKeyspace::rangeToBytes).collect(Collectors.toSet()),
 +                        completedTransientRanges.stream().map(SystemKeyspace::rangeToBytes).collect(Collectors.toSet()),
 +                        keyspace);
      }
  
 -    public static synchronized void updateAvailableRanges(String keyspace, Collection<Range<Token>> completedRanges)
 +    /**
 +     * Returns the streamed ranges; whether a range is transient is encoded based on the source it was streamed from.
 +     */
 +    public static synchronized AvailableRanges getAvailableRanges(String keyspace, IPartitioner partitioner)
      {
 -        String cql = "UPDATE system.%s SET ranges = ranges + ? WHERE keyspace_name = ?";
 -        Set<ByteBuffer> rangesToUpdate = new HashSet<>(completedRanges.size());
 -        for (Range<Token> range : completedRanges)
 +        String query = "SELECT * FROM system.%s WHERE keyspace_name=?";
 +        UntypedResultSet rs = executeInternal(format(query, AVAILABLE_RANGES_V2), keyspace);
 +
 +        ImmutableSet.Builder<Range<Token>> full = new ImmutableSet.Builder<>();
 +        ImmutableSet.Builder<Range<Token>> trans = new ImmutableSet.Builder<>();
 +        for (UntypedResultSet.Row row : rs)
          {
 -            rangesToUpdate.add(rangeToBytes(range));
 +            Optional.ofNullable(row.getSet("full_ranges", BytesType.instance))
 +                    .ifPresent(full_ranges -> full_ranges.stream()
 +                            .map(buf -> byteBufferToRange(buf, partitioner))
 +                            .forEach(full::add));
 +            Optional.ofNullable(row.getSet("transient_ranges", BytesType.instance))
 +                    .ifPresent(transient_ranges -> transient_ranges.stream()
 +                            .map(buf -> byteBufferToRange(buf, partitioner))
 +                            .forEach(trans::add));
          }
 -        executeInternal(String.format(cql, AVAILABLE_RANGES), rangesToUpdate, keyspace);
 +        return new AvailableRanges(full.build(), trans.build());
      }
  
 -    public static synchronized Set<Range<Token>> getAvailableRanges(String keyspace, IPartitioner partitioner)
 +    public static class AvailableRanges
      {
 -        Set<Range<Token>> result = new HashSet<>();
 -        String query = "SELECT * FROM system.%s WHERE keyspace_name=?";
 -        UntypedResultSet rs = executeInternal(String.format(query, AVAILABLE_RANGES), keyspace);
 -        for (UntypedResultSet.Row row : rs)
 +        public Set<Range<Token>> full;
 +        public Set<Range<Token>> trans;
 +
 +        private AvailableRanges(Set<Range<Token>> full, Set<Range<Token>> trans)
          {
 -            Set<ByteBuffer> rawRanges = row.getSet("ranges", BytesType.instance);
 -            for (ByteBuffer rawRange : rawRanges)
 -            {
 -                result.add(byteBufferToRange(rawRange, partitioner));
 -            }
 +            this.full = full;
 +            this.trans = trans;
          }
 -        return ImmutableSet.copyOf(result);
      }
  
      public static void resetAvailableRanges()
      {
 -        ColumnFamilyStore availableRanges = Keyspace.open(SchemaConstants.SYSTEM_KEYSPACE_NAME).getColumnFamilyStore(AVAILABLE_RANGES);
 +        ColumnFamilyStore availableRanges = Keyspace.open(SchemaConstants.SYSTEM_KEYSPACE_NAME).getColumnFamilyStore(AVAILABLE_RANGES_V2);
-         availableRanges.truncateBlocking();
+         availableRanges.truncateBlockingWithoutSnapshot();
      }
  
 -    public static synchronized void updateTransferredRanges(String description,
 -                                                         InetAddress peer,
 +    public static synchronized void updateTransferredRanges(StreamOperation streamOperation,
 +                                                         InetAddressAndPort peer,
                                                           String keyspace,
                                                           Collection<Range<Token>> streamedRanges)
      {
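The new `getAvailableRanges` in the hunk above splits streamed ranges into full and transient sets, guarding against a null column value with `Optional.ofNullable(...).ifPresent(...)`. A self-contained sketch of that null-safe accumulation pattern, with plain `String`s standing in for `Range<Token>`:

```java
import java.util.LinkedHashSet;
import java.util.Optional;
import java.util.Set;

public class AvailableRangesSketch
{
    // Null-safe accumulation: a null column set contributes nothing,
    // mirroring Optional.ofNullable(row.getSet(...)).ifPresent(...)
    // in the patched getAvailableRanges.
    static Set<String> accumulate(Set<String> acc, Set<String> maybeNull)
    {
        Optional.ofNullable(maybeNull).ifPresent(acc::addAll);
        return acc;
    }
}
```

Wrapping the possibly-null set in an `Optional` avoids an explicit null check per column and lets absent `full_ranges` or `transient_ranges` values simply contribute nothing to the result.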
