Posted to commits@cassandra.apache.org by ad...@apache.org on 2021/09/30 12:16:29 UTC

[cassandra] branch cassandra-3.11 updated (7c067b6 -> dcc9549)

This is an automated email from the ASF dual-hosted git repository.

adelapena pushed a change to branch cassandra-3.11
in repository https://gitbox.apache.org/repos/asf/cassandra.git.


    from 7c067b6  Add indication in cassandra.yaml that rpc timeouts going too high will cause memory build up
     new 3e6faca  Do not release new SSTables in offline transactions
     new dcc9549  Merge branch 'cassandra-3.0' into cassandra-3.11

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 CHANGES.txt                                        |  1 +
 .../cassandra/db/compaction/CompactionTask.java    | 81 +++++++++----------
 .../db/compaction/CompactionTaskTest.java          | 91 ++++++++++++++++++++++
 3 files changed, 130 insertions(+), 43 deletions(-)
 create mode 100644 test/unit/org/apache/cassandra/db/compaction/CompactionTaskTest.java
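
For context on the fix itself (CASSANDRA-16975): before this patch, CompactionTask
released the self-references of the freshly written SSTables when the transaction
was offline, which could pull them out from under offline tools that still needed
them. The patch replaces that branch with an early return, leaving the references
intact for the caller. Below is a minimal, self-contained Java sketch of that
control-flow change; the Transaction interface and method names are illustrative
stand-ins, not the actual Cassandra API.

    import java.util.List;

    class OfflineCompactionSketch
    {
        // Illustrative stand-in for Cassandra's lifecycle transaction.
        interface Transaction { boolean isOffline(); }

        static void finish(Transaction txn, List<String> newSSTables)
        {
            // Before the patch the offline branch released the new SSTables'
            // self-references. After the patch it returns early and keeps the
            // references alive, since the offline caller still owns them.
            if (txn.isOffline())
                return;

            // Online path only: log statistics and update metrics.
            System.out.printf("compacted into %d sstable(s)%n", newSSTables.size());
        }

        public static void main(String[] args)
        {
            finish(() -> true,  List.of("big-Data.db")); // offline: early return
            finish(() -> false, List.of("big-Data.db")); // online: logs stats
        }
    }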



[cassandra] 01/01: Merge branch 'cassandra-3.0' into cassandra-3.11

Posted by ad...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

adelapena pushed a commit to branch cassandra-3.11
in repository https://gitbox.apache.org/repos/asf/cassandra.git

commit dcc95492e9b05e1b8ceb3af332d5c0989c7272b0
Merge: 7c067b6 3e6faca
Author: Andrés de la Peña <a....@gmail.com>
AuthorDate: Thu Sep 30 13:07:26 2021 +0100

    Merge branch 'cassandra-3.0' into cassandra-3.11
    
    # Conflicts:
    #	CHANGES.txt
    #	src/java/org/apache/cassandra/db/compaction/CompactionTask.java

 CHANGES.txt                                        |  1 +
 .../cassandra/db/compaction/CompactionTask.java    | 81 +++++++++----------
 .../db/compaction/CompactionTaskTest.java          | 91 ++++++++++++++++++++++
 3 files changed, 130 insertions(+), 43 deletions(-)

diff --cc CHANGES.txt
index 406efbc,6ef52e4..84c5bcf
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,15 -1,5 +1,16 @@@
 -3.0.26:
 +3.11.12
 + * Add key validation to sstablescrub (CASSANDRA-16969)
 + * Update Jackson from 2.9.10 to 2.12.5 (CASSANDRA-16851)
 + * Include SASI components to snapshots (CASSANDRA-15134)
 + * Make assassinate more resilient to missing tokens (CASSANDRA-16847)
 + * Exclude Jackson 1.x transitive dependency of hadoop* provided dependencies (CASSANDRA-16854)
 + * Validate SASI tokenizer options before adding index to schema (CASSANDRA-15135)
 + * Fixup scrub output when no data post-scrub and clear up old use of row, which really means partition (CASSANDRA-16835)
 + * Fix ant-junit dependency issue (CASSANDRA-16827)
 + * Reduce thread contention in CommitLogSegment and HintsBuffer (CASSANDRA-16072)
 + * Avoid sending CDC column if not enabled (CASSANDRA-16770)
 +Merged from 3.0:
+  * Do not release new SSTables in offline transactions (CASSANDRA-16975)
   * ArrayIndexOutOfBoundsException in FunctionResource#fromName (CASSANDRA-16977, CASSANDRA-16995)
   * CVE-2015-0886 Security vulnerability in jbcrypt is addressed (CASSANDRA-9384)
   * Avoid useless SSTable reads during single partition queries (CASSANDRA-16944)
diff --cc src/java/org/apache/cassandra/db/compaction/CompactionTask.java
index 2efcd11,d29d5e6..b990020
--- a/src/java/org/apache/cassandra/db/compaction/CompactionTask.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionTask.java
@@@ -233,50 -217,25 +233,45 @@@ public class CompactionTask extends Abs
                  }
              }
  
 +            if (transaction.isOffline())
-             {
-                 Refs.release(Refs.selfRefs(newSStables));
-             }
-             else
-             {
-                 // log a bunch of statistics about the result and save to system table compaction_history
- 
-                 long durationInNano = System.nanoTime() - start;
-                 long dTime = TimeUnit.NANOSECONDS.toMillis(durationInNano);
-                 long startsize = inputSizeBytes;
-                 long endsize = SSTableReader.getTotalBytes(newSStables);
-                 double ratio = (double) endsize / (double) startsize;
- 
-                 StringBuilder newSSTableNames = new StringBuilder();
-                 for (SSTableReader reader : newSStables)
-                     newSSTableNames.append(reader.descriptor.baseFilename()).append(",");
-                 long totalSourceRows = 0;
-                 for (int i = 0; i < mergedRowCounts.length; i++)
-                     totalSourceRows += mergedRowCounts[i] * (i + 1);
- 
-                 String mergeSummary = updateCompactionHistory(cfs.keyspace.getName(), cfs.getTableName(), mergedRowCounts, startsize, endsize);
-                 logger.debug(String.format("Compacted (%s) %d sstables to [%s] to level=%d.  %s to %s (~%d%% of original) in %,dms.  Read Throughput = %s, Write Throughput = %s, Row Throughput = ~%,d/s.  %,d total partitions merged to %,d.  Partition merge counts were {%s}",
-                                            taskId,
-                                            transaction.originals().size(),
-                                            newSSTableNames.toString(),
-                                            getLevel(),
-                                            FBUtilities.prettyPrintMemory(startsize),
-                                            FBUtilities.prettyPrintMemory(endsize),
-                                            (int) (ratio * 100),
-                                            dTime,
-                                            FBUtilities.prettyPrintMemoryPerSecond(startsize, durationInNano),
-                                            FBUtilities.prettyPrintMemoryPerSecond(endsize, durationInNano),
-                                            (int) totalSourceCQLRows / (TimeUnit.NANOSECONDS.toSeconds(durationInNano) + 1),
-                                            totalSourceRows,
-                                            totalKeysWritten,
-                                            mergeSummary));
-                 logger.trace("CF Total Bytes Compacted: {}", FBUtilities.prettyPrintMemory(CompactionTask.addToTotalBytesCompacted(endsize)));
-                 logger.trace("Actual #keys: {}, Estimated #keys:{}, Err%: {}", totalKeysWritten, estimatedKeys, ((double)(totalKeysWritten - estimatedKeys)/totalKeysWritten));
-                 cfs.getCompactionStrategyManager().compactionLogger.compaction(startTime, transaction.originals(), System.currentTimeMillis(), newSStables);
- 
-                 // update the metrics
-                 cfs.metric.compactionBytesWritten.inc(endsize);
-             }
++                return;
++
+             // log a bunch of statistics about the result and save to system table compaction_history
 -            long dTime = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
 -            long startsize = SSTableReader.getTotalBytes(transaction.originals());
++            long durationInNano = System.nanoTime() - start;
++            long dTime = TimeUnit.NANOSECONDS.toMillis(durationInNano);
++            long startsize = inputSizeBytes;
+             long endsize = SSTableReader.getTotalBytes(newSStables);
+             double ratio = (double) endsize / (double) startsize;
+ 
+             StringBuilder newSSTableNames = new StringBuilder();
+             for (SSTableReader reader : newSStables)
+                 newSSTableNames.append(reader.descriptor.baseFilename()).append(",");
 -
 -            if (offline)
 -                return;
 -
 -            double mbps = dTime > 0 ? (double) endsize / (1024 * 1024) / ((double) dTime / 1000) : 0;
 -            Summary mergeSummary = updateCompactionHistory(cfs.keyspace.getName(), cfs.getColumnFamilyName(), mergedRowCounts, startsize, endsize);
 -            logger.debug(String.format("Compacted (%s) %d sstables to [%s] to level=%d.  %,d bytes to %,d (~%d%% of original) in %,dms = %fMB/s.  %,d total partitions merged to %,d.  Partition merge counts were {%s}",
 -                                       taskId, transaction.originals().size(), newSSTableNames.toString(), getLevel(), startsize, endsize, (int) (ratio * 100), dTime, mbps, mergeSummary.totalSourceRows, totalKeysWritten, mergeSummary.partitionMerge));
 -            logger.trace(String.format("CF Total Bytes Compacted: %,d", CompactionTask.addToTotalBytesCompacted(endsize)));
++            long totalSourceRows = 0;
++            for (int i = 0; i < mergedRowCounts.length; i++)
++                totalSourceRows += mergedRowCounts[i] * (i + 1);
++
++            String mergeSummary = updateCompactionHistory(cfs.keyspace.getName(), cfs.getTableName(), mergedRowCounts, startsize, endsize);
++            logger.debug(String.format("Compacted (%s) %d sstables to [%s] to level=%d.  %s to %s (~%d%% of original) in %,dms.  Read Throughput = %s, Write Throughput = %s, Row Throughput = ~%,d/s.  %,d total partitions merged to %,d.  Partition merge counts were {%s}",
++                                       taskId,
++                                       transaction.originals().size(),
++                                       newSSTableNames.toString(),
++                                       getLevel(),
++                                       FBUtilities.prettyPrintMemory(startsize),
++                                       FBUtilities.prettyPrintMemory(endsize),
++                                       (int) (ratio * 100),
++                                       dTime,
++                                       FBUtilities.prettyPrintMemoryPerSecond(startsize, durationInNano),
++                                       FBUtilities.prettyPrintMemoryPerSecond(endsize, durationInNano),
++                                       (int) totalSourceCQLRows / (TimeUnit.NANOSECONDS.toSeconds(durationInNano) + 1),
++                                       totalSourceRows,
++                                       totalKeysWritten,
++                                       mergeSummary));
++            logger.trace("CF Total Bytes Compacted: {}", FBUtilities.prettyPrintMemory(CompactionTask.addToTotalBytesCompacted(endsize)));
+             logger.trace("Actual #keys: {}, Estimated #keys:{}, Err%: {}", totalKeysWritten, estimatedKeys, ((double)(totalKeysWritten - estimatedKeys)/totalKeysWritten));
++            cfs.getCompactionStrategyManager().compactionLogger.compaction(startTime, transaction.originals(), System.currentTimeMillis(), newSStables);
++
++            // update the metrics
++            cfs.metric.compactionBytesWritten.inc(endsize);
          }
      }
  

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@cassandra.apache.org
For additional commands, e-mail: commits-help@cassandra.apache.org