Posted to commits@cassandra.apache.org by bl...@apache.org on 2020/12/02 13:56:26 UTC

[cassandra] 01/01: Merge branch cassandra-3.11 into trunk

This is an automated email from the ASF dual-hosted git repository.

blerer pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/cassandra.git

commit 39d297437983003b8886a3b711ed7f25ea6777f4
Merge: 90b2f3e e94459c
Author: Benjamin Lerer <b....@gmail.com>
AuthorDate: Wed Dec 2 14:47:18 2020 +0100

    Merge branch cassandra-3.11 into trunk

 CHANGES.txt                                        |   3 +
 .../cassandra/db/transform/BaseIterator.java       |  29 ++++-
 .../org/apache/cassandra/metrics/TableMetrics.java |  50 ++++-----
 .../cassandra/db/ColumnFamilyMetricTest.java       | 124 ++++++++++++++++++---
 4 files changed, 157 insertions(+), 49 deletions(-)

diff --cc CHANGES.txt
index 6e96738,5020f99..a51d471
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,58 -1,20 +1,61 @@@
 -3.11.10
 - * Rate limit validation compactions using compaction_throughput_mb_per_sec (CASSANDRA-16161)
 +4.0-beta4
 + * Improve checksumming and compression in protocol V5 (CASSANDRA-15299)
 + * Optimised repair streaming improvements (CASSANDRA-16274)
 + * Update jctools dependency to 3.1.0 (CASSANDRA-16255)
 + * 'SSLEngine closed already' exception on failed outbound connection (CASSANDRA-16277)
 + * Drain and/or shutdown might throw because of slow messaging service shutdown (CASSANDRA-16276)
 + * Upgrade JNA to 5.6.0, dropping support for <=glibc-2.6 systems (CASSANDRA-16212)
 + * Add saved Host IDs to TokenMetadata at startup (CASSANDRA-16246)
 + * Ensure that CacheMetrics.requests is picked up by the metric reporter (CASSANDRA-16228)
 + * Add a ratelimiter to snapshot creation and deletion (CASSANDRA-13019)
 + * Produce consistent tombstone for reads to avoid digest mismatch (CASSANDRA-15369)
 + * Fix SSTableloader issue when restoring a table named backups (CASSANDRA-16235)
 + * Invalid serialized size for responses: increasing the message time by 1ms caused extra bytes in the size calculation (CASSANDRA-16103)
 + * Throw BufferOverflowException from DataOutputBuffer for better visibility (CASSANDRA-16214)
 + * TLS connections to the storage port on a node without server encryption configured cause java.io.IOException accessing missing keystore (CASSANDRA-16144)
 + * Internode messaging catches OOMs and does not rethrow (CASSANDRA-15214)
 +Merged from 3.11:
   * SASI's `max_compaction_flush_memory_in_mb` settings over 100GB revert to default of 1GB (CASSANDRA-16071)
  Merged from 3.0:
+  * Fix the counting of cells per partition (CASSANDRA-16259)
   * Fix serial read/non-applying CAS linearizability (CASSANDRA-12126)
   * Avoid potential NPE in JVMStabilityInspector (CASSANDRA-16294)
   * Improved check of num_tokens against the length of initial_token (CASSANDRA-14477)
   * Fix a race condition on ColumnFamilyStore and TableMetrics (CASSANDRA-16228)
   * Remove the SEPExecutor blocking behavior (CASSANDRA-16186)
 - * Fix invalid cell value skipping when reading from disk (CASSANDRA-16223)
 + * Wait for schema agreement when bootstrapping (CASSANDRA-15158)
   * Prevent invoking enable/disable gossip when not in NORMAL (CASSANDRA-16146)
 + * Raise Dynamic Snitch Default Badness Threshold to 1.0 (CASSANDRA-16285)
+ Merged from 2.2:
+  * Fix the histogram merge of the table metrics (CASSANDRA-16259)
  
 -3.11.9
 - * Synchronize Keyspace instance store/clear (CASSANDRA-16210)
 +4.0-beta3
 + * Segregate Network and Chunk Cache BufferPools and Recirculate Partially Freed Chunks (CASSANDRA-15229)
 + * Fail truncation requests when they fail on a replica (CASSANDRA-16208)
 + * Move compact storage validation earlier in startup process (CASSANDRA-16063)
 + * Fix ByteBufferAccessor cast exceptions are thrown when trying to query a virtual table (CASSANDRA-16155)
 + * Consolidate node liveness check for forced repair (CASSANDRA-16113)
 + * Use unsigned short in ValueAccessor.sliceWithShortLength (CASSANDRA-16147)
 + * Abort repairs when getting a truncation request (CASSANDRA-15854)
 + * Remove bad assert when getting active compactions for an sstable (CASSANDRA-15457)
 + * Avoid failing compactions with very large partitions (CASSANDRA-15164)
 + * Prevent NPE in StreamMessage in type lookup (CASSANDRA-16131)
 + * Avoid invalid state transition exception during incremental repair (CASSANDRA-16067)
 + * Allow zero padding in timestamp serialization (CASSANDRA-16105)
 + * Add byte array backed cells (CASSANDRA-15393)
 + * Correctly handle pending ranges with adjacent range movements (CASSANDRA-14801)
 + * Avoid adding localhost when streaming trivial ranges (CASSANDRA-16099)
 + * Add nodetool getfullquerylog (CASSANDRA-15988)
 + * Fix yaml format and alignment in tpstats (CASSANDRA-11402)
 + * Avoid trying to keep track of RTs for endpoints we won't write to during read repair (CASSANDRA-16084)
 + * When compaction gets interrupted, the exception should include the compactionId (CASSANDRA-15954)
 + * Make Table/Keyspace Metric Names Consistent With Each Other (CASSANDRA-15909)
 + * Mutating sstable component may race with entire-sstable-streaming (ZCS) causing checksum validation failure (CASSANDRA-15861)
 + * NPE thrown while updating speculative execution time if keyspace is removed during task execution (CASSANDRA-15949)
 + * Show the progress of data streaming and index build (CASSANDRA-15406)
 + * Add flag to disable chunk cache and disable by default (CASSANDRA-16036)
 + * Upgrade to snakeyaml >= 1.26 version for CVE-2017-18640 fix (CASSANDRA-16150)
 +Merged from 3.11:
   * Fix ColumnFilter to avoid querying cells of unselected complex columns (CASSANDRA-15977)
   * Fix memory leak in CompressedChunkReader (CASSANDRA-15880)
   * Don't attempt value skipping with mixed version cluster (CASSANDRA-15833)
diff --cc src/java/org/apache/cassandra/db/transform/BaseIterator.java
index d00e406,8c938a3..8d7de47
--- a/src/java/org/apache/cassandra/db/transform/BaseIterator.java
+++ b/src/java/org/apache/cassandra/db/transform/BaseIterator.java
@@@ -43,6 -43,10 +43,10 @@@ abstract class BaseIterator<V, I extend
      // Signals that the current child iterator has been signalled to stop.
      Stop stopChild;
  
 -    // Multiple calls to close can have some side effects on the Transformations. By consequence if the iterator is
++    // Multiple calls to close can have some side effects on the Transformations. As a consequence, if the iterator is
+     // already closed, we want to ignore extra calls to close.
+     private boolean closed;
+ 
      static class Stop
      {
          // TODO: consider moving "next" into here, so that a stop() when signalled outside of a function call (e.g. in attach)
@@@ -83,14 -87,32 +87,31 @@@
  
      public final void close()
      {
+         // If close has already been called, we want to ignore subsequent calls
+         if (closed)
+             return;
+ 
+         closed = true;
 -
          Throwable fail = runOnClose(length);
          if (next instanceof AutoCloseable)
          {
-             try { ((AutoCloseable) next).close(); }
-             catch (Throwable t) { fail = merge(fail, t); }
 -            try
++            try 
+             {
+                 ((AutoCloseable) next).close();
+             }
+             catch (Throwable t)
+             {
+                 fail = merge(fail, t);
+             }
+         }
+         try
+         {
+             input.close();
+         }
+         catch (Throwable t)
+         {
+             fail = merge(fail, t);
          }
-         try { input.close(); }
-         catch (Throwable t) { fail = merge(fail, t); }
          maybeFail(fail);
      }
  
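The BaseIterator hunk above makes close() idempotent: a private `closed` flag is checked on entry, so a second call returns immediately and the close-time side effects on the Transformations run only once. A minimal standalone sketch of the same guard, using an illustrative GuardedResource class rather than the Cassandra types:

    // A sketch only: GuardedResource is an illustrative stand-in, not a Cassandra class.
    public class GuardedResource implements AutoCloseable
    {
        private final AutoCloseable input; // the wrapped resource, closed exactly once
        private boolean closed;            // set by the first close(), checked by every call

        public GuardedResource(AutoCloseable input)
        {
            this.input = input;
        }

        @Override
        public void close() throws Exception
        {
            if (closed)     // ignore extra calls so close-time side effects cannot run twice
                return;
            closed = true;
            input.close();
        }

        public static void main(String[] args) throws Exception
        {
            GuardedResource resource = new GuardedResource(() -> System.out.println("closing underlying resource"));
            resource.close(); // prints once
            resource.close(); // no-op
        }
    }

The sketch uses a plain boolean, which assumes close() is only ever invoked from one thread; if concurrent callers were possible, an AtomicBoolean.compareAndSet guard would be needed instead.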
diff --cc src/java/org/apache/cassandra/metrics/TableMetrics.java
index 0bc66b9,a1ded3f..eb42331
--- a/src/java/org/apache/cassandra/metrics/TableMetrics.java
+++ b/src/java/org/apache/cassandra/metrics/TableMetrics.java
@@@ -17,24 -17,22 +17,24 @@@
   */
  package org.apache.cassandra.metrics;
  
 +import static org.apache.cassandra.metrics.CassandraMetricsRegistry.Metrics;
 +
  import java.nio.ByteBuffer;
- import java.util.ArrayList;
- import java.util.EnumMap;
- import java.util.Iterator;
- import java.util.List;
- import java.util.Set;
+ import java.util.*;
  import java.util.concurrent.ConcurrentHashMap;
  import java.util.concurrent.ConcurrentMap;
  import java.util.concurrent.TimeUnit;
 +import java.util.function.Predicate;
  
 +import com.google.common.collect.Iterables;
  import com.google.common.collect.Maps;
 +import com.google.common.collect.Sets;
 +import com.codahale.metrics.Timer;
 +
+ import com.google.common.annotations.VisibleForTesting;
+ 
+ import org.apache.commons.lang3.ArrayUtils;
+ 
 -import com.codahale.metrics.*;
 -import com.codahale.metrics.Timer;
 -
 -import org.apache.cassandra.config.Schema;
 -import org.apache.cassandra.config.SchemaConstants;
  import org.apache.cassandra.db.ColumnFamilyStore;
  import org.apache.cassandra.db.Keyspace;
  import org.apache.cassandra.db.Memtable;
@@@ -62,12 -51,10 +62,10 @@@ import com.codahale.metrics.RatioGauge
   */
  public class TableMetrics
  {
-     public static final long[] EMPTY = new long[0];
- 
      /** Total amount of data stored in the memtable that resides on-heap, including column related overhead and partitions overwritten. */
 -    public final Gauge<Long> memtableOnHeapSize;
 +    public final Gauge<Long> memtableOnHeapDataSize;
      /** Total amount of data stored in the memtable that resides off-heap, including column related overhead and partitions overwritten. */
 -    public final Gauge<Long> memtableOffHeapSize;
 +    public final Gauge<Long> memtableOffHeapDataSize;
      /** Total amount of live data stored in the memtable, excluding any data structure overhead */
      public final Gauge<Long> memtableLiveDataSize;
      /** Total amount of data stored in the memtables (2i and pending flush memtables included) that resides on-heap. */
diff --cc test/unit/org/apache/cassandra/db/ColumnFamilyMetricTest.java
index efd5017,302f84a..21417ed
--- a/test/unit/org/apache/cassandra/db/ColumnFamilyMetricTest.java
+++ b/test/unit/org/apache/cassandra/db/ColumnFamilyMetricTest.java
@@@ -22,17 -24,15 +24,19 @@@ import java.util.Collection
  import org.junit.BeforeClass;
  import org.junit.Test;
  
 -import com.codahale.metrics.*;
 -
 +import com.codahale.metrics.Counter;
 +import com.codahale.metrics.Gauge;
 +import com.codahale.metrics.Histogram;
 +import com.codahale.metrics.Meter;
 +import com.codahale.metrics.MetricRegistryListener;
 +import com.codahale.metrics.Timer;
  import org.apache.cassandra.SchemaLoader;
  import org.apache.cassandra.Util;
 -import org.apache.cassandra.config.CFMetaData;
  import org.apache.cassandra.io.sstable.format.SSTableReader;
  import org.apache.cassandra.metrics.CassandraMetricsRegistry;
+ import org.apache.cassandra.metrics.TableMetrics;
  import org.apache.cassandra.schema.KeyspaceParams;
++import org.apache.cassandra.schema.TableMetadata;
  import org.apache.cassandra.utils.ByteBufferUtil;
  import org.apache.cassandra.utils.FBUtilities;
  
@@@ -63,11 -64,7 +68,7 @@@ public class ColumnFamilyMetricTes
  
          for (int j = 0; j < 10; j++)
          {
-             new RowUpdateBuilder(cfs.metadata(), FBUtilities.timestampMicros(), String.valueOf(j))
-                     .clustering("0")
-                     .add("val", ByteBufferUtil.EMPTY_BYTE_BUFFER)
-                     .build()
-                     .applyUnsafe();
 -            applyMutation(cfs.metadata, String.valueOf(j), ByteBufferUtil.EMPTY_BYTE_BUFFER, FBUtilities.timestampMicros());
++            applyMutation(cfs.metadata(), String.valueOf(j), ByteBufferUtil.EMPTY_BYTE_BUFFER, FBUtilities.timestampMicros());
          }
          cfs.forceBlockingFlush();
          Collection<SSTableReader> sstables = cfs.getLiveSSTables();
@@@ -99,21 -96,13 +100,13 @@@
          // This confirms another test/set up did not overflow the histogram
          store.metric.colUpdateTimeDeltaHistogram.cf.getSnapshot().get999thPercentile();
  
-         new RowUpdateBuilder(store.metadata(), 0, "4242")
-             .clustering("0")
-             .add("val", ByteBufferUtil.bytes("0"))
-             .build()
-             .applyUnsafe();
 -        applyMutation(store.metadata, "4242", ByteBufferUtil.bytes("0"), 0);
++        applyMutation(store.metadata(), "4242", ByteBufferUtil.bytes("0"), 0);
  
          // The histogram should not have overflowed on the first write
          store.metric.colUpdateTimeDeltaHistogram.cf.getSnapshot().get999thPercentile();
  
          // smallest time delta that would overflow the histogram if unfiltered
-         new RowUpdateBuilder(store.metadata(), 18165375903307L, "4242")
-             .clustering("0")
-             .add("val", ByteBufferUtil.bytes("0"))
-             .build()
-             .applyUnsafe();
 -        applyMutation(store.metadata, "4242", ByteBufferUtil.bytes("1"), 18165375903307L);
++        applyMutation(store.metadata(), "4242", ByteBufferUtil.bytes("1"), 18165375903307L);
  
          // CASSANDRA-11117 - update with large timestamp delta should not overflow the histogram
          store.metric.colUpdateTimeDeltaHistogram.cf.getSnapshot().get999thPercentile();
@@@ -140,6 -129,101 +133,101 @@@
          }
      }
  
+     @Test
+     public void testEstimatedColumnCountHistogramAndEstimatedRowSizeHistogram()
+     {
+         Keyspace keyspace = Keyspace.open("Keyspace1");
+         ColumnFamilyStore store = keyspace.getColumnFamilyStore("Standard2");
+ 
+         store.disableAutoCompaction();
+ 
+         try
+         {
+             // Ensure that there are no SSTables
+             store.truncateBlocking();
+ 
+             assertArrayEquals(new long[0], store.metric.estimatedColumnCountHistogram.getValue());
+ 
 -            applyMutation(store.metadata, "0", bytes(0), FBUtilities.timestampMicros());
 -            applyMutation(store.metadata, "1", bytes(1), FBUtilities.timestampMicros());
++            applyMutation(store.metadata(), "0", bytes(0), FBUtilities.timestampMicros());
++            applyMutation(store.metadata(), "1", bytes(1), FBUtilities.timestampMicros());
+ 
+             // Flushing first SSTable
+             store.forceBlockingFlush();
+ 
+             long[] estimatedColumnCountHistogram = store.metric.estimatedColumnCountHistogram.getValue();
+             assertNumberOfNonZeroValue(estimatedColumnCountHistogram, 1);
+             assertEquals(2, estimatedColumnCountHistogram[0]); // 2 rows of one cell in 1 SSTable
+ 
+             long[] estimatedRowSizeHistogram = store.metric.estimatedPartitionSizeHistogram.getValue();
+             // Due to the timestamps we cannot guarantee the size of the row, so we can only check the number of histogram updates.
+             assertEquals(sumValues(estimatedRowSizeHistogram), 2);
+ 
 -            applyMutation(store.metadata, "2", bytes(2), FBUtilities.timestampMicros());
++            applyMutation(store.metadata(), "2", bytes(2), FBUtilities.timestampMicros());
+ 
+             // Flushing second SSTable
+             store.forceBlockingFlush();
+ 
+             estimatedColumnCountHistogram = store.metric.estimatedColumnCountHistogram.getValue();
+             assertNumberOfNonZeroValue(estimatedColumnCountHistogram, 1);
+             assertEquals(3, estimatedColumnCountHistogram[0]); // 2 rows of one cell in the first SSTable and 1 row of one cell in the second SSTable
+ 
+             estimatedRowSizeHistogram = store.metric.estimatedPartitionSizeHistogram.getValue();
+             assertEquals(sumValues(estimatedRowSizeHistogram), 3);
+         }
+         finally
+         {
+             store.enableAutoCompaction();
+         }
+     }
+ 
+     @Test
+     public void testAddHistogram()
+     {
+         long[] sums = new long[] {0, 0, 0};
+         long[] smaller = new long[] {1, 2};
+ 
+         long[] result = TableMetrics.addHistogram(sums, smaller);
+         assertTrue(result == sums); // Check that we did not create a new array
+         assertArrayEquals(new long[]{1, 2, 0}, result);
+ 
+         long[] equal = new long[] {5, 6, 7};
+ 
+         result = TableMetrics.addHistogram(sums, equal);
+         assertTrue(result == sums); // Check that we did not create a new array
+         assertArrayEquals(new long[]{6, 8, 7}, result);
+ 
+         long[] empty = new long[0];
+ 
+         result = TableMetrics.addHistogram(sums, empty);
+         assertTrue(result == sums); // Check that we did not create a new array
+         assertArrayEquals(new long[]{6, 8, 7}, result);
+ 
+         long[] greater = new long[] {4, 3, 2, 1};
+         result = TableMetrics.addHistogram(sums, greater);
+         assertFalse(result == sums); // Check that we created a new array
+         assertArrayEquals(new long[]{10, 11, 9, 1}, result);
+     }
+ 
 -    private static void applyMutation(CFMetaData metadata, Object pk, ByteBuffer value, long timestamp)
++    private static void applyMutation(TableMetadata metadata, Object pk, ByteBuffer value, long timestamp)
+     {
+         new RowUpdateBuilder(metadata, timestamp, pk).clustering("0")
+                                                      .add("val", value)
+                                                      .build()
+                                                      .applyUnsafe();
+     }
+ 
+     private static void assertNumberOfNonZeroValue(long[] array, long expectedCount)
+     {
+         long actualCount = Arrays.stream(array).filter(v -> v != 0).count();
+         if (expectedCount != actualCount)
+             fail("Unexpected number of non-zero values (expected: " + expectedCount + ", actual: " + actualCount + ", array: " + Arrays.toString(array) + ")");
+     }
+ 
+     private static long sumValues(long[] array)
+     {
+         return Arrays.stream(array).sum();
+     }
+ 
      private static class TestBase extends MetricRegistryListener.Base
      {
          @Override

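The new testAddHistogram cases above pin down the merge contract behind "Fix the histogram merge of the table metrics (CASSANDRA-16259)", which the estimatedColumnCountHistogram test also exercises across flushed SSTables: buckets are summed element-wise into the first array, the accumulator is reused whenever it is at least as long as the incoming histogram, and a wider array is allocated only when the incoming histogram has more buckets. A minimal sketch that satisfies those assertions (a standalone helper for illustration, not the actual TableMetrics.addHistogram implementation):

    import java.util.Arrays;

    // A sketch only: mirrors the behaviour asserted by testAddHistogram,
    // not the actual TableMetrics.addHistogram implementation.
    public final class HistogramMergeSketch
    {
        // Adds `other` into `sums` bucket by bucket and returns the accumulator.
        // `sums` is reused unless `other` has more buckets, in which case a widened copy is returned.
        public static long[] addHistogram(long[] sums, long[] other)
        {
            if (other.length > sums.length)
                sums = Arrays.copyOf(sums, other.length); // the only case that allocates

            for (int i = 0; i < other.length; i++)
                sums[i] += other[i];

            return sums;
        }

        public static void main(String[] args)
        {
            long[] sums = new long[] {0, 0, 0};
            sums = addHistogram(sums, new long[] {1, 2});                 // [1, 2, 0], same array
            sums = addHistogram(sums, new long[] {5, 6, 7});              // [6, 8, 7], same array
            sums = addHistogram(sums, new long[0]);                       // [6, 8, 7], same array
            long[] widened = addHistogram(sums, new long[] {4, 3, 2, 1}); // [10, 11, 9, 1], new array
            System.out.println(Arrays.toString(widened));
        }
    }

Reusing the accumulator means the per-table gauge avoids an allocation for every SSTable whose histogram has the same bucket count as the running sum.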

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@cassandra.apache.org
For additional commands, e-mail: commits-help@cassandra.apache.org