Posted to commits@druid.apache.org by GitBox <gi...@apache.org> on 2020/09/15 04:51:47 UTC

[GitHub] [druid] jon-wei commented on a change in pull request #10371: Auto-compaction snapshot status API

jon-wei commented on a change in pull request #10371:
URL: https://github.com/apache/druid/pull/10371#discussion_r488322171



##########
File path: server/src/main/java/org/apache/druid/server/coordinator/AutoCompactionSnapshot.java
##########
@@ -0,0 +1,192 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.server.coordinator;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonProperty;
+
+import javax.annotation.Nullable;
+import javax.validation.constraints.NotNull;
+import java.util.Objects;
+
+public class AutoCompactionSnapshot
+{
+  public enum AutoCompactionScheduleStatus
+  {
+    NOT_ENABLED,
+    RUNNING
+  }
+
+  @JsonProperty
+  private String dataSource;
+  @JsonProperty
+  private AutoCompactionScheduleStatus scheduleStatus;
+  @JsonProperty
+  private String latestScheduledTaskId;
+  @JsonProperty
+  private long byteCountAwaitingCompaction;

Review comment:
       suggest just "bytes" instead of "byteCount" for the property name
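
A minimal, hypothetical sketch of the rename being suggested (class and field names here are illustrative, not taken from the PR): with Jackson, the serialized JSON key can be set explicitly via `@JsonProperty`, independently of the Java field name, so the wire format can say "bytes..." even if the field keeps a longer name.

```java
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.databind.ObjectMapper;

public class SnapshotNamingSketch
{
  static class Snapshot
  {
    // The JSON key is taken from the annotation value, not the field name,
    // so the property can be exposed as "bytesAwaitingCompaction".
    @JsonProperty("bytesAwaitingCompaction")
    private final long byteCountAwaitingCompaction;

    Snapshot(long bytes)
    {
      this.byteCountAwaitingCompaction = bytes;
    }
  }

  public static void main(String[] args) throws Exception
  {
    // Serializes to a JSON object keyed by "bytesAwaitingCompaction".
    String json = new ObjectMapper().writeValueAsString(new Snapshot(1024L));
    System.out.println(json);
  }
}
```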

##########
File path: server/src/main/java/org/apache/druid/server/coordinator/duty/NewestSegmentFirstIterator.java
##########
@@ -112,27 +114,38 @@
   }
 
   @Override
-  public Object2LongOpenHashMap<String> totalRemainingSegmentsSizeBytes()
+  public Map<String, CompactionStatistics> totalRemainingStatistics()
   {
-    final Object2LongOpenHashMap<String> resultMap = new Object2LongOpenHashMap<>();
-    resultMap.defaultReturnValue(UNKNOWN_TOTAL_REMAINING_SEGMENTS_SIZE);
-    for (QueueEntry entry : queue) {
-      final VersionedIntervalTimeline<String, DataSegment> timeline = dataSources.get(entry.getDataSource());
-      final Interval interval = new Interval(timeline.first().getInterval().getStart(), entry.interval.getEnd());
-
-      final List<TimelineObjectHolder<String, DataSegment>> holders = timeline.lookup(interval);
-
-      long size = 0;
-      for (DataSegment segment : FluentIterable
-          .from(holders)
-          .transformAndConcat(TimelineObjectHolder::getObject)
-          .transform(PartitionChunk::getObject)) {
-        size += segment.getSize();
-      }
+    return remainingSegments;
+  }
+
+  @Override
+  public Map<String, CompactionStatistics> totalProcessedStatistics()
+  {
+    return processedSegments;
+  }
 
-      resultMap.put(entry.getDataSource(), size);
+  @Override
+  public void flushAllSegments()
+  {
+    if (queue.isEmpty()) {
+      return;
+    }
+    QueueEntry entry;
+    while ((entry = queue.poll()) != null) {
+      final List<DataSegment> resultSegments = entry.segments;
+      final String dataSourceName = resultSegments.get(0).getDataSource();
+      // This entry was still in the queue, meaning it was not processed. Hence, we also aggregate its
+      // statistics into the remaining segment counts.
+      collectSegmentStatistics(remainingSegments, dataSourceName, new SegmentsToCompact(entry.segments));
+      final CompactibleTimelineObjectHolderCursor compactibleTimelineObjectHolderCursor = timelineIterators.get(
+          dataSourceName
+      );
+      // WARNING: This iterates the compactibleTimelineObjectHolderCursor.
+      // Since this method is intended to be used only after all necessary iteration is done on this iterator

Review comment:
       I don't think I understand this comment: if all iteration is done (by that, do you mean `compactibleTimelineObjectHolderCursor.hasNext` returns false?), then iterateAllSegments would do nothing.
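
To illustrate the question being raised, a self-contained toy sketch (toy types, not Druid code): once an iterator's `hasNext` returns false, a loop that drains the rest of it is a no-op.

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class DrainAfterExhaustionSketch
{
  // Consumes every remaining element and returns how many were consumed.
  static long drainCount(Iterator<Integer> it)
  {
    long n = 0;
    while (it.hasNext()) {
      it.next();
      n++;
    }
    return n;
  }

  public static void main(String[] args)
  {
    List<Integer> segments = Arrays.asList(1, 2, 3);
    Iterator<Integer> it = segments.iterator();
    drainCount(it);                 // exhausts the iterator (consumes 3)
    long second = drainCount(it);   // hasNext() is now false
    System.out.println(second);     // prints 0: nothing left to iterate
  }
}
```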

##########
File path: server/src/main/java/org/apache/druid/server/coordinator/duty/CompactSegments.java
##########
@@ -238,25 +272,102 @@ private CoordinatorStats makeStats(int numCompactionTasks, CompactionSegmentIter
   {
     final CoordinatorStats stats = new CoordinatorStats();
     stats.addToGlobalStat(COMPACTION_TASK_COUNT, numCompactionTasks);
-    totalSizesOfSegmentsAwaitingCompactionPerDataSource = iterator.totalRemainingSegmentsSizeBytes();
-    totalSizesOfSegmentsAwaitingCompactionPerDataSource.object2LongEntrySet().fastForEach(
-        entry -> {
-          final String dataSource = entry.getKey();
-          final long totalSizeOfSegmentsAwaitingCompaction = entry.getLongValue();
-          stats.addToDataSourceStat(
-              TOTAL_SIZE_OF_SEGMENTS_AWAITING_COMPACTION,
-              dataSource,
-              totalSizeOfSegmentsAwaitingCompaction
-          );
-        }
-    );
+
+    // Make sure the iterator runs through all the remaining segments so that we can get accurate
+    // statistics (remaining, skipped, processed, etc.). We have to do this explicitly here because
+    // earlier (while iterating to submit compaction tasks) we may have run out of task slots and been
+    // unable to reach the first segment that needs compaction for some datasource.
+    iterator.flushAllSegments();

Review comment:
       Are there any concerns with performance overhead from this?
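
A rough, stdlib-only sketch of one component of the cost in question (names are illustrative, not Druid's): draining a priority queue of N remaining entries takes N `poll()` calls, O(N log N) overall, which is cheap for typical datasource counts but grows with the number of unprocessed entries.

```java
import java.util.PriorityQueue;

public class FlushCostSketch
{
  // Polls every remaining entry off the queue and returns how many were drained.
  static long drain(PriorityQueue<Long> queue)
  {
    long drained = 0;
    while (queue.poll() != null) {
      drained++;
    }
    return drained;
  }

  public static void main(String[] args)
  {
    PriorityQueue<Long> queue = new PriorityQueue<>();
    for (long i = 0; i < 100_000; i++) {
      queue.add(i);
    }
    long start = System.nanoTime();
    long drained = drain(queue);
    long elapsedMs = (System.nanoTime() - start) / 1_000_000;
    System.out.println(drained + " entries drained in " + elapsedMs + " ms");
  }
}
```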




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@druid.apache.org
For additional commands, e-mail: commits-help@druid.apache.org