Posted to commits@druid.apache.org by GitBox <gi...@apache.org> on 2018/12/17 19:44:36 UTC

[GitHub] fjy closed pull request #6749: Fix doc for automatic compaction

URL: https://github.com/apache/incubator-druid/pull/6749
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:


diff --git a/docs/content/design/coordinator.md b/docs/content/design/coordinator.md
index a8580254cbd..90f5e28178d 100644
--- a/docs/content/design/coordinator.md
+++ b/docs/content/design/coordinator.md
@@ -66,14 +66,14 @@ To ensure an even distribution of segments across historical nodes in the cluste
 ### Compacting Segments
 
 Each run, the Druid coordinator compacts small segments abutting each other. This is useful when you have a lot of small
-segments which may degrade the query performance as well as increasing the disk usage. Note that the data for an interval
-cannot be compacted across the segments.
+segments which may degrade query performance as well as increase disk space usage.
 
 The coordinator first finds the segments to compact together based on the [segment search policy](#segment-search-policy).
-Once it finds some segments, it launches a [compact task](../ingestion/tasks.html#compaction-task) to compact those segments.
-The maximum number of running compact tasks is `max(sum of worker capacity * slotRatio, maxSlots)`.
-Note that even though `max(sum of worker capacity * slotRatio, maxSlots)` = 1, at least one compact task is always submitted
-once a compaction is configured for a dataSource. See [Compaction Configuration API](../operations/api-reference.html#compaction-configuration) to set those values.
+Once some segments are found, it launches a [compact task](../ingestion/tasks.html#compaction-task) to compact those segments.
+The maximum number of running compact tasks is `min(sum of worker capacity * slotRatio, maxSlots)`.
+Note that even if `min(sum of worker capacity * slotRatio, maxSlots)` is 0, at least one compact task is always submitted
+once compaction is enabled for a dataSource.
+See [Compaction Configuration API](../operations/api-reference.html#compaction-configuration) and [Compaction Configuration](../configuration/index.html#compaction-dynamic-configuration) to enable compaction.
 
 Compact tasks might fail due to some reasons.
 
@@ -82,9 +82,6 @@ Compact tasks might fail due to some reasons.
 
 Once a compact task fails, the coordinator simply finds the segments for the interval of the failed task again, and launches a new compact task in the next run.
 
-To use this feature, you need to set some configurations for dataSources you want to compact.
-Please see [Compaction Configuration](../configuration/index.html#compaction-dynamic-configuration) for more details.
-
 ### Segment Search Policy
 
 #### Newest Segment First Policy

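For orientation, below is a minimal sketch of submitting a compact task by hand, to illustrate the kind of task the coordinator launches automatically per the diff above. The Overlord address, dataSource name, and interval are made-up examples, and the exact spec fields accepted by a given Druid version are in the linked compaction-task docs, so treat this as an illustration rather than a definitive reference.

# Sketch only: manual submission of a compaction task. Values are examples.
import requests

overlord = "http://localhost:8090"  # assumed Overlord address

task_spec = {
    "type": "compact",                    # compaction task type
    "dataSource": "wikipedia",            # example dataSource
    "interval": "2018-01-01/2018-02-01",  # interval whose segments get compacted
}

# POST the task spec to the Overlord's task endpoint.
resp = requests.post(f"{overlord}/druid/indexer/v1/task", json=task_spec)
resp.raise_for_status()
print(resp.json())  # response includes the submitted task's id

The coordinator submits equivalent tasks on its own once compaction is configured, so the manual form is mainly useful for compacting a single interval on demand.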

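The task-slot cap described in the new doc text, `min(sum of worker capacity * slotRatio, maxSlots)`, together with the rule that at least one compact task is always submitted once compaction is enabled, can be summarized in a short sketch. The function name and integer rounding below are illustrative assumptions, not Druid's actual implementation:

# Sketch of the cap on concurrently running compact tasks.
def compact_task_slot_cap(total_worker_capacity: int,
                          slot_ratio: float,
                          max_slots: int) -> int:
    # min(sum of worker capacity * slotRatio, maxSlots)
    cap = min(int(total_worker_capacity * slot_ratio), max_slots)
    # Even a computed cap of 0 still allows one compact task per coordinator run
    # once compaction is enabled for a dataSource.
    return max(cap, 1)

print(compact_task_slot_cap(10, 0.1, 100))  # -> 1
print(compact_task_slot_cap(5, 0.1, 100))   # 0.5 rounds down to 0, but one task still runs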
 
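Finally, a hedged sketch of enabling compaction for a dataSource through the Compaction Configuration API that the new text links to. The endpoint path and field names below are assumptions drawn from that reference and may vary across Druid versions:

# Sketch only: registering a per-dataSource compaction config with the Coordinator.
import requests

coordinator = "http://localhost:8081"  # assumed Coordinator address

compaction_config = {
    "dataSource": "wikipedia",      # dataSource to auto-compact (example name)
    "skipOffsetFromLatest": "P1D",  # assumed field: leave the most recent day alone
}

resp = requests.post(
    f"{coordinator}/druid/coordinator/v1/config/compaction",
    json=compaction_config,
)
resp.raise_for_status()

Whether slotRatio and maxSlots are set through this same API or a related endpoint depends on the Druid version; the linked API reference is authoritative.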
