Posted to commits@druid.apache.org by GitBox <gi...@apache.org> on 2021/03/02 18:28:47 UTC

[GitHub] [druid] techdocsmith opened a new pull request #10935: First refactor of compaction

techdocsmith opened a new pull request #10935:
URL: https://github.com/apache/druid/pull/10935


   #10897
   
   ### First pass refactor / update of compaction docs
   
   
   Updates to "Data management" topic as follows:
   - Adds an introduction that describes the content in the topic.
   - Removes a duplicated section about "Schema changes" and leaves it in design/segments.md
   
   Adds a new topic, "Compaction", that defines compaction and automatic compaction as a strategy for segment optimization. It covers use cases for compaction, data handling with compaction, and compaction task configuration.
   - This includes the behavior added in #10856
   
   Repairs links for the refactor above.
   
   This PR doesn't handle the remaining task of identifying reindexing and compaction as data management tasks for existing data and comparing the use cases between the two. This should come in a subsequent PR.
   
   @maytasm , I need help with the TBD on compaction about use cases when automatic compaction is not an option.
   
   cc: @suneet-s , @loquisgon , @sthetland 
   
   
   <hr>
   
   This PR has:
   - [x] been self-reviewed.
   - [x] added documentation for new or modified features or behaviors.


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@druid.apache.org
For additional commands, e-mail: commits-help@druid.apache.org


[GitHub] [druid] suneet-s commented on a change in pull request #10935: First refactor of compaction

Posted by GitBox <gi...@apache.org>.
suneet-s commented on a change in pull request #10935:
URL: https://github.com/apache/druid/pull/10935#discussion_r592064655



##########
File path: docs/ingestion/compaction.md
##########
@@ -0,0 +1,210 @@
+---
+id: compaction
+title: "Compaction"
+description: "Defines compaction and automatic compaction (auto-compaction or autocompaction) as a strategy for segment optimization. Use cases for compaction. Describes compaction task configuration."
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+Query performance in Apache Druid depends on optimally sized segments. Compaction is one strategy you can use to optimize segment size for your Druid database. Compaction tasks read an existing set of segments for a given time interval and combine the data into a new "compacted" set of segments. The compacted segments are generally larger, but there are fewer of them. Compaction increases performance because fewer segments require less per-segment processing and memory overhead on the ingestion and query paths.
+
+As a general strategy, compaction is effective when you have data arriving out of chronological order, resulting in many small segments. This often happens, for example, if you are appending data using `appendToExisting` for [native batch](./native_batch.md) ingestion. Conversely, if you are rewriting your data with each ingestion task, you don't need to use compaction.

Review comment:
       This is slightly more nuanced. There are several reasons why you might find compaction useful, even when data does arrive in chronological order - like when the parallelism in your parallel ingest task causes Druid to create many small segments. Also, the way this is worded makes me think only of streaming data (this might just be me), but this should also apply to batch ingestion when you "append" data.
   
   Perhaps a section that talks about all the reasons why someone would want to enable compaction would be helpful.
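For readers following the docs discussion: a minimal manual compaction task of the kind the new page describes might look like the sketch below. This is an illustration only - the `ioConfig`/`inputSpec` shape follows the compaction task syntax of this era and may differ across Druid versions, and the dataSource and interval are placeholder values:

```json
{
  "type": "compact",
  "dataSource": "wikiticker",
  "ioConfig": {
    "type": "compact",
    "inputSpec": {
      "type": "interval",
      "interval": "2020-01-01/2020-02-01"
    }
  }
}
```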






[GitHub] [druid] 2bethere commented on a change in pull request #10935: First refactor of compaction

Posted by GitBox <gi...@apache.org>.
2bethere commented on a change in pull request #10935:
URL: https://github.com/apache/druid/pull/10935#discussion_r588735377



##########
File path: docs/configuration/index.md
##########
@@ -820,24 +820,24 @@ A description of the compaction config is:
 |`taskPriority`|[Priority](../ingestion/tasks.md#priority) of compaction task.|no (default = 25)|
 |`inputSegmentSizeBytes`|Maximum number of total segment bytes processed per compaction task. Since a time chunk must be processed in its entirety, if the segments for a particular time chunk have a total size in bytes greater than this parameter, compaction will not run for that time chunk. Because each compaction task runs with a single thread, setting this value too far above 1–2GB will result in compaction tasks taking an excessive amount of time.|no (default = 419430400)|
 |`maxRowsPerSegment`|Max number of rows per segment after compaction.|no|
-|`skipOffsetFromLatest`|The offset for searching segments to be compacted. Strongly recommended to set for realtime dataSources. |no (default = "P1D")|
+|`skipOffsetFromLatest`|The offset for searching segments to be compacted in [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) duration format. Strongly recommended to set for realtime dataSources. See [Data handling with compaction](../ingestion/compaction.md#data-handling-with-compaction)|no (default = "P1D")|
 |`tuningConfig`|Tuning config for compaction tasks. See below [Compaction Task TuningConfig](#compaction-tuningconfig).|no|
 |`taskContext`|[Task context](../ingestion/tasks.md#context) for compaction tasks.|no|
+|`granularitySpec`|Custom `granularitySpec` to describe the `segmentGranularity` for the compacted segments.|no|
 
 An example of compaction config is:
 
 ```json
 {
-  "dataSource": "wikiticker"
+  "dataSource": "wikiticker",
+  "granularitySpec" : {
+    "segmentGranularity : "none"
+  }
 }
 ```
 
-Note that compaction tasks can fail if their locks are revoked by other tasks of higher priorities.
-Since realtime tasks have a higher priority than compaction task by default,
-it can be problematic if there are frequent conflicts between compaction tasks and realtime tasks.
-If this is the case, the coordinator's automatic compaction might get stuck because of frequent compaction task failures.
-This kind of problem may happen especially in Kafka/Kinesis indexing systems which allow late data arrival.
-If you see this problem, it's recommended to set `skipOffsetFromLatest` to some large enough value to avoid such conflicts between compaction tasks and realtime tasks.
+Compaction tasks fail when higher priority tasks cause Druid to revoke their locks. By default, realtime tasks like ingestion have a higher priority than compaction tasks. Therefore, frequent conflicts between compaction tasks and realtime tasks can cause the coordinator's automatic compaction to get stuck.

Review comment:
       Nice! This is way clearer.
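To illustrate the `skipOffsetFromLatest` guidance in the hunk above, an auto-compaction config that keeps the compaction search window away from fresh realtime data might look like this sketch (the `P4D` offset and `day` granularity are placeholder values, not recommendations from this thread):

```json
{
  "dataSource": "wikiticker",
  "skipOffsetFromLatest": "P4D",
  "granularitySpec": {
    "segmentGranularity": "day"
  }
}
```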






[GitHub] [druid] techdocsmith commented on a change in pull request #10935: First refactor of compaction

Posted by GitBox <gi...@apache.org>.
techdocsmith commented on a change in pull request #10935:
URL: https://github.com/apache/druid/pull/10935#discussion_r596194527



##########
File path: docs/ingestion/data-management.md
##########
@@ -21,173 +21,9 @@ title: "Data management"
   ~ specific language governing permissions and limitations
   ~ under the License.
   -->
+Within the context of this topic, data management refers to Apache Druid's data maintenance capabilities for existing datasources. There are several options to help you keep your data relevant and your Druid cluster performant: for example, updating, reingesting, adding lookups, reindexing, or deleting data.
 
-
-
-
-## Schema changes

Review comment:
       it is 100% duplicated from ../design/segments.md






[GitHub] [druid] techdocsmith commented on a change in pull request #10935: First refactor of compaction

Posted by GitBox <gi...@apache.org>.
techdocsmith commented on a change in pull request #10935:
URL: https://github.com/apache/druid/pull/10935#discussion_r596249842



##########
File path: docs/ingestion/compaction.md
##########
@@ -0,0 +1,210 @@
+---
+id: compaction
+title: "Compaction"
+description: "Defines compaction and automatic compaction (auto-compaction or autocompaction) as a strategy for segment optimization. Use cases for compaction. Describes compaction task configuration."
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+Query performance in Apache Druid depends on optimally sized segments. Compaction is one strategy you can use to optimize segment size for your Druid database. Compaction tasks read an existing set of segments for a given time interval and combine the data into a new "compacted" set of segments. The compacted segments are generally larger, but there are fewer of them. Compaction increases performance because fewer segments require less per-segment processing and memory overhead on the ingestion and query paths.
+
+As a general strategy, compaction is effective when you have data arriving out of chronological order, resulting in many small segments. This often happens, for example, if you are appending data using `appendToExisting` for [native batch](./native_batch.md) ingestion. Conversely, if you are rewriting your data with each ingestion task, you don't need to use compaction.

Review comment:
       Broke this paragraph into a section. Made sure to mention the parallel task, append to existing, and changing partitioning. This may need a little bit of revision.
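For context on the "append to existing" case mentioned here, the flag lives in the native batch ioConfig. A trimmed sketch, with `dataSchema`, `inputSource`, and `tuningConfig` omitted for brevity:

```json
{
  "type": "index_parallel",
  "spec": {
    "ioConfig": {
      "type": "index_parallel",
      "appendToExisting": true
    }
  }
}
```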






[GitHub] [druid] suneet-s commented on a change in pull request #10935: First refactor of compaction

Posted by GitBox <gi...@apache.org>.
suneet-s commented on a change in pull request #10935:
URL: https://github.com/apache/druid/pull/10935#discussion_r592067346



##########
File path: docs/ingestion/compaction.md
##########
@@ -0,0 +1,210 @@
+---
+id: compaction
+title: "Compaction"
+description: "Defines compaction and automatic compaction (auto-compaction or autocompaction) as a strategy for segment optimization. Use cases for compaction. Describes compaction task configuration."
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+Query performance in Apache Druid depends on optimally sized segments. Compaction is one strategy you can use to optimize segment size for your Druid database. Compaction tasks read an existing set of segments for a given time interval and combine the data into a new "compacted" set of segments. The compacted segments are generally larger, but there are fewer of them. Compaction increases performance because fewer segments require less per-segment processing and memory overhead on the ingestion and query paths.
+
+As a general strategy, compaction is effective when you have data arriving out of chronological order, resulting in many small segments. This often happens, for example, if you are appending data using `appendToExisting` for [native batch](./native_batch.md) ingestion. Conversely, if you are rewriting your data with each ingestion task, you don't need to use compaction.
+
+In some cases you can use compaction to reduce segment size. For example, if a misconfigured ingestion task creates oversized segments, you can create a compaction task to split the segment files into smaller, more optimally sized ones.
+
+See [Segment optimization](../operations/segment-optimization.md) for guidance to determine if compaction will help in your environment.
+
+
+## Types of segment compaction
+You can configure the Druid Coordinator to perform automatic compaction, also called auto-compaction, for a datasource. Using a segment search policy, the coordinator periodically identifies segments for compaction, working from newest to oldest. When segments can benefit from compaction, the coordinator automatically submits a compaction task.
+
+Automatic compaction works in most use cases and should be your first option. To learn more about automatic compaction, see [Compacting Segments](../design/coordinator.md#compacting-segments).
+
+In cases where you require more control over compaction, you can manually submit compaction tasks. For example:
+- Automatic compaction is too slow.
+- You want to force compaction for a specific time range.
+- You want to compact data out of chronological order.
+
+See [Setting up a manual compaction task](#setting-up-manual-compaction) for more about manual compaction tasks.
+
+
+## Data handling with compaction
+During compaction, Druid overwrites the original set of segments with the compacted set without modifying the data. Druid also locks the segments for the time interval being compacted to ensure data consistency.
+
+If an ingestion task needs to write data to a segment for a time interval locked for compaction, the ingestion task supersedes the compaction task and the compaction task fails without finishing. For manual compaction tasks, you can adjust the input spec interval to avoid conflicts between ingestion and compaction. For automatic compaction, you can set the `skipOffsetFromLatest` key to adjust the auto-compaction starting point from the current time to reduce the chance of conflicts between ingestion and compaction. See [Compaction dynamic configuration](../configuration/index.md#compaction-dynamic-configuration) for more information.
+
+### Segment granularity handling
+
+Unless you modify the segment granularity in the [granularity spec](#compaction-granularity-spec), Druid attempts to retain the granularity for the compacted segments. When segments have different segment granularities with no overlap in interval, Druid creates a separate compaction task for each to retain the segment granularity in the compacted segment. If segments have different segment granularities before compaction but there is some overlap in interval, Druid attempts to find the start and end of the overlapping interval and uses the closest segment granularity level for the compacted segment.
+
+### Query granularity handling
+
+Unless you modify the query granularity in the [granularity spec](#compaction-granularity-spec), Druid retains the query granularity for the compacted segments. If segments have different query granularities before compaction, Druid chooses the finest level of granularity for the resulting compacted segment. For example, if a compaction task combines two segments, one with day query granularity and one with minute query granularity, the resulting segment uses minute query granularity.
+
+> In Apache Druid 0.21.0 and prior, Druid sets the granularity for compacted segments to the default granularity of `NONE` regardless of the query granularity of the original segments.
+
+If you configure query granularity in compaction to go from a finer granularity like month to a coarser query granularity like year, then Druid overshadows the original segment with coarser granularity. Because the new segments have a coarser granularity, running a kill task to remove the overshadowed segments for those intervals will cause you to permanently lose the finer granularity data.
+
+### Dimension handling
+Apache Druid supports schema changes. Therefore, dimensions can be different across segments even if they are a part of the same data source. See [Different schemas among segments](../design/segments.md#different-schemas-among-segments). If the input segments have different dimensions, the resulting compacted segment includes all dimensions of the input segments. 
+
+Even when the input segments have the same set of dimensions, the dimension order or the data type of dimensions can be different. The dimensions of more recent segments take precedence over those of older segments for data types and ordering, because more recent segments are more likely to have the preferred order and data types.
+
+If you want to use your own ordering and types, you can specify a custom `dimensionsSpec` in the compaction task spec.
+
+### Rollup
+Druid only rolls up the output segment when `rollup` is set for all input segments. See [Roll-up](../ingestion/index.md#rollup) for more details.
+You can check whether your segments are rolled up by using [Segment Metadata Queries](../querying/segmentmetadataquery.md#analysistypes).
+

Review comment:
       I think each one of these sections is currently describing the "what". Do you have a plan to talk about the "why"?
   
   For example - You should change segmentGranularity in compaction if you ingested data and realized that data for that time interval is sparse, so having a larger segmentGranularity will result in better performance.
   
   Similarly for queryGranularity - the why would be that you no longer need fine-grained resolution in your data. There should be a big warning that tells users they can't go from coarser to finer granularity.
   
   For dimension order - you'd want to change it to get better sorting and smaller segments.
   
   I don't know what rollup does on its own...
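To make the "why" concrete, the query-granularity change described here would be expressed in a compaction task roughly as in the sketch below. Whether manual compaction accepted `queryGranularity` in its `granularitySpec` at this point is exactly the functionality gap raised elsewhere in this thread, so treat that field as an assumption:

```json
{
  "type": "compact",
  "dataSource": "wikiticker",
  "ioConfig": {
    "type": "compact",
    "inputSpec": {
      "type": "interval",
      "interval": "2020-01-01/2021-01-01"
    }
  },
  "granularitySpec": {
    "queryGranularity": "hour"
  }
}
```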






[GitHub] [druid] techdocsmith commented on a change in pull request #10935: First refactor of compaction

Posted by GitBox <gi...@apache.org>.
techdocsmith commented on a change in pull request #10935:
URL: https://github.com/apache/druid/pull/10935#discussion_r596236502



##########
File path: docs/ingestion/compaction.md
##########
@@ -0,0 +1,210 @@
+---
+id: compaction
+title: "Compaction"
+description: "Defines compaction and automatic compaction (auto-compaction or autocompaction) as a strategy for segment optimization. Use cases for compaction. Describes compaction task configuration."
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+Query performance in Apache Druid depends on optimally sized segments. Compaction is one strategy you can use to optimize segment size for your Druid database. Compaction tasks read an existing set of segments for a given time interval and combine the data into a new "compacted" set of segments. The compacted segments are generally larger, but there are fewer of them. Compaction increases performance because fewer segments require less per-segment processing and memory overhead on the ingestion and query paths.
+
+As a general strategy, compaction is effective when you have data arriving out of chronological order, resulting in many small segments. This often happens, for example, if you are appending data using `appendToExisting` for [native batch](./native_batch.md) ingestion. Conversely, if you are rewriting your data with each ingestion task, you don't need to use compaction.
+
+In some cases you can use compaction to reduce segment size. For example, if a misconfigured ingestion task creates oversized segments, you can create a compaction task to split the segment files into smaller, more optimally sized ones.
+
+See [Segment optimization](../operations/segment-optimization.md) for guidance to determine if compaction will help in your environment.
+
+
+## Types of segment compaction

Review comment:
       removed `segment`. I don't know that I had a specific reason in this case.






[GitHub] [druid] techdocsmith commented on a change in pull request #10935: First refactor of compaction

Posted by GitBox <gi...@apache.org>.
techdocsmith commented on a change in pull request #10935:
URL: https://github.com/apache/druid/pull/10935#discussion_r592571393



##########
File path: docs/ingestion/data-management.md
##########
@@ -21,173 +21,9 @@ title: "Data management"
   ~ specific language governing permissions and limitations
   ~ under the License.
   -->
+Within the context of this topic, data management refers to Apache Druid's data maintenance capabilities for existing datasources. There are several options to help you keep your data relevant and your Druid cluster performant: for example, updating, reingesting, adding lookups, reindexing, or deleting data.

Review comment:
       @suneet-s , that means adding new data to existing data sources. I will change that heading.






[GitHub] [druid] suneet-s commented on a change in pull request #10935: First refactor of compaction

Posted by GitBox <gi...@apache.org>.
suneet-s commented on a change in pull request #10935:
URL: https://github.com/apache/druid/pull/10935#discussion_r592060243



##########
File path: docs/ingestion/data-management.md
##########
@@ -21,173 +21,9 @@ title: "Data management"
   ~ specific language governing permissions and limitations
   ~ under the License.
   -->
+Within the context of this topic, data management refers to Apache Druid's data maintenance capabilities for existing datasources. There are several options to help you keep your data relevant and your Druid cluster performant: for example, updating, reingesting, adding lookups, reindexing, or deleting data.

Review comment:
       nit: this line mentions "existing datasources" and the first topic in this page is `Adding new data`. I don't have a good idea yet on what the correct split should be.






[GitHub] [druid] suneet-s commented on pull request #10935: First refactor of compaction

Posted by GitBox <gi...@apache.org>.
suneet-s commented on pull request #10935:
URL: https://github.com/apache/druid/pull/10935#issuecomment-802885663


   Docs failure looks legit
   
   ```
   Could not find self anchor '#compaction-tuningconfig' in './build/ApacheDruid/docs/configuration/index.html'
   Could not find './native_batch.md' linked from './build/ApacheDruid/docs/ingestion/compaction.html'
   Could not find '../native-batch.md' linked from './build/ApacheDruid/docs/ingestion/compaction.html'
   Could not find '../data-management.md' linked from './build/ApacheDruid/docs/ingestion/index.html'
   Could not find '../compaction.md' linked from './build/ApacheDruid/docs/ingestion/index.html'
   There are 5 issues
   ```




[GitHub] [druid] maytasm merged pull request #10935: First refactor of compaction

Posted by GitBox <gi...@apache.org>.
maytasm merged pull request #10935:
URL: https://github.com/apache/druid/pull/10935


   




[GitHub] [druid] suneet-s commented on pull request #10935: First refactor of compaction

Posted by GitBox <gi...@apache.org>.
suneet-s commented on pull request #10935:
URL: https://github.com/apache/druid/pull/10935#issuecomment-796461843


   I echo others' comments on this PR. This is a huge improvement - thank you @techdocsmith ! I haven't verified the correctness of how exactly compaction works, or the details of the different tuning knobs.
   
   Some overall structural feedback (doesn't need to be addressed in this PR):
   - I think the data management doc should be broken into a few separate docs. Seeing compaction pulled out of there, it feels like this would be a good landing page that then points you to "getting data in", "Optimizing data", "Updating data" (maybe), and "Deleting data". This is obviously beyond the scope of this PR, but I think it's worth mentioning because it adds structure around how to think about data and managing data in Druid.
   - Data management also talks about lookups, while the rest of the doc talks about datasources. This seemed a little out of place when I was reading locally. I don't have a suggestion for how to structure this right now, but wanted to surface it in case you had better ideas.
   - The compaction page currently talks about the what. I wonder if it needs to be split into 2 pages (or sections), one that spells out the "why should I care/ I want to do..." a little bit more, and another that spells out "how do I do that". Maybe it can be intertwined in the same page?
   - I really like the distinction between auto-compaction and manual compaction. However the page doesn't link to anything that tells me how to use auto-compaction, but it does link to something about manual compaction. Are there instructions for auto-compaction elsewhere?
   - There are some known differences between auto-compaction and manual compaction. Support for queryGranularity is one right now. Do you think we should call this out in the section that talks about the differences between the two? This is tricky, because it's like a gap in functionality - but it's a gotcha I think users will want to know about.




[GitHub] [druid] techdocsmith commented on a change in pull request #10935: First refactor of compaction

Posted by GitBox <gi...@apache.org>.
techdocsmith commented on a change in pull request #10935:
URL: https://github.com/apache/druid/pull/10935#discussion_r596180429



##########
File path: docs/ingestion/compaction.md
##########
@@ -0,0 +1,207 @@
+---
+id: compaction
+title: "Compaction"
+description: "Defines compaction and automatic compaction (auto-compaction or autocompaction) as a strategy for segment optimization. Use cases for compaction. Describes compaction task configuration."
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+Compaction in Apache Druid is a strategy to optimize segment size. Compaction tasks read an existing set of segments for a given time range and combine the data into a new "compacted" set of segments. The compacted segments are generally larger, but there are fewer of them. Compaction can sometimes increase performance because it reduces the number of segments and, consequently, the per-segment processing and the memory overhead required for ingestion and for querying paths.
+
+As a strategy, compaction is effective when you have data arriving out of chronological order, resulting in many small segments. For example, if you are appending data using `appendToExisting` for [native batch](./native_batch.md) ingestion. Conversely, if you are rewriting your data with each ingestion task, you don't need to use compaction. See [Segment optimization](../operations/segment-optimization.md) for guidance to determine if compaction will help in your case.
+
+## Types of segment compaction
+You can configure the Druid Coordinator to perform automatic compaction, also called auto-compaction, for a datasource. Using a segment search policy, the coordinator periodically identifies segments for compaction, working from newest to oldest. When segments can benefit from compaction, the coordinator automatically submits a compaction task.
+
+Automatic compaction works in most use cases and should be your first option. To learn more about automatic compaction, see [Compacting Segments](../design/coordinator.md#compacting-segments).
+
+In cases where you require more control over compaction, you can manually submit compaction tasks. For example:
+- Automatic compaction is too slow.
+- You want to force compaction for a specific time range.
+- Compacting recent data before older data is suboptimal in your environment.
+
+See [Setting up a manual compaction task](#setting-up-manual-compaction) for more about manual compaction tasks.
+
+
+## Data handling with compaction
+During compaction, Druid overwrites the original set of segments with the compacted set without modifying the data. Druid also locks the segments for the time interval being compacted to ensure data consistency.
+
+If an ingestion task needs to write data to a segment for a time interval locked for compaction, the ingestion task supersedes the compaction task and the compaction task fails without finishing. For manual compaction tasks, you can adjust the input spec interval to avoid conflicts between ingestion and compaction. For automatic compaction, you can set the `skipOffsetFromLatest` key to adjust the auto-compaction starting point from the current time to reduce the chance of conflicts between ingestion and compaction. See [Compaction dynamic configuration](../configuration/index.md#compaction-dynamic-configuration) for more information.
+
+### Segment granularity handling
+
+Unless you modify the segment granularity in the [granularity spec](#compaction-granularity-spec), Druid attempts to retain the granularity for the compacted segments. When segments have different segment granularities with no overlap in interval, Druid creates a separate compaction task for each to retain the segment granularity in the compacted segment. If segments have different segment granularities before compaction but there is some overlap in interval, Druid attempts to find the start and end of the overlapping interval and uses the closest segment granularity level for the compacted segment.

Review comment:
       Thanks @maytasm , I will add your example here.






[GitHub] [druid] maytasm commented on a change in pull request #10935: First refactor of compaction

Posted by GitBox <gi...@apache.org>.
maytasm commented on a change in pull request #10935:
URL: https://github.com/apache/druid/pull/10935#discussion_r597879612



##########
File path: docs/configuration/index.md
##########
@@ -886,6 +886,17 @@ The below is a list of the supported configurations for auto compaction.
 |`chatHandlerTimeout`|Timeout for reporting the pushed segments in worker tasks.|no (default = PT10S)|
 |`chatHandlerNumRetries`|Retries for reporting the pushed segments in worker tasks.|no (default = 5)|
 
+###### Automatic compaction TuningConfig

Review comment:
       Should this say Automatic compaction Granularity Spec?

##########
File path: docs/ingestion/compaction.md
##########
@@ -22,36 +22,45 @@ description: "Defines compaction and automatic compaction (auto-compaction or au
   ~ specific language governing permissions and limitations
   ~ under the License.
   -->
-Query performance in Apache Druid depends on optimally sized segments. Compaction is one strategy you can use to optimize segment size for your Druid database. Compaction tasks read an existing set of segments for a given time interval and combine the data into a new "compacted" set of segments. The compacted segments are generally larger, but there are fewer of them. Here compaction increases performance because fewer segments require less the per-segment processing and the memory overhead for ingestion and for querying paths.
+Query performance in Apache Druid depends on optimally sized segments. Compaction is one strategy you can use to optimize segment size for your Druid database. Compaction tasks read an existing set of segments for a given time interval and combine the data into a new "compacted" set of segments. In some cases the compacted segments are larger, but there are fewer of them. In other cases the compacted segments may be smaller. Compaction tends to increase performance because optimized segments require less per-segment processing and less memory overhead for ingestion and for querying paths.
 
-As a general strategy, compaction is effective when you have data arriving out of chronological order resulting in lots of small segments. This often happens, for example, if you are appending data using `appendToExisting` for [native batch](./native_batch.md) ingestion. Conversely, if you are rewriting your data with each ingestion task, you don't need to use compaction.
+## Compaction strategies
+There are several cases where you should consider compaction for segment optimization:
+- With streaming ingestion, data can arrive out of chronological order, creating many small segments.
+- When you append data using `appendToExisting` for [native batch](native-batch.md) ingestion, creating suboptimal segments.
+- When you use `index_parallel` for parallel batch indexing and the parallel ingestion tasks create many small segments.
+- When a misconfigured ingestion task creates oversized segments.
 
-In some cases you can use compaction to reduce segment size. For example, if a misconfigured ingestion task creates oversized segments, you can create a compaction task to split the segment files into smaller, more optimally sized ones.
+By default, compaction does not modify the underlying data of the segments. However, there are cases when you may want to modify data during compaction to improve query performance:
+- If, after ingestion, you realize realize that data for the time interval is sparse, you can use compaction to increase the segment granularity.
+- Over time you no longer need fine-grained granularity for older data, so you can use compaction to change older segments to a coarser query granularity, for example from `minute` to `hour` or from `hour` to `day`. You cannot go from coarser granularity to finer granularity.

Review comment:
       Should mention that the reason for this is more rollup, which results in less storage space.

##########
File path: docs/configuration/index.md
##########
@@ -843,7 +843,7 @@ A description of the compaction config is:
 |`inputSegmentSizeBytes`|Maximum number of total segment bytes processed per compaction task. Since a time chunk must be processed in its entirety, if the segments for a particular time chunk have a total size in bytes greater than this parameter, compaction will not run for that time chunk. Because each compaction task runs with a single thread, setting this value too far above 1–2GB will result in compaction tasks taking an excessive amount of time.|no (default = 419430400)|
 |`maxRowsPerSegment`|Max number of rows per segment after compaction.|no|
 |`skipOffsetFromLatest`|The offset for searching segments to be compacted in [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) duration format. Strongly recommended to set for realtime dataSources. See [Data handling with compaction](../ingestion/compaction.md#data-handling-with-compaction)|no (default = "P1D")|
-|`tuningConfig`|Tuning config for compaction tasks. See below [Compaction Task TuningConfig](#compaction-tuningconfig).|no|
+|`tuningConfig`|Tuning config for compaction tasks. See below [Compaction Task TuningConfig](#automatic-compaction-tuningconfig).|no|
 |`taskContext`|[Task context](../ingestion/tasks.md#context) for compaction tasks.|no|
 |`granularitySpec`|Custom `granularitySpec` to describe the `segmentGranularity` for the compacted segments.|no|

Review comment:
       Should this link to the newly added Automatic compaction granularity spec section?

##########
File path: docs/ingestion/compaction.md
##########
@@ -22,36 +22,45 @@ description: "Defines compaction and automatic compaction (auto-compaction or au
   ~ specific language governing permissions and limitations
   ~ under the License.
   -->
-Query performance in Apache Druid depends on optimally sized segments. Compaction is one strategy you can use to optimize segment size for your Druid database. Compaction tasks read an existing set of segments for a given time interval and combine the data into a new "compacted" set of segments. The compacted segments are generally larger, but there are fewer of them. Here compaction increases performance because fewer segments require less the per-segment processing and the memory overhead for ingestion and for querying paths.
+Query performance in Apache Druid depends on optimally sized segments. Compaction is one strategy you can use to optimize segment size for your Druid database. Compaction tasks read an existing set of segments for a given time interval and combine the data into a new "compacted" set of segments. In some cases the compacted segments are larger, but there are fewer of them. In other cases the compacted segments may be smaller. Compaction tends to increase performance because optimized segments require less per-segment processing and less memory overhead for ingestion and for querying paths.
 
-As a general strategy, compaction is effective when you have data arriving out of chronological order resulting in lots of small segments. This often happens, for example, if you are appending data using `appendToExisting` for [native batch](./native_batch.md) ingestion. Conversely, if you are rewriting your data with each ingestion task, you don't need to use compaction.
+## Compaction strategies
+There are several cases where you should consider compaction for segment optimization:
+- With streaming ingestion, data can arrive out of chronological order, creating many small segments.
+- When you append data using `appendToExisting` for [native batch](native-batch.md) ingestion, creating suboptimal segments.
+- When you use `index_parallel` for parallel batch indexing and the parallel ingestion tasks create many small segments.
+- When a misconfigured ingestion task creates oversized segments.
 
-In some cases you can use compaction to reduce segment size. For example, if a misconfigured ingestion task creates oversized segments, you can create a compaction task to split the segment files into smaller, more optimally sized ones.
+By default, compaction does not modify the underlying data of the segments. However, there are cases when you may want to modify data during compaction to improve query performance:
+- If, after ingestion, you realize realize that data for the time interval is sparse, you can use compaction to increase the segment granularity.

Review comment:
       typo. realize twice

##########
File path: docs/ingestion/compaction.md
##########
@@ -0,0 +1,210 @@
+---
+id: compaction
+title: "Compaction"
+description: "Defines compaction and automatic compaction (auto-compaction or autocompaction) as a strategy for segment optimization. Use cases for compaction. Describes compaction task configuration."
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+Query performance in Apache Druid depends on optimally sized segments. Compaction is one strategy you can use to optimize segment size for your Druid database. Compaction tasks read an existing set of segments for a given time interval and combine the data into a new "compacted" set of segments. The compacted segments are generally larger, but there are fewer of them. Compaction increases performance because fewer segments require less per-segment processing and memory overhead on the ingestion and query paths.
+
+As a general strategy, compaction is effective when you have data arriving out of chronological order, resulting in many small segments. This often happens, for example, if you are appending data using `appendToExisting` for [native batch](./native_batch.md) ingestion. Conversely, if you are rewriting your data with each ingestion task, you don't need to use compaction.
+
+In some cases you can use compaction to reduce segment size. For example, if a misconfigured ingestion task creates oversized segments, you can create a compaction task to split the segment files into smaller, more optimally sized ones.
+
+See [Segment optimization](../operations/segment-optimization.md) for guidance to determine if compaction will help in your environment.
+
+
+## Types of segment compaction
+You can configure the Druid Coordinator to perform automatic compaction, also called auto-compaction, for a datasource. Using a segment search policy, the coordinator periodically identifies segments for compaction, working from newest to oldest. When segments can benefit from compaction, the coordinator automatically submits a compaction task.
+
+Automatic compaction works in most use cases and should be your first option. To learn more about automatic compaction, see [Compacting Segments](../design/coordinator.md#compacting-segments).
+
+In cases where you require more control over compaction, you can manually submit compaction tasks. For example:
+- Automatic compaction is too slow.
+- You want to force compaction for a specific time range.
+- You want to compact data out of chronological order.
+
+See [Setting up a manual compaction task](#setting-up-manual-compaction) for more about manual compaction tasks.
+
+
+## Data handling with compaction
+During compaction, Druid overwrites the original set of segments with the compacted set without modifying the data. Druid also locks the segments for the time interval being compacted to ensure data consistency.
+
+If an ingestion task needs to write data to a segment for a time interval locked for compaction, the ingestion task supersedes the compaction task and the compaction task fails without finishing. For manual compaction tasks, you can adjust the input spec interval to avoid conflicts between ingestion and compaction. For automatic compaction, you can set the `skipOffsetFromLatest` key to adjust the auto-compaction starting point from the current time to reduce the chance of conflicts between ingestion and compaction. See [Compaction dynamic configuration](../configuration/index.md#compaction-dynamic-configuration) for more information.
+
+### Segment granularity handling
+
+Unless you modify the segment granularity in the [granularity spec](#compaction-granularity-spec), Druid attempts to retain the granularity for the compacted segments. When segments have different segment granularities with no overlap in interval, Druid creates a separate compaction task for each to retain the segment granularity in the compacted segment. If segments have different segment granularities before compaction but there is some overlap in interval, Druid attempts to find the start and end of the overlapping interval and uses the closest segment granularity level for the compacted segment.
+
+### Query granularity handling
+
+Unless you modify the query granularity in the [granularity spec](#compaction-granularity-spec), Druid retains the query granularity for the compacted segments. If segments have different query granularities before compaction, Druid chooses the finest level of granularity for the resulting compacted segment. For example, if a compaction task combines two segments, one with day query granularity and one with minute query granularity, the resulting segment uses minute query granularity.
+
+> In Apache Druid 0.21.0 and prior, Druid sets the granularity for compacted segments to the default granularity of `NONE` regardless of the query granularity of the original segments.
+
+If you configure query granularity in compaction to go from a finer granularity like month to a coarser query granularity like year, then Druid overshadows the original segment with coarser granularity. Because the new segments have a coarser granularity, running a kill task to remove the overshadowed segments for those intervals will cause you to permanently lose the finer granularity data.
+
+### Dimension handling
+Apache Druid supports schema changes. Therefore, dimensions can be different across segments even if they are a part of the same data source. See [Different schemas among segments](../design/segments.md#different-schemas-among-segments). If the input segments have different dimensions, the resulting compacted segment includes all dimensions of the input segments. 
+
+Even when the input segments have the same set of dimensions, the dimension order or the data type of dimensions can be different. The dimensions of more recent segments take precedence over those of older segments for data types and ordering, because more recent segments are more likely to have the preferred order and data types.
+
+If you want to use your own ordering and types, you can specify a custom `dimensionsSpec` in the compaction task spec.
+
+### Rollup
+Druid only rolls up the output segment when `rollup` is set for all input segments. See [Roll-up](../ingestion/index.md#rollup) for more details.
+You can check whether your segments are rolled up by using [Segment Metadata Queries](../querying/segmentmetadataquery.md#analysistypes).
+
+## Setting up manual compaction
+
+To perform a manual compaction, you submit a compaction task. Compaction tasks merge all segments for the defined interval according to the following syntax:
+
+```json

Review comment:
       Can you add granularitySpec to this json example
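The requested addition would presumably look something like the sketch below - a manual compaction task whose `granularitySpec` sets `segmentGranularity`, mirroring the config example earlier in this thread (this is a guess at the shape, not the final text that landed in the PR):

```json
{
  "type": "compact",
  "dataSource": "wikiticker",
  "ioConfig": {
    "type": "compact",
    "inputSpec": {
      "type": "interval",
      "interval": "2020-01-01/2020-02-01"
    }
  },
  "granularitySpec": {
    "segmentGranularity": "day"
  }
}
```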

##########
File path: docs/ingestion/compaction.md
##########
@@ -62,16 +71,12 @@ Unless you modify the query granularity in the [granularity spec](#compaction-gr
 If you configure query granularity in compaction to go from a finer granularity like month to a coarser query granularity like year, then Druid overshadows the original segment with coarser granularity. Because the new segments have a coarser granularity, running a kill task to remove the overshadowed segments for those intervals will cause you to permanently lose the finer granularity data.
 
 ### Dimension handling
-Apache Druid supports schema changes. Therefore, dimensions can be different across segments even if they are a part of the same data source. See [Different schemas among segments](../design/segments.md#different-schemas-among-segments). If the input segments have different dimensions, the resulting compacted segment includes all dimensions of the input segments. 
+Apache Druid supports schema changes. Therefore, dimensions can be different across segments even if they are a part of the same data source. See [Different schemas among segments](../design/segments.md#different-schemas-among-segments). If the input segments have different dimensions, the resulting compacted segment include all dimensions of the input segments. 
 
 Even when the input segments have the same set of dimensions, the dimension order or the data type of dimensions can be different. The dimensions of more recent segments take precedence over those of older segments for data types and ordering, because more recent segments are more likely to have the preferred order and data types.
 
 If you want to use your own ordering and types, you can specify a custom `dimensionsSpec` in the compaction task spec.
 
-### Rollup

Review comment:
       Why is this removed?

##########
File path: docs/ingestion/compaction.md
##########
@@ -62,16 +71,12 @@ Unless you modify the query granularity in the [granularity spec](#compaction-gr
 If you configure query granularity in compaction to go from a finer granularity, like month, to a coarser query granularity, like year, then Druid overshadows the original segments with the new, coarser-granularity segments. Because the new segments have a coarser granularity, running a kill task to remove the overshadowed segments for those intervals will cause you to permanently lose the finer-granularity data.
 
 ### Dimension handling
-Apache Druid supports schema changes. Therefore, dimensions can be different across segments even if they are a part of the same data source. See [Different schemas among segments](../design/segments.md#different-schemas-among-segments). If the input segments have different dimensions, the resulting compacted segment includes all dimensions of the input segments. 
+Apache Druid supports schema changes. Therefore, dimensions can be different across segments even if they are a part of the same data source. See [Different schemas among segments](../design/segments.md#different-schemas-among-segments). If the input segments have different dimensions, the resulting compacted segment include all dimensions of the input segments. 
 
 Even when the input segments have the same set of dimensions, the dimension order or the data type of dimensions can be different. The dimension order and data types of more recent segments take precedence over those of older segments, because more recent segments are more likely to have the preferred order and data types.
 
 If you want to use your own ordering and types, you can specify a custom `dimensionsSpec` in the compaction task spec.
 
-### Rollup

Review comment:
       This is still true. What I meant to say is that compaction cannot change the `rollup` setting but the `rollup` setting on the original segments is important and determines how compaction works
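
For context, the doc's pointer to [Segment Metadata Queries](../querying/segmentmetadataquery.md#analysistypes) maps to a query like the following sketch, which reports whether the queried segments are rolled up (datasource and interval are illustrative):

```json
{
  "queryType": "segmentMetadata",
  "dataSource": "wikipedia",
  "intervals": ["2017-01-01/2018-01-01"],
  "analysisTypes": ["rollup"]
}
```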




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@druid.apache.org
For additional commands, e-mail: commits-help@druid.apache.org


[GitHub] [druid] techdocsmith commented on a change in pull request #10935: First refactor of compaction

Posted by GitBox <gi...@apache.org>.
techdocsmith commented on a change in pull request #10935:
URL: https://github.com/apache/druid/pull/10935#discussion_r596189890



##########
File path: docs/configuration/index.md
##########
@@ -842,24 +842,24 @@ A description of the compaction config is:
 |`taskPriority`|[Priority](../ingestion/tasks.md#priority) of compaction task.|no (default = 25)|
 |`inputSegmentSizeBytes`|Maximum number of total segment bytes processed per compaction task. Since a time chunk must be processed in its entirety, if the segments for a particular time chunk have a total size in bytes greater than this parameter, compaction will not run for that time chunk. Because each compaction task runs with a single thread, setting this value too far above 1–2GB will result in compaction tasks taking an excessive amount of time.|no (default = 419430400)|
 |`maxRowsPerSegment`|Max number of rows per segment after compaction.|no|
-|`skipOffsetFromLatest`|The offset for searching segments to be compacted. Strongly recommended to set for realtime dataSources. |no (default = "P1D")|
+|`skipOffsetFromLatest`|The offset for searching segments to be compacted in [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) duration format. Strongly recommended to set for realtime dataSources. See [Data handling with compaction](../ingestion/compaction.md#data-handling-with-compaction)|no (default = "P1D")|
 |`tuningConfig`|Tuning config for compaction tasks. See below [Compaction Task TuningConfig](#compaction-tuningconfig).|no|
 |`taskContext`|[Task context](../ingestion/tasks.md#context) for compaction tasks.|no|
+|`granularitySpec`|Custom `granularitySpec` to describe the `segmentGranularity` for the compacted segments.|No|
 
 An example of compaction config is:
 
 ```json
 {
-  "dataSource": "wikiticker"
+  "dataSource": "wikiticker",

Review comment:
       you are right, @maytasm . Filed a task to address this in another PR>




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@druid.apache.org
For additional commands, e-mail: commits-help@druid.apache.org


[GitHub] [druid] suneet-s commented on a change in pull request #10935: First refactor of compaction

Posted by GitBox <gi...@apache.org>.
suneet-s commented on a change in pull request #10935:
URL: https://github.com/apache/druid/pull/10935#discussion_r592068467



##########
File path: docs/ingestion/compaction.md
##########
@@ -0,0 +1,210 @@
+---
+id: compaction
+title: "Compaction"
+description: "Defines compaction and automatic compaction (auto-compaction or autocompaction) as a strategy for segment optimization. Use cases for compaction. Describes compaction task configuration."
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+Query performance in Apache Druid depends on optimally sized segments. Compaction is one strategy you can use to optimize segment size for your Druid database. Compaction tasks read an existing set of segments for a given time interval and combine the data into a new "compacted" set of segments. The compacted segments are generally larger, but there are fewer of them. Compaction increases performance because fewer segments require less per-segment processing and memory overhead for the ingestion and query paths.
+
+As a general strategy, compaction is effective when you have data arriving out of chronological order, resulting in lots of small segments. This often happens, for example, if you are appending data using `appendToExisting` for [native batch](./native-batch.md) ingestion. Conversely, if you are rewriting your data with each ingestion task, you don't need to use compaction.
+
+In some cases you can use compaction to reduce segment size. For example, if a misconfigured ingestion task creates oversized segments, you can create a compaction task to split the segment files into smaller, more optimally sized ones.
+
+See [Segment optimization](../operations/segment-optimization.md) for guidance to determine if compaction will help in your environment.
+
+
+## Types of segment compaction
+You can configure the Druid Coordinator to perform automatic compaction, also called auto-compaction, for a datasource. Using a segment search policy, the coordinator periodically identifies segments for compaction, starting with the newest and working back to the oldest. When segments can benefit from compaction, the coordinator automatically submits a compaction task.
+
+Automatic compaction works in most use cases and should be your first option. To learn more about automatic compaction, see [Compacting Segments](../design/coordinator.md#compacting-segments).
+
+In cases where you require more control over compaction, you can manually submit compaction tasks. For example:
+- Automatic compaction is too slow.
+- You want to force compaction for a specific time range.
+- You want to compact data out of chronological order.
+
+See [Setting up a manual compaction task](#setting-up-manual-compaction) for more about manual compaction tasks.
+
+
+## Data handling with compaction
+During compaction, Druid overwrites the original set of segments with the compacted set without modifying the data. During compaction, Druid locks the segments for the time interval being compacted to ensure data consistency.

Review comment:
       Also, maybe we want to be more specific about what we mean by `modifying the underlying data`
   
   What I mean by this is, compaction allows you to change the partitioning scheme of the underlying data, but for someone outside the database, they don't really notice this difference when querying the data.
   
   However, if queryGranularity is changed, they would notice the difference, because the timestamp column is changed. Sorry if this seems like a ramble; hopefully it helps explain what I'm thinking
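
To make the timestamp point concrete: with a compaction `granularitySpec` such as the sketch below (value illustrative), a row stored at `2020-01-01T12:34:56Z` would be truncated to `2020-01-01T00:00:00Z` in the compacted segment, a change that is visible to queries, whereas a repartitioning alone would not change query results.

```json
"granularitySpec": {
  "queryGranularity": "day"
}
```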




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@druid.apache.org
For additional commands, e-mail: commits-help@druid.apache.org


[GitHub] [druid] maytasm commented on a change in pull request #10935: First refactor of compaction

Posted by GitBox <gi...@apache.org>.
maytasm commented on a change in pull request #10935:
URL: https://github.com/apache/druid/pull/10935#discussion_r592796705



##########
File path: docs/configuration/index.md
##########
@@ -842,24 +842,24 @@ A description of the compaction config is:
 |`taskPriority`|[Priority](../ingestion/tasks.md#priority) of compaction task.|no (default = 25)|
 |`inputSegmentSizeBytes`|Maximum number of total segment bytes processed per compaction task. Since a time chunk must be processed in its entirety, if the segments for a particular time chunk have a total size in bytes greater than this parameter, compaction will not run for that time chunk. Because each compaction task runs with a single thread, setting this value too far above 1–2GB will result in compaction tasks taking an excessive amount of time.|no (default = 419430400)|
 |`maxRowsPerSegment`|Max number of rows per segment after compaction.|no|
-|`skipOffsetFromLatest`|The offset for searching segments to be compacted. Strongly recommended to set for realtime dataSources. |no (default = "P1D")|
+|`skipOffsetFromLatest`|The offset for searching segments to be compacted in [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) duration format. Strongly recommended to set for realtime dataSources. See [Data handling with compaction](../ingestion/compaction.md#data-handling-with-compaction)|no (default = "P1D")|
 |`tuningConfig`|Tuning config for compaction tasks. See below [Compaction Task TuningConfig](#compaction-tuningconfig).|no|
 |`taskContext`|[Task context](../ingestion/tasks.md#context) for compaction tasks.|no|
+|`granularitySpec`|Custom `granularitySpec` to describe the `segmentGranularity` for the compacted segments.|No|

Review comment:
       Is there a section describing the keys (and what the keys are for) of this custom `granularitySpec`? Would be good to explicitly point out what is and isn't supported compared to a regular `granularitySpec`
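
Based on the surrounding table, the auto-compaction `granularitySpec` currently carries only `segmentGranularity`; a hedged sketch of the key inside a compaction config (value illustrative):

```json
"granularitySpec": {
  "segmentGranularity": "day"
}
```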

##########
File path: docs/ingestion/compaction.md
##########
@@ -0,0 +1,207 @@
+---
+id: compaction
+title: "Compaction"
+description: "Defines compaction and automatic compaction (auto-compaction or autocompaction) as a strategy for segment optimization. Use cases for compaction. Describes compaction task configuration."
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+Compaction in Apache Druid is a strategy to optimize segment size. Compaction tasks read an existing set of segments for a given time range and combine the data into a new "compacted" set of segments. The compacted segments are generally larger, but there are fewer of them. Compaction can sometimes increase performance because it reduces the number of segments and, consequently, the per-segment processing and memory overhead required for the ingestion and query paths.
+
+As a strategy, compaction is effective when you have data arriving out of chronological order, resulting in lots of small segments. For example, if you are appending data using `appendToExisting` for [native batch](./native-batch.md) ingestion. Conversely, if you are rewriting your data with each ingestion task, you don't need to use compaction. See [Segment optimization](../operations/segment-optimization.md) for guidance to determine if compaction will help in your case.
+
+## Types of segment compaction
+You can configure the Druid Coordinator to perform automatic compaction, also called auto-compaction, for a datasource. Using a segment search policy, the coordinator periodically identifies segments for compaction, starting with the newest and working back to the oldest. When segments can benefit from compaction, the coordinator automatically submits a compaction task.
+
+Automatic compaction works in most use cases and should be your first option. To learn more about automatic compaction, see [Compacting Segments](../design/coordinator.md#compacting-segments).
+
+In cases where you require more control over compaction, you can manually submit compaction tasks. For example:
+- Automatic compaction is too slow.
+- You want to force compaction for a specific time range.
+- Compacting recent data before older data is suboptimal in your environment.
+
+See [Setting up a manual compaction task](#setting-up-manual-compaction) for more about manual compaction tasks.
+
+
+## Data handling with compaction
+During compaction, Druid overwrites the original set of segments with the compacted set without modifying the data. During compaction, Druid locks the segments for the time interval being compacted to ensure data consistency.
+
+If an ingestion task needs to write data to a segment for a time interval locked for compaction, the ingestion task supersedes the compaction task and the compaction task fails without finishing. For manual compaction tasks, you can adjust the input spec interval to avoid conflicts between ingestion and compaction. For automatic compaction, you can set the `skipOffsetFromLatest` key to adjust the auto-compaction starting point from the current time to reduce the chance of conflicts between ingestion and compaction. See [Compaction dynamic configuration](../configuration/index.md#compaction-dynamic-configuration) for more information.
+
+### Segment granularity handling
+
+Unless you modify the segment granularity in the [granularity spec](#compaction-granularity-spec), Druid attempts to retain the granularity for the compacted segments. When segments have different segment granularities with no overlap in interval, Druid creates a separate compaction task for each to retain the segment granularity in the compacted segment. If segments have different segment granularities before compaction but there is some overlap in interval, Druid attempts to find the start and end of the overlapping interval and uses the closest segment granularity level for the compacted segment.

Review comment:
       Actually I think @techdocsmith clarification here is nice and clear.
   1. Segment 1 has DAY granularity and the interval 01/01/2020-01/02/2020; segment 2 has MONTH granularity and the interval 01/01/2020-02/01/2020. The two segments overlap.
   2. Druid attempts to combine and compact the overlapping segments. In the case above, the earliest start time of the two segments is 01/01/2020 and the latest end time is 02/01/2020, so Druid compacts these two segments together even though they have different segment granularities.
   3. Because the combined interval runs from 01/01/2020 to 02/01/2020, Druid determines that the segmentGranularity it should use is MONTH, although segment 1's original segmentGranularity is DAY.

##########
File path: docs/ingestion/compaction.md
##########
@@ -0,0 +1,210 @@
+---
+id: compaction
+title: "Compaction"
+description: "Defines compaction and automatic compaction (auto-compaction or autocompaction) as a strategy for segment optimization. Use cases for compaction. Describes compaction task configuration."
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+Query performance in Apache Druid depends on optimally sized segments. Compaction is one strategy you can use to optimize segment size for your Druid database. Compaction tasks read an existing set of segments for a given time interval and combine the data into a new "compacted" set of segments. The compacted segments are generally larger, but there are fewer of them. Compaction increases performance because fewer segments require less per-segment processing and memory overhead for the ingestion and query paths.
+
+As a general strategy, compaction is effective when you have data arriving out of chronological order, resulting in lots of small segments. This often happens, for example, if you are appending data using `appendToExisting` for [native batch](./native-batch.md) ingestion. Conversely, if you are rewriting your data with each ingestion task, you don't need to use compaction.
+
+In some cases you can use compaction to reduce segment size. For example, if a misconfigured ingestion task creates oversized segments, you can create a compaction task to split the segment files into smaller, more optimally sized ones.

Review comment:
       This can go both ways: too large to smaller, and too small to larger.

##########
File path: docs/ingestion/compaction.md
##########
@@ -0,0 +1,210 @@
+---
+id: compaction
+title: "Compaction"
+description: "Defines compaction and automatic compaction (auto-compaction or autocompaction) as a strategy for segment optimization. Use cases for compaction. Describes compaction task configuration."
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+Query performance in Apache Druid depends on optimally sized segments. Compaction is one strategy you can use to optimize segment size for your Druid database. Compaction tasks read an existing set of segments for a given time interval and combine the data into a new "compacted" set of segments. The compacted segments are generally larger, but there are fewer of them. Compaction increases performance because fewer segments require less per-segment processing and memory overhead for the ingestion and query paths.
+
+As a general strategy, compaction is effective when you have data arriving out of chronological order, resulting in lots of small segments. This often happens, for example, if you are appending data using `appendToExisting` for [native batch](./native-batch.md) ingestion. Conversely, if you are rewriting your data with each ingestion task, you don't need to use compaction.
+
+In some cases you can use compaction to reduce segment size. For example, if a misconfigured ingestion task creates oversized segments, you can create a compaction task to split the segment files into smaller, more optimally sized ones.
+
+See [Segment optimization](../operations/segment-optimization.md) for guidance to determine if compaction will help in your environment.
+
+
+## Types of segment compaction
+You can configure the Druid Coordinator to perform automatic compaction, also called auto-compaction, for a datasource. Using a segment search policy, the coordinator periodically identifies segments for compaction, starting with the newest and working back to the oldest. When segments can benefit from compaction, the coordinator automatically submits a compaction task.
+
+Automatic compaction works in most use cases and should be your first option. To learn more about automatic compaction, see [Compacting Segments](../design/coordinator.md#compacting-segments).
+
+In cases where you require more control over compaction, you can manually submit compaction tasks. For example:
+- Automatic compaction is too slow.
+- You want to force compaction for a specific time range.
+- You want to compact data out of chronological order.
+
+See [Setting up a manual compaction task](#setting-up-manual-compaction) for more about manual compaction tasks.
+
+
+## Data handling with compaction
+During compaction, Druid overwrites the original set of segments with the compacted set without modifying the data. During compaction, Druid locks the segments for the time interval being compacted to ensure data consistency.
+
+If an ingestion task needs to write data to a segment for a time interval locked for compaction, the ingestion task supersedes the compaction task and the compaction task fails without finishing. For manual compaction tasks, you can adjust the input spec interval to avoid conflicts between ingestion and compaction. For automatic compaction, you can set the `skipOffsetFromLatest` key to adjust the auto-compaction starting point from the current time to reduce the chance of conflicts between ingestion and compaction. See [Compaction dynamic configuration](../configuration/index.md#compaction-dynamic-configuration) for more information.
+
+### Segment granularity handling
+
+Unless you modify the segment granularity in the [granularity spec](#compaction-granularity-spec), Druid attempts to retain the granularity for the compacted segments. When segments have different segment granularities with no overlap in interval, Druid creates a separate compaction task for each to retain the segment granularity in the compacted segment. If segments have different segment granularities before compaction but there is some overlap in interval, Druid attempts to find the start and end of the overlapping interval and uses the closest segment granularity level for the compacted segment.
+
+### Query granularity handling
+
+Unless you modify the query granularity in the [granularity spec](#compaction-granularity-spec), Druid retains the query granularity for the compacted segments. If segments have different query granularities before compaction, Druid chooses the finest level of granularity for the resulting compacted segment. For example, if a compaction task combines two segments, one with day query granularity and one with minute query granularity, the resulting segment uses minute query granularity.
+
+> In Apache Druid 0.21.0 and prior, Druid sets the granularity for compacted segments to the default granularity of `NONE` regardless of the query granularity of the original segments.
+
+If you configure query granularity in compaction to go from a finer granularity like month to a coarser query granularity like year, then Druid overshadows the original segment with coarser granularity. Because the new segments have a coarser granularity, running a kill task to remove the overshadowed segments for those intervals will cause you to permanently lose the finer granularity data.
+
+### Dimension handling
+Apache Druid supports schema changes. Therefore, dimensions can be different across segments even if they are a part of the same data source. See [Different schemas among segments](../design/segments.md#different-schemas-among-segments). If the input segments have different dimensions, the resulting compacted segment includes all dimensions of the input segments. 
+
+Even when the input segments have the same set of dimensions, the dimension order or the data type of dimensions can be different. The dimension order and data types of more recent segments take precedence over those of older segments, because more recent segments are more likely to have the preferred order and data types.
+
+If you want to use your own ordering and types, you can specify a custom `dimensionsSpec` in the compaction task spec.

Review comment:
       I would say "If you want to force the ordering and types" or "If you want to ensure the ordering and types have certain values" instead
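
For illustration, a sketch of forcing a specific dimension order and type through a custom `dimensionsSpec` in the compaction task spec (the dimension names are made up):

```json
"dimensionsSpec": {
  "dimensions": [
    "channel",
    "page",
    { "type": "long", "name": "commentLength" }
  ]
}
```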

##########
File path: docs/ingestion/data-management.md
##########
@@ -21,173 +21,9 @@ title: "Data management"
   ~ specific language governing permissions and limitations
   ~ under the License.
   -->
+Within the context of this topic, data management refers to Apache Druid's data maintenance capabilities for existing datasources. There are several options to help you keep your data relevant and your Druid cluster performant, for example: updating, reingesting, adding lookups, reindexing, and deleting data.
 
-
-
-
-## Schema changes
-
-Schemas for datasources can change at any time and Apache Druid supports different schemas among segments.
-
-### Replacing segments

Review comment:
       Why is this section removed? 

##########
File path: docs/ingestion/data-management.md
##########
@@ -21,173 +21,9 @@ title: "Data management"
   ~ specific language governing permissions and limitations
   ~ under the License.
   -->
+Within the context of this topic, data management refers to Apache Druid's data maintenance capabilities for existing datasources. There are several options to help you keep your data relevant and your Druid cluster performant, for example: updating, reingesting, adding lookups, reindexing, and deleting data.
 
-
-
-
-## Schema changes

Review comment:
       Why is this section removed? 

##########
File path: docs/ingestion/compaction.md
##########
@@ -0,0 +1,210 @@
+---
+id: compaction
+title: "Compaction"
+description: "Defines compaction and automatic compaction (auto-compaction or autocompaction) as a strategy for segment optimization. Use cases for compaction. Describes compaction task configuration."
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+Query performance in Apache Druid depends on optimally sized segments. Compaction is one strategy you can use to optimize segment size for your Druid database. Compaction tasks read an existing set of segments for a given time interval and combine the data into a new "compacted" set of segments. The compacted segments are generally larger, but there are fewer of them. Compaction increases performance because fewer segments require less per-segment processing and memory overhead for the ingestion and query paths.
+
+As a general strategy, compaction is effective when you have data arriving out of chronological order, resulting in lots of small segments. This often happens, for example, if you are appending data using `appendToExisting` for [native batch](./native-batch.md) ingestion. Conversely, if you are rewriting your data with each ingestion task, you don't need to use compaction.
+
+In some cases you can use compaction to reduce segment size. For example, if a misconfigured ingestion task creates oversized segments, you can create a compaction task to split the segment files into smaller, more optimally sized ones.
+
+See [Segment optimization](../operations/segment-optimization.md) for guidance to determine if compaction will help in your environment.
+
+
+## Types of segment compaction
+You can configure the Druid Coordinator to perform automatic compaction, also called auto-compaction, for a datasource. Using a segment search policy, the coordinator periodically identifies segments for compaction, starting with the newest and working back to the oldest. When segments can benefit from compaction, the coordinator automatically submits a compaction task.
+
+Automatic compaction works in most use cases and should be your first option. To learn more about automatic compaction, see [Compacting Segments](../design/coordinator.md#compacting-segments).
+
+In cases where you require more control over compaction, you can manually submit compaction tasks. For example:
+- Automatic compaction is too slow.
+- You want to force compaction for a specific time range.
+- You want to compact data out of chronological order.
+
+See [Setting up a manual compaction task](#setting-up-manual-compaction) for more about manual compaction tasks.
+
+
+## Data handling with compaction
+During compaction, Druid overwrites the original set of segments with the compacted set without modifying the data. During compaction, Druid locks the segments for the time interval being compacted to ensure data consistency.
+
+If an ingestion task needs to write data to a segment for a time interval locked for compaction, the ingestion task supersedes the compaction task and the compaction task fails without finishing. For manual compaction tasks, you can adjust the input spec interval to avoid conflicts between ingestion and compaction. For automatic compaction, you can set the `skipOffsetFromLatest` key to adjust the auto-compaction starting point from the current time to reduce the chance of conflicts between ingestion and compaction. See [Compaction dynamic configuration](../configuration/index.md#compaction-dynamic-configuration) for more information.

Review comment:
       You can also change the priority for compaction task so that the compaction task supersedes the ingestion task
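
For reference, task priority is set through the [task context](./tasks.md#context); a sketch of raising a manual compaction task's priority (the value 75 is illustrative):

```json
"context": {
  "priority": 75
}
```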

##########
File path: docs/ingestion/compaction.md
##########
@@ -0,0 +1,210 @@
+---
+id: compaction
+title: "Compaction"
+description: "Defines compaction and automatic compaction (auto-compaction or autocompaction) as a strategy for segment optimization. Use cases for compaction. Describes compaction task configuration."
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+Query performance in Apache Druid depends on optimally sized segments. Compaction is one strategy you can use to optimize segment size for your Druid database. Compaction tasks read an existing set of segments for a given time interval and combine the data into a new "compacted" set of segments. The compacted segments are generally larger, but there are fewer of them. Compaction increases performance because fewer segments require less per-segment processing and memory overhead for the ingestion and query paths.
+
+As a general strategy, compaction is effective when you have data arriving out of chronological order, resulting in lots of small segments. This often happens, for example, if you are appending data using `appendToExisting` for [native batch](./native-batch.md) ingestion. Conversely, if you are rewriting your data with each ingestion task, you don't need to use compaction.
+
+In some cases you can use compaction to reduce segment size. For example, if a misconfigured ingestion task creates oversized segments, you can create a compaction task to split the segment files into smaller, more optimally sized ones.
+
+See [Segment optimization](../operations/segment-optimization.md) for guidance to determine if compaction will help in your environment.
+
+
+## Types of segment compaction
+You can configure the Druid Coordinator to perform automatic compaction, also called auto-compaction, for a datasource. Using a segment search policy, the coordinator periodically identifies segments for compaction, starting with the newest and working back to the oldest. When segments can benefit from compaction, the coordinator automatically submits a compaction task.
+
+Automatic compaction works in most use cases and should be your first option. To learn more about automatic compaction, see [Compacting Segments](../design/coordinator.md#compacting-segments).
+
+In cases where you require more control over compaction, you can manually submit compaction tasks. For example:
+- Automatic compaction is too slow.
+- You want to force compaction for a specific time range.
+- You want to compact data out of chronological order.
+
+See [Setting up a manual compaction task](#setting-up-manual-compaction) for more about manual compaction tasks.
+
+
+## Data handling with compaction
+During compaction, Druid overwrites the original set of segments with the compacted set without modifying the data. During compaction, Druid locks the segments for the time interval being compacted to ensure data consistency.
+
+If an ingestion task needs to write data to a segment for a time interval locked for compaction, the ingestion task supersedes the compaction task and the compaction task fails without finishing. For manual compaction tasks, you can adjust the input spec interval to avoid conflicts between ingestion and compaction. For automatic compaction, you can set the `skipOffsetFromLatest` key to adjust the auto-compaction starting point from the current time to reduce the chance of conflicts between ingestion and compaction. See [Compaction dynamic configuration](../configuration/index.md#compaction-dynamic-configuration) for more information.
+
+### Segment granularity handling
+
+Unless you modify the segment granularity in the [granularity spec](#compaction-granularity-spec), Druid attempts to retain the granularity for the compacted segments. When segments have different segment granularities with no overlap in interval, Druid creates a separate compaction task for each to retain the segment granularity in the compacted segment. If segments have different segment granularities before compaction but there is some overlap in interval, Druid attempts to find the start and end of the overlapping interval and uses the closest segment granularity level for the compacted segment.
+
+### Query granularity handling
+
+Unless you modify the query granularity in the [granularity spec](#compaction-granularity-spec), Druid retains the query granularity for the compacted segments. If segments have different query granularities before compaction, Druid chooses the finest level of granularity for the resulting compacted segment. For example, if a compaction task combines two segments, one with day query granularity and one with minute query granularity, the resulting segment uses minute query granularity.
+
+> In Apache Druid 0.21.0 and prior, Druid sets the granularity for compacted segments to the default granularity of `NONE` regardless of the query granularity of the original segments.
+
+If you configure query granularity in compaction to go from a finer granularity, like month, to a coarser query granularity, like year, then Druid overshadows the original segments with the new, coarser-granularity segments. Because the new segments have a coarser granularity, running a kill task to remove the overshadowed segments for those intervals will cause you to permanently lose the finer-granularity data.
+
+### Dimension handling
+Apache Druid supports schema changes. Therefore, dimensions can be different across segments even if they are a part of the same data source. See [Different schemas among segments](../design/segments.md#different-schemas-among-segments). If the input segments have different dimensions, the resulting compacted segment includes all dimensions of the input segments. 
+
+Even when the input segments have the same set of dimensions, the dimension order or the data type of dimensions can be different. The dimension order and data types of more recent segments take precedence over those of older segments, because more recent segments are more likely to have the preferred order and data types.
+
+If you want to use your own ordering and types, you can specify a custom `dimensionsSpec` in the compaction task spec.
+
+### Rollup
+Druid only rolls up the output segment when `rollup` is set for all input segments. See [Roll-up](../ingestion/index.md#rollup) for more details.
+You can check whether your segments are rolled up by using [Segment Metadata Queries](../querying/segmentmetadataquery.md#analysistypes).
+
+## Setting up manual compaction
+
+To perform a manual compaction, you submit a compaction task. Compaction tasks merge all segments for the defined interval according to the following syntax:
+
+```json
+{
+    "type": "compact",
+    "id": <task_id>,
+    "dataSource": <task_datasource>,
+    "ioConfig": <IO config>,
+    "dimensionsSpec": <custom dimensionsSpec>,
+    "metricsSpec": <custom metricsSpec>,
+    "granularitySpec": <custom granularitySpec>,
+    "tuningConfig": <parallel indexing task tuningConfig>,
+    "context": <task context>
+}
+```
+
+|Field|Description|Required|
+|-----|-----------|--------|
+|`type`|Task type. Should be `compact`|Yes|
+|`id`|Task id|No|
+|`dataSource`|Data source name to compact|Yes|
+|`ioConfig`|I/O configuration for compaction task. See [Compaction I/O configuration](#compaction-io-configuration) for details.|Yes|
+|`dimensionsSpec`|Custom dimensions spec. The compaction task uses the specified dimensions spec if it exists instead of generating one.|No|
+|`metricsSpec`|Custom metrics spec. The compaction task uses the specified metrics spec rather than generating one.|No|
+|`segmentGranularity`|When set, the compaction task changes the segment granularity for the given interval. Deprecated; use `granularitySpec` instead.|No|
+|`tuningConfig`|[Parallel indexing task tuningConfig](./native-batch.md#tuningconfig)|No|
+|`context`|[Task context](./tasks.md#context)|No|
+|`granularitySpec`|Custom `granularitySpec` to describe the `segmentGranularity` and `queryGranularity` for the compacted segments. See [Compaction granularitySpec](#compaction-granularity-spec).|No|
+
+To control the number of result segments per time chunk, you can set [maxRowsPerSegment](../configuration/index.md#compaction-dynamic-configuration) or [numShards](../ingestion/native-batch.md#tuningconfig).
+
+> You can run multiple compaction tasks in parallel. For example, if you want to compact the data for a year, you are not limited to running a single task for the entire year. You can run 12 compaction tasks with month-long intervals.
+
+A compaction task internally generates an `index` task spec for performing compaction work with some fixed parameters. For example, its `inputSource` is always the [DruidInputSource](./native-batch.md#druid-input-source), and `dimensionsSpec` and `metricsSpec` include all dimensions and metrics of the input segments by default.
+
+A compaction task exits without doing anything and issues a failure status code in either of the following cases:
+- The interval you specify has no data segments loaded.
+- The interval you specify is empty.
+
+Note that the metadata between input segments and the resulting compacted segments may differ if the metadata among the input segments differs as well. If all input segments have the same metadata, however, the resulting output segment will have the same metadata as all input segments.
+
+
+### Example compaction task
+The following JSON illustrates a compaction task to compact _all segments_ within the interval `2017-01-01/2018-01-01` and create new segments:
+
+```json
+{
+  "type" : "compact",
+  "dataSource" : "wikipedia",
+  "ioConfig" : {
+    "type": "compact",
+    "inputSpec": {
+      "type": "interval",
+      "interval": "2017-01-01/2018-01-01"
+    }
+  }
+}
+```
+
+This task doesn't specify a `granularitySpec`, so Druid retains the original segment granularity when compaction is complete.
+
+### Compaction I/O configuration
+
+The compaction `ioConfig` requires specifying `inputSpec` as follows:
+
+|Field|Description|Required|
+|-----|-----------|--------|
+|`type`|IO config type. Should be `compact`|Yes|
+|`inputSpec`|Input specification|Yes|
+
+There are currently two supported `inputSpec` formats.
+
+The interval `inputSpec` is:
+
+|Field|Description|Required|
+|-----|-----------|--------|
+|`type`|Input spec type. Should be `interval`|Yes|
+|`interval`|Interval to compact|Yes|
+
+The segments `inputSpec` is:
+
+|Field|Description|Required|
+|-----|-----------|--------|
+|`type`|Input spec type. Should be `segments`|Yes|
+|`segments`|A list of segment IDs|Yes|
+
+### Compaction granularity spec
+
+You can optionally use the `granularitySpec` object to configure the segment granularity and the query granularity of the compacted segments. The syntax is as follows:
+```json
+    "type": "compact",
+    "id": <task_id>,
+    "dataSource": <task_datasource>,
+    ...
+    "granularitySpec": {
+      "segmentGranularity": <time_period>,
+      "queryGranularity": <time_period>
+    }
+    ...
+```
+
+`granularitySpec` takes the following keys:
+
+|Field|Description|Required|
+|-----|-----------|--------|
+|`segmentGranularity`|Time chunking period for the segment granularity. Defaults to `null`, in which case Druid preserves the segment granularity of the original segments. Accepts all [Query granularities](../querying/granularities.md).|No|

Review comment:
       Maybe explain the behavior of null here too

##########
File path: docs/ingestion/compaction.md
##########
@@ -0,0 +1,210 @@
+---
+id: compaction
+title: "Compaction"
+description: "Defines compaction and automatic compaction (auto-compaction or autocompaction) as a strategy for segment optimization. Use cases for compaction. Describes compaction task configuration."
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+Query performance in Apache Druid depends on optimally sized segments. Compaction is one strategy you can use to optimize segment size for your Druid database. Compaction tasks read an existing set of segments for a given time interval and combine the data into a new "compacted" set of segments. The compacted segments are generally larger, but there are fewer of them. Compaction increases performance because fewer segments require less per-segment processing and memory overhead for the ingestion and query paths.
+
+As a general strategy, compaction is effective when you have data arriving out of chronological order, resulting in lots of small segments. This often happens, for example, if you are appending data using `appendToExisting` for [native batch](./native-batch.md) ingestion. Conversely, if you are rewriting your data with each ingestion task, you don't need to use compaction.
+
+In some cases you can use compaction to reduce segment size. For example, if a misconfigured ingestion task creates oversized segments, you can create a compaction task to split the segment files into smaller, more optimally sized ones.
+
+See [Segment optimization](../operations/segment-optimization.md) for guidance to determine if compaction will help in your environment.
+
+
+## Types of segment compaction
+You can configure the Druid Coordinator to perform automatic compaction, also called auto-compaction, for a datasource. Using a segment search policy, the coordinator periodically identifies segments for compaction, starting with the newest and working back to the oldest. When segments can benefit from compaction, the coordinator automatically submits a compaction task.
+
+Automatic compaction works in most use cases and should be your first option. To learn more about automatic compaction, see [Compacting Segments](../design/coordinator.md#compacting-segments).
+
+In cases where you require more control over compaction, you can manually submit compaction tasks. For example:
+- Automatic compaction is too slow.

Review comment:
       A compaction task completes at the same speed whether you submit it manually or via auto compaction. I think what we need to make clear here is that automatic compaction has an assigned number of task slots it can use. If this limit is reached, it waits until previously submitted auto compaction tasks are done. Manual compaction can use all available task slots, and you can get "faster" by manually submitting more tasks (for more intervals) concurrently.
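
For context, the slot budget mentioned here is governed by the coordinator's compaction dynamic configuration; a hedged sketch with illustrative values (parameter names as in the coordinator dynamic config):

```json
{
  "compactionTaskSlotRatio": 0.1,
  "maxCompactionTaskSlots": 100
}
```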

##########
File path: docs/configuration/index.md
##########
@@ -842,24 +842,24 @@ A description of the compaction config is:
 |`taskPriority`|[Priority](../ingestion/tasks.md#priority) of compaction task.|no (default = 25)|
 |`inputSegmentSizeBytes`|Maximum number of total segment bytes processed per compaction task. Since a time chunk must be processed in its entirety, if the segments for a particular time chunk have a total size in bytes greater than this parameter, compaction will not run for that time chunk. Because each compaction task runs with a single thread, setting this value too far above 1–2GB will result in compaction tasks taking an excessive amount of time.|no (default = 419430400)|
 |`maxRowsPerSegment`|Max number of rows per segment after compaction.|no|
-|`skipOffsetFromLatest`|The offset for searching segments to be compacted. Strongly recommended to set for realtime dataSources. |no (default = "P1D")|
+|`skipOffsetFromLatest`|The offset for searching segments to be compacted in [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) duration format. Strongly recommended to set for realtime dataSources. See [Data handling with compaction](../ingestion/compaction.md#data-handling-with-compaction)|no (default = "P1D")|
 |`tuningConfig`|Tuning config for compaction tasks. See below [Compaction Task TuningConfig](#compaction-tuningconfig).|no|
 |`taskContext`|[Task context](../ingestion/tasks.md#context) for compaction tasks.|no|
+|`granularitySpec`|Custom `granularitySpec` to describe the `segmentGranularity` for the compacted segments.|No|
 
 An example of compaction config is:
 
 ```json
 {
-  "dataSource": "wikiticker"
+  "dataSource": "wikiticker",

Review comment:
       Not have to be in this PR, but we should have a separate page dedicated to auto-compaction. This will allow us to go into more detail on why/how to use auto-compaction.
   
   

##########
File path: docs/ingestion/data-management.md
##########
@@ -21,173 +21,9 @@ title: "Data management"
   ~ specific language governing permissions and limitations
   ~ under the License.
   -->
+Within the context of this topic, data management refers to Apache Druid's data maintenance capabilities for existing datasources. There are several options to help you keep your data relevant and your Druid cluster performant, for example: updating, reingesting, adding lookups, reindexing, and deleting data.
 
-
-
-
-## Schema changes
-
-Schemas for datasources can change at any time and Apache Druid supports different schemas among segments.
-
-### Replacing segments
-
-Druid uniquely
-identifies segments using the datasource, interval, version, and partition number. The partition number is only visible in the segment id if
-there are multiple segments created for some granularity of time. For example, if you have hourly segments, but you
-have more data in an hour than a single segment can hold, you can create multiple segments for the same hour. These segments will share
-the same datasource, interval, and version, but have linearly increasing partition numbers.
-
-```
-foo_2015-01-01/2015-01-02_v1_0
-foo_2015-01-01/2015-01-02_v1_1
-foo_2015-01-01/2015-01-02_v1_2
-```
-
-In the example segments above, the dataSource = foo, interval = 2015-01-01/2015-01-02, version = v1, partitionNum = 0.
-If at some later point in time, you reindex the data with a new schema, the newly created segments will have a higher version id.
-
-```
-foo_2015-01-01/2015-01-02_v2_0
-foo_2015-01-01/2015-01-02_v2_1
-foo_2015-01-01/2015-01-02_v2_2
-```
-
-Druid batch indexing (either Hadoop-based or IndexTask-based) guarantees atomic updates on an interval-by-interval basis.
-In our example, until all `v2` segments for `2015-01-01/2015-01-02` are loaded in a Druid cluster, queries exclusively use `v1` segments.
-Once all `v2` segments are loaded and queryable, all queries ignore `v1` segments and switch to the `v2` segments.
-Shortly afterwards, the `v1` segments are unloaded from the cluster.
-
-Note that updates that span multiple segment intervals are only atomic within each interval. They are not atomic across the entire update.
-For example, you have segments such as the following:
-
-```
-foo_2015-01-01/2015-01-02_v1_0
-foo_2015-01-02/2015-01-03_v1_1
-foo_2015-01-03/2015-01-04_v1_2
-```
-
-`v2` segments will be loaded into the cluster as soon as they are built and replace `v1` segments for the period of time the
-segments overlap. Before v2 segments are completely loaded, your cluster may have a mixture of `v1` and `v2` segments.
-
-```
-foo_2015-01-01/2015-01-02_v1_0
-foo_2015-01-02/2015-01-03_v2_1
-foo_2015-01-03/2015-01-04_v1_2
-```
-
-In this case, queries may hit a mixture of `v1` and `v2` segments.
-
-### Different schemas among segments

Review comment:
       Why is this section removed? 

##########
File path: docs/ingestion/compaction.md
##########
@@ -0,0 +1,210 @@
+---
+id: compaction
+title: "Compaction"
+description: "Defines compaction and automatic compaction (auto-compaction or autocompaction) as a strategy for segment optimization. Use cases for compaction. Describes compaction task configuration."
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+Query performance in Apache Druid depends on optimally sized segments. Compaction is one strategy you can use to optimize segment size for your Druid database. Compaction tasks read an existing set of segments for a given time interval and combine the data into a new "compacted" set of segments. The compacted segments are generally larger, but there are fewer of them. Compaction increases performance because fewer segments require less per-segment processing and memory overhead for the ingestion and query paths.
+
+As a general strategy, compaction is effective when you have data arriving out of chronological order, resulting in lots of small segments. This often happens, for example, if you are appending data using `appendToExisting` for [native batch](./native-batch.md) ingestion. Conversely, if you are rewriting your data with each ingestion task, you don't need to use compaction.
+
+In some cases you can use compaction to reduce segment size. For example, if a misconfigured ingestion task creates oversized segments, you can create a compaction task to split the segment files into smaller, more optimally sized ones.
+
+See [Segment optimization](../operations/segment-optimization.md) for guidance to determine if compaction will help in your environment.
+
+
+## Types of segment compaction
+You can configure the Druid Coordinator to perform automatic compaction, also called auto-compaction, for a datasource. Using a segment search policy, the coordinator periodically identifies segments for compaction, starting with the newest and working back to the oldest. When segments can benefit from compaction, the coordinator automatically submits a compaction task.

Review comment:
       "When segments can benefit from compaction" is a little vague. Auto compaction looks for segments that are either not compacted or were compacted with a different spec (that is, the spec changed, e.g. a change in partitioning) and submits compaction tasks for those and only those.

##########
File path: docs/ingestion/compaction.md
##########
@@ -0,0 +1,210 @@
+---
+id: compaction
+title: "Compaction"
+description: "Defines compaction and automatic compaction (auto-compaction or autocompaction) as a strategy for segment optimization. Use cases for compaction. Describes compaction task configuration."
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+Query performance in Apache Druid depends on optimally sized segments. Compaction is one strategy you can use to optimize segment size for your Druid database. Compaction tasks read an existing set of segments for a given time interval and combine the data into a new "compacted" set of segments. The compacted segments are generally larger, but there are fewer of them. Compaction increases performance because fewer segments require less per-segment processing and less memory overhead for ingestion and for query paths.
+
+As a general strategy, compaction is effective when you have data arriving out of chronological order, resulting in many small segments. This often happens, for example, if you are appending data using `appendToExisting` for [native batch](./native-batch.md) ingestion. Conversely, if you are rewriting your data with each ingestion task, you don't need to use compaction.
+
+In some cases you can use compaction to reduce segment size. For example, if a misconfigured ingestion task creates oversized segments, you can create a compaction task to split the segment files into smaller, more optimally sized ones.
+
+See [Segment optimization](../operations/segment-optimization.md) for guidance to determine if compaction will help in your environment.
+
+
+## Types of segment compaction
+You can configure the Druid Coordinator to perform automatic compaction, also called auto-compaction, for a datasource. Using a segment search policy, the coordinator periodically identifies segments for compaction, working from newest to oldest. When segments can benefit from compaction, the coordinator automatically submits a compaction task.
+
+Automatic compaction works in most use cases and should be your first option. To learn more about automatic compaction, see [Compacting Segments](../design/coordinator.md#compacting-segments).
+
+In cases where you require more control over compaction, you can manually submit compaction tasks. For example:
+- Automatic compaction is too slow.
+- You want to force compaction for a specific time range.
+- You want to compact data out of chronological order.
+
+See [Setting up a manual compaction task](#setting-up-manual-compaction) for more about manual compaction tasks.
+
+
+## Data handling with compaction
+During compaction, Druid overwrites the original set of segments with the compacted set without modifying the data. Druid also locks the segments for the time interval being compacted to ensure data consistency.
+
+If an ingestion task needs to write data to a segment for a time interval locked for compaction, the ingestion task supersedes the compaction task, and the compaction task fails without finishing. For manual compaction tasks, you can adjust the input spec interval to avoid conflicts between ingestion and compaction. For automatic compaction, you can set the `skipOffsetFromLatest` key to adjust the auto-compaction starting point from the current time to reduce the chance of conflicts between ingestion and compaction. See [Compaction dynamic configuration](../configuration/index.md#compaction-dynamic-configuration) for more information.
+
+### Segment granularity handling
+
+Unless you modify the segment granularity in the [granularity spec](#compaction-granularity-spec), Druid attempts to retain the granularity for the compacted segments. When segments have different segment granularities with no overlap in interval, Druid creates a separate compaction task for each segment to retain the segment granularity in the compacted segment. If segments have different segment granularities before compaction but there is some overlap in interval, Druid attempts to find the start and end of the overlapping interval and uses the closest segment granularity level for the compacted segment.
+
+### Query granularity handling
+
+Unless you modify the query granularity in the [granularity spec](#compaction-granularity-spec), Druid retains the query granularity for the compacted segments. If segments have different query granularities before compaction, Druid chooses the finest level of granularity for the resulting compacted segment. For example, if a compaction task combines two segments, one with day query granularity and one with minute query granularity, the resulting segment uses minute query granularity.
+
+> In Apache Druid 0.21.0 and prior, Druid sets the granularity for compacted segments to the default granularity of `NONE` regardless of the query granularity of the original segments.
+
+If you configure query granularity in compaction to go from a finer granularity like month to a coarser query granularity like year, then the new, coarser-granularity segments overshadow the original, finer-granularity segments. Because the new segments have a coarser granularity, running a kill task to remove the overshadowed segments for those intervals causes you to permanently lose the finer-granularity data.
+
+### Dimension handling
+Apache Druid supports schema changes. Therefore, dimensions can be different across segments even if they are a part of the same data source. See [Different schemas among segments](../design/segments.md#different-schemas-among-segments). If the input segments have different dimensions, the resulting compacted segment includes all dimensions of the input segments. 
+
+Even when the input segments have the same set of dimensions, the dimension order or the data types of dimensions can differ. The dimension order and data types of more recent segments take precedence over those of older segments, because more recent segments are more likely to have the preferred order and data types.
+
+If you want to use your own ordering and types, you can specify a custom `dimensionsSpec` in the compaction task spec.
+
+### Rollup
+Druid only rolls up the output segment when `rollup` is set for all input segments. See [Roll-up](../ingestion/index.md#rollup) for more details.
+You can check whether your segments are rolled up by using [Segment Metadata Queries](../querying/segmentmetadataquery.md#analysistypes).
+
+## Setting up manual compaction
+
+To perform a manual compaction, you submit a compaction task. Compaction tasks merge all segments for the defined interval according to the following syntax:
+
+```json

Review comment:
       granularitySpec is missing from this example 

##########
File path: docs/ingestion/compaction.md
##########
@@ -0,0 +1,210 @@
+---
+id: compaction
+title: "Compaction"
+description: "Defines compaction and automatic compaction (auto-compaction or autocompaction) as a strategy for segment optimization. Use cases for compaction. Describes compaction task configuration."
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+Query performance in Apache Druid depends on optimally sized segments. Compaction is one strategy you can use to optimize segment size for your Druid database. Compaction tasks read an existing set of segments for a given time interval and combine the data into a new "compacted" set of segments. The compacted segments are generally larger, but there are fewer of them. Compaction increases performance because fewer segments require less per-segment processing and less memory overhead for ingestion and for query paths.
+
+As a general strategy, compaction is effective when you have data arriving out of chronological order, resulting in many small segments. This often happens, for example, if you are appending data using `appendToExisting` for [native batch](./native-batch.md) ingestion. Conversely, if you are rewriting your data with each ingestion task, you don't need to use compaction.
+
+In some cases you can use compaction to reduce segment size. For example, if a misconfigured ingestion task creates oversized segments, you can create a compaction task to split the segment files into smaller, more optimally sized ones.
+
+See [Segment optimization](../operations/segment-optimization.md) for guidance to determine if compaction will help in your environment.
+
+
+## Types of segment compaction
+You can configure the Druid Coordinator to perform automatic compaction, also called auto-compaction, for a datasource. Using a segment search policy, the coordinator periodically identifies segments for compaction, working from newest to oldest. When segments can benefit from compaction, the coordinator automatically submits a compaction task.
+
+Automatic compaction works in most use cases and should be your first option. To learn more about automatic compaction, see [Compacting Segments](../design/coordinator.md#compacting-segments).
+
+In cases where you require more control over compaction, you can manually submit compaction tasks. For example:
+- Automatic compaction is too slow.
+- You want to force compaction for a specific time range.
+- You want to compact data out of chronological order.
+
+See [Setting up a manual compaction task](#setting-up-manual-compaction) for more about manual compaction tasks.
+
+
+## Data handling with compaction
+During compaction, Druid overwrites the original set of segments with the compacted set without modifying the data. Druid also locks the segments for the time interval being compacted to ensure data consistency.

Review comment:
       You can also add and remove dimensions with compaction task. That can change your data too. 

##########
File path: docs/ingestion/compaction.md
##########
@@ -0,0 +1,210 @@
+---
+id: compaction
+title: "Compaction"
+description: "Defines compaction and automatic compaction (auto-compaction or autocompaction) as a strategy for segment optimization. Use cases for compaction. Describes compaction task configuration."
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+Query performance in Apache Druid depends on optimally sized segments. Compaction is one strategy you can use to optimize segment size for your Druid database. Compaction tasks read an existing set of segments for a given time interval and combine the data into a new "compacted" set of segments. The compacted segments are generally larger, but there are fewer of them. Compaction increases performance because fewer segments require less per-segment processing and less memory overhead for ingestion and for query paths.
+
+As a general strategy, compaction is effective when you have data arriving out of chronological order, resulting in many small segments. This often happens, for example, if you are appending data using `appendToExisting` for [native batch](./native-batch.md) ingestion. Conversely, if you are rewriting your data with each ingestion task, you don't need to use compaction.
+
+In some cases you can use compaction to reduce segment size. For example, if a misconfigured ingestion task creates oversized segments, you can create a compaction task to split the segment files into smaller, more optimally sized ones.
+
+See [Segment optimization](../operations/segment-optimization.md) for guidance to determine if compaction will help in your environment.
+
+
+## Types of segment compaction
+You can configure the Druid Coordinator to perform automatic compaction, also called auto-compaction, for a datasource. Using a segment search policy, the coordinator periodically identifies segments for compaction, working from newest to oldest. When segments can benefit from compaction, the coordinator automatically submits a compaction task.
+
+Automatic compaction works in most use cases and should be your first option. To learn more about automatic compaction, see [Compacting Segments](../design/coordinator.md#compacting-segments).
+
+In cases where you require more control over compaction, you can manually submit compaction tasks. For example:
+- Automatic compaction is too slow.
+- You want to force compaction for a specific time range.
+- You want to compact data out of chronological order.
+
+See [Setting up a manual compaction task](#setting-up-manual-compaction) for more about manual compaction tasks.
+
+
+## Data handling with compaction
+During compaction, Druid overwrites the original set of segments with the compacted set without modifying the data. Druid also locks the segments for the time interval being compacted to ensure data consistency.
+
+If an ingestion task needs to write data to a segment for a time interval locked for compaction, the ingestion task supersedes the compaction task, and the compaction task fails without finishing. For manual compaction tasks, you can adjust the input spec interval to avoid conflicts between ingestion and compaction. For automatic compaction, you can set the `skipOffsetFromLatest` key to adjust the auto-compaction starting point from the current time to reduce the chance of conflicts between ingestion and compaction. See [Compaction dynamic configuration](../configuration/index.md#compaction-dynamic-configuration) for more information.
+
+### Segment granularity handling
+
+Unless you modify the segment granularity in the [granularity spec](#compaction-granularity-spec), Druid attempts to retain the granularity for the compacted segments. When segments have different segment granularities with no overlap in interval, Druid creates a separate compaction task for each segment to retain the segment granularity in the compacted segment. If segments have different segment granularities before compaction but there is some overlap in interval, Druid attempts to find the start and end of the overlapping interval and uses the closest segment granularity level for the compacted segment.
+
+### Query granularity handling
+
+Unless you modify the query granularity in the [granularity spec](#compaction-granularity-spec), Druid retains the query granularity for the compacted segments. If segments have different query granularities before compaction, Druid chooses the finest level of granularity for the resulting compacted segment. For example, if a compaction task combines two segments, one with day query granularity and one with minute query granularity, the resulting segment uses minute query granularity.
+
+> In Apache Druid 0.21.0 and prior, Druid sets the granularity for compacted segments to the default granularity of `NONE` regardless of the query granularity of the original segments.
+
+If you configure query granularity in compaction to go from a finer granularity like month to a coarser query granularity like year, then the new, coarser-granularity segments overshadow the original, finer-granularity segments. Because the new segments have a coarser granularity, running a kill task to remove the overshadowed segments for those intervals causes you to permanently lose the finer-granularity data.
+
+### Dimension handling
+Apache Druid supports schema changes. Therefore, dimensions can be different across segments even if they are a part of the same data source. See [Different schemas among segments](../design/segments.md#different-schemas-among-segments). If the input segments have different dimensions, the resulting compacted segment includes all dimensions of the input segments. 
+
+Even when the input segments have the same set of dimensions, the dimension order or the data types of dimensions can differ. The dimension order and data types of more recent segments take precedence over those of older segments, because more recent segments are more likely to have the preferred order and data types.
+
+If you want to use your own ordering and types, you can specify a custom `dimensionsSpec` in the compaction task spec.
+
+### Rollup
+Druid only rolls up the output segment when `rollup` is set for all input segments. See [Roll-up](../ingestion/index.md#rollup) for more details.
+You can check whether your segments are rolled up by using [Segment Metadata Queries](../querying/segmentmetadataquery.md#analysistypes).
+
+## Setting up manual compaction
+
+To perform a manual compaction, you submit a compaction task. Compaction tasks merge all segments for the defined interval according to the following syntax:
+
+```json
+{
+    "type": "compact",
+    "id": <task_id>,
+    "dataSource": <task_datasource>,
+    "ioConfig": <IO config>,
+    "dimensionsSpec" <custom dimensionsSpec>,
+    "metricsSpec" <custom metricsSpec>,
+    "tuningConfig" <parallel indexing task tuningConfig>,
+    "context": <task context>
+}
+```
+
+|Field|Description|Required|
+|-----|-----------|--------|
+|`type`|Task type. Should be `compact`|Yes|
+|`id`|Task id|No|
+|`dataSource`|Data source name to compact|Yes|
+|`ioConfig`|I/O configuration for compaction task. See [Compaction I/O configuration](#compaction-io-configuration) for details.|Yes|
+|`dimensionsSpec`|Custom dimensions spec. When provided, the compaction task uses it instead of generating one.|No|
+|`metricsSpec`|Custom metrics spec. The compaction task uses the specified metrics spec rather than generating one.|No|
+|`segmentGranularity`|When set, the compaction task changes the segment granularity for the given interval. Deprecated. Use `granularitySpec`.|No|
+|`tuningConfig`|[Parallel indexing task tuningConfig](./native-batch.md#tuningconfig)|No|
+|`context`|[Task context](./tasks.md#context)|No|
+|`granularitySpec`|Custom `granularitySpec` to describe the `segmentGranularity` and `queryGranularity` for the compacted segments. See [Compaction granularitySpec](#compaction-granularity-spec).|No|
+
+To control the number of result segments per time chunk, you can set [maxRowsPerSegment](../configuration/index.md#compaction-dynamic-configuration) or [numShards](../ingestion/native-batch.md#tuningconfig).
+
+> You can run multiple compaction tasks in parallel. For example, if you want to compact the data for a year, you are not limited to running a single task for the entire year. You can run 12 compaction tasks with month-long intervals.
+
+A compaction task internally generates an `index` task spec for performing compaction work with some fixed parameters. For example, its `inputSource` is always the [DruidInputSource](./native-batch.md#druid-input-source), and `dimensionsSpec` and `metricsSpec` include all dimensions and metrics of the input segments by default.

Review comment:
       I would change from "...dimensions and metrics of the input segments by default." to something like "...dimensions and metrics of the input segments when not given in the compaction spec".

##########
File path: docs/ingestion/compaction.md
##########
@@ -0,0 +1,210 @@
+---
+id: compaction
+title: "Compaction"
+description: "Defines compaction and automatic compaction (auto-compaction or autocompaction) as a strategy for segment optimization. Use cases for compaction. Describes compaction task configuration."
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+Query performance in Apache Druid depends on optimally sized segments. Compaction is one strategy you can use to optimize segment size for your Druid database. Compaction tasks read an existing set of segments for a given time interval and combine the data into a new "compacted" set of segments. The compacted segments are generally larger, but there are fewer of them. Compaction increases performance because fewer segments require less per-segment processing and less memory overhead for ingestion and for query paths.
+
+As a general strategy, compaction is effective when you have data arriving out of chronological order, resulting in many small segments. This often happens, for example, if you are appending data using `appendToExisting` for [native batch](./native-batch.md) ingestion. Conversely, if you are rewriting your data with each ingestion task, you don't need to use compaction.
+
+In some cases you can use compaction to reduce segment size. For example, if a misconfigured ingestion task creates oversized segments, you can create a compaction task to split the segment files into smaller, more optimally sized ones.
+
+See [Segment optimization](../operations/segment-optimization.md) for guidance to determine if compaction will help in your environment.
+
+
+## Types of segment compaction
+You can configure the Druid Coordinator to perform automatic compaction, also called auto-compaction, for a datasource. Using a segment search policy, the coordinator periodically identifies segments for compaction, working from newest to oldest. When segments can benefit from compaction, the coordinator automatically submits a compaction task.
+
+Automatic compaction works in most use cases and should be your first option. To learn more about automatic compaction, see [Compacting Segments](../design/coordinator.md#compacting-segments).
+
+In cases where you require more control over compaction, you can manually submit compaction tasks. For example:
+- Automatic compaction is too slow.
+- You want to force compaction for a specific time range.
+- You want to compact data out of chronological order.
+
+See [Setting up a manual compaction task](#setting-up-manual-compaction) for more about manual compaction tasks.
+
+
+## Data handling with compaction
+During compaction, Druid overwrites the original set of segments with the compacted set without modifying the data. Druid also locks the segments for the time interval being compacted to ensure data consistency.
+
+If an ingestion task needs to write data to a segment for a time interval locked for compaction, the ingestion task supersedes the compaction task, and the compaction task fails without finishing. For manual compaction tasks, you can adjust the input spec interval to avoid conflicts between ingestion and compaction. For automatic compaction, you can set the `skipOffsetFromLatest` key to adjust the auto-compaction starting point from the current time to reduce the chance of conflicts between ingestion and compaction. See [Compaction dynamic configuration](../configuration/index.md#compaction-dynamic-configuration) for more information.
+
+### Segment granularity handling
+
+Unless you modify the segment granularity in the [granularity spec](#compaction-granularity-spec), Druid attempts to retain the granularity for the compacted segments. When segments have different segment granularities with no overlap in interval, Druid creates a separate compaction task for each segment to retain the segment granularity in the compacted segment. If segments have different segment granularities before compaction but there is some overlap in interval, Druid attempts to find the start and end of the overlapping interval and uses the closest segment granularity level for the compacted segment.
+
+### Query granularity handling
+
+Unless you modify the query granularity in the [granularity spec](#compaction-granularity-spec), Druid retains the query granularity for the compacted segments. If segments have different query granularities before compaction, Druid chooses the finest level of granularity for the resulting compacted segment. For example, if a compaction task combines two segments, one with day query granularity and one with minute query granularity, the resulting segment uses minute query granularity.
+
+> In Apache Druid 0.21.0 and prior, Druid sets the granularity for compacted segments to the default granularity of `NONE` regardless of the query granularity of the original segments.
+
+If you configure query granularity in compaction to go from a finer granularity like month to a coarser query granularity like year, then the new, coarser-granularity segments overshadow the original, finer-granularity segments. Because the new segments have a coarser granularity, running a kill task to remove the overshadowed segments for those intervals causes you to permanently lose the finer-granularity data.
+
+### Dimension handling
+Apache Druid supports schema changes. Therefore, dimensions can be different across segments even if they are a part of the same data source. See [Different schemas among segments](../design/segments.md#different-schemas-among-segments). If the input segments have different dimensions, the resulting compacted segment includes all dimensions of the input segments. 
+
+Even when the input segments have the same set of dimensions, the dimension order or the data types of dimensions can differ. The dimension order and data types of more recent segments take precedence over those of older segments, because more recent segments are more likely to have the preferred order and data types.
+
+If you want to use your own ordering and types, you can specify a custom `dimensionsSpec` in the compaction task spec.
+
+### Rollup
+Druid only rolls up the output segment when `rollup` is set for all input segments. See [Roll-up](../ingestion/index.md#rollup) for more details.
+You can check whether your segments are rolled up by using [Segment Metadata Queries](../querying/segmentmetadataquery.md#analysistypes).
+

Review comment:
       You can't change rollup via compaction task right now.
   
   To add to Suneet comment, for dimension order - you can change it to add or remove columns too. For example, for old data, you want no longer need x,y,z columns and wish to remove them or maybe you want to add a new aggregation on old data.

##########
File path: docs/configuration/index.md
##########
@@ -842,24 +842,24 @@ A description of the compaction config is:
 |`taskPriority`|[Priority](../ingestion/tasks.md#priority) of compaction task.|no (default = 25)|
 |`inputSegmentSizeBytes`|Maximum number of total segment bytes processed per compaction task. Since a time chunk must be processed in its entirety, if the segments for a particular time chunk have a total size in bytes greater than this parameter, compaction will not run for that time chunk. Because each compaction task runs with a single thread, setting this value too far above 1–2GB will result in compaction tasks taking an excessive amount of time.|no (default = 419430400)|
 |`maxRowsPerSegment`|Max number of rows per segment after compaction.|no|
-|`skipOffsetFromLatest`|The offset for searching segments to be compacted. Strongly recommended to set for realtime dataSources. |no (default = "P1D")|
+|`skipOffsetFromLatest`|The offset for searching segments to be compacted in [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) duration format. Strongly recommended to set for realtime dataSources. See [Data handling with compaction](../ingestion/compaction.md#data-handling-with-compaction) for details.|no (default = "P1D")|
 |`tuningConfig`|Tuning config for compaction tasks. See below [Compaction Task TuningConfig](#compaction-tuningconfig).|no|
 |`taskContext`|[Task context](../ingestion/tasks.md#context) for compaction tasks.|no|
+|`granularitySpec`|Custom `granularitySpec` to describe the `segmentGranularity` for the compacted segments.|no|

Review comment:
       Note that this is also different from manual compaction's granularitySpec described in #compaction-granularity-spec since the one here does not support `queryGranularity`

##########
File path: docs/ingestion/compaction.md
##########
@@ -0,0 +1,210 @@
+---
+id: compaction
+title: "Compaction"
+description: "Defines compaction and automatic compaction (auto-compaction or autocompaction) as a strategy for segment optimization. Use cases for compaction. Describes compaction task configuration."
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+Query performance in Apache Druid depends on optimally sized segments. Compaction is one strategy you can use to optimize segment size for your Druid database. Compaction tasks read an existing set of segments for a given time interval and combine the data into a new "compacted" set of segments. The compacted segments are generally larger, but there are fewer of them. Compaction increases performance because fewer segments require less per-segment processing and less memory overhead for ingestion and for query paths.
+
+As a general strategy, compaction is effective when you have data arriving out of chronological order, resulting in many small segments. This often happens, for example, if you are appending data using `appendToExisting` for [native batch](./native-batch.md) ingestion. Conversely, if you are rewriting your data with each ingestion task, you don't need to use compaction.

Review comment:
       You may also want to use compaction to change from dynamic partitioning (best effort roll up) to hash/range (perfect roll up)

##########
File path: docs/ingestion/compaction.md
##########
@@ -0,0 +1,210 @@
+---
+id: compaction
+title: "Compaction"
+description: "Defines compaction and automatic compaction (auto-compaction or autocompaction) as a strategy for segment optimization. Use cases for compaction. Describes compaction task configuration."
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+Query performance in Apache Druid depends on optimally sized segments. Compaction is one strategy you can use to optimize segment size for your Druid database. Compaction tasks read an existing set of segments for a given time interval and combine the data into a new "compacted" set of segments. The compacted segments are generally larger, but there are fewer of them. Compaction increases performance because fewer segments require less per-segment processing and less memory overhead for ingestion and for query paths.
+
+As a general strategy, compaction is effective when you have data arriving out of chronological order, resulting in many small segments. This often happens, for example, if you are appending data using `appendToExisting` for [native batch](./native-batch.md) ingestion. Conversely, if you are rewriting your data with each ingestion task, you don't need to use compaction.
+
+In some cases you can use compaction to reduce segment size. For example, if a misconfigured ingestion task creates oversized segments, you can create a compaction task to split the segment files into smaller, more optimally sized ones.
+
+See [Segment optimization](../operations/segment-optimization.md) for guidance to determine if compaction will help in your environment.
+
+
+## Types of segment compaction
+You can configure the Druid Coordinator to perform automatic compaction, also called auto-compaction, for a datasource. Using a segment search policy, the coordinator periodically identifies segments for compaction, working from newest to oldest. When segments can benefit from compaction, the coordinator automatically submits a compaction task.
+
+Automatic compaction works in most use cases and should be your first option. To learn more about automatic compaction, see [Compacting Segments](../design/coordinator.md#compacting-segments).
+
+In cases where you require more control over compaction, you can manually submit compaction tasks. For example:
+- Automatic compaction is too slow.
+- You want to force compaction for a specific time range.
+- You want to compact data out of chronological order.
+
+See [Setting up a manual compaction task](#setting-up-manual-compaction) for more about manual compaction tasks.
+
+
+## Data handling with compaction
+During compaction, Druid overwrites the original set of segments with the compacted set without modifying the data. Druid also locks the segments for the time interval being compacted to ensure data consistency.
+
+If an ingestion task needs to write data to a segment for a time interval locked for compaction, the ingestion task supersedes the compaction task, and the compaction task fails without finishing. For manual compaction tasks, you can adjust the input spec interval to avoid conflicts between ingestion and compaction. For automatic compaction, you can set the `skipOffsetFromLatest` key to adjust the auto-compaction starting point from the current time to reduce the chance of conflicts between ingestion and compaction. See [Compaction dynamic configuration](../configuration/index.md#compaction-dynamic-configuration) for more information.
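+
+For example, a minimal datasource compaction config that moves the auto-compaction starting point back two hours from the current time might look like the following sketch; the datasource name and offset value are hypothetical:
+
+```json
+{
+  "dataSource": "wikipedia",
+  "skipOffsetFromLatest": "PT2H"
+}
+```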
+
+### Segment granularity handling
+
+Unless you modify the segment granularity in the [granularity spec](#compaction-granularity-spec), Druid attempts to retain the granularity for the compacted segments. When segments have different segment granularities with no overlap in interval, Druid creates a separate compaction task for each segment to retain the segment granularity in the compacted segment. If segments have different segment granularities before compaction but there is some overlap in interval, Druid attempts to find the start and end of the overlapping interval and uses the closest segment granularity level for the compacted segment.
+
+### Query granularity handling
+
+Unless you modify the query granularity in the [granularity spec](#compaction-granularity-spec), Druid retains the query granularity for the compacted segments. If segments have different query granularities before compaction, Druid chooses the finest level of granularity for the resulting compacted segment. For example, if a compaction task combines two segments, one with day query granularity and one with minute query granularity, the resulting segment uses minute query granularity.
+
+> In Apache Druid 0.21.0 and prior, Druid sets the granularity for compacted segments to the default granularity of `NONE` regardless of the query granularity of the original segments.
+
+If you configure query granularity in compaction to go from a finer granularity like month to a coarser query granularity like year, then the new, coarser-granularity segments overshadow the original, finer-granularity segments. Because the new segments have a coarser granularity, running a kill task to remove the overshadowed segments for those intervals causes you to permanently lose the finer-granularity data.
+
+### Dimension handling
+Apache Druid supports schema changes. Therefore, dimensions can be different across segments even if they are a part of the same data source. See [Different schemas among segments](../design/segments.md#different-schemas-among-segments). If the input segments have different dimensions, the resulting compacted segment includes all dimensions of the input segments. 
+
+Even when the input segments have the same set of dimensions, the dimension order or the data types of dimensions can differ. The dimension order and data types of more recent segments take precedence over those of older segments, because more recent segments are more likely to have the preferred order and data types.
+
+If you want to use your own ordering and types, you can specify a custom `dimensionsSpec` in the compaction task spec.
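+
+For example, a compaction task that pins its own dimension order and types might include a `dimensionsSpec` like the following sketch; the dimension names shown are hypothetical:
+
+```json
+"dimensionsSpec": {
+  "dimensions": [
+    "channel",
+    "page",
+    { "type": "long", "name": "added" }
+  ]
+}
+```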
+
+### Rollup
+Druid only rolls up the output segment when `rollup` is set for all input segments. See [Roll-up](../ingestion/index.md#rollup) for more details.
+You can check whether your segments are rolled up by using [Segment Metadata Queries](../querying/segmentmetadataquery.md#analysistypes).
+
+## Setting up manual compaction
+
+To perform a manual compaction, you submit a compaction task. Compaction tasks merge all segments for the defined interval according to the following syntax:
+
+```json
+{
+    "type": "compact",
+    "id": <task_id>,
+    "dataSource": <task_datasource>,
+    "ioConfig": <IO config>,
+    "dimensionsSpec" <custom dimensionsSpec>,
+    "metricsSpec" <custom metricsSpec>,
+    "tuningConfig" <parallel indexing task tuningConfig>,
+    "context": <task context>
+}
+```
+
+|Field|Description|Required|
+|-----|-----------|--------|
+|`type`|Task type. Should be `compact`|Yes|
+|`id`|Task id|No|
+|`dataSource`|Data source name to compact|Yes|
+|`ioConfig`|I/O configuration for compaction task. See [Compaction I/O configuration](#compaction-io-configuration) for details.|Yes|
+|`dimensionsSpec`|Custom dimensions spec. When provided, the compaction task uses it instead of generating one.|No|
+|`metricsSpec`|Custom metrics spec. The compaction task uses the specified metrics spec rather than generating one.|No|
+|`segmentGranularity`|When set, the compaction task changes the segment granularity for the given interval. Deprecated. Use `granularitySpec`.|No|
+|`tuningConfig`|[Parallel indexing task tuningConfig](./native-batch.md#tuningconfig)|No|
+|`context`|[Task context](./tasks.md#context)|No|
+|`granularitySpec`|Custom `granularitySpec` to describe the `segmentGranularity` and `queryGranularity` for the compacted segments. See [Compaction granularitySpec](#compaction-granularity-spec).|No|
+
+To control the number of result segments per time chunk, you can set [maxRowsPerSegment](../configuration/index.md#compaction-dynamic-configuration) or [numShards](../ingestion/native-batch.md#tuningconfig).
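+
+For illustration, the following sketch caps rows per segment by setting the parallel indexing task's dynamic `partitionsSpec` in the compaction task's `tuningConfig`. The datasource name, interval, and row limit are hypothetical:
+
+```json
+{
+  "type": "compact",
+  "dataSource": "wikipedia",
+  "ioConfig": {
+    "type": "compact",
+    "inputSpec": {
+      "type": "interval",
+      "interval": "2020-01-01/2020-02-01"
+    }
+  },
+  "tuningConfig": {
+    "type": "index_parallel",
+    "partitionsSpec": {
+      "type": "dynamic",
+      "maxRowsPerSegment": 5000000
+    }
+  }
+}
+```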
+
+> You can run multiple compaction tasks in parallel. For example, if you want to compact the data for a year, you are not limited to running a single task for the entire year. You can run 12 compaction tasks with month-long intervals.
+
+A compaction task internally generates an `index` task spec for performing compaction work with some fixed parameters. For example, its `inputSource` is always the [DruidInputSource](./native-batch.md#druid-input-source), and `dimensionsSpec` and `metricsSpec` include all dimensions and metrics of the input segments by default.
+
+A compaction task exits without doing anything and issues a failure status code in either of the following cases:
+- The interval you specify has no data segments loaded.
+- The interval you specify is empty.
+
+Note that the metadata of the resulting compacted segments may differ from that of the input segments if the metadata differs among the input segments. If all input segments have the same metadata, however, the resulting output segments will have the same metadata as the input segments.
+
+
+### Example compaction task
+The following JSON illustrates a compaction task to compact _all segments_ within the interval `2017-01-01/2018-01-01` and create new segments:
+
+```json
+{
+  "type" : "compact",
+  "dataSource" : "wikipedia",
+  "ioConfig" : {
+    "type": "compact",
+    "inputSpec": {
+      "type": "interval",
+      "interval": "2017-01-01/2018-01-01",
+    }
+  }
+}
+```
+
+This task doesn't specify a `granularitySpec`, so Druid retains the original segment granularity when compaction is complete.
+
+### Compaction I/O configuration
+
+The compaction `ioConfig` requires specifying `inputSpec` as follows:
+
+|Field|Description|Required|
+|-----|-----------|--------|
+|`type`|Task type. Should be `compact`|Yes|
+|`inputSpec`|Input specification|Yes|
+
+Druid currently supports two types of `inputSpec`:
+
+The interval `inputSpec` is:
+
+|Field|Description|Required|
+|-----|-----------|--------|
+|`type`|Input spec type. Should be `interval`|Yes|
+|`interval`|Interval to compact|Yes|
+
+The segments `inputSpec` is:
+
+|Field|Description|Required|
+|-----|-----------|--------|
+|`type`|Input spec type. Should be `segments`|Yes|
+|`segments`|A list of segment IDs|Yes|
+
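+For illustration, an `ioConfig` using the `segments` inputSpec might look like the following sketch; the segment ID shown is hypothetical:
+
+```json
+"ioConfig": {
+  "type": "compact",
+  "inputSpec": {
+    "type": "segments",
+    "segments": ["wikipedia_2017-01-01T00:00:00.000Z_2017-01-02T00:00:00.000Z_2021-03-01T00:00:00.000Z"]
+  }
+}
+```
+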
+### Compaction granularity spec
+
+You can optionally use the `granularitySpec` object to configure the segment granularity and the query granularity of the compacted segments. The syntax is as follows:
+```json
+    "type": "compact",
+    "id": <task_id>,
+    "dataSource": <task_datasource>,
+    ...
+    "granularitySpec": {
+      "segmentGranularity": <time_period>,
+      "queryGranularity": <time_period>
+    }
+    ...
+```
+
+`granularitySpec` takes the following keys:
+
+|Field|Description|Required|
+|-----|-----------|--------|
+|`segmentGranularity`|Time chunking period for the segment granularity. Defaults to 'null'. Accepts all [Query granularities](../querying/granularities.md).|No|
+|`queryGranularity`|Time chunking period for the query granularity. Defaults to 'null'. Accepts all [Query granularities](../querying/granularities.md). Not supported for automatic compaction.|No|
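+
+As an example, the following sketch compacts a year of data into day-granularity segments with hour query granularity; the datasource name and granularity values are hypothetical:
+
+```json
+{
+  "type": "compact",
+  "dataSource": "wikipedia",
+  "ioConfig": {
+    "type": "compact",
+    "inputSpec": {
+      "type": "interval",
+      "interval": "2017-01-01/2018-01-01"
+    }
+  },
+  "granularitySpec": {
+    "segmentGranularity": "DAY",
+    "queryGranularity": "HOUR"
+  }
+}
+```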

Review comment:
       Maybe explain the behavior of null here too




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@druid.apache.org
For additional commands, e-mail: commits-help@druid.apache.org


[GitHub] [druid] sthetland commented on a change in pull request #10935: First refactor of compaction

Posted by GitBox <gi...@apache.org>.
sthetland commented on a change in pull request #10935:
URL: https://github.com/apache/druid/pull/10935#discussion_r589819478



##########
File path: docs/ingestion/compaction.md
##########
@@ -0,0 +1,207 @@
+---
+id: compaction
+title: "Compaction"
+description: "Defines compaction and automatic compaction (auto-compaction or autocompaction) as a strategy for segment optimization. Use cases for compaction. Describes compaction task configuration."
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+Compaction in Apache Druid is a strategy to optimize segment size. Compaction tasks read an existing set of segments for a given time range and combine the data into a new "compacted" set of segments. The compacted segments are generally larger, but there are fewer of them. Compaction can sometimes increase performance because it reduces the number of segments and, consequently, the per-segment processing and memory overhead required for ingestion and for query paths.
+
+As a strategy, compaction is effective when you have data arriving out of chronological order, resulting in many small segments. This can happen, for example, if you are appending data using `appendToExisting` for [native batch](./native-batch.md) ingestion. Conversely, if you are rewriting your data with each ingestion task, you don't need to use compaction. See [Segment optimization](../operations/segment-optimization.md) for guidance to determine if compaction will help in your case.
+
+## Types of segment compaction
+You can configure the Druid Coordinator to perform automatic compaction, also called auto-compaction, for a datasource. Using a segment search policy, the coordinator periodically identifies segments for compaction, working from newest to oldest. When segments can benefit from compaction, the coordinator automatically submits a compaction task.
+
+Automatic compaction works in most use cases and should be your first option. To learn more about automatic compaction, see [Compacting Segments](../design/coordinator.md#compacting-segments).
+
+In cases where you require more control over compaction, you can manually submit compaction tasks. For example:
+- Automatic compaction is too slow.
+- You want to force compaction for a specific time range.
+- Compacting recent data before older data is suboptimal in your environment.
+
+See [Setting up a manual compaction task](#setting-up-manual-compaction) for more about manual compaction tasks.
+
+
+## Data handling with compaction
+During compaction, Druid overwrites the original set of segments with the compacted set without modifying the data. Druid also locks the segments for the time interval being compacted to ensure data consistency.
+
+If an ingestion task needs to write data to a segment for a time interval locked for compaction, the ingestion task supersedes the compaction task, and the compaction task fails without finishing. For manual compaction tasks, you can adjust the input spec interval to avoid conflicts between ingestion and compaction. For automatic compaction, you can set the `skipOffsetFromLatest` key to adjust the auto-compaction starting point from the current time to reduce the chance of conflicts between ingestion and compaction. See [Compaction dynamic configuration](../configuration/index.md#compaction-dynamic-configuration) for more information.
+
+### Segment granularity handling
+
+Unless you modify the segment granularity in the [granularity spec](#compaction-granularity-spec), Druid attempts to retain the granularity for the compacted segments. When segments have different segment granularities with no overlap in interval, Druid creates a separate compaction task for each segment to retain the segment granularity in the compacted segment. If segments have different segment granularities before compaction but there is some overlap in interval, Druid attempts to find the start and end of the overlapping interval and uses the closest segment granularity level for the compacted segment.
+
+### Query granularity handling
+
+Unless you modify the query granularity in the [granularity spec](#compaction-granularity-spec), Druid retains the query granularity for the compacted segments. If segments have different query granularities before compaction, Druid chooses the finest level of granularity for the resulting compacted segment. For example, if a compaction task combines two segments, one with day query granularity and one with minute query granularity, the resulting segment uses minute query granularity.
+
+> In Apache Druid 0.21.0 and prior, Druid sets the granularity for compacted segments to the default granularity of `NONE` regardless of the query granularity of the original segments.
+
+If you configure query granularity in compaction to go from a finer granularity like month to a coarser query granularity like year, then the new, coarser-granularity segments overshadow the original, finer-granularity segments. Because the new segments have a coarser granularity, running a kill task to remove the overshadowed segments for those intervals causes you to permanently lose the finer-granularity data.
+
+
+### Dimension handling
+Apache Druid supports schema changes. Therefore, dimensions can be different across segments even if they are a part of the same data source. See [Different schemas among segments](../design/segments.md#different-schemas-among-segments). If the input segments have different dimensions, the resulting compacted segment includes all dimensions of the input segments. 
+
+Even when the input segments have the same set of dimensions, the dimension order or the data type of dimensions can be different. For example, the data type of some dimensions can be changed from `string` to primitive types, or the order of dimensions can be changed for better locality. In this case, the dimension order and data types of more recent segments take precedence over those of older segments, because more recent segments are more likely to have the preferred order and data types.
+
+If you want to use your own ordering and types, you can specify a custom `dimensionsSpec` in the compaction task spec.
+
+### Rollup
+Druid only rolls up the output segment when `rollup` is set for all input segments. See [Roll-up](../ingestion/index.md#rollup) for more details.
+You can check whether your segments are rolled up by using [Segment Metadata Queries](../querying/segmentmetadataquery.md#analysistypes).
+
+## Setting up manual compaction
+
+To perform a manual compaction, you submit a compaction task. Compaction tasks merge all segments for the defined interval according to the following syntax:
+
+```json
+{
+    "type": "compact",
+    "id": <task_id>,
+    "dataSource": <task_datasource>,
+    "ioConfig": <IO config>,
+    "dimensionsSpec" <custom dimensionsSpec>,
+    "metricsSpec" <custom metricsSpec>,
+    "tuningConfig" <parallel indexing task tuningConfig>,
+    "context": <task context>
+}
+```
+
+|Field|Description|Required|
+|-----|-----------|--------|
+|`type`|Task type. Should be `compact`|Yes|
+|`id`|Task id|No|
+|`dataSource`|Data source name to compact|Yes|
+|`ioConfig`|I/O configuration for compaction task. See [Compaction I/O configuration](#compaction-io-configuration) for details.|Yes|
+|`dimensionsSpec`|Custom dimensions spec. When provided, the compaction task uses it instead of generating one.|No|
+|`metricsSpec`|Custom metrics spec. The compaction task uses the specified metrics spec rather than generating one.|No|
+|`segmentGranularity`|When set, the compaction task changes the segment granularity for the given interval. Deprecated. Use `granularitySpec`.|No|
+|`tuningConfig`|[Parallel indexing task tuningConfig](./native-batch.md#tuningconfig)|No|
+|`context`|[Task context](./tasks.md#context)|No|
+|`granularitySpec`|Custom `granularitySpec` to describe the `segmentGranularity` and `queryGranularity` for the compacted segments. See [Compaction granularitySpec](#compaction-granularity-spec).|No|
+
+To control the number of result segments per time chunk, you can set [maxRowsPerSegment](../configuration/index.md#compaction-dynamic-configuration) or [numShards](../ingestion/native-batch.md#tuningconfig).
+
+> You can run multiple compaction tasks at the same time. For example, you can run 12 compaction tasks per month instead of running a single task for the entire year.

Review comment:
       This note isn't clear to me. Perhaps "You can configure multiple compaction tasks.. "? And then perhaps it means to say something like "... 12 compaction tasks -- one per month -- instead of running..." 




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@druid.apache.org
For additional commands, e-mail: commits-help@druid.apache.org


[GitHub] [druid] techdocsmith commented on a change in pull request #10935: First refactor of compaction

Posted by GitBox <gi...@apache.org>.
techdocsmith commented on a change in pull request #10935:
URL: https://github.com/apache/druid/pull/10935#discussion_r596182816



##########
File path: docs/ingestion/data-management.md
##########
@@ -21,173 +21,9 @@ title: "Data management"
   ~ specific language governing permissions and limitations
   ~ under the License.
   -->
+Within the context of this topic, data management refers to Apache Druid's data maintenance capabilities for existing datasources. There are several options to help you keep your data relevant and your Druid cluster performant, for example updating, reingesting, adding lookups, reindexing, or deleting data.
 
-
-
-
-## Schema changes
-
-Schemas for datasources can change at any time and Apache Druid supports different schemas among segments.
-
-### Replacing segments
-
-Druid uniquely
-identifies segments using the datasource, interval, version, and partition number. The partition number is only visible in the segment id if
-there are multiple segments created for some granularity of time. For example, if you have hourly segments, but you
-have more data in an hour than a single segment can hold, you can create multiple segments for the same hour. These segments will share
-the same datasource, interval, and version, but have linearly increasing partition numbers.
-
-```
-foo_2015-01-01/2015-01-02_v1_0
-foo_2015-01-01/2015-01-02_v1_1
-foo_2015-01-01/2015-01-02_v1_2
-```
-
-In the example segments above, the dataSource = foo, interval = 2015-01-01/2015-01-02, version = v1, partitionNum = 0.
-If at some later point in time, you reindex the data with a new schema, the newly created segments will have a higher version id.
-
-```
-foo_2015-01-01/2015-01-02_v2_0
-foo_2015-01-01/2015-01-02_v2_1
-foo_2015-01-01/2015-01-02_v2_2
-```
-
-Druid batch indexing (either Hadoop-based or IndexTask-based) guarantees atomic updates on an interval-by-interval basis.
-In our example, until all `v2` segments for `2015-01-01/2015-01-02` are loaded in a Druid cluster, queries exclusively use `v1` segments.
-Once all `v2` segments are loaded and queryable, all queries ignore `v1` segments and switch to the `v2` segments.
-Shortly afterwards, the `v1` segments are unloaded from the cluster.
-
-Note that updates that span multiple segment intervals are only atomic within each interval. They are not atomic across the entire update.
-For example, you have segments such as the following:
-
-```
-foo_2015-01-01/2015-01-02_v1_0
-foo_2015-01-02/2015-01-03_v1_1
-foo_2015-01-03/2015-01-04_v1_2
-```
-
-`v2` segments will be loaded into the cluster as soon as they are built and replace `v1` segments for the period of time the
-segments overlap. Before v2 segments are completely loaded, your cluster may have a mixture of `v1` and `v2` segments.
-
-```
-foo_2015-01-01/2015-01-02_v1_0
-foo_2015-01-02/2015-01-03_v2_1
-foo_2015-01-03/2015-01-04_v1_2
-```
-
-In this case, queries may hit a mixture of `v1` and `v2` segments.
-
-### Different schemas among segments

Review comment:
       it is 100% duplicated from ../design/segments.md





[GitHub] [druid] suneet-s commented on a change in pull request #10935: First refactor of compaction

Posted by GitBox <gi...@apache.org>.
suneet-s commented on a change in pull request #10935:
URL: https://github.com/apache/druid/pull/10935#discussion_r592063449



##########
File path: docs/ingestion/compaction.md
##########
@@ -0,0 +1,210 @@
+---
+id: compaction
+title: "Compaction"
+description: "Defines compaction and automatic compaction (auto-compaction or autocompaction) as a strategy for segment optimization. Use cases for compaction. Describes compaction task configuration."
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+Query performance in Apache Druid depends on optimally sized segments. Compaction is one strategy you can use to optimize segment size for your Druid database. Compaction tasks read an existing set of segments for a given time interval and combine the data into a new "compacted" set of segments. The compacted segments are generally larger, but there are fewer of them. Compaction increases performance because fewer segments require less per-segment processing and memory overhead for ingestion and for query paths.
+
+As a general strategy, compaction is effective when you have data arriving out of chronological order resulting in lots of small segments. This often happens, for example, if you are appending data using `appendToExisting` for [native batch](./native_batch.md) ingestion. Conversely, if you are rewriting your data with each ingestion task, you don't need to use compaction.
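+
+As a hedged illustration of that append pattern, a native batch `ioConfig` might enable `appendToExisting` like this; the input source paths are assumptions:
+
+```json
+"ioConfig": {
+  "type": "index_parallel",
+  "inputSource": {
+    "type": "local",
+    "baseDir": "/data/events",
+    "filter": "*.json"
+  },
+  "inputFormat": { "type": "json" },
+  "appendToExisting": true
+}
+```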
+
+In some cases you can use compaction to reduce segment size. For example, if a misconfigured ingestion task creates oversized segments, you can create a compaction task to split the segment files into smaller, more optimally sized ones.
+
+See [Segment optimization](../operations/segment-optimization.md) for guidance to determine if compaction will help in your environment.
+
+
+## Types of segment compaction

Review comment:
       why did you choose to call this `segment compaction` ? Don't have strong feelings right now, but just curious about the thought behind it





[GitHub] [druid] techdocsmith commented on a change in pull request #10935: First refactor of compaction

Posted by GitBox <gi...@apache.org>.
techdocsmith commented on a change in pull request #10935:
URL: https://github.com/apache/druid/pull/10935#discussion_r590830543



##########
File path: docs/ingestion/compaction.md
##########
@@ -0,0 +1,207 @@
+---
+id: compaction
+title: "Compaction"
+description: "Defines compaction and automatic compaction (auto-compaction or autocompaction) as a strategy for segment optimization. Use cases for compaction. Describes compaction task configuration."
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+Compaction in Apache Druid is a strategy to optimize segment size. Compaction tasks read an existing set of segments for a given time range and combine the data into a new "compacted" set of segments. The compacted segments are generally larger, but there are fewer of them. Compaction can sometimes increase performance because it reduces the number of segments and, consequently, the per-segment processing and the memory overhead required for ingestion and for querying paths.
+
+As a strategy, compaction is effective when you have data arriving out of chronological order resulting in lots of small segments. For example if you are appending data using `appendToExisting` for [native batch](./native_batch.md) ingestion. Conversely, if you are rewriting your data with each ingestion task, you don't need to use compaction. See [Segment optimization](../operations/segment-optimization.md) for guidance to determine if compaction will help in your case.
+
+## Types of segment compaction
+You can configure the Druid Coordinator to perform automatic compaction, also called auto-compaction, for a datasource. Using a segment search policy, the coordinator periodically identifies segments for compaction from newest to oldest. When segments can benefit from compaction, the coordinator automatically submits a compaction task.
+
+Automatic compaction works in most use cases and should be your first option. To learn more about automatic compaction, see [Compacting Segments](../design/coordinator.md#compacting-segments).
+
+In cases where you require more control over compaction, you can manually submit compaction tasks. For example:
+- Automatic compaction is too slow.
+- You want to force compaction for a specific time range.
+- Compacting recent data before older data is suboptimal in your environment.
+
+See [Setting up a manual compaction task](#setting-up-manual-compaction) for more about manual compaction tasks.
+
+
+## Data handling with compaction
+During compaction, Druid overwrites the original set of segments with the compacted set without modifying the data. During compaction Druid locks the segments for the time interval being compacted to ensure data consistency.
+
+If an ingestion task needs to write data to a segment for a time interval locked for compaction, the ingestion task supersedes the compaction task and the compaction task fails without finishing. For manual compaction tasks, you can adjust the input spec interval to avoid conflicts between ingestion and compaction. For automatic compaction, you can set the `skipOffsetFromLatest` key to adjust the auto-compaction starting point from the current time to reduce the chance of conflicts between ingestion and compaction. See [Compaction dynamic configuration](../configuration/index.md#compaction-dynamic-configuration) for more information.
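+
+As a hedged sketch, an auto-compaction configuration submitted to the Coordinator's compaction configuration API might set `skipOffsetFromLatest` like this; the datasource name and the `"P1D"` offset are illustrative assumptions:
+
+```json
+{
+  "dataSource": "wikipedia",
+  "skipOffsetFromLatest": "P1D"
+}
+```
+
+With a `"P1D"` offset, auto-compaction leaves the most recent day of data untouched, reducing the chance of colliding with active ingestion.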
+
+### Segment granularity handling
+
+Unless you modify the segment granularity in the [granularity spec](#compaction-granularity-spec), Druid attempts to retain the granularity for the compacted segments. When segments have different segment granularities with no overlap in interval, Druid creates a separate compaction task for each to retain the segment granularity in the compacted segment. If segments have different segment granularities before compaction but there is some overlap in interval, Druid attempts to find the start and end of the overlapping interval and uses the closest segment granularity level for the compacted segment.

Review comment:
       @maytasm , I think by trying to clarify the behavior here, I made it more confusing. Can you help me with @2bethere 's questions?





[GitHub] [druid] suneet-s commented on a change in pull request #10935: First refactor of compaction

Posted by GitBox <gi...@apache.org>.
suneet-s commented on a change in pull request #10935:
URL: https://github.com/apache/druid/pull/10935#discussion_r592065414



##########
File path: docs/ingestion/compaction.md
##########
@@ -0,0 +1,210 @@
+---
+id: compaction
+title: "Compaction"
+description: "Defines compaction and automatic compaction (auto-compaction or autocompaction) as a strategy for segment optimization. Use cases for compaction. Describes compaction task configuration."
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+Query performance in Apache Druid depends on optimally sized segments. Compaction is one strategy you can use to optimize segment size for your Druid database. Compaction tasks read an existing set of segments for a given time interval and combine the data into a new "compacted" set of segments. The compacted segments are generally larger, but there are fewer of them. Compaction increases performance because fewer segments require less per-segment processing and memory overhead for ingestion and for query paths.
+
+As a general strategy, compaction is effective when you have data arriving out of chronological order resulting in lots of small segments. This often happens, for example, if you are appending data using `appendToExisting` for [native batch](./native_batch.md) ingestion. Conversely, if you are rewriting your data with each ingestion task, you don't need to use compaction.
+
+In some cases you can use compaction to reduce segment size. For example, if a misconfigured ingestion task creates oversized segments, you can create a compaction task to split the segment files into smaller, more optimally sized ones.
+
+See [Segment optimization](../operations/segment-optimization.md) for guidance to determine if compaction will help in your environment.
+
+
+## Types of segment compaction
+You can configure the Druid Coordinator to perform automatic compaction, also called auto-compaction, for a datasource. Using a segment search policy, the coordinator periodically identifies segments for compaction from newest to oldest. When segments can benefit from compaction, the coordinator automatically submits a compaction task.
+
+Automatic compaction works in most use cases and should be your first option. To learn more about automatic compaction, see [Compacting Segments](../design/coordinator.md#compacting-segments).
+
+In cases where you require more control over compaction, you can manually submit compaction tasks. For example:
+- Automatic compaction is too slow.
+- You want to force compaction for a specific time range.
+- You want to compact data out of chronological order.
+
+See [Setting up a manual compaction task](#setting-up-manual-compaction) for more about manual compaction tasks.
+
+
+## Data handling with compaction
+During compaction, Druid overwrites the original set of segments with the compacted set without modifying the data. During compaction Druid locks the segments for the time interval being compacted to ensure data consistency.

Review comment:
       `without modifying the data.`
   
   ^ This used to be true till we added the ability to change queryGranularity with manual compaction. At the current time, this is the only way compaction will change the underlying data AFAIK





[GitHub] [druid] techdocsmith commented on a change in pull request #10935: First refactor of compaction

Posted by GitBox <gi...@apache.org>.
techdocsmith commented on a change in pull request #10935:
URL: https://github.com/apache/druid/pull/10935#discussion_r596269849



##########
File path: docs/ingestion/compaction.md
##########
@@ -0,0 +1,210 @@
+---
+id: compaction
+title: "Compaction"
+description: "Defines compaction and automatic compaction (auto-compaction or autocompaction) as a strategy for segment optimization. Use cases for compaction. Describes compaction task configuration."
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+Query performance in Apache Druid depends on optimally sized segments. Compaction is one strategy you can use to optimize segment size for your Druid database. Compaction tasks read an existing set of segments for a given time interval and combine the data into a new "compacted" set of segments. The compacted segments are generally larger, but there are fewer of them. Compaction increases performance because fewer segments require less per-segment processing and memory overhead for ingestion and for query paths.
+
+As a general strategy, compaction is effective when you have data arriving out of chronological order resulting in lots of small segments. This often happens, for example, if you are appending data using `appendToExisting` for [native batch](./native_batch.md) ingestion. Conversely, if you are rewriting your data with each ingestion task, you don't need to use compaction.
+
+In some cases you can use compaction to reduce segment size. For example, if a misconfigured ingestion task creates oversized segments, you can create a compaction task to split the segment files into smaller, more optimally sized ones.
+
+See [Segment optimization](../operations/segment-optimization.md) for guidance to determine if compaction will help in your environment.
+
+
+## Types of segment compaction
+You can configure the Druid Coordinator to perform automatic compaction, also called auto-compaction, for a datasource. Using a segment search policy, the coordinator periodically identifies segments for compaction from newest to oldest. When segments can benefit from compaction, the coordinator automatically submits a compaction task.
+
+Automatic compaction works in most use cases and should be your first option. To learn more about automatic compaction, see [Compacting Segments](../design/coordinator.md#compacting-segments).
+
+In cases where you require more control over compaction, you can manually submit compaction tasks. For example:
+- Automatic compaction is too slow.
+- You want to force compaction for a specific time range.
+- You want to compact data out of chronological order.
+
+See [Setting up a manual compaction task](#setting-up-manual-compaction) for more about manual compaction tasks.
+
+
+## Data handling with compaction
+During compaction, Druid overwrites the original set of segments with the compacted set without modifying the data. During compaction Druid locks the segments for the time interval being compacted to ensure data consistency.
+
+If an ingestion task needs to write data to a segment for a time interval locked for compaction, the ingestion task supersedes the compaction task and the compaction task fails without finishing. For manual compaction tasks, you can adjust the input spec interval to avoid conflicts between ingestion and compaction. For automatic compaction, you can set the `skipOffsetFromLatest` key to adjust the auto-compaction starting point from the current time to reduce the chance of conflicts between ingestion and compaction. See [Compaction dynamic configuration](../configuration/index.md#compaction-dynamic-configuration) for more information.
+
+### Segment granularity handling
+
+Unless you modify the segment granularity in the [granularity spec](#compaction-granularity-spec), Druid attempts to retain the granularity for the compacted segments. When segments have different segment granularities with no overlap in interval, Druid creates a separate compaction task for each to retain the segment granularity in the compacted segment. If segments have different segment granularities before compaction but there is some overlap in interval, Druid attempts to find the start and end of the overlapping interval and uses the closest segment granularity level for the compacted segment.
+
+### Query granularity handling
+
+Unless you modify the query granularity in the [granularity spec](#compaction-granularity-spec), Druid retains the query granularity for the compacted segments. If segments have different query granularities before compaction, Druid chooses the finest level of granularity for the resulting compacted segment. For example if a compaction task combines two segments, one with day query granularity and one with minute query granularity, the resulting segment uses minute query granularity.
+
+> In Apache Druid 0.21.0 and prior, Druid sets the granularity for compacted segments to the default granularity of `NONE` regardless of the query granularity of the original segments.
+
+If you configure query granularity in compaction to go from a finer granularity like month to a coarser query granularity like year, then Druid overshadows the original segment with coarser granularity. Because the new segments have a coarser granularity, running a kill task to remove the overshadowed segments for those intervals will cause you to permanently lose the finer granularity data.
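+
+A minimal sketch of a manual compaction task that coarsens query granularity; the datasource, interval, and granularity values are illustrative:
+
+```json
+{
+  "type": "compact",
+  "dataSource": "wikipedia",
+  "ioConfig": {
+    "type": "compact",
+    "inputSpec": {
+      "type": "interval",
+      "interval": "2017-01-01/2018-01-01"
+    }
+  },
+  "granularitySpec": {
+    "segmentGranularity": "day",
+    "queryGranularity": "day"
+  }
+}
+```
+
+Per the warning above, once the compacted segments overshadow the originals and a kill task removes the overshadowed segments, the finer-grained data is gone for good.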
+
+### Dimension handling
+Apache Druid supports schema changes. Therefore, dimensions can be different across segments even if they are a part of the same data source. See [Different schemas among segments](../design/segments.md#different-schemas-among-segments). If the input segments have different dimensions, the resulting compacted segment includes all dimensions of the input segments. 
+
+Even when the input segments have the same set of dimensions, the dimension order or the data type of dimensions can be different. The dimensions of recent segments precede that of old segments in terms of data types and the ordering because more recent segments are more likely to have the preferred order and data types.
+
+If you want to use your own ordering and types, you can specify a custom `dimensionsSpec` in the compaction task spec.
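+
+For example, a sketch of a compaction task carrying a custom `dimensionsSpec`; the dimension names and types here are assumptions for illustration:
+
+```json
+{
+  "type": "compact",
+  "dataSource": "wikipedia",
+  "ioConfig": {
+    "type": "compact",
+    "inputSpec": {
+      "type": "interval",
+      "interval": "2017-01-01/2018-01-01"
+    }
+  },
+  "dimensionsSpec": {
+    "dimensions": [
+      "channel",
+      "page",
+      { "name": "commentLength", "type": "long" }
+    ]
+  }
+}
+```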
+
+### Rollup
+Druid only rolls up the output segment when `rollup` is set for all input segments. See [Roll-up](../ingestion/index.md#rollup) for more details.
+You can check whether your segments are rolled up by using [Segment Metadata Queries](../querying/segmentmetadataquery.md#analysistypes).
+

Review comment:
       I removed this rollup section since it seemed to not matter. I am adding the "why" into the Compaction Strategies section.





[GitHub] [druid] suneet-s commented on a change in pull request #10935: First refactor of compaction

Posted by GitBox <gi...@apache.org>.
suneet-s commented on a change in pull request #10935:
URL: https://github.com/apache/druid/pull/10935#discussion_r595331291



##########
File path: docs/ingestion/compaction.md
##########
@@ -0,0 +1,210 @@
+---
+id: compaction
+title: "Compaction"
+description: "Defines compaction and automatic compaction (auto-compaction or autocompaction) as a strategy for segment optimization. Use cases for compaction. Describes compaction task configuration."
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+Query performance in Apache Druid depends on optimally sized segments. Compaction is one strategy you can use to optimize segment size for your Druid database. Compaction tasks read an existing set of segments for a given time interval and combine the data into a new "compacted" set of segments. The compacted segments are generally larger, but there are fewer of them. Compaction increases performance because fewer segments require less per-segment processing and memory overhead for ingestion and for query paths.
+
+As a general strategy, compaction is effective when you have data arriving out of chronological order resulting in lots of small segments. This often happens, for example, if you are appending data using `appendToExisting` for [native batch](./native_batch.md) ingestion. Conversely, if you are rewriting your data with each ingestion task, you don't need to use compaction.
+
+In some cases you can use compaction to reduce segment size. For example, if a misconfigured ingestion task creates oversized segments, you can create a compaction task to split the segment files into smaller, more optimally sized ones.
+
+See [Segment optimization](../operations/segment-optimization.md) for guidance to determine if compaction will help in your environment.
+
+
+## Types of segment compaction
+You can configure the Druid Coordinator to perform automatic compaction, also called auto-compaction, for a datasource. Using a segment search policy, the coordinator periodically identifies segments for compaction from newest to oldest. When segments can benefit from compaction, the coordinator automatically submits a compaction task.
+
+Automatic compaction works in most use cases and should be your first option. To learn more about automatic compaction, see [Compacting Segments](../design/coordinator.md#compacting-segments).
+
+In cases where you require more control over compaction, you can manually submit compaction tasks. For example:
+- Automatic compaction is too slow.
+- You want to force compaction for a specific time range.
+- You want to compact data out of chronological order.
+
+See [Setting up a manual compaction task](#setting-up-manual-compaction) for more about manual compaction tasks.
+
+
+## Data handling with compaction
+During compaction, Druid overwrites the original set of segments with the compacted set without modifying the data. During compaction Druid locks the segments for the time interval being compacted to ensure data consistency.
+
+If an ingestion task needs to write data to a segment for a time interval locked for compaction, the ingestion task supersedes the compaction task and the compaction task fails without finishing. For manual compaction tasks, you can adjust the input spec interval to avoid conflicts between ingestion and compaction. For automatic compaction, you can set the `skipOffsetFromLatest` key to adjust the auto-compaction starting point from the current time to reduce the chance of conflicts between ingestion and compaction. See [Compaction dynamic configuration](../configuration/index.md#compaction-dynamic-configuration) for more information.
+
+### Segment granularity handling
+
+Unless you modify the segment granularity in the [granularity spec](#compaction-granularity-spec), Druid attempts to retain the granularity for the compacted segments. When segments have different segment granularities with no overlap in interval, Druid creates a separate compaction task for each to retain the segment granularity in the compacted segment. If segments have different segment granularities before compaction but there is some overlap in interval, Druid attempts to find the start and end of the overlapping interval and uses the closest segment granularity level for the compacted segment.
+
+### Query granularity handling
+
+Unless you modify the query granularity in the [granularity spec](#compaction-granularity-spec), Druid retains the query granularity for the compacted segments. If segments have different query granularities before compaction, Druid chooses the finest level of granularity for the resulting compacted segment. For example if a compaction task combines two segments, one with day query granularity and one with minute query granularity, the resulting segment uses minute query granularity.
+
+> In Apache Druid 0.21.0 and prior, Druid sets the granularity for compacted segments to the default granularity of `NONE` regardless of the query granularity of the original segments.
+
+If you configure query granularity in compaction to go from a finer granularity like month to a coarser query granularity like year, then Druid overshadows the original segment with coarser granularity. Because the new segments have a coarser granularity, running a kill task to remove the overshadowed segments for those intervals will cause you to permanently lose the finer granularity data.
+
+### Dimension handling
+Apache Druid supports schema changes. Therefore, dimensions can be different across segments even if they are a part of the same data source. See [Different schemas among segments](../design/segments.md#different-schemas-among-segments). If the input segments have different dimensions, the resulting compacted segment includes all dimensions of the input segments. 
+
+Even when the input segments have the same set of dimensions, the dimension order or the data type of dimensions can be different. The dimensions of recent segments precede that of old segments in terms of data types and the ordering because more recent segments are more likely to have the preferred order and data types.
+
+If you want to use your own ordering and types, you can specify a custom `dimensionsSpec` in the compaction task spec.
+
+### Rollup
+Druid only rolls up the output segment when `rollup` is set for all input segments. See [Roll-up](../ingestion/index.md#rollup) for more details.
+You can check whether your segments are rolled up by using [Segment Metadata Queries](../querying/segmentmetadataquery.md#analysistypes).
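+
+For instance, a sketch of a segment metadata query that reports rollup status; the datasource and interval are assumptions:
+
+```json
+{
+  "queryType": "segmentMetadata",
+  "dataSource": "wikipedia",
+  "intervals": ["2017-01-01/2018-01-01"],
+  "analysisTypes": ["rollup"]
+}
+```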
+
+## Setting up manual compaction
+
+To perform a manual compaction, you submit a compaction task. Compaction tasks merge all segments for the defined interval according to the following syntax:
+
+```json
+{
+    "type": "compact",
+    "id": <task_id>,
+    "dataSource": <task_datasource>,
+    "ioConfig": <IO config>,
+    "dimensionsSpec" <custom dimensionsSpec>,
+    "metricsSpec" <custom metricsSpec>,
+    "tuningConfig" <parallel indexing task tuningConfig>,
+    "context": <task context>
+}
+```
+
+|Field|Description|Required|
+|-----|-----------|--------|
+|`type`|Task type. Should be `compact`|Yes|
+|`id`|Task id|No|
+|`dataSource`|Data source name to compact|Yes|
+|`ioConfig`|I/O configuration for compaction task. See [Compaction I/O configuration](#compaction-io-configuration) for details.|Yes|
+|`dimensionsSpec`|Custom dimensions spec. The compaction task uses the specified dimensions spec if it exists instead of generating one.|No|
+|`metricsSpec`|Custom metrics spec. The compaction task uses the specified metrics spec rather than generating one.|No|
+|`segmentGranularity`|When set, the compaction task changes the segment granularity for the given interval.  Deprecated. Use `granularitySpec`. |No.|
+|`tuningConfig`|[Parallel indexing task tuningConfig](./native-batch.md#tuningconfig)|No|
+|`context`|[Task context](./tasks.md#context)|No|
+|`granularitySpec`|Custom `granularitySpec` to describe the `segmentGranularity` and `queryGranularity` for the compacted segments. See [Compaction granularitySpec](#compaction-granularity-spec).|No|

Review comment:
       #10996 fails the compaction task if there is a conflict between the segmentGranularity specified in `granularitySpec` and `segmentGranularity`
   
   I'm not sure what the best way to document that is in my PR, but since you are already refactoring this part of the docs, could you add something in this section surfacing that? Thanks!





[GitHub] [druid] techdocsmith commented on pull request #10935: First refactor of compaction

Posted by GitBox <gi...@apache.org>.
techdocsmith commented on pull request #10935:
URL: https://github.com/apache/druid/pull/10935#issuecomment-802992915


   Apologize for that @suneet-s , I fixed links and spelling in a later commit.



[GitHub] [druid] techdocsmith commented on a change in pull request #10935: First refactor of compaction

Posted by GitBox <gi...@apache.org>.
techdocsmith commented on a change in pull request #10935:
URL: https://github.com/apache/druid/pull/10935#discussion_r596182625



##########
File path: docs/ingestion/data-management.md
##########
@@ -21,173 +21,9 @@ title: "Data management"
   ~ specific language governing permissions and limitations
   ~ under the License.
   -->
+Within the context of this topic, data management refers to Apache Druid's data maintenance capabilities for existing datasources. There are several options to help you keep your data relevant and your Druid cluster performant, for example: updating, reingesting, adding lookups, reindexing, or deleting data.
 
-
-
-
-## Schema changes
-
-Schemas for datasources can change at any time and Apache Druid supports different schemas among segments.
-
-### Replacing segments

Review comment:
       it is 100% duplicated from ../design/segments.md

##########
File path: docs/ingestion/data-management.md
##########
@@ -21,173 +21,9 @@ title: "Data management"
   ~ specific language governing permissions and limitations
   ~ under the License.
   -->
+Within the context of this topic, data management refers to Apache Druid's data maintenance capabilities for existing datasources. There are several options to help you keep your data relevant and your Druid cluster performant, for example: updating, reingesting, adding lookups, reindexing, or deleting data.
 
-
-
-
-## Schema changes

Review comment:
       it is 100% duplicated from ../design/segments.md





[GitHub] [druid] suneet-s edited a comment on pull request #10935: First refactor of compaction

Posted by GitBox <gi...@apache.org>.
suneet-s edited a comment on pull request #10935:
URL: https://github.com/apache/druid/pull/10935#issuecomment-796461843


   I echo others comments on this PR. This is a huge improvement - thank you @techdocsmith ! I haven't verified the correctness of how exactly compaction works, or the details of the different tuning knobs
   
   Some overall structural feedback (doesn't need to be addressed in this PR):
   - I think the data management doc should be broken into a few separate docs. Seeing compaction pulled out of there - it feels like `data management` would be a good landing page - that then points you to "getting data in", "Optimizing data", "Updating data"(maybe) and "Deleting data" This is obviously beyond the scope of this PR, but I think it's worth mentioning because it adds structure around how to think about data and managing data in Druid.
   - Data management also talks about lookups, while the rest of the doc talks about datasources. This seemed a little out of place when I was reading locally. I don't have a suggestion for how to structure this right now, but wanted to surface it in case you had better ideas.
   - The compaction page currently talks about the what. I wonder if it needs to be split into 2 pages (or sections), one that spells out the "why should I care/ I want to do..." a little bit more, and another that spells out "how do I do that". Maybe it can be intertwined in the same page?
   - I really like the distinction between auto-compaction and manual compaction. However the page doesn't link to anything that tells me how to use auto-compaction, but it does link to something about manual compaction. Are there instructions for auto-compaction elsewhere?
   - There are some known differences between auto-compaction and manual compaction. Support for queryGranularity is one right now. Do you think we should call this out in the section that talks about the differences between the 2. This is tricky, because it's like a gap in functionality - but it's a gotcha I think users will want to know about.



[GitHub] [druid] sthetland commented on a change in pull request #10935: First refactor of compaction

Posted by GitBox <gi...@apache.org>.
sthetland commented on a change in pull request #10935:
URL: https://github.com/apache/druid/pull/10935#discussion_r589825536



##########
File path: docs/ingestion/compaction.md
##########
@@ -0,0 +1,207 @@
+---
+id: compaction
+title: "Compaction"
+description: "Defines compaction and automatic compaction (auto-compaction or autocompaction) as a strategy for segment optimization. Use cases for compaction. Describes compaction task configuration."
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+Compaction in Apache Druid is a strategy to optimize segment size. Compaction tasks read an existing set of segments for a given time range and combine the data into a new "compacted" set of segments. The compacted segments are generally larger, but there are fewer of them. Compaction can sometimes increase performance because it reduces the number of segments and, consequently, the per-segment processing and the memory overhead required for ingestion and for querying paths.
+
+As a strategy, compaction is effective when you have data arriving out of chronological order resulting in lots of small segments. For example if you are appending data using `appendToExisting` for [native batch](./native_batch.md) ingestion. Conversely, if you are rewriting your data with each ingestion task, you don't need to use compaction. See [Segment optimization](../operations/segment-optimization.md) for guidance to determine if compaction will help in your case.
+
+## Types of segment compaction
+You can configure the Druid Coordinator to perform automatic compaction, also called auto-compaction, for a datasource. Using a segment search policy, the coordinator periodically identifies segments for compaction from newest to oldest. When segments can benefit from compaction, the coordinator automatically submits a compaction task.
+
+Automatic compaction works in most use cases and should be your first option. To learn more about automatic compaction, see [Compacting Segments](../design/coordinator.md#compacting-segments).
+
+In cases where you require more control over compaction, you can manually submit compaction tasks. For example:
+- Automatic compaction is too slow.
+- You want to force compaction for a specific time range.
+- Compacting recent data before older data is suboptimal in your environment.
+
+See [Setting up a manual compaction task](#setting-up-manual-compaction) for more about manual compaction tasks.
+
+
+## Data handling with compaction
+During compaction, Druid overwrites the original set of segments with the compacted set without modifying the data. During compaction Druid locks the segments for the time interval being compacted to ensure data consistency.
+
+If an ingestion task needs to write data to a segment for a time interval locked for compaction, the ingestion task supersedes the compaction task and the compaction task fails without finishing. For manual compaction tasks, you can adjust the input spec interval to avoid conflicts between ingestion and compaction. For automatic compaction, you can set the `skipOffsetFromLatest` key to adjust the auto-compaction starting point from the current time to reduce the chance of conflicts between ingestion and compaction. See [Compaction dynamic configuration](../configuration/index.md#compaction-dynamic-configuration) for more information.
+
+### Segment granularity handling
+
+Unless you modify the segment granularity in the [granularity spec](#compaction-granularity-spec), Druid attempts to retain the granularity for the compacted segments. When segments have different segment granularities with no overlap in interval, Druid creates a separate compaction task for each to retain the segment granularity in the compacted segment. If segments have different segment granularities before compaction but there is some overlap in interval, Druid attempts to find the start and end of the overlapping interval and uses the closest segment granularity level for the compacted segment.
+
+### Query granularity handling
+
+Unless you modify the query granularity in the [granularity spec](#compaction-granularity-spec), Druid retains the query granularity for the compacted segments. If segments have different query granularities before compaction, Druid chooses the finest level of granularity for the resulting compacted segment. For example if a compaction task combines two segments, one with day query granularity and one with minute query granularity, the resulting segment uses minute query granularity.
+
+> In Apache Druid 0.21.0 and prior, Druid sets the granularity for compacted segments to the default granularity of `NONE` regardless of the query granularity of the original segments.
+
+If you configure query granularity in compaction to go from a finer granularity like month to a coarser query granularity like year, then Druid overshadows the original segment with finer granularity. Because the new segments have a coarser granularity, running a kill task to remove the overshadowed segments for those intervals will cause you to permanently lose the finer granularity data.
+
+
+### Dimension handling
+Apache Druid supports schema changes. Therefore, dimensions can be different across segments even if they are a part of the same data source. See [Different schemas among segments](../design/segments.md#different-schemas-among-segments). If the input segments have different dimensions, the resulting compacted segment includes all dimensions of the input segments. 
+
+Even when the input segments have the same set of dimensions, the dimension order or the data type of dimensions can be different. For example, the data type of some dimensions can be changed from `string` to primitive types, or the order of dimensions can be changed for better locality. In this case, the dimensions of recent segments precede that of old segments in terms of data types and the ordering because more recent segments are more likely to have the preferred order and data types.
+
+If you want to use your own ordering and types, you can specify a custom `dimensionsSpec` in the compaction task spec.
+
+### Rollup
+Druid only rolls up the output segment when `rollup` is set for all input segments. See [Roll-up](../ingestion/index.md#rollup) for more details.
+You can check whether your segments are rolled up by using [Segment Metadata Queries](../querying/segmentmetadataquery.md#analysistypes).
+
+## Setting up manual compaction
+
+To perform a manual compaction, you submit a compaction task. Compaction tasks merge all segments for the defined interval according to the following syntax:
+
+```json
+{
+    "type": "compact",
+    "id": <task_id>,
+    "dataSource": <task_datasource>,
+    "ioConfig": <IO config>,
+    "dimensionsSpec" <custom dimensionsSpec>,
+    "metricsSpec" <custom metricsSpec>,
+    "tuningConfig" <parallel indexing task tuningConfig>,
+    "context": <task context>
+}
+```
+
+|Field|Description|Required|
+|-----|-----------|--------|
+|`type`|Task type. Should be `compact`|Yes|
+|`id`|Task id|No|
+|`dataSource`|Data source name to compact|Yes|
+|`ioConfig`|I/O configuration for compaction task. See [Compaction I/O configuration](#compaction-io-configuration) for details.|Yes|
+|`dimensionsSpec`|Custom dimensions spec. The compaction task uses the specified dimensions spec if it exists instead of generating one.|No|
+|`metricsSpec`|Custom metrics spec. The compaction task uses the specified metrics spec rather than generating one.|No|
+|`segmentGranularity`|When set, the compaction task changes the segment granularity for the given interval.  Deprecated. Use `granularitySpec`. |No.|
+|`tuningConfig`|[Parallel indexing task tuningConfig](./native-batch.md#tuningconfig)|No|
+|`context`|[Task context](./tasks.md#context)|No|
+|`granularitySpec`|Custom `granularitySpec` to describe the `segmentGranularity` and `queryGranularity` for the compacted segments. See [Compaction granularitySpec](#compaction-granularity-spec).|No|
+
+To control the number of result segments per time chunk, you can set [maxRowsPerSegment](../configuration/index.md#compaction-dynamic-configuration) or [numShards](../ingestion/native-batch.md#tuningconfig).
+
+> You can run multiple compaction tasks at the same time. For example, you can run 12 compaction tasks per month instead of running a single task for the entire year.
+
+A compaction task internally generates an `index` task spec for performing compaction work with some fixed parameters. For example, its `inputSource` is always the [DruidInputSource](native-batch.md#druid-input-source), and `dimensionsSpec` and `metricsSpec` include all dimensions and metrics of the input segments by default.
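+
+For reference, a hedged sketch of the `DruidInputSource` shape such a generated `index` spec reads from; the datasource and interval are placeholders:
+
+```json
+"inputSource": {
+  "type": "druid",
+  "dataSource": "wikipedia",
+  "interval": "2017-01-01/2018-01-01"
+}
+```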
+
+Compaction tasks exit without doing anything and issue a failure status code in either of the following cases:
+- The interval you specify has no data segments loaded.
+- The interval you specify is empty.
+
+The output segment can have different metadata from the input segments unless all input segments have the same metadata.
+
+
+### Example compaction task
+The following JSON illustrates a compaction task to compact _all segments_ within the interval `2017-01-01/2018-01-01` and create new segments:
+
+```json
+{
+  "type" : "compact",
+  "dataSource" : "wikipedia",
+  "ioConfig" : {
+    "type": "compact",
+    "inputSpec": {
+      "type": "interval",
+      "interval": "2017-01-01/2018-01-01",
+    }
+  }
+}
+```
+
+This task doesn't specify a `granularitySpec` so Druid retains the original segment granularity unchanged when compaction is complete.
+
+### Compaction I/O configuration
+
+The compaction `ioConfig` requires specifying `inputSpec` as follows:
+
+|Field|Description|Required|
+|-----|-----------|--------|
+|`type`|Task type. Should be `compact`|Yes|
+|`inputSpec`|Input specification|Yes|
+
+There are two supported `inputSpec`s for now.
+
+The interval `inputSpec` is:
+
+|Field|Description|Required|
+|-----|-----------|--------|
+|`type`|Task type. Should be `interval`|Yes|
+|`interval`|Interval to compact|Yes|
+
+The segments `inputSpec` is:
+
+|Field|Description|Required|
+|-----|-----------|--------|
+|`type`|Task type. Should be `segments`|Yes|
+|`segments`|A list of segment IDs|Yes|
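+
+For illustration, an `ioConfig` using the segments `inputSpec` might look like the following; the segment ID is a placeholder showing the typical `datasource_interval_version` form:
+
+```json
+"ioConfig": {
+  "type": "compact",
+  "inputSpec": {
+    "type": "segments",
+    "segments": [
+      "wikipedia_2017-01-01T00:00:00.000Z_2017-01-02T00:00:00.000Z_2021-03-01T00:00:00.000Z"
+    ]
+  }
+}
+```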
+
+### Compaction granularity spec
+
+You can optionally use the `granularitySpec` object to configure the segment granularity and the query granularity of the compacted segments. They syntax is as follows:

Review comment:
       ```suggestion
   You can optionally use the `granularitySpec` object to configure the segment granularity and the query granularity of the compacted segments. Their syntax is as follows:
   ```

##########
File path: docs/ingestion/compaction.md
##########
@@ -0,0 +1,207 @@
+---
+id: compaction
+title: "Compaction"
+description: "Defines compaction and automatic compaction (auto-compaction or autocompaction) as a strategy for segment optimization. Use cases for compaction. Describes compaction task configuration."
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+Compaction in Apache Druid is a strategy to optimize segment size. Compaction tasks read an existing set of segments for a given time range and combine the data into a new "compacted" set of segments. The compacted segments are generally larger, but there are fewer of them. Compaction can sometimes increase performance because it reduces the number of segments and, consequently, the per-segment processing and the memory overhead required for ingestion and for querying paths.
+
+As a strategy, compaction is effective when you have data arriving out of chronological order resulting in lots of small segments. For example if you are appending data using `appendToExisting` for [native batch](./native_batch.md) ingestion. Conversely, if you are rewriting your data with each ingestion task, you don't need to use compaction. See [Segment optimization](../operations/segment-optimization.md) for guidance to determine if compaction will help in your case.
+
+## Types of segment compaction
+You can configure the Druid Coordinator to perform automatic compaction, also called auto-compaction, for a datasource. Using a segment search policy, the coordinator periodically identifies segments for compaction from newest to oldest. When segments can benefit from compaction, the coordinator automatically submits a compaction task.
+
+Automatic compaction works in most use cases and should be your first option. To learn more about automatic compaction, see [Compacting Segments](../design/coordinator.md#compacting-segments).
+
+In cases where you require more control over compaction, you can manually submit compaction tasks. For example:
+- Automatic compaction is too slow.
+- You want to force compaction for a specific time range.
+- Compacting recent data before older data is suboptimal in your environment.
+
+See [Setting up a manual compaction task](#setting-up-manual-compaction) for more about manual compaction tasks.
+
+
+## Data handling with compaction
+During compaction, Druid overwrites the original set of segments with the compacted set without modifying the data. During compaction Druid locks the segments for the time interval being compacted to ensure data consistency.
+
+If an ingestion task needs to write data to a segment for a time interval locked for compaction, the ingestion task supersedes the compaction task and the compaction task fails without finishing. For manual compaction tasks, you can adjust the input spec interval to avoid conflicts between ingestion and compaction. For automatic compaction, you can set the `skipOffsetFromLatest` key to adjust the auto-compaction starting point from the current time to reduce the chance of conflicts between ingestion and compaction. See [Compaction dynamic configuration](../configuration/index.md#compaction-dynamic-configuration) for more information.
+
+### Segment granularity handling
+
+Unless you modify the segment granularity in the [granularity spec](#compaction-granularity-spec), Druid attempts to retain the granularity for the compacted segments. When segments have different segment granularities with no overlap in interval, Druid creates a separate compaction task for each to retain the segment granularity in the compacted segment. If segments have different segment granularities before compaction but there is some overlap in interval, Druid attempts to find the start and end of the overlapping interval and uses the closest segment granularity level for the compacted segment.
+
+### Query granularity handling
+
+Unless you modify the query granularity in the [granularity spec](#compaction-granularity-spec), Druid retains the query granularity for the compacted segments. If segments have different query granularities before compaction, Druid chooses the finest level of granularity for the resulting compacted segment. For example if a compaction task combines two segments, one with day query granularity and one with minute query granularity, the resulting segment uses minute query granularity.
+
+> In Apache Druid 0.21.0 and prior, Druid sets the granularity for compacted segments to the default granularity of `NONE` regardless of the query granularity of the original segments.
+
+If you configure query granularity in compaction to go from a finer granularity like month to a coarser query granularity like year, then Druid overshadows the original segments that have the finer granularity. Because the new segments have a coarser granularity, running a kill task to remove the overshadowed segments for those intervals causes you to permanently lose the finer-granularity data.
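+
+For illustration, a `granularitySpec` fragment like the following sketch would coarsen query granularity to year and trigger the overshadowing behavior described above (shown out of context of the full task spec):
+
+```json
+"granularitySpec": {
+  "queryGranularity": "year"
+}
+```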
+
+
+### Dimension handling
+Apache Druid supports schema changes. Therefore, dimensions can be different across segments even if they are a part of the same data source. See [Different schemas among segments](../design/segments.md#different-schemas-among-segments). If the input segments have different dimensions, the resulting compacted segment includes all dimensions of the input segments. 
+
+Even when the input segments have the same set of dimensions, the dimension order or the data type of dimensions can differ. For example, the data type of some dimensions may change from `string` to primitive types, or the order of dimensions may change for better locality. In this case, the dimensions of more recent segments precede those of older segments in terms of data types and ordering, because more recent segments are more likely to have the preferred order and data types.
+
+If you want to use your own ordering and types, you can specify a custom `dimensionsSpec` in the compaction task spec.
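+
+For example, an illustrative `dimensionsSpec` fragment for a compaction task spec (the dimension names and types are placeholders, not a recommendation):
+
+```json
+"dimensionsSpec": {
+  "dimensions": [
+    "countryName",
+    {"type": "long", "name": "commentLength"}
+  ]
+}
+```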
+
+### Rollup
+Druid only rolls up the output segment when `rollup` is set for all input segments. See [Roll-up](../ingestion/index.md#rollup) for more details.
+You can check whether your segments are rolled up by using [Segment Metadata Queries](../querying/segmentmetadataquery.md#analysistypes).
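+
+For example, a sketch of a segment metadata query that reports rollup status (the datasource name and interval are illustrative):
+
+```json
+{
+  "queryType": "segmentMetadata",
+  "dataSource": "wikiticker",
+  "intervals": ["2017-01-01/2017-02-01"],
+  "analysisTypes": ["rollup"]
+}
+```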
+
+## Setting up manual compaction
+
+To perform a manual compaction, you submit a compaction task. Compaction tasks merge all segments for the defined interval according to the following syntax:
+
+```json
+{
+    "type": "compact",
+    "id": <task_id>,
+    "dataSource": <task_datasource>,
+    "ioConfig": <IO config>,
+    "dimensionsSpec" <custom dimensionsSpec>,
+    "metricsSpec" <custom metricsSpec>,
+    "tuningConfig" <parallel indexing task tuningConfig>,
+    "context": <task context>
+}
+```
+
+|Field|Description|Required|
+|-----|-----------|--------|
+|`type`|Task type. Should be `compact`|Yes|
+|`id`|Task id|No|
+|`dataSource`|Data source name to compact|Yes|
+|`ioConfig`|I/O configuration for compaction task. See [Compaction I/O configuration](#compaction-io-configuration) for details.|Yes|
+|`dimensionsSpec`|Custom dimensions spec. The compaction task uses the specified dimensions spec if it exists instead of generating one.|No|
+|`metricsSpec`|Custom metrics spec. The compaction task uses the specified metrics spec rather than generating one.|No|
+|`segmentGranularity`|When set, the compaction task changes the segment granularity for the given interval. Deprecated; use `granularitySpec` instead.|No|
+|`tuningConfig`|[Parallel indexing task tuningConfig](./native-batch.md#tuningconfig)|No|
+|`context`|[Task context](./tasks.md#context)|No|
+|`granularitySpec`|Custom `granularitySpec` to describe the `segmentGranularity` and `queryGranularity` for the compacted segments. See [Compaction granularitySpec](#compaction-granularity-spec).|No|
+
+To control the number of result segments per time chunk, you can set [maxRowsPerSegment](../configuration/index.md#compaction-dynamic-configuration) or [numShards](../ingestion/native-batch.md#tuningconfig).
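+
+For example, an illustrative `tuningConfig` fragment that caps rows per segment (the `type` and the row count here are assumptions based on the parallel indexing task, not fixed requirements):
+
+```json
+"tuningConfig": {
+  "type": "index_parallel",
+  "maxRowsPerSegment": 5000000
+}
+```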
+
+> You can run multiple compaction tasks in parallel. For example, to compact a year of data, you can run 12 compaction tasks, one per month, instead of running a single task for the entire year.
+
+A compaction task internally generates an `index` task spec for performing compaction work with some fixed parameters. For example, its `inputSource` is always the [DruidInputSource](native-batch.md#druid-input-source), and `dimensionsSpec` and `metricsSpec` include all dimensions and metrics of the input segments by default.

Review comment:
       ```suggestion
   A compaction task internally generates an `index` task spec for performing compaction work with some fixed parameters. For example, its `inputSource` is always the [DruidInputSource](../native-batch.md#druid-input-source), and `dimensionsSpec` and `metricsSpec` include all dimensions and metrics of the input segments by default.
   ```

##########
File path: docs/ingestion/compaction.md
##########
@@ -0,0 +1,207 @@
+---
+id: compaction
+title: "Compaction"
+description: "Defines compaction and automatic compaction (auto-compaction or autocompaction) as a strategy for segment optimization. Use cases for compaction. Describes compaction task configuration."
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+Compaction in Apache Druid is a strategy to optimize segment size. Compaction tasks read an existing set of segments for a given time range and combine the data into a new "compacted" set of segments. The compacted segments are generally larger, but there are fewer of them. Compaction can sometimes increase performance because it reduces the number of segments and, consequently, the per-segment processing and the memory overhead required for ingestion and for querying paths.
+
+As a strategy, compaction is effective when you have data arriving out of chronological order resulting in lots of small segments. This often happens, for example, if you are appending data using `appendToExisting` for [native batch](./native-batch.md) ingestion. Conversely, if you are rewriting your data with each ingestion task, you don't need to use compaction. See [Segment optimization](../operations/segment-optimization.md) for guidance on determining whether compaction will help in your case.
+
+## Types of segment compaction
+You can configure the Druid Coordinator to perform automatic compaction, also called auto-compaction, for a datasource. Using a segment search policy, the coordinator periodically identifies segments for compaction, starting with the newest segments and working toward the oldest. When segments can benefit from compaction, the coordinator automatically submits a compaction task.
+
+Automatic compaction works in most use cases and should be your first option. To learn more about automatic compaction, see [Compacting Segments](../design/coordinator.md#compacting-segments).
+
+In cases where you require more control over compaction, you can manually submit compaction tasks. For example:
+- Automatic compaction is too slow.
+- You want to force compaction for a specific time range.
+- Compacting recent data before older data is suboptimal in your environment.
+
+See [Setting up a manual compaction task](#setting-up-manual-compaction) for more about manual compaction tasks.
+
+
+## Data handling with compaction
+During compaction, Druid overwrites the original set of segments with the compacted set without modifying the data. Druid also locks the segments for the time interval being compacted to ensure data consistency.
+
+If an ingestion task needs to write data to a segment for a time interval locked for compaction, the ingestion task supersedes the compaction task, and the compaction task fails without finishing. For manual compaction tasks, you can adjust the input spec interval to avoid conflicts between ingestion and compaction. For automatic compaction, you can set the `skipOffsetFromLatest` key to adjust the auto-compaction starting point from the current time to reduce the chance of conflicts between ingestion and compaction. See [Compaction dynamic configuration](../configuration/index.md#compaction-dynamic-configuration) for more information.
+
+### Segment granularity handling
+
+Unless you modify the segment granularity in the [granularity spec](#compaction-granularity-spec), Druid attempts to retain the granularity for the compacted segments. When segments have different segment granularities with no overlap in interval, Druid creates a separate compaction task for each so that the compacted segments retain the original segment granularities. If segments have different segment granularities before compaction but have some overlap in interval, Druid attempts to find the start and end of the overlapping interval and uses the closest segment granularity level for the compacted segment.
+
+### Query granularity handling
+
+Unless you modify the query granularity in the [granularity spec](#compaction-granularity-spec), Druid retains the query granularity for the compacted segments. If segments have different query granularities before compaction, Druid chooses the finest level of granularity for the resulting compacted segment. For example, if a compaction task combines two segments, one with day query granularity and one with minute query granularity, the resulting segment uses minute query granularity.
+
+> In Apache Druid 0.21.0 and prior, Druid sets the granularity for compacted segments to the default granularity of `NONE` regardless of the query granularity of the original segments.
+
+If you configure query granularity in compaction to go from a finer granularity like month to a coarser query granularity like year, then Druid overshadows the original segments that have the finer granularity. Because the new segments have a coarser granularity, running a kill task to remove the overshadowed segments for those intervals causes you to permanently lose the finer-granularity data.
+
+
+### Dimension handling
+Apache Druid supports schema changes. Therefore, dimensions can be different across segments even if they are a part of the same data source. See [Different schemas among segments](../design/segments.md#different-schemas-among-segments). If the input segments have different dimensions, the resulting compacted segment includes all dimensions of the input segments. 
+
+Even when the input segments have the same set of dimensions, the dimension order or the data type of dimensions can differ. For example, the data type of some dimensions may change from `string` to primitive types, or the order of dimensions may change for better locality. In this case, the dimensions of more recent segments precede those of older segments in terms of data types and ordering, because more recent segments are more likely to have the preferred order and data types.
+
+If you want to use your own ordering and types, you can specify a custom `dimensionsSpec` in the compaction task spec.
+
+### Rollup
+Druid only rolls up the output segment when `rollup` is set for all input segments. See [Roll-up](../ingestion/index.md#rollup) for more details.
+You can check whether your segments are rolled up by using [Segment Metadata Queries](../querying/segmentmetadataquery.md#analysistypes).
+
+## Setting up manual compaction
+
+To perform a manual compaction, you submit a compaction task. Compaction tasks merge all segments for the defined interval according to the following syntax:
+
+```json
+{
+    "type": "compact",
+    "id": <task_id>,
+    "dataSource": <task_datasource>,
+    "ioConfig": <IO config>,
+    "dimensionsSpec" <custom dimensionsSpec>,
+    "metricsSpec" <custom metricsSpec>,
+    "tuningConfig" <parallel indexing task tuningConfig>,
+    "context": <task context>
+}
+```
+
+|Field|Description|Required|
+|-----|-----------|--------|
+|`type`|Task type. Should be `compact`|Yes|
+|`id`|Task id|No|
+|`dataSource`|Data source name to compact|Yes|
+|`ioConfig`|I/O configuration for compaction task. See [Compaction I/O configuration](#compaction-io-configuration) for details.|Yes|
+|`dimensionsSpec`|Custom dimensions spec. The compaction task uses the specified dimensions spec if it exists instead of generating one.|No|
+|`metricsSpec`|Custom metrics spec. The compaction task uses the specified metrics spec rather than generating one.|No|
+|`segmentGranularity`|When set, the compaction task changes the segment granularity for the given interval. Deprecated; use `granularitySpec` instead.|No|
+|`tuningConfig`|[Parallel indexing task tuningConfig](./native-batch.md#tuningconfig)|No|
+|`context`|[Task context](./tasks.md#context)|No|
+|`granularitySpec`|Custom `granularitySpec` to describe the `segmentGranularity` and `queryGranularity` for the compacted segments. See [Compaction granularitySpec](#compaction-granularity-spec).|No|
+
+To control the number of result segments per time chunk, you can set [maxRowsPerSegment](../configuration/index.md#compaction-dynamic-configuration) or [numShards](../ingestion/native-batch.md#tuningconfig).
+
+> You can run multiple compaction tasks in parallel. For example, to compact a year of data, you can run 12 compaction tasks, one per month, instead of running a single task for the entire year.

Review comment:
       This note isn't clear to me. Perhaps "You can schedule multiple compaction tasks.. "? And then perhaps it means to say something like "... 12 compaction tasks -- one per month -- instead of running..." 

##########
File path: docs/ingestion/compaction.md
##########
@@ -0,0 +1,207 @@
+---
+id: compaction
+title: "Compaction"
+description: "Defines compaction and automatic compaction (auto-compaction or autocompaction) as a strategy for segment optimization. Use cases for compaction. Describes compaction task configuration."
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+Compaction in Apache Druid is a strategy to optimize segment size. Compaction tasks read an existing set of segments for a given time range and combine the data into a new "compacted" set of segments. The compacted segments are generally larger, but there are fewer of them. Compaction can sometimes increase performance because it reduces the number of segments and, consequently, the per-segment processing and the memory overhead required for ingestion and for querying paths.
+
+As a strategy, compaction is effective when you have data arriving out of chronological order resulting in lots of small segments. This often happens, for example, if you are appending data using `appendToExisting` for [native batch](./native-batch.md) ingestion. Conversely, if you are rewriting your data with each ingestion task, you don't need to use compaction. See [Segment optimization](../operations/segment-optimization.md) for guidance on determining whether compaction will help in your case.

Review comment:
       
   ```suggestion
   As a strategy, compaction is effective when you have data arriving out of chronological order resulting in lots of small segments. This often happens, for example, if you are appending data using `appendToExisting` for [native batch](./native_batch.md) ingestion. Conversely, if you are rewriting your data with each ingestion task, you don't need to use compaction. See [Segment optimization](../operations/segment-optimization.md) for guidance to determine if compaction will help in your case.
   ```

##########
File path: docs/ingestion/compaction.md
##########
@@ -0,0 +1,207 @@
+---
+id: compaction
+title: "Compaction"
+description: "Defines compaction and automatic compaction (auto-compaction or autocompaction) as a strategy for segment optimization. Use cases for compaction. Describes compaction task configuration."
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+Compaction in Apache Druid is a strategy to optimize segment size. Compaction tasks read an existing set of segments for a given time range and combine the data into a new "compacted" set of segments. The compacted segments are generally larger, but there are fewer of them. Compaction can sometimes increase performance because it reduces the number of segments and, consequently, the per-segment processing and the memory overhead required for ingestion and for querying paths.
+
+As a strategy, compaction is effective when you have data arriving out of chronological order resulting in lots of small segments. This often happens, for example, if you are appending data using `appendToExisting` for [native batch](./native-batch.md) ingestion. Conversely, if you are rewriting your data with each ingestion task, you don't need to use compaction. See [Segment optimization](../operations/segment-optimization.md) for guidance on determining whether compaction will help in your case.
+
+## Types of segment compaction
+You can configure the Druid Coordinator to perform automatic compaction, also called auto-compaction, for a datasource. Using a segment search policy, the coordinator periodically identifies segments for compaction, starting with the newest segments and working toward the oldest. When segments can benefit from compaction, the coordinator automatically submits a compaction task.
+
+Automatic compaction works in most use cases and should be your first option. To learn more about automatic compaction, see [Compacting Segments](../design/coordinator.md#compacting-segments).
+
+In cases where you require more control over compaction, you can manually submit compaction tasks. For example:
+- Automatic compaction is too slow.
+- You want to force compaction for a specific time range.
+- Compacting recent data before older data is suboptimal in your environment.
+
+See [Setting up a manual compaction task](#setting-up-manual-compaction) for more about manual compaction tasks.

Review comment:
       ```suggestion
   See [Setting up a manual compaction task](#setting-up-manual-compaction) for more about manual compaction tasks.
   ```

##########
File path: docs/configuration/index.md
##########
@@ -820,24 +820,24 @@ A description of the compaction config is:
 |`taskPriority`|[Priority](../ingestion/tasks.md#priority) of compaction task.|no (default = 25)|
 |`inputSegmentSizeBytes`|Maximum number of total segment bytes processed per compaction task. Since a time chunk must be processed in its entirety, if the segments for a particular time chunk have a total size in bytes greater than this parameter, compaction will not run for that time chunk. Because each compaction task runs with a single thread, setting this value too far above 1–2GB will result in compaction tasks taking an excessive amount of time.|no (default = 419430400)|
 |`maxRowsPerSegment`|Max number of rows per segment after compaction.|no|
-|`skipOffsetFromLatest`|The offset for searching segments to be compacted. Strongly recommended to set for realtime dataSources. |no (default = "P1D")|
+|`skipOffsetFromLatest`|The offset for searching segments to be compacted in [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) duration format. Strongly recommended to set for realtime dataSources. See [Data handling with compaction](../ingestion/compaction.md#data-handling-with-compaction)|no (default = "P1D")|
 |`tuningConfig`|Tuning config for compaction tasks. See below [Compaction Task TuningConfig](#compaction-tuningconfig).|no|
 |`taskContext`|[Task context](../ingestion/tasks.md#context) for compaction tasks.|no|
+|`granularitySpec`|Custom `granularitySpec` to describe the `segmentGranularity` for the compacted segments.|No|
 
 An example of compaction config is:
 
 ```json
 {
-  "dataSource": "wikiticker"
+  "dataSource": "wikiticker",
+  "granularitySpec" : {
+    "segmentGranularity : "none"
+  }
 }
 ```
 
-Note that compaction tasks can fail if their locks are revoked by other tasks of higher priorities.
-Since realtime tasks have a higher priority than compaction task by default,
-it can be problematic if there are frequent conflicts between compaction tasks and realtime tasks.
-If this is the case, the coordinator's automatic compaction might get stuck because of frequent compaction task failures.
-This kind of problem may happen especially in Kafka/Kinesis indexing systems which allow late data arrival.
-If you see this problem, it's recommended to set `skipOffsetFromLatest` to some large enough value to avoid such conflicts between compaction tasks and realtime tasks.
+Compaction tasks fail when higher priority tasks cause Druid to revoke their locks. By default, realtime tasks like ingestion have a higher priority than compaction tasks. Therefore, frequent conflicts between compaction tasks and realtime tasks can cause the coordinator's automatic compaction to get stuck.
+You may see this issue with streaming ingestion from sources such as Kafka and Kinesis, which ingest late-arriving data. To mitigate this problem, set `skipOffsetFromLatest` to a value large enough to avoid conflicts between compaction tasks and realtime ingestion tasks.
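+
+For example, an illustrative compaction config that skips the most recent three days to stay clear of late-arriving streaming data (the datasource name and duration are placeholders):
+
+```json
+{
+  "dataSource": "wikiticker",
+  "skipOffsetFromLatest": "P3D"
+}
+```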

Review comment:
       I think the first "that" should be a "which"? i.e.: 
    
   "You may see this issue with streaming ingestion sources such as Kafka and Kinesis, which require ingestion of late-arriving data."
   
   This is a clarifying rewrite in general though. I feel as though I now understand `skipOffsetFromLatest`! 

##########
File path: docs/ingestion/index.md
##########
@@ -196,7 +196,7 @@ that datasource leads to much faster query times. This can often be done with ju
 footprint, since abbreviated datasources tend to be substantially smaller.
 - If you are using a [best-effort rollup](#perfect-rollup-vs-best-effort-rollup) ingestion configuration that does not guarantee perfect
 rollup, you can potentially improve your rollup ratio by switching to a guaranteed perfect rollup option, or by
-[reindexing](data-management.md#compaction-and-reindexing) your data in the background after initial ingestion.
+[reindexing](data-management.md#reingesting-data) or [compacting](./compaction.md) your data in the background after initial ingestion.

Review comment:
       ```suggestion
   [reindexing](../data-management.md#reingesting-data) or [compacting](../compaction.md) your data in the background after initial ingestion.
   ```

##########
File path: docs/configuration/index.md
##########
@@ -820,24 +820,24 @@ A description of the compaction config is:
 |`taskPriority`|[Priority](../ingestion/tasks.md#priority) of compaction task.|no (default = 25)|
 |`inputSegmentSizeBytes`|Maximum number of total segment bytes processed per compaction task. Since a time chunk must be processed in its entirety, if the segments for a particular time chunk have a total size in bytes greater than this parameter, compaction will not run for that time chunk. Because each compaction task runs with a single thread, setting this value too far above 1–2GB will result in compaction tasks taking an excessive amount of time.|no (default = 419430400)|
 |`maxRowsPerSegment`|Max number of rows per segment after compaction.|no|
-|`skipOffsetFromLatest`|The offset for searching segments to be compacted. Strongly recommended to set for realtime dataSources. |no (default = "P1D")|
+|`skipOffsetFromLatest`|The offset for searching segments to be compacted in [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) duration format. Strongly recommended to set for realtime dataSources. See [Data handling with compaction](../ingestion/compaction.md#data-handling-with-compaction)|no (default = "P1D")|
 |`tuningConfig`|Tuning config for compaction tasks. See below [Compaction Task TuningConfig](#compaction-tuningconfig).|no|
 |`taskContext`|[Task context](../ingestion/tasks.md#context) for compaction tasks.|no|
+|`granularitySpec`|Custom `granularitySpec` to describe the `segmentGranularity` for the compacted segments.|No|
 
 An example of compaction config is:
 
 ```json
 {
-  "dataSource": "wikiticker"
+  "dataSource": "wikiticker",
+  "granularitySpec" : {
+    "segmentGranularity : "none"
+  }
 }
 ```
 
-Note that compaction tasks can fail if their locks are revoked by other tasks of higher priorities.
-Since realtime tasks have a higher priority than compaction task by default,
-it can be problematic if there are frequent conflicts between compaction tasks and realtime tasks.
-If this is the case, the coordinator's automatic compaction might get stuck because of frequent compaction task failures.
-This kind of problem may happen especially in Kafka/Kinesis indexing systems which allow late data arrival.
-If you see this problem, it's recommended to set `skipOffsetFromLatest` to some large enough value to avoid such conflicts between compaction tasks and realtime tasks.
+Compaction tasks fail when higher priority tasks cause Druid to revoke their locks. By default, realtime tasks like ingestion have a higher priority than compaction tasks. Therefore, frequent conflicts between compaction tasks and realtime tasks can cause the coordinator's automatic compaction to get stuck.

Review comment:
       ```suggestion
   Compaction tasks fail when higher priority tasks cause Druid to revoke their locks. By default, realtime tasks like ingestion have a higher priority than compaction tasks. Therefore frequent conflicts between compaction tasks and realtime tasks can cause the coordinator's automatic compaction to get stuck.
   ```

##########
File path: docs/ingestion/compaction.md
##########
@@ -0,0 +1,207 @@
+---
+id: compaction
+title: "Compaction"
+description: "Defines compaction and automatic compaction (auto-compaction or autocompaction) as a strategy for segment optimization. Use cases for compaction. Describes compaction task configuration."
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+Compaction in Apache Druid is a strategy to optimize segment size. Compaction tasks read an existing set of segments for a given time range and combine the data into a new "compacted" set of segments. The compacted segments are generally larger, but there are fewer of them. Compaction can sometimes increase performance because it reduces the number of segments and, consequently, the per-segment processing and the memory overhead required for ingestion and for querying paths.
+
+As a strategy, compaction is effective when you have data arriving out of chronological order resulting in lots of small segments. This often happens, for example, if you are appending data using `appendToExisting` for [native batch](./native-batch.md) ingestion. Conversely, if you are rewriting your data with each ingestion task, you don't need to use compaction. See [Segment optimization](../operations/segment-optimization.md) for guidance on determining whether compaction will help in your case.
+
+## Types of segment compaction
+You can configure the Druid Coordinator to perform automatic compaction, also called auto-compaction, for a datasource. Using a segment search policy, the coordinator periodically identifies segments for compaction, starting with the newest segments and working toward the oldest. When segments can benefit from compaction, the coordinator automatically submits a compaction task.
+
+Automatic compaction works in most use cases and should be your first option. To learn more about automatic compaction, see [Compacting Segments](../design/coordinator.md#compacting-segments).
+
+In cases where you require more control over compaction, you can manually submit compaction tasks. For example:
+- Automatic compaction is too slow.
+- You want to force compaction for a specific time range.
+- Compacting recent data before older data is suboptimal in your environment.
+
+See [Setting up a manual compaction task](#setting-up-manual-compaction) for more about manual compaction tasks.
+
+
+## Data handling with compaction
+During compaction, Druid overwrites the original set of segments with the compacted set without modifying the data. Druid also locks the segments for the time interval being compacted to ensure data consistency.
+
+If an ingestion task needs to write data to a segment for a time interval locked for compaction, the ingestion task supersedes the compaction task, and the compaction task fails without finishing. For manual compaction tasks, you can adjust the input spec interval to avoid conflicts between ingestion and compaction. For automatic compaction, you can set the `skipOffsetFromLatest` key to adjust the auto-compaction starting point from the current time to reduce the chance of conflicts between ingestion and compaction. See [Compaction dynamic configuration](../configuration/index.md#compaction-dynamic-configuration) for more information.
+
+### Segment granularity handling
+
+Unless you modify the segment granularity in the [granularity spec](#compaction-granularity-spec), Druid attempts to retain the granularity for the compacted segments. When segments have different segment granularities with no overlap in interval, Druid creates a separate compaction task for each so that the compacted segments retain the original segment granularities. If segments have different segment granularities before compaction but have some overlap in interval, Druid attempts to find the start and end of the overlapping interval and uses the closest segment granularity level for the compacted segment.
+
+### Query granularity handling
+
+Unless you modify the query granularity in the [granularity spec](#compaction-granularity-spec), Druid retains the query granularity for the compacted segments. If segments have different query granularities before compaction, Druid chooses the finest level of granularity for the resulting compacted segment. For example, if a compaction task combines two segments, one with day query granularity and one with minute query granularity, the resulting segment uses minute query granularity.
+
+> In Apache Druid 0.21.0 and prior, Druid sets the granularity for compacted segments to the default granularity of `NONE` regardless of the query granularity of the original segments.
+
+If you configure query granularity in compaction to go from a finer granularity like month to a coarser query granularity like year, then Druid overshadows the original segments that have the finer granularity. Because the new segments have a coarser granularity, running a kill task to remove the overshadowed segments for those intervals causes you to permanently lose the finer-granularity data.
+
+
+### Dimension handling
+Apache Druid supports schema changes. Therefore, dimensions can be different across segments even if they are a part of the same data source. See [Different schemas among segments](../design/segments.md#different-schemas-among-segments). If the input segments have different dimensions, the resulting compacted segment includes all dimensions of the input segments. 
+
+Even when the input segments have the same set of dimensions, the dimension order or the data type of dimensions can differ. For example, the data type of some dimensions may change from `string` to primitive types, or the order of dimensions may change for better locality. In this case, the dimensions of more recent segments precede those of older segments in terms of data types and ordering, because more recent segments are more likely to have the preferred order and data types.
+
+If you want to use your own ordering and types, you can specify a custom `dimensionsSpec` in the compaction task spec.
+
+### Rollup
+Druid only rolls up the output segment when `rollup` is set for all input segments. See [Roll-up](../ingestion/index.md#rollup) for more details.
+You can check whether your segments are rolled up by using [Segment Metadata Queries](../querying/segmentmetadataquery.md#analysistypes).
+
+## Setting up manual compaction
+
+To perform a manual compaction, you submit a compaction task. Compaction tasks merge all segments for the defined interval according to the following syntax:
+
+```json
+{
+    "type": "compact",
+    "id": <task_id>,
+    "dataSource": <task_datasource>,
+    "ioConfig": <IO config>,
+    "dimensionsSpec" <custom dimensionsSpec>,
+    "metricsSpec" <custom metricsSpec>,
+    "tuningConfig" <parallel indexing task tuningConfig>,
+    "context": <task context>
+}
+```
+
+|Field|Description|Required|
+|-----|-----------|--------|
+|`type`|Task type. Should be `compact`|Yes|
+|`id`|Task id|No|
+|`dataSource`|Data source name to compact|Yes|
+|`ioConfig`|I/O configuration for compaction task. See [Compaction I/O configuration](#compaction-io-configuration) for details.|Yes|
+|`dimensionsSpec`|Custom dimensions spec. The compaction task uses the specified dimensions spec if it exists instead of generating one.|No|
+|`metricsSpec`|Custom metrics spec. The compaction task uses the specified metrics spec rather than generating one.|No|
+|`segmentGranularity`|When set, the compaction task changes the segment granularity for the given interval. Deprecated; use `granularitySpec` instead.|No|
+|`tuningConfig`|[Parallel indexing task tuningConfig](./native-batch.md#tuningconfig)|No|
+|`context`|[Task context](./tasks.md#context)|No|
+|`granularitySpec`|Custom `granularitySpec` to describe the `segmentGranularity` and `queryGranularity` for the compacted segments. See [Compaction granularitySpec](#compaction-granularity-spec).|No|
+
+To control the number of result segments per time chunk, you can set [maxRowsPerSegment](../configuration/index.md#compaction-dynamic-configuration) or [numShards](../ingestion/native-batch.md#tuningconfig).
+
+> You can run multiple compaction tasks in parallel. For example, to compact a year of data, you can run 12 compaction tasks, one per month, instead of running a single task for the entire year.
+
+A compaction task internally generates an `index` task spec for performing compaction work with some fixed parameters. For example, its `inputSource` is always the [DruidInputSource](native-batch.md#druid-input-source), and `dimensionsSpec` and `metricsSpec` include all dimensions and metrics of the input segments by default.
+
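+For illustration, the generated spec's input source might look like the following `DruidInputSource` sketch (the datasource name and interval are placeholders):
+
+```json
+"inputSource": {
+  "type": "druid",
+  "dataSource": "wikipedia",
+  "interval": "2017-01-01/2018-01-01"
+}
+```
+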
+Compaction tasks exit without doing anything and issue a failure status code in either of the following cases:
+- The interval you specify has no data segments loaded.
+- The interval you specify is empty.
+
+Note that the metadata between compaction input segments and the resulting output segment may differ if the metadata among the input segments differs. If all input segments have the same metadata, the output segment has the same metadata as the input segments.

Review comment:
       I might slow this down a bit, and make it clear that (I think) it's a caveat or note (not a capability). Maybe something like:
   
   ```suggestion
   Note that the metadata between compaction input segments and the resulting output segment may differ if the metadata among the input segments differs as well. If all input segments have the same metadata, however, the resulting output segment will have the same metadata as all input segments.
   ```
   
   ... or something like that.

##########
File path: docs/ingestion/compaction.md
##########
@@ -0,0 +1,207 @@
+---
+id: compaction
+title: "Compaction"
+description: "Defines compaction and automatic compaction (auto-compaction or autocompaction) as a strategy for segment optimization. Use cases for compaction. Describes compaction task configuration."
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+Compaction in Apache Druid is a strategy to optimize segment size. Compaction tasks read an existing set of segments for a given time range and combine the data into a new "compacted" set of segments. The compacted segments are generally larger, but there are fewer of them. Compaction can sometimes increase performance because it reduces the number of segments and, consequently, the per-segment processing and the memory overhead required for ingestion and for querying paths.
+
+As a strategy, compaction is effective when you have data arriving out of chronological order resulting in lots of small segments. This often happens, for example, if you are appending data using `appendToExisting` for [native batch](./native-batch.md) ingestion. Conversely, if you are rewriting your data with each ingestion task, you don't need to use compaction. See [Segment optimization](../operations/segment-optimization.md) for guidance on determining whether compaction will help in your case.
+
+## Types of segment compaction
+You can configure the Druid Coordinator to perform automatic compaction, also called auto-compaction, for a datasource. Using a segment search policy, the coordinator periodically identifies segments for compaction, starting with the newest segments and working toward the oldest. When segments can benefit from compaction, the coordinator automatically submits a compaction task.
+
+Automatic compaction works in most use cases and should be your first option. To learn more about automatic compaction, see [Compacting Segments](../design/coordinator.md#compacting-segments).
+
+In cases where you require more control over compaction, you can manually submit compaction tasks. For example:
+- Automatic compaction is too slow.
+- You want to force compaction for a specific time range.
+- Compacting recent data before older data is suboptimal in your environment.
+
+See [Setting up a manual compaction task](#setting-up-manual-compaction) for more about manual compaction tasks.
+
+
+## Data handling with compaction
+During compaction, Druid overwrites the original set of segments with the compacted set without modifying the data. Druid also locks the segments for the time interval being compacted to ensure data consistency.
+
+If an ingestion task needs to write data to a segment for a time interval locked for compaction, the ingestion task supersedes the compaction task, and the compaction task fails without finishing. For manual compaction tasks, you can adjust the input spec interval to avoid conflicts between ingestion and compaction. For automatic compaction, you can set the `skipOffsetFromLatest` key to adjust the auto-compaction starting point from the current time to reduce the chance of conflicts between ingestion and compaction. See [Compaction dynamic configuration](../configuration/index.md#compaction-dynamic-configuration) for more information.
+
+### Segment granularity handling
+
+Unless you modify the segment granularity in the [granularity spec](#compaction-granularity-spec), Druid attempts to retain the granularity for the compacted segments. When segments have different segment granularities with no overlap in interval, Druid creates a separate compaction task for each so that the compacted segments retain the original segment granularities. If segments have different segment granularities before compaction but have some overlap in interval, Druid attempts to find the start and end of the overlapping interval and uses the closest segment granularity level for the compacted segment.
+
+### Query granularity handling
+
+Unless you modify the query granularity in the [granularity spec](#compaction-granularity-spec), Druid retains the query granularity for the compacted segments. If segments have different query granularities before compaction, Druid chooses the finest level of granularity for the resulting compacted segment. For example, if a compaction task combines two segments, one with day query granularity and one with minute query granularity, the resulting segment uses minute query granularity.
+
+> In Apache Druid 0.21.0 and prior, Druid sets the granularity for compacted segments to the default granularity of `NONE` regardless of the query granularity of the original segments.
+
+If you configure query granularity in compaction to go from a finer granularity like month to a coarser query granularity like year, then Druid overshadows the original segments that have the finer granularity. Because the new segments have a coarser granularity, running a kill task to remove the overshadowed segments for those intervals causes you to permanently lose the finer-granularity data.
+
+
+### Dimension handling
+Apache Druid supports schema changes. Therefore, dimensions can be different across segments even if they are a part of the same data source. See [Different schemas among segments](../design/segments.md#different-schemas-among-segments). If the input segments have different dimensions, the resulting compacted segment includes all dimensions of the input segments. 
+
+Even when the input segments have the same set of dimensions, the dimension order or the data type of dimensions can differ. For example, the data type of some dimensions may change from `string` to primitive types, or the order of dimensions may change for better locality. In this case, the dimensions of more recent segments precede those of older segments in terms of data types and ordering, because more recent segments are more likely to have the preferred order and data types.
+
+If you want to use your own ordering and types, you can specify a custom `dimensionsSpec` in the compaction task spec.
+
+### Rollup
+Druid only rolls up the output segment when `rollup` is set for all input segments. See [Roll-up](../ingestion/index.md#rollup) for more details.
+You can check whether your segments are rolled up by using [Segment Metadata Queries](../querying/segmentmetadataquery.md#analysistypes).
+
+## Setting up manual compaction
+
+To perform a manual compaction, you submit a compaction task. Compaction tasks merge all segments for the defined interval according to the following syntax:
+
+```json
+{
+    "type": "compact",
+    "id": <task_id>,
+    "dataSource": <task_datasource>,
+    "ioConfig": <IO config>,
+    "dimensionsSpec" <custom dimensionsSpec>,
+    "metricsSpec" <custom metricsSpec>,
+    "tuningConfig" <parallel indexing task tuningConfig>,
+    "context": <task context>
+}
+```
+
+|Field|Description|Required|
+|-----|-----------|--------|
+|`type`|Task type. Should be `compact`|Yes|
+|`id`|Task id|No|
+|`dataSource`|Data source name to compact|Yes|
+|`ioConfig`|I/O configuration for compaction task. See [Compaction I/O configuration](#compaction-io-configuration) for details.|Yes|
+|`dimensionsSpec`|Custom dimensions spec. The compaction task uses the specified dimensions spec if it exists instead of generating one.|No|
+|`metricsSpec`|Custom metrics spec. The compaction task uses the specified metrics spec rather than generating one.|No|
+|`segmentGranularity`|When set, the compaction task changes the segment granularity for the given interval. Deprecated; use `granularitySpec` instead.|No|
+|`tuningConfig`|[Parallel indexing task tuningConfig](./native-batch.md#tuningconfig)|No|
+|`context`|[Task context](./tasks.md#context)|No|
+|`granularitySpec`|Custom `granularitySpec` to describe the `segmentGranularity` and `queryGranularity` for the compacted segments. See [Compaction granularitySpec](#compaction-granularity-spec).|No|
+
+To control the number of result segments per time chunk, you can set [maxRowsPerSegment](../configuration/index.md#compaction-dynamic-configuration) or [numShards](../ingestion/native-batch.md#tuningconfig).
+
+> You can run multiple compaction tasks in parallel. For example, to compact a year of data, you can run 12 compaction tasks, one per month, instead of running a single task for the entire year.
+
+A compaction task internally generates an `index` task spec for performing compaction work with some fixed parameters. For example, its `inputSource` is always the [DruidInputSource](native-batch.md#druid-input-source), and `dimensionsSpec` and `metricsSpec` include all dimensions and metrics of the input segments by default.
+
+Compaction tasks exit without doing anything and issue a failure status code in either of the following cases:
+- The interval you specify has no data segments loaded.
+- The interval you specify is empty.
+
+Note that the metadata between compaction input segments and the resulting output segment may differ if the metadata among the input segments differs. If all input segments have the same metadata, the output segment has the same metadata as the input segments.
+
+
+### Example compaction task
+The following JSON illustrates a compaction task to compact _all segments_ within the interval `2017-01-01/2018-01-01` and create new segments:
+
+```json
+{
+  "type" : "compact",
+  "dataSource" : "wikipedia",
+  "ioConfig" : {
+    "type": "compact",
+    "inputSpec": {
+      "type": "interval",
+      "interval": "2017-01-01/2018-01-01",
+    }
+  }
+}
+```
+
+This task doesn't specify a `granularitySpec`, so Druid retains the original segment granularity when compaction is complete.
+
+### Compaction I/O configuration
+
+The compaction `ioConfig` requires specifying `inputSpec` as follows:
+
+|Field|Description|Required|
+|-----|-----------|--------|
+|`type`|Task type. Should be `compact`|Yes|
+|`inputSpec`|Input specification|Yes|
+
+There are currently two supported `inputSpec` formats: `interval` and `segments`.
+
+The interval `inputSpec` is:
+
+|Field|Description|Required|
+|-----|-----------|--------|
+|`type`|Task type. Should be `interval`|Yes|
+|`interval`|Interval to compact|Yes|
+
+The segments `inputSpec` is:
+
+|Field|Description|Required|
+|-----|-----------|--------|
+|`type`|Task type. Should be `segments`|Yes|
+|`segments`|A list of segment IDs|Yes|
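+
+For example, an illustrative `ioConfig` using the segments `inputSpec` (the segment IDs are placeholders):
+
+```json
+"ioConfig": {
+  "type": "compact",
+  "inputSpec": {
+    "type": "segments",
+    "segments": ["<segment_id_1>", "<segment_id_2>"]
+  }
+}
+```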
+
+### Compaction granularity spec
+
+You can optionally use the `granularitySpec` object to configure the segment granularity and the query granularity of the compacted segments. The syntax is as follows:
+```json
+{
+    "type": "compact",
+    "id": <task_id>,
+    "dataSource": <task_datasource>,
+    ...
+    "granularitySpec": {
+      "segmentGranularity": <time_period>,
+      "queryGranularity": <time_period>
+    }
+    ...
+}
+```
+
+`granularitySpec` takes the following keys:
+
+|Field|Description|Required|
+|-----|-----------|--------|
+|`segmentGranularity`|Time chunking period for the segment granularity. Defaults to 'null'. Accepts all [Query granularities](../querying/granularities.md).|No|
+|`queryGranularity`|Time chunking period for the query granularity. Defaults to 'null'. Accepts all [Query granularities](../querying/granularities.md). Not supported for automatic compaction.|No|
+
+For example, to set the segment granularity to "day" and the query granularity to "hour", the task's `granularitySpec` might look like the following sketch (other fields elided):
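+
+```json
+{
+  "type": "compact",
+  "dataSource": <task_datasource>,
+  ...
+  "granularitySpec": {
+    "segmentGranularity": "day",
+    "queryGranularity": "hour"
+  }
+  ...
+}
+```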

Review comment:
       ```suggestion
   For example, to set the segment granularity to "day" and the query granularity to "hour":
   ```




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@druid.apache.org
For additional commands, e-mail: commits-help@druid.apache.org


[GitHub] [druid] 2bethere commented on a change in pull request #10935: First refactor of compaction

Posted by GitBox <gi...@apache.org>.
2bethere commented on a change in pull request #10935:
URL: https://github.com/apache/druid/pull/10935#discussion_r588738799



##########
File path: docs/ingestion/compaction.md
##########
@@ -0,0 +1,207 @@
+---
+id: compaction
+title: "Compaction"
+description: "Defines compaction and automatic compaction (auto-compaction or autocompaction) as a strategy for segment optimization. Use cases for compaction. Describes compaction task configuration."
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+Compaction in Apache Druid is a strategy to optimize segment size. Compaction tasks read an existing set of segments for a given time range and combine the data into a new "compacted" set of segments. The compacted segments are generally larger, but there are fewer of them. Compaction can sometimes increase performance because it reduces the number of segments and, consequently, the per-segment processing and the memory overhead required for ingestion and for querying paths.

Review comment:
       ```suggestion
   Good query performance depends on optimally sized segments. Compaction in Apache Druid is a strategy to optimize segment size. Compaction tasks read an existing set of segments for a given time interval and combine the data into a new "compacted" set of segments. The compacted segments are generally larger, but there are fewer of them. In cases where initial ingestion has misconfigured that resulted in oversized segments, compaction can also be used to split large segment files into smaller, more optimally sized ones. Compaction can sometimes increase performance because it reduces the number of segments and, consequently, the per-segment processing and the memory overhead required for ingestion and for querying paths.
   ```

##########
File path: docs/ingestion/compaction.md
##########
@@ -0,0 +1,207 @@
+---
+id: compaction
+title: "Compaction"
+description: "Defines compaction and automatic compaction (auto-compaction or autocompaction) as a strategy for segment optimization. Use cases for compaction. Describes compaction task configuration."
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+Compaction in Apache Druid is a strategy to optimize segment size. Compaction tasks read an existing set of segments for a given time range and combine the data into a new "compacted" set of segments. The compacted segments are generally larger, but there are fewer of them. Compaction can sometimes increase performance because it reduces the number of segments and, consequently, the per-segment processing and the memory overhead required for ingestion and for querying paths.
+
+As a strategy, compaction is effective when you have data arriving out of chronological order resulting in lots of small segments. This often happens, for example, if you are appending data using `appendToExisting` for [native batch](./native-batch.md) ingestion. Conversely, if you are rewriting your data with each ingestion task, you don't need to use compaction. See [Segment optimization](../operations/segment-optimization.md) for guidance on determining whether compaction will help in your case.
+
+## Types of segment compaction
+You can configure the Druid Coordinator to perform automatic compaction, also called auto-compaction, for a datasource. Using a segment search policy, the coordinator periodically identifies segments for compaction, starting with the newest segments and working toward the oldest. When segments can benefit from compaction, the coordinator automatically submits a compaction task.
+
+Automatic compaction works in most use cases and should be your first option. To learn more about automatic compaction, see [Compacting Segments](../design/coordinator.md#compacting-segments).
+
+In cases where you require more control over compaction, you can manually submit compaction tasks. For example:
+- Automatic compaction is too slow.
+- You want to force compaction for a specific time range.
+- Compacting recent data before older data is suboptimal in your environment.
+
+See [Setting up a manual compaction task](#setting-up-manual-compaction) for more about manual compaction tasks.
+
+
+## Data handling with compaction
+During compaction, Druid overwrites the original set of segments with the compacted set without modifying the data. Druid also locks the segments for the time interval being compacted to ensure data consistency.
+
+If an ingestion task needs to write data to a segment for a time interval locked for compaction, the ingestion task supersedes the compaction task, and the compaction task fails without finishing. For manual compaction tasks, you can adjust the input spec interval to avoid conflicts between ingestion and compaction. For automatic compaction, you can set the `skipOffsetFromLatest` key to adjust the auto-compaction starting point from the current time to reduce the chance of conflicts between ingestion and compaction. See [Compaction dynamic configuration](../configuration/index.md#compaction-dynamic-configuration) for more information.
+
+### Segment granularity handling
+
+Unless you modify the segment granularity in the [granularity spec](#compaction-granularity-spec), Druid attempts to retain the granularity for the compacted segments. When segments have different segment granularities with no overlap in interval, Druid creates a separate compaction task for each so that the compacted segments retain the original segment granularities. If segments have different segment granularities before compaction but have some overlap in interval, Druid attempts to find the start and end of the overlapping interval and uses the closest segment granularity level for the compacted segment.

Review comment:
       This is unclear to me. Specifically, 
   1. "segments have different segment granularities before compaction but there is some overlap in interval", I think an example can help.
   2. "Druid attempts find start and end of the overlapping interval", I think more detail about why and how Druid do this attempt can help.
   3. " uses the closest segment granularity level". Maybe an example here as well?

##########
File path: docs/ingestion/compaction.md
##########
@@ -0,0 +1,207 @@
+### Query granularity handling
+
+Unless you modify the query granularity in the [granularity spec](#compaction-granularity-spec), Druid retains the query granularity for the compacted segments. If segments have different query granularities before compaction, Druid chooses the finest level of granularity for the resulting compacted segment. For example, if a compaction task combines two segments, one with day query granularity and one with minute query granularity, the resulting segment uses minute query granularity.
+
+> In Apache Druid 0.21.0 and prior, Druid sets the granularity for compacted segments to the default granularity of `NONE` regardless of the query granularity of the original segments.
+
+If you configure query granularity in compaction to go from a finer granularity like month to a coarser query granularity like year, then Druid overshadows the original segment with finer granularity. Because the new segments have a coarser granularity, running a kill task to remove the overshadowed segments for those intervals will cause you to permanently lose the finer granularity data.
+
+
+### Dimension handling
+Apache Druid supports schema changes. Therefore, dimensions can be different across segments even if they are part of the same data source. See [Different schemas among segments](../design/segments.md#different-schemas-among-segments). If the input segments have different dimensions, the resulting compacted segment includes all dimensions of the input segments.
+
+Even when the input segments have the same set of dimensions, the dimension order or the data type of dimensions can be different. For example, the data type of some dimensions can be changed from `string` to primitive types, or the order of dimensions can be changed for better locality. In this case, the dimensions of recent segments precede that of old segments in terms of data types and the ordering because more recent segments are more likely to have the preferred order and data types.
+
+If you want to use your own ordering and types, you can specify a custom `dimensionsSpec` in the compaction task spec.
+
+### Rollup
+Druid only rolls up the output segment when `rollup` is set for all input segments. See [Roll-up](../ingestion/index.md#rollup) for more details.
+You can check whether your segments are rolled up by using [Segment Metadata Queries](../querying/segmentmetadataquery.md#analysistypes).
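+
+For example, the following sketch of a segment metadata query (the datasource and interval are hypothetical) reports whether each analyzed segment is rolled up:
+
+```json
+{
+  "queryType": "segmentMetadata",
+  "dataSource": "wikiticker",
+  "intervals": ["2021-01-01/2021-02-01"],
+  "analysisTypes": ["rollup"]
+}
+```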
+
+## Setting up manual compaction
+
+To perform a manual compaction, you submit a compaction task, which merges all segments for the defined interval. The task spec uses the following syntax:
+
+```json
+{
+    "type": "compact",
+    "id": <task_id>,
+    "dataSource": <task_datasource>,
+    "ioConfig": <IO config>,
+    "dimensionsSpec" <custom dimensionsSpec>,
+    "metricsSpec" <custom metricsSpec>,
+    "tuningConfig" <parallel indexing task tuningConfig>,
+    "context": <task context>
+}
+```
+
+|Field|Description|Required|
+|-----|-----------|--------|
+|`type`|Task type. Should be `compact`|Yes|
+|`id`|Task id|No|
+|`dataSource`|Data source name to compact|Yes|
+|`ioConfig`|I/O configuration for compaction task. See [Compaction I/O configuration](#compaction-io-configuration) for details.|Yes|
+|`dimensionsSpec`|Custom dimensions spec. The compaction task uses the specified dimensions spec if it exists instead of generating one.|No|
+|`metricsSpec`|Custom metrics spec. The compaction task uses the specified metrics spec rather than generating one.|No|
+|`segmentGranularity`|When set, the compaction task changes the segment granularity for the given interval. Deprecated. Use `granularitySpec`.|No|
+|`tuningConfig`|[Parallel indexing task tuningConfig](./native-batch.md#tuningconfig)|No|
+|`context`|[Task context](./tasks.md#context)|No|
+|`granularitySpec`|Custom `granularitySpec` to describe the `segmentGranularity` and `queryGranularity` for the compacted segments. See [Compaction granularitySpec](#compaction-granularity-spec).|No|
+
+To control the number of result segments per time chunk, you can set [maxRowsPerSegment](../configuration/index.md#compaction-dynamic-configuration) or [numShards](../ingestion/native-batch.md#tuningconfig).
+
+> You can run multiple compaction tasks at the same time. For example, you can run 12 compaction tasks, one per month, instead of running a single task for the entire year.
+
+A compaction task internally generates an `index` task spec for performing compaction work with some fixed parameters. For example, its `inputSource` is always the [DruidInputSource](native-batch.md#druid-input-source), and `dimensionsSpec` and `metricsSpec` include all dimensions and metrics of the input segments by default.
+
+Compaction tasks exit without doing anything and issue a failure status code:

Review comment:
       ```suggestion
   Compaction tasks would exit without doing anything and issue a failure status code:
   ```
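
For illustration, a concrete submission of the spec above might look like the following sketch (the task id, datasource, and interval are hypothetical; the `ioConfig` uses the interval `inputSpec` described in the compaction I/O configuration section):

```json
{
  "type": "compact",
  "id": "compact_wikiticker_2021-01",
  "dataSource": "wikiticker",
  "ioConfig": {
    "type": "compact",
    "inputSpec": {
      "type": "interval",
      "interval": "2021-01-01/2021-02-01"
    }
  },
  "tuningConfig": {
    "type": "index_parallel",
    "maxRowsPerSegment": 5000000
  }
}
```

Like any other task, this spec can be POSTed to the Overlord's task endpoint.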

##########
File path: docs/ingestion/compaction.md
##########
@@ -0,0 +1,207 @@
+In cases where you require more control over compaction, you can manually submit compaction tasks. For example:
+- Automatic compaction is too slow.
+- You want to force compaction for a specific time range.
+- Compacting recent data before older data is suboptimal in your environment.

Review comment:
       ```suggestion
   - You want to compact data in non-chronological order.
   ```

##########
File path: docs/ingestion/compaction.md
##########
@@ -0,0 +1,207 @@
+### Dimension handling
+Apache Druid supports schema changes. Therefore, dimensions can be different across segments even if they are a part of the same data source. See [Different schemas among segments](../design/segments.md#different-schemas-among-segments). If the input segments have different dimensions, the resulting compacted segment includes all dimensions of the input segments. 
+
+Even when the input segments have the same set of dimensions, the dimension order or the data type of dimensions can be different. For example, the data type of some dimensions can be changed from `string` to primitive types, or the order of dimensions can be changed for better locality. In this case, the dimensions of recent segments precede that of old segments in terms of data types and the ordering because more recent segments are more likely to have the preferred order and data types.

Review comment:
       ```suggestion
   Even when the input segments have the same set of dimensions, the dimension order or the data type of dimensions can be different. The dimensions of recent segments precede that of old segments in terms of data types and the ordering because more recent segments are more likely to have the preferred order and data types.
   ```
   I think maybe deleting that sentence actually makes it clearer...
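
If the derived order or data types are not what you want, a compaction task can pin them with a custom `dimensionsSpec`. A minimal sketch (the dimension names and types are hypothetical):

```json
{
  "type": "compact",
  "dataSource": "wikiticker",
  "ioConfig": {
    "type": "compact",
    "inputSpec": { "type": "interval", "interval": "2021-01-01/2021-02-01" }
  },
  "dimensionsSpec": {
    "dimensions": [
      "channel",
      { "type": "long", "name": "added" },
      { "type": "string", "name": "user" }
    ]
  }
}
```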

##########
File path: docs/configuration/index.md
##########
@@ -820,24 +820,24 @@ A description of the compaction config is:
 |`taskPriority`|[Priority](../ingestion/tasks.md#priority) of compaction task.|no (default = 25)|
 |`inputSegmentSizeBytes`|Maximum number of total segment bytes processed per compaction task. Since a time chunk must be processed in its entirety, if the segments for a particular time chunk have a total size in bytes greater than this parameter, compaction will not run for that time chunk. Because each compaction task runs with a single thread, setting this value too far above 1–2GB will result in compaction tasks taking an excessive amount of time.|no (default = 419430400)|
 |`maxRowsPerSegment`|Max number of rows per segment after compaction.|no|
-|`skipOffsetFromLatest`|The offset for searching segments to be compacted. Strongly recommended to set for realtime dataSources. |no (default = "P1D")|
+|`skipOffsetFromLatest`|The offset for searching segments to be compacted in [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) duration format. Strongly recommended to set for realtime dataSources. See [Data handling with compaction](../ingestion/compaction.md#data-handling-with-compaction)|no (default = "P1D")|
 |`tuningConfig`|Tuning config for compaction tasks. See below [Compaction Task TuningConfig](#compaction-tuningconfig).|no|
 |`taskContext`|[Task context](../ingestion/tasks.md#context) for compaction tasks.|no|
+|`granularitySpec`|Custom `granularitySpec` to describe the `segmentGranularity` for the compacted segments.|no|
 
 An example of compaction config is:
 
 ```json
 {
-  "dataSource": "wikiticker"
+  "dataSource": "wikiticker",
+  "granularitySpec" : {
+    "segmentGranularity : "none"
+  }
 }
 ```
 
-Note that compaction tasks can fail if their locks are revoked by other tasks of higher priorities.
-Since realtime tasks have a higher priority than compaction task by default,
-it can be problematic if there are frequent conflicts between compaction tasks and realtime tasks.
-If this is the case, the coordinator's automatic compaction might get stuck because of frequent compaction task failures.
-This kind of problem may happen especially in Kafka/Kinesis indexing systems which allow late data arrival.
-If you see this problem, it's recommended to set `skipOffsetFromLatest` to some large enough value to avoid such conflicts between compaction tasks and realtime tasks.
+Compaction tasks fail when higher priority tasks cause Druid to revoke their locks. By default, realtime tasks like ingestion have a higher priority than compaction tasks. Therefore, frequent conflicts between compaction tasks and realtime tasks can cause the coordinator's automatic compaction to get stuck.
+You may see this issue with streaming ingestion from Kafka and Kinesis that ingest late arriving data. To mitigate this problem, set `skipOffsetFromLatest` to a value large enough to avoid conflicts between compaction tasks and realtime ingestion tasks.

Review comment:
       ```suggestion
   You may see this issue with streaming ingestion from Kafka and Kinesis that ingest late-arriving data. To mitigate this problem, set `skipOffsetFromLatest` to a value large enough, so that data rarely arrives earlier than this Offset to avoid conflicts between compaction tasks and realtime ingestion tasks.
   ```
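
For illustration, a compaction config applying that advice might look like this minimal sketch (the offset value is hypothetical; `P4D` is four days in ISO 8601 duration notation):

```json
{
  "dataSource": "wikiticker",
  "skipOffsetFromLatest": "P4D",
  "maxRowsPerSegment": 5000000
}
```

With this config, auto-compaction ignores the most recent four days of data, leaving that window to the realtime ingestion tasks.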

##########
File path: docs/ingestion/compaction.md
##########
@@ -0,0 +1,207 @@
+### Query granularity handling
+
+Unless you modify the query granularity in the [granularity spec](#compaction-granularity-spec), Druid retains the query granularity for the compacted segments. If segments have different query granularities before compaction, Druid chooses the finest level of granularity for the resulting compacted segment. For example, if a compaction task combines two segments, one with day query granularity and one with minute query granularity, the resulting segment uses minute query granularity.
+
+> In Apache Druid 0.21.0 and prior, Druid sets the granularity for compacted segments to the default granularity of `NONE` regardless of the query granularity of the original segments.
+
+If you configure query granularity in compaction to go from a finer granularity like month to a coarser query granularity like year, then Druid overshadows the original segment with finer granularity. Because the new segments have a coarser granularity, running a kill task to remove the overshadowed segments for those intervals will cause you to permanently lose the finer granularity data.

Review comment:
       ```suggestion
   If you configure query granularity in compaction to go from a finer granularity like month to a coarser query granularity like year, then Druid overshadows the original segment with coarser granularity. Because the new segments have a coarser granularity, running a kill task to remove the overshadowed segments for those intervals will cause you to permanently lose the finer granularity data.
   ```
   
   Not sure if this is what you meant
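
For illustration, the scenario in question could arise from a manual compaction task like the following sketch (the datasource and interval are hypothetical), which coarsens the query granularity to `year`. After it runs, the new segments overshadow the originals, and a kill task over this interval would then permanently drop the finer pre-compaction detail:

```json
{
  "type": "compact",
  "dataSource": "wikiticker",
  "ioConfig": {
    "type": "compact",
    "inputSpec": { "type": "interval", "interval": "2020-01-01/2021-01-01" }
  },
  "granularitySpec": {
    "queryGranularity": "year"
  }
}
```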




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@druid.apache.org
For additional commands, e-mail: commits-help@druid.apache.org