Posted to commits@druid.apache.org by GitBox <gi...@apache.org> on 2020/02/19 20:01:17 UTC

[GitHub] [druid] sthetland commented on a change in pull request #9360: Create splits of multiple files for parallel indexing

sthetland commented on a change in pull request #9360: Create splits of multiple files for parallel indexing
URL: https://github.com/apache/druid/pull/9360#discussion_r381508140
 
 

 ##########
 File path: docs/ingestion/native-batch.md
 ##########
 @@ -42,11 +42,12 @@ demonstrates the "simple" (single-task) mode.
 ## Parallel task
 
 The Parallel task (type `index_parallel`) is a task for parallel batch indexing. This task only uses Druid's resource and
-doesn't depend on other external systems like Hadoop. `index_parallel` task is a supervisor task which basically creates
-multiple worker tasks and submits them to the Overlord. Each worker task reads input data and creates segments. Once they
-successfully generate segments for all input data, they report the generated segment list to the supervisor task. 
+doesn't depend on other external systems like Hadoop. The `index_parallel` task is a supervisor task which orchestrates
+the whole indexing process. It splits the input data and issues worker tasks
+to the Overlord; the worker tasks actually process their assigned input splits and create segments.
+Once a worker task successfully processes all of its assigned input splits, it reports the generated segment list to the supervisor task.
 The supervisor task periodically checks the status of worker tasks. If one of them fails, it retries the failed task
 until the number of retries reaches the configured limit. If all worker tasks succeed, then it publishes the reported segments at once and finalize the ingestion.
+until the number of retries reaches the configured limit. If all worker tasks succeed, it publishes the reported segments at once and finalizes the ingestion.
 
 Review comment:
   light edit: "...until the number of retries reaches the configured limit. If all worker tasks succeed, it publishes the reported segments at once and finalizes the ingestion."
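  The control flow the doc paragraph describes (split the input, dispatch worker tasks, retry a failed task up to a limit, then publish all reported segments at once) could be sketched roughly like this. This is an illustrative sketch only, not Druid's actual implementation: `run_supervisor`, `WorkerResult`, and `run_worker` are hypothetical names, not Druid APIs.

```python
from dataclasses import dataclass, field

@dataclass
class WorkerResult:
    """What a hypothetical worker task reports back to the supervisor."""
    succeeded: bool
    segments: list = field(default_factory=list)

def run_supervisor(input_splits, run_worker, max_retries=3):
    """Run one worker per input split, retrying failures up to max_retries.

    Segments are only 'published' (returned) once every split has been
    processed successfully, mirroring the publish-at-once behavior described
    in the doc text.
    """
    reported_segments = []
    for split in input_splits:
        retries = 0
        while True:
            result = run_worker(split)
            if result.succeeded:
                reported_segments.extend(result.segments)
                break
            retries += 1
            if retries > max_retries:
                raise RuntimeError(f"split {split!r} failed after {max_retries} retries")
    # All worker tasks succeeded: publish the reported segments at once.
    return reported_segments
```

  The key property is that a single persistent failure aborts the whole job, while transient failures are absorbed by the per-split retry loop.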

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services
