Posted to commits@hudi.apache.org by GitBox <gi...@apache.org> on 2020/01/21 23:47:07 UTC

[GitHub] [incubator-hudi] bvaradar opened a new pull request #1267: [HUDI-403] Publish deployment guide for writing to Hudi using HoodieDeltaStreamer and Spark Data Source

bvaradar opened a new pull request #1267: [HUDI-403] Publish deployment guide for writing to Hudi using HoodieDeltaStreamer and Spark Data Source
URL: https://github.com/apache/incubator-hudi/pull/1267
 
 
   
   ## What is the purpose of the pull request
   Update documentation to add deployment guide for writing to Hudi using HoodieDeltaStreamer and Spark Data Source
   
   ## Brief change log
   [HUDI-403] Publish deployment guide for writing to Hudi using HoodieDeltaStreamer and Spark Data Source
   


[GitHub] [incubator-hudi] bvaradar commented on a change in pull request #1267: [HUDI-403] Publish deployment guide for writing to Hudi using HoodieDeltaStreamer and Spark Data Source

Posted by GitBox <gi...@apache.org>.
bvaradar commented on a change in pull request #1267: [HUDI-403] Publish deployment guide for writing to Hudi using HoodieDeltaStreamer and Spark Data Source
URL: https://github.com/apache/incubator-hudi/pull/1267#discussion_r370397015
 
 

 ##########
 File path: docs/_docs/2_6_deployment.md
 ##########
 @@ -23,15 +23,25 @@ All in all, Hudi deploys with no long running servers or additional infrastructu
 using existing infrastructure, and it's heartening to see other systems adopting similar approaches as well. Hudi writing is done via Spark jobs (DeltaStreamer or custom Spark datasource jobs), deployed per standard Apache Spark [recommendations](https://spark.apache.org/docs/latest/cluster-overview.html).
 Querying Hudi tables happens via libraries installed into Apache Hive, Apache Spark or Presto, and hence no additional infrastructure is necessary.
 
+### DeltaStreamer
 
+[DeltaStreamer](/docs/writing_data.html#deltastreamer) is the standalone utility that incrementally pulls upstream changes from varied sources such as DFS, Kafka and DB changelogs and ingests them into Hudi tables. It runs as a Spark application in two modes.
+
 - **Run Once Mode** : In this mode, DeltaStreamer performs one ingestion round, which includes incrementally pulling events from upstream sources and ingesting them into the Hudi table. Background operations such as cleaning old file versions and archiving the Hoodie timeline are executed automatically as part of the run. For Merge-On-Read tables, compaction is also run inline as part of ingestion unless disabled by passing the flag "--disable-compaction". By default, compaction runs inline on every ingestion round; this frequency can be changed by setting the property "hoodie.compact.inline.max.delta.commits". You can either run this Spark application manually or use any cron trigger or workflow orchestrator such as Apache Airflow to spawn it.
 - **Continuous Mode** : Here, DeltaStreamer runs an infinite loop, with each iteration performing one ingestion round as described in **Run Once Mode**. For Merge-On-Read tables, compaction runs asynchronously, concurrently with ingestion, unless disabled by passing the flag "--disable-compaction". Every ingestion run triggers a compaction request asynchronously, and this frequency can be changed by setting the property "hoodie.compact.inline.max.delta.commits". As both ingestion and compaction run in the same Spark context, you can use the resource allocation options in the DeltaStreamer CLI ("--delta-sync-scheduling-weight", "--compact-scheduling-weight", "--delta-sync-scheduling-minshare", and "--compact-scheduling-minshare") to control executor allocation between ingestion and compaction. Example `spark-submit` invocations for both modes are sketched below.
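
For illustration, a minimal sketch of launching DeltaStreamer in each mode. The jar path, properties file, Kafka source choice, and table name/path are placeholders, and exact flag names can vary slightly between Hudi versions:

```bash
# Run Once Mode (the default): perform one ingestion round, then exit.
spark-submit \
  --class org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer \
  /path/to/hudi-utilities-bundle.jar \
  --props /path/to/kafka-source.properties \
  --source-class org.apache.hudi.utilities.sources.JsonKafkaSource \
  --source-ordering-field ts \
  --target-base-path /data/hudi/my_table \
  --target-table my_table \
  --table-type MERGE_ON_READ

# Continuous Mode: the same command plus --continuous; the scheduling
# weights divide executors between ingestion and async compaction.
spark-submit \
  --class org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer \
  /path/to/hudi-utilities-bundle.jar \
  --props /path/to/kafka-source.properties \
  --source-class org.apache.hudi.utilities.sources.JsonKafkaSource \
  --source-ordering-field ts \
  --target-base-path /data/hudi/my_table \
  --target-table my_table \
  --table-type MERGE_ON_READ \
  --continuous \
  --delta-sync-scheduling-weight 1 \
  --compact-scheduling-weight 2
```

In practice, the run-once form is what a cron job or Airflow task would wrap, while the continuous form is deployed as a long-running Spark application.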
 
 Review comment:
   Done


[GitHub] [incubator-hudi] bvaradar merged pull request #1267: [HUDI-403] Publish deployment guide for writing to Hudi using HoodieDeltaStreamer and Spark Data Source

Posted by GitBox <gi...@apache.org>.
bvaradar merged pull request #1267: [HUDI-403] Publish deployment guide for writing to Hudi using HoodieDeltaStreamer and Spark Data Source
URL: https://github.com/apache/incubator-hudi/pull/1267
 
 
   


[GitHub] [incubator-hudi] vinothchandar commented on a change in pull request #1267: [HUDI-403] Publish deployment guide for writing to Hudi using HoodieDeltaStreamer and Spark Data Source

Posted by GitBox <gi...@apache.org>.
vinothchandar commented on a change in pull request #1267: [HUDI-403] Publish deployment guide for writing to Hudi using HoodieDeltaStreamer and Spark Data Source
URL: https://github.com/apache/incubator-hudi/pull/1267#discussion_r369646639
 
 

 ##########
 File path: docs/_docs/2_6_deployment.md
 ##########
 @@ -23,15 +23,25 @@ All in all, Hudi deploys with no long running servers or additional infrastructu
 using existing infrastructure, and it's heartening to see other systems adopting similar approaches as well. Hudi writing is done via Spark jobs (DeltaStreamer or custom Spark datasource jobs), deployed per standard Apache Spark [recommendations](https://spark.apache.org/docs/latest/cluster-overview.html).
 Querying Hudi tables happens via libraries installed into Apache Hive, Apache Spark or Presto, and hence no additional infrastructure is necessary.
 
+### DeltaStreamer
 
 Review comment:
   Can you add a sentence above, first introducing the two aspects (sync vs. async compaction, continuous vs. non-continuous writing), so it flows well?


[GitHub] [incubator-hudi] bvaradar edited a comment on issue #1267: [HUDI-403] Publish deployment guide for writing to Hudi using HoodieDeltaStreamer and Spark Data Source

Posted by GitBox <gi...@apache.org>.
bvaradar edited a comment on issue #1267: [HUDI-403] Publish deployment guide for writing to Hudi using HoodieDeltaStreamer and Spark Data Source
URL: https://github.com/apache/incubator-hudi/pull/1267#issuecomment-576940505
 
 
   @vinothchandar @bhasudha @lamber-ken :  Please take a look when you get a chance.


[GitHub] [incubator-hudi] vinothchandar commented on a change in pull request #1267: [HUDI-403] Publish deployment guide for writing to Hudi using HoodieDeltaStreamer and Spark Data Source

Posted by GitBox <gi...@apache.org>.
vinothchandar commented on a change in pull request #1267: [HUDI-403] Publish deployment guide for writing to Hudi using HoodieDeltaStreamer and Spark Data Source
URL: https://github.com/apache/incubator-hudi/pull/1267#discussion_r369647404
 
 

 ##########
 File path: docs/_docs/2_6_deployment.md
 ##########
 @@ -23,15 +23,25 @@ All in all, Hudi deploys with no long running servers or additional infrastructu
 using existing infrastructure, and it's heartening to see other systems adopting similar approaches as well. Hudi writing is done via Spark jobs (DeltaStreamer or custom Spark datasource jobs), deployed per standard Apache Spark [recommendations](https://spark.apache.org/docs/latest/cluster-overview.html).
 Querying Hudi tables happens via libraries installed into Apache Hive, Apache Spark or Presto, and hence no additional infrastructure is necessary.
 
+### DeltaStreamer
 
+[DeltaStreamer](/docs/writing_data.html#deltastreamer) is the standalone utility that incrementally pulls upstream changes from varied sources such as DFS, Kafka and DB changelogs and ingests them into Hudi tables. It runs as a Spark application in two modes.
+
 - **Run Once Mode** : In this mode, DeltaStreamer performs one ingestion round, which includes incrementally pulling events from upstream sources and ingesting them into the Hudi table. Background operations such as cleaning old file versions and archiving the Hoodie timeline are executed automatically as part of the run. For Merge-On-Read tables, compaction is also run inline as part of ingestion unless disabled by passing the flag "--disable-compaction". By default, compaction runs inline on every ingestion round; this frequency can be changed by setting the property "hoodie.compact.inline.max.delta.commits". You can either run this Spark application manually or use any cron trigger or workflow orchestrator such as Apache Airflow to spawn it.
 
 Review comment:
   >You can either run this Spark application manually or use any cron trigger or workflow orchestrator such as Apache Airflow to spawn it.
   
   Link to running this Spark application, and some commands to do so?


[GitHub] [incubator-hudi] bvaradar commented on a change in pull request #1267: [HUDI-403] Publish deployment guide for writing to Hudi using HoodieDeltaStreamer and Spark Data Source

Posted by GitBox <gi...@apache.org>.
bvaradar commented on a change in pull request #1267: [HUDI-403] Publish deployment guide for writing to Hudi using HoodieDeltaStreamer and Spark Data Source
URL: https://github.com/apache/incubator-hudi/pull/1267#discussion_r370413443
 
 

 ##########
 File path: docs/_docs/2_6_deployment.md
 ##########
 @@ -23,15 +23,25 @@ All in all, Hudi deploys with no long running servers or additional infrastructu
 using existing infrastructure, and it's heartening to see other systems adopting similar approaches as well. Hudi writing is done via Spark jobs (DeltaStreamer or custom Spark datasource jobs), deployed per standard Apache Spark [recommendations](https://spark.apache.org/docs/latest/cluster-overview.html).
 Querying Hudi tables happens via libraries installed into Apache Hive, Apache Spark or Presto, and hence no additional infrastructure is necessary.
 
+### DeltaStreamer
 
+[DeltaStreamer](/docs/writing_data.html#deltastreamer) is the standalone utility that incrementally pulls upstream changes from varied sources such as DFS, Kafka and DB changelogs and ingests them into Hudi tables. It runs as a Spark application in two modes.
+
 - **Run Once Mode** : In this mode, DeltaStreamer performs one ingestion round, which includes incrementally pulling events from upstream sources and ingesting them into the Hudi table. Background operations such as cleaning old file versions and archiving the Hoodie timeline are executed automatically as part of the run. For Merge-On-Read tables, compaction is also run inline as part of ingestion unless disabled by passing the flag "--disable-compaction". By default, compaction runs inline on every ingestion round; this frequency can be changed by setting the property "hoodie.compact.inline.max.delta.commits". You can either run this Spark application manually or use any cron trigger or workflow orchestrator such as Apache Airflow to spawn it.
 
 Review comment:
   Done.


[GitHub] [incubator-hudi] bvaradar commented on a change in pull request #1267: [HUDI-403] Publish deployment guide for writing to Hudi using HoodieDeltaStreamer and Spark Data Source

Posted by GitBox <gi...@apache.org>.
bvaradar commented on a change in pull request #1267: [HUDI-403] Publish deployment guide for writing to Hudi using HoodieDeltaStreamer and Spark Data Source
URL: https://github.com/apache/incubator-hudi/pull/1267#discussion_r370396331
 
 

 ##########
 File path: docs/_docs/2_6_deployment.md
 ##########
 @@ -23,15 +23,25 @@ All in all, Hudi deploys with no long running servers or additional infrastructu
 using existing infrastructure, and it's heartening to see other systems adopting similar approaches as well. Hudi writing is done via Spark jobs (DeltaStreamer or custom Spark datasource jobs), deployed per standard Apache Spark [recommendations](https://spark.apache.org/docs/latest/cluster-overview.html).
 Querying Hudi tables happens via libraries installed into Apache Hive, Apache Spark or Presto, and hence no additional infrastructure is necessary.
 
+### DeltaStreamer
 
 Review comment:
   Done.


[GitHub] [incubator-hudi] bvaradar commented on issue #1267: [HUDI-403] Publish deployment guide for writing to Hudi using HoodieDeltaStreamer and Spark Data Source

Posted by GitBox <gi...@apache.org>.
bvaradar commented on issue #1267: [HUDI-403] Publish deployment guide for writing to Hudi using HoodieDeltaStreamer and Spark Data Source
URL: https://github.com/apache/incubator-hudi/pull/1267#issuecomment-576940505
 
 
   @vinothchandar @bhasudha : Please take a look when you get a chance.


[GitHub] [incubator-hudi] vinothchandar commented on a change in pull request #1267: [HUDI-403] Publish deployment guide for writing to Hudi using HoodieDeltaStreamer and Spark Data Source

Posted by GitBox <gi...@apache.org>.
vinothchandar commented on a change in pull request #1267: [HUDI-403] Publish deployment guide for writing to Hudi using HoodieDeltaStreamer and Spark Data Source
URL: https://github.com/apache/incubator-hudi/pull/1267#discussion_r369646846
 
 

 ##########
 File path: docs/_docs/2_6_deployment.md
 ##########
 @@ -23,15 +23,25 @@ All in all, Hudi deploys with no long running servers or additional infrastructu
 using existing infrastructure, and it's heartening to see other systems adopting similar approaches as well. Hudi writing is done via Spark jobs (DeltaStreamer or custom Spark datasource jobs), deployed per standard Apache Spark [recommendations](https://spark.apache.org/docs/latest/cluster-overview.html).
 Querying Hudi tables happens via libraries installed into Apache Hive, Apache Spark or Presto, and hence no additional infrastructure is necessary.
 
+### DeltaStreamer
 
+[DeltaStreamer](/docs/writing_data.html#deltastreamer) is the standalone utility that incrementally pulls upstream changes from varied sources such as DFS, Kafka and DB changelogs and ingests them into Hudi tables. It runs as a Spark application in two modes.
+
 - **Run Once Mode** : In this mode, DeltaStreamer performs one ingestion round, which includes incrementally pulling events from upstream sources and ingesting them into the Hudi table. Background operations such as cleaning old file versions and archiving the Hoodie timeline are executed automatically as part of the run. For Merge-On-Read tables, compaction is also run inline as part of ingestion unless disabled by passing the flag "--disable-compaction". By default, compaction runs inline on every ingestion round; this frequency can be changed by setting the property "hoodie.compact.inline.max.delta.commits". You can either run this Spark application manually or use any cron trigger or workflow orchestrator such as Apache Airflow to spawn it.
 
 Review comment:
   call out this is the default? 


[GitHub] [incubator-hudi] bvaradar commented on a change in pull request #1267: [HUDI-403] Publish deployment guide for writing to Hudi using HoodieDeltaStreamer and Spark Data Source

Posted by GitBox <gi...@apache.org>.
bvaradar commented on a change in pull request #1267: [HUDI-403] Publish deployment guide for writing to Hudi using HoodieDeltaStreamer and Spark Data Source
URL: https://github.com/apache/incubator-hudi/pull/1267#discussion_r370396129
 
 

 ##########
 File path: docs/_docs/2_6_deployment.md
 ##########
 @@ -23,15 +23,25 @@ All in all, Hudi deploys with no long running servers or additional infrastructu
 using existing infrastructure, and it's heartening to see other systems adopting similar approaches as well. Hudi writing is done via Spark jobs (DeltaStreamer or custom Spark datasource jobs), deployed per standard Apache Spark [recommendations](https://spark.apache.org/docs/latest/cluster-overview.html).
 Querying Hudi tables happens via libraries installed into Apache Hive, Apache Spark or Presto, and hence no additional infrastructure is necessary.
 
+### DeltaStreamer
 
+[DeltaStreamer](/docs/writing_data.html#deltastreamer) is the standalone utility that incrementally pulls upstream changes from varied sources such as DFS, Kafka and DB changelogs and ingests them into Hudi tables. It runs as a Spark application in two modes.
+
 - **Run Once Mode** : In this mode, DeltaStreamer performs one ingestion round, which includes incrementally pulling events from upstream sources and ingesting them into the Hudi table. Background operations such as cleaning old file versions and archiving the Hoodie timeline are executed automatically as part of the run. For Merge-On-Read tables, compaction is also run inline as part of ingestion unless disabled by passing the flag "--disable-compaction". By default, compaction runs inline on every ingestion round; this frequency can be changed by setting the property "hoodie.compact.inline.max.delta.commits". You can either run this Spark application manually or use any cron trigger or workflow orchestrator such as Apache Airflow to spawn it.
 - **Continuous Mode** : Here, DeltaStreamer runs an infinite loop, with each iteration performing one ingestion round as described in **Run Once Mode**. For Merge-On-Read tables, compaction runs asynchronously, concurrently with ingestion, unless disabled by passing the flag "--disable-compaction". Every ingestion run triggers a compaction request asynchronously, and this frequency can be changed by setting the property "hoodie.compact.inline.max.delta.commits". As both ingestion and compaction run in the same Spark context, you can use the resource allocation options in the DeltaStreamer CLI ("--delta-sync-scheduling-weight", "--compact-scheduling-weight", "--delta-sync-scheduling-minshare", and "--compact-scheduling-minshare") to control executor allocation between ingestion and compaction.
+
+### Spark Datasource Writer Jobs
+
+As described in [Writing Data](/docs/writing_data.html#datasource-writer), you can use the Spark datasource to write to a Hudi table. This mechanism lets you ingest any Spark DataFrame in Hudi format. The Hudi Spark datasource also supports Spark Streaming for ingesting a streaming source into a Hudi table. For Merge-On-Read tables, inline compaction is turned on by default and runs after every ingestion. The compaction frequency can be changed by setting the property "hoodie.compact.inline.max.delta.commits".
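
As a hedged illustration of this path (the DataFrame, field names, table name, and base path are assumptions for the sketch, and some option keys have been renamed across Hudi releases):

```scala
import org.apache.spark.sql.{DataFrame, SaveMode}

// Upsert a DataFrame into a Merge-On-Read Hudi table. Inline compaction is
// on by default for MOR tables; its frequency is governed by
// "hoodie.compact.inline.max.delta.commits" (here: every 5 delta commits).
// Note: "hoodie.datasource.write.table.type" was named
// "hoodie.datasource.write.storage.type" in early Hudi releases.
def writeToHudi(df: DataFrame): Unit = {
  df.write.format("org.apache.hudi")
    .option("hoodie.table.name", "my_table")
    .option("hoodie.datasource.write.table.type", "MERGE_ON_READ")
    .option("hoodie.datasource.write.operation", "upsert")
    .option("hoodie.datasource.write.recordkey.field", "uuid")
    .option("hoodie.datasource.write.precombine.field", "ts")
    .option("hoodie.datasource.write.partitionpath.field", "partition")
    .option("hoodie.compact.inline.max.delta.commits", "5")
    .mode(SaveMode.Append)
    .save("/data/hudi/my_table")
}
```

The same options apply when the DataFrame comes from a Spark Streaming query, with a write issued per micro-batch.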
 
 Review comment:
   Agree. https://jira.apache.org/jira/browse/HUDI-575


[GitHub] [incubator-hudi] bvaradar commented on a change in pull request #1267: [HUDI-403] Publish deployment guide for writing to Hudi using HoodieDeltaStreamer and Spark Data Source

Posted by GitBox <gi...@apache.org>.
bvaradar commented on a change in pull request #1267: [HUDI-403] Publish deployment guide for writing to Hudi using HoodieDeltaStreamer and Spark Data Source
URL: https://github.com/apache/incubator-hudi/pull/1267#discussion_r370396168
 
 

 ##########
 File path: docs/_docs/2_6_deployment.md
 ##########
 @@ -23,15 +23,25 @@ All in all, Hudi deploys with no long running servers or additional infrastructu
 using existing infrastructure, and it's heartening to see other systems adopting similar approaches as well. Hudi writing is done via Spark jobs (DeltaStreamer or custom Spark datasource jobs), deployed per standard Apache Spark [recommendations](https://spark.apache.org/docs/latest/cluster-overview.html).
 Querying Hudi tables happens via libraries installed into Apache Hive, Apache Spark or Presto, and hence no additional infrastructure is necessary.
 
+### DeltaStreamer
 
+[DeltaStreamer](/docs/writing_data.html#deltastreamer) is the standalone utility that incrementally pulls upstream changes from varied sources such as DFS, Kafka and DB changelogs and ingests them into Hudi tables. It runs as a Spark application in two modes.
+
 - **Run Once Mode** : In this mode, DeltaStreamer performs one ingestion round, which includes incrementally pulling events from upstream sources and ingesting them into the Hudi table. Background operations such as cleaning old file versions and archiving the Hoodie timeline are executed automatically as part of the run. For Merge-On-Read tables, compaction is also run inline as part of ingestion unless disabled by passing the flag "--disable-compaction". By default, compaction runs inline on every ingestion round; this frequency can be changed by setting the property "hoodie.compact.inline.max.delta.commits". You can either run this Spark application manually or use any cron trigger or workflow orchestrator such as Apache Airflow to spawn it.
 
 Review comment:
   Done.


[GitHub] [incubator-hudi] vinothchandar commented on a change in pull request #1267: [HUDI-403] Publish deployment guide for writing to Hudi using HoodieDeltaStreamer and Spark Data Source

Posted by GitBox <gi...@apache.org>.
vinothchandar commented on a change in pull request #1267: [HUDI-403] Publish deployment guide for writing to Hudi using HoodieDeltaStreamer and Spark Data Source
URL: https://github.com/apache/incubator-hudi/pull/1267#discussion_r369648196
 
 

 ##########
 File path: docs/_docs/2_6_deployment.md
 ##########
 @@ -23,15 +23,25 @@ All in all, Hudi deploys with no long running servers or additional infrastructu
 using existing infrastructure, and it's heartening to see other systems adopting similar approaches as well. Hudi writing is done via Spark jobs (DeltaStreamer or custom Spark datasource jobs), deployed per standard Apache Spark [recommendations](https://spark.apache.org/docs/latest/cluster-overview.html).
 Querying Hudi tables happens via libraries installed into Apache Hive, Apache Spark or Presto, and hence no additional infrastructure is necessary.
 
+### DeltaStreamer
 
+[DeltaStreamer](/docs/writing_data.html#deltastreamer) is the standalone utility that incrementally pulls upstream changes from varied sources such as DFS, Kafka and DB changelogs and ingests them into Hudi tables. It runs as a Spark application in two modes.
+
 - **Run Once Mode** : In this mode, DeltaStreamer performs one ingestion round, which includes incrementally pulling events from upstream sources and ingesting them into the Hudi table. Background operations such as cleaning old file versions and archiving the Hoodie timeline are executed automatically as part of the run. For Merge-On-Read tables, compaction is also run inline as part of ingestion unless disabled by passing the flag "--disable-compaction". By default, compaction runs inline on every ingestion round; this frequency can be changed by setting the property "hoodie.compact.inline.max.delta.commits". You can either run this Spark application manually or use any cron trigger or workflow orchestrator such as Apache Airflow to spawn it.
 - **Continuous Mode** : Here, DeltaStreamer runs an infinite loop, with each iteration performing one ingestion round as described in **Run Once Mode**. For Merge-On-Read tables, compaction runs asynchronously, concurrently with ingestion, unless disabled by passing the flag "--disable-compaction". Every ingestion run triggers a compaction request asynchronously, and this frequency can be changed by setting the property "hoodie.compact.inline.max.delta.commits". As both ingestion and compaction run in the same Spark context, you can use the resource allocation options in the DeltaStreamer CLI ("--delta-sync-scheduling-weight", "--compact-scheduling-weight", "--delta-sync-scheduling-minshare", and "--compact-scheduling-minshare") to control executor allocation between ingestion and compaction.
+
+### Spark Datasource Writer Jobs
+
+As described in [Writing Data](/docs/writing_data.html#datasource-writer), you can use the Spark datasource to write to a Hudi table. This mechanism lets you ingest any Spark DataFrame in Hudi format. The Hudi Spark datasource also supports Spark Streaming for ingesting a streaming source into a Hudi table. For Merge-On-Read tables, inline compaction is turned on by default and runs after every ingestion. The compaction frequency can be changed by setting the property "hoodie.compact.inline.max.delta.commits".
 
 Review comment:
   Reminds me that we should have async compaction working for Spark Streaming as well. Maybe file a JIRA if you agree?



[GitHub] [incubator-hudi] vinothchandar commented on a change in pull request #1267: [HUDI-403] Publish deployment guide for writing to Hudi using HoodieDeltaStreamer and Spark Data Source

Posted by GitBox <gi...@apache.org>.
vinothchandar commented on a change in pull request #1267: [HUDI-403] Publish deployment guide for writing to Hudi using HoodieDeltaStreamer and Spark Data Source
URL: https://github.com/apache/incubator-hudi/pull/1267#discussion_r369649210
 
 

 ##########
 File path: docs/_docs/2_6_deployment.md
 ##########
 @@ -23,15 +23,25 @@ All in all, Hudi deploys with no long running servers or additional infrastructu
 using existing infrastructure, and it's heartening to see other systems adopting similar approaches as well. Hudi writing is done via Spark jobs (DeltaStreamer or custom Spark datasource jobs), deployed per standard Apache Spark [recommendations](https://spark.apache.org/docs/latest/cluster-overview.html).
 Querying Hudi tables happens via libraries installed into Apache Hive, Apache Spark or Presto, and hence no additional infrastructure is necessary.
 
+### DeltaStreamer
 
+[DeltaStreamer](/docs/writing_data.html#deltastreamer) is the standalone utility that incrementally pulls upstream changes from varied sources such as DFS, Kafka and DB changelogs and ingests them into Hudi tables. It runs as a Spark application in two modes.
+
 - **Run Once Mode** : In this mode, DeltaStreamer performs one ingestion round, which includes incrementally pulling events from upstream sources and ingesting them into the Hudi table. Background operations such as cleaning old file versions and archiving the Hoodie timeline are executed automatically as part of the run. For Merge-On-Read tables, compaction is also run inline as part of ingestion unless disabled by passing the flag "--disable-compaction". By default, compaction runs inline on every ingestion round; this frequency can be changed by setting the property "hoodie.compact.inline.max.delta.commits". You can either run this Spark application manually or use any cron trigger or workflow orchestrator such as Apache Airflow to spawn it.
 - **Continuous Mode** : Here, DeltaStreamer runs an infinite loop, with each iteration performing one ingestion round as described in **Run Once Mode**. For Merge-On-Read tables, compaction runs asynchronously, concurrently with ingestion, unless disabled by passing the flag "--disable-compaction". Every ingestion run triggers a compaction request asynchronously, and this frequency can be changed by setting the property "hoodie.compact.inline.max.delta.commits". As both ingestion and compaction run in the same Spark context, you can use the resource allocation options in the DeltaStreamer CLI ("--delta-sync-scheduling-weight", "--compact-scheduling-weight", "--delta-sync-scheduling-minshare", and "--compact-scheduling-minshare") to control executor allocation between ingestion and compaction.
 
 Review comment:
   Also worth noting here is the config for controlling the sync frequency: `--min-sync-interval-seconds`.
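
A sketch of how this flag composes with a continuous-mode launch (the surrounding command is hypothetical; only the flag itself comes from the comment above):

```bash
# Appended to a continuous-mode spark-submit invocation: start a new
# ingestion round at most once every 60 seconds.
  --continuous \
  --min-sync-interval-seconds 60
```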
