Posted to commits@hop.apache.org by ha...@apache.org on 2021/07/09 12:37:08 UTC

[incubator-hop] branch master updated: HOP-3032: fix table layout (#922)

This is an automated email from the ASF dual-hosted git repository.

hansva pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-hop.git


The following commit(s) were added to refs/heads/master by this push:
     new e6ce463  HOP-3032: fix table layout (#922)
e6ce463 is described below

commit e6ce4639834ba896817dc581a995423ab1782f16
Author: Hans Van Akelyen <ha...@gmail.com>
AuthorDate: Fri Jul 9 14:36:59 2021 +0200

    HOP-3032: fix table layout (#922)
    
    * HOP-3032: fix table layout
---
 .../beam-spark-pipeline-engine.adoc                | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/docs/hop-user-manual/modules/ROOT/pages/pipeline/pipeline-run-configurations/beam-spark-pipeline-engine.adoc b/docs/hop-user-manual/modules/ROOT/pages/pipeline/pipeline-run-configurations/beam-spark-pipeline-engine.adoc
index 9652b71..c29159c 100644
--- a/docs/hop-user-manual/modules/ROOT/pages/pipeline/pipeline-run-configurations/beam-spark-pipeline-engine.adoc
+++ b/docs/hop-user-manual/modules/ROOT/pages/pipeline/pipeline-run-configurations/beam-spark-pipeline-engine.adoc
@@ -42,18 +42,18 @@ Check the https://beam.apache.org/documentation/runners/spark/[Apache Beam Spark
 |Streaming: batch interval (ms)|The StreamingContext's batchDuration - setting Spark's batch interval.|1000
 |Streaming: checkpoint directory|	A checkpoint directory for streaming resilience, ignored in batch. For durability, a reliable filesystem such as HDFS/S3/GS is necessary.|local dir in /tmp
 |Streaming: checkpoint duration (ms)||
-|Enable Metrics sink|A servlet within the existing Spark UI to serve metrics data as JSON data.
-|Streaming: maximum records per batch|The maximum records per batch interval.
-|Streaming: minimum read time (ms)|Mimimum elapsed read time.
-|Bundle size|The maximum number of elements in a bundle.
+|Enable Metrics sink|A servlet within the existing Spark UI to serve metrics data as JSON data.|
+|Streaming: maximum records per batch|The maximum records per batch interval.|
+|Streaming: minimum read time (ms)|Minimum elapsed read time.|
+|Bundle size|The maximum number of elements in a bundle.|
 |User agent|A user agent string as per https://tools.ietf.org/html/rfc2616[RFC2616], describing the pipeline to external services.
-|Temp location|Path for temporary files.
-|Plugins to stage (, delimited)|Comma separated list of plugins.
-|Transform plugin classes|List of transform plugin classes.
-|XP plugin classes|List of extensions point plugins.
-|Streaming Hop transforms flush interval (ms)|The amount of time after which the internal buffer is sent completely over the network and emptied.
-|Hop streaming transforms buffer size|The internal buffer size to use.
-|Fat jar file location|Fat jar location.
+|Temp location|Path for temporary files.|
+|Plugins to stage (, delimited)|Comma-separated list of plugins.|
+|Transform plugin classes|List of transform plugin classes.|
+|XP plugin classes|List of extension point plugins.|
+|Streaming Hop transforms flush interval (ms)|The amount of time after which the internal buffer is sent completely over the network and emptied.|
+|Hop streaming transforms buffer size|The internal buffer size to use.|
+|Fat jar file location|Fat jar location.|
 |===
 
 == Running remotely
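
For context on the change above: in an AsciiDoc table, every row must supply a cell delimiter `|` for each declared column, including trailing empty cells; otherwise the processor shifts cells from the next row into the current one and the layout breaks. A minimal illustrative sketch (not taken from the commit) of the before/after pattern:

```asciidoc
[options="header"]
|===
|Option |Description |Default

// Broken: only two cells in a three-column table; the next row's
// first cell gets pulled in as this row's Default value.
|Bundle size |The maximum number of elements in a bundle.

// Fixed: a trailing | marks an explicitly empty Default cell.
|Bundle size |The maximum number of elements in a bundle. |

|Streaming: batch interval (ms) |Spark's batch interval. |1000
|===
```

This is exactly what the patch does: each `+` line appends the missing trailing `|` so rows with no default value still occupy all three columns.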