Posted to commits@storm.apache.org by et...@apache.org on 2020/09/02 14:17:06 UTC

[storm] branch master updated: [MINOR] Typo/Grammatical corrections (#3327)

This is an automated email from the ASF dual-hosted git repository.

ethanli pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/storm.git


The following commit(s) were added to refs/heads/master by this push:
     new b1e2d66  [MINOR] Typo/Grammatical corrections (#3327)
b1e2d66 is described below

commit b1e2d669035875de74e51ee65febdc067c504828
Author: PoojaChandak <po...@gmail.com>
AuthorDate: Wed Sep 2 19:46:48 2020 +0530

    [MINOR] Typo/Grammatical corrections (#3327)
---
 docs/Configuration.md |  6 +++---
 docs/Logs.md          |  2 +-
 docs/README.md        | 22 +++++++++++-----------
 docs/Tutorial.md      | 20 ++++++++++----------
 4 files changed, 25 insertions(+), 25 deletions(-)

diff --git a/docs/Configuration.md b/docs/Configuration.md
index 83cb28a..5b980d0 100644
--- a/docs/Configuration.md
+++ b/docs/Configuration.md
@@ -3,11 +3,11 @@ title: Configuration
 layout: documentation
 documentation: true
 ---
-Storm has a variety of configurations for tweaking the behavior of nimbus, supervisors, and running topologies. Some configurations are system configurations and cannot be modified on a topology by topology basis, whereas other configurations can be modified per topology. 
+Storm has a variety of configurations for tweaking the behavior of nimbus, supervisors, and running topologies. Some configurations are system configurations and cannot be modified on a topology-by-topology basis, whereas other configurations can be modified per topology. 
 
 Every configuration has a default value defined in [defaults.yaml]({{page.git-blob-base}}/conf/defaults.yaml) in the Storm codebase. You can override these configurations by defining a storm.yaml in the classpath of Nimbus and the supervisors. Finally, you can define a topology-specific configuration that you submit along with your topology when using [StormSubmitter](javadocs/org/apache/storm/StormSubmitter.html). However, the topology-specific configuration can only override configs pr [...]
 
-Storm 0.7.0 and onwards lets you override configuration on a per-bolt/per-spout basis. The only configurations that can be overriden this way are:
+Storm 0.7.0 and onwards lets you override configuration on a per-bolt/per-spout basis. The only configurations that can be overridden this way are:
 
 1. "topology.debug"
 2. "topology.max.spout.pending"
@@ -22,7 +22,7 @@ The Java API lets you specify component specific configurations in two ways:
 The preference order for configuration values is defaults.yaml < storm.yaml < topology specific configuration < internal component specific configuration < external component specific configuration. 
 
 # Bolts, Spouts, and Plugins
-In almost all cases configuration for a bolt or a spout should be done though setters on the bolt or spout implementation and not the topology conf.  In some rare cases it may make since to
+In almost all cases configuration for a bolt or a spout should be done through setters on the bolt or spout implementation and not the topology conf.  In some rare cases, it may make sense to
 expose topology wide configurations that are not currently a part of [Config](javadocs/org/apache/storm/Config.html) or [DaemonConfig](javadocs/org/apache/storm/DaemonConfig.html) such as
 when writing a custom scheduler or a plugin to some part of storm.  In those
 cases you can create your own class like Config but implements [Validated](javadocs/org/apache/storm/validation/Validated.html). Any `public static final String` field declared in this
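The preference order described in the Configuration.md hunk above (defaults.yaml < storm.yaml < topology-specific < internal component-specific < external component-specific) is a simple "later layers win" merge. Here is a minimal, dependency-free sketch of that rule; the key names are real Storm config keys, but plain `HashMap`s stand in for Storm's actual config handling, and `ConfigPrecedence`/`resolve` are hypothetical names used only for illustration.

```java
import java.util.HashMap;
import java.util.Map;

public class ConfigPrecedence {
    // Merge config layers in priority order: later maps override earlier ones,
    // mirroring defaults.yaml < storm.yaml < topology conf < component conf.
    @SafeVarargs
    public static Map<String, Object> resolve(Map<String, Object>... layers) {
        Map<String, Object> merged = new HashMap<>();
        for (Map<String, Object> layer : layers) {
            merged.putAll(layer);
        }
        return merged;
    }

    public static void main(String[] args) {
        Map<String, Object> defaults = new HashMap<>();
        defaults.put("topology.debug", false);

        Map<String, Object> topologyConf = new HashMap<>();
        topologyConf.put("topology.debug", true);

        Map<String, Object> componentConf = new HashMap<>();
        componentConf.put("topology.max.spout.pending", 500);

        // The topology-level override of topology.debug and the
        // component-level topology.max.spout.pending both survive the merge.
        System.out.println(resolve(defaults, topologyConf, componentConf));
    }
}
```

In the real API the "external component specific configuration" layer corresponds to calling `addConfiguration` on the declarer returned by `TopologyBuilder.setBolt`/`setSpout`, while the "internal" layer comes from the component's own `getComponentConfiguration` method.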
diff --git a/docs/Logs.md b/docs/Logs.md
index 28e6693..46cc922 100644
--- a/docs/Logs.md
+++ b/docs/Logs.md
@@ -25,6 +25,6 @@ String search in a log file: In the log page for a worker, a user can search a c
 
 ![Search in a log](images/search-for-a-single-worker-log.png "Search in a log")
 
-Search in a topology: a user can also search a string for a certain topology by clicking the icon of magnifying lens at the top right corner of the UI page. This means the UI will try to search on all the supervisor nodes in a distributed way to find the matched string in all logs for this topology. The search can happen for either normal text log files or rolled zip log files by checking/unchecking the "Search archived logs:" box. Then the matched results can be shown on the UI with url [...]
+Search in a topology: a user can also search a string for a certain topology by clicking the icon of the magnifying lens at the top right corner of the UI page. This means the UI will try to search on all the supervisor nodes in a distributed way to find the matched string in all logs for this topology. The search can happen for either normal text log files or rolled zip log files by checking/unchecking the "Search archived logs:" box. Then the matched results can be shown on the UI with [...]
 
 ![Search in a topology](images/search-a-topology.png "Search in a topology")
diff --git a/docs/README.md b/docs/README.md
index e6d939b..e92193a 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -3,14 +3,14 @@ This is the source for the Release specific part of the Apache Storm website and
 
 ## Generate Javadoc
 
-You have to generate javadoc on project root before generating document site.
+You have to generate javadoc on the project root before generating the document site.
 
 ```
 mvn javadoc:javadoc -Dnotimestamp=true
 mvn javadoc:aggregate -DreportOutputDirectory=./docs/ -DdestDir=javadocs -Dnotimestamp=true
 ```
 
-You need to create distribution package with gpg certificate. Please refer [here](https://github.com/apache/storm/blob/master/DEVELOPER.md#packaging).
+You need to create a distribution package with a gpg certificate. Please refer [here](https://github.com/apache/storm/blob/master/DEVELOPER.md#packaging).
 
 ## Site Generation
 First install jekyll (assuming you have ruby installed):
@@ -34,9 +34,9 @@ By default, jekyll will generate the site in a `_site` directory.
 This will only show the portion of the documentation that is specific to this release.
 
 ## Adding a new release to the website
-In order to add a new relase, you must have committer access to Storm's subversion repository at https://svn.apache.org/repos/asf/storm/site.
+In order to add a new release, you must have committer access to Storm's subversion repository at https://svn.apache.org/repos/asf/storm/site.
 
-Release documentation is placed under the releases directory named after the release version.  Most metadata about the release will be generated automatically from the name using a jekyll plugin.  Or by plaing them in the _data/releases.yml file.
+Release documentation is placed under the releases directory named after the release version.  Most metadata about the release will be generated automatically from the name using a jekyll plugin, or by placing it in the _data/releases.yml file.
 
 To create a new release run the following from the main git directory
 
@@ -62,32 +62,32 @@ svn add publish/ #Add any new files
 svn commit
 ```
 
-## How release specific docs work
+## How release specific docs work
 
 Release specific documentation is controlled by a jekyll plugin [releases.rb](./_plugins/releases.rb)
 
-If the plugin is running from the git repo the config `storm_release_only` is set and teh plugin will treat all of the markdown files as release sepcific file.
+If the plugin is running from the git repo, the config `storm_release_only` is set and the plugin will treat all of the markdown files as release specific files.
 
-If it is running from the subversion repositiory it will look in the releases driectory for release sepcific docs.
+If it is running from the subversion repository, it will look in the releases directory for release specific docs.
 
 http://svn.apache.org/viewvc/storm/site/releases/
 
-Each sub directory named after the release in question. The "current" release is pointed to by a symlink in that directory called `current`.
+Each subdirectory is named after the release in question. The "current" release is pointed to by a symlink in that directory called `current`.
 
 The plugin sets three configs for each release page.
 
  * version - the version number of the release/directory
  * git-tree-base - a link to a directory in github that this version is on
- * git-blob-base - a link to to where on github that this version is on, but should be used when pointing to files.
+ * git-blob-base - a link to where on github that this version is on, but should be used when pointing to files.
 
-If `storm_release_only` is set for the project the version is determined from the maven pom.xml and the branch is the current branch in git.  If it is not set the version is determined by the name of the sub-directory and branch is assumed to be a `"v#{version}"` which corresponds with our naming conventions.  For SNAPSHOT releases you will need to override this in `_data/releases.yml`
+If `storm_release_only` is set for the project, the version is determined from the maven pom.xml and the branch is the current branch in git.  If it is not set, the version is determined by the name of the sub-directory and the branch is assumed to be a `"v#{version}"` which corresponds with our naming conventions.  For SNAPSHOT releases you will need to override this in `_data/releases.yml`
 
 The plugin also augments the `site.data.releases` dataset.
 Each release in the list includes the following, and each can be set in `_data/releases.yml` to override what is automatically generated by the plugin.
 
  * git-tag-or-branch - tag or branch name on github/apache/storm
  * git-tree-base - a link to a directory in github that this version is on
- * git-blob-base - a link to to where on github that this version is on, but should be used when pointing to files.
+ * git-blob-base - a link to where on github that this version is on, but should be used when pointing to files.
  * base-name - name of the release files to download, without the .tar.gz
  * has-download - if this is an official release and a download link should be created.
 
diff --git a/docs/Tutorial.md b/docs/Tutorial.md
index ccc415d..1dd891b 100644
--- a/docs/Tutorial.md
+++ b/docs/Tutorial.md
@@ -19,7 +19,7 @@ Each worker node runs a daemon called the "Supervisor". The supervisor listens f
 
 ![Storm cluster](images/storm-cluster.png)
 
-All coordination between Nimbus and the Supervisors is done through a [Zookeeper](http://zookeeper.apache.org/) cluster. Additionally, the Nimbus daemon and Supervisor daemons are fail-fast and stateless; all state is kept in Zookeeper or on local disk. This means you can kill -9 Nimbus or the Supervisors and they'll start back up like nothing happened. This design leads to Storm clusters being incredibly stable.
+All coordination between Nimbus and the Supervisors is done through a [Zookeeper](http://zookeeper.apache.org/) cluster. Additionally, the Nimbus daemon and Supervisor daemons are fail-fast and stateless; all state is kept in Zookeeper or on local disk. This means you can kill -9 Nimbus or the Supervisors and they'll start back up as if nothing happened. This design leads to Storm clusters being incredibly stable.
 
 ## Topologies
 
@@ -49,7 +49,7 @@ Networks of spouts and bolts are packaged into a "topology" which is the top-lev
 
 ![A Storm topology](images/topology.png)
 
-Links between nodes in your topology indicate how tuples should be passed around. For example, if there is a link between Spout A and Bolt B, a link from Spout A to Bolt C, and a link from Bolt B to Bolt C, then everytime Spout A emits a tuple, it will send the tuple to both Bolt B and Bolt C. All of Bolt B's output tuples will go to Bolt C as well.
+Links between nodes in your topology indicate how tuples should be passed around. For example, if there is a link between Spout A and Bolt B, a link from Spout A to Bolt C, and a link from Bolt B to Bolt C, then every time Spout A emits a tuple, it will send the tuple to both Bolt B and Bolt C. All of Bolt B's output tuples will go to Bolt C as well.
 
 Each node in a Storm topology executes in parallel. In your topology, you can specify how much parallelism you want for each node, and then Storm will spawn that number of threads across the cluster to do the execution.
 
@@ -166,13 +166,13 @@ public static class ExclamationBolt implements IRichBolt {
 }
 ```
 
-The `prepare` method provides the bolt with an `OutputCollector` that is used for emitting tuples from this bolt. Tuples can be emitted at anytime from the bolt -- in the `prepare`, `execute`, or `cleanup` methods, or even asynchronously in another thread. This `prepare` implementation simply saves the `OutputCollector` as an instance variable to be used later on in the `execute` method.
+The `prepare` method provides the bolt with an `OutputCollector` that is used for emitting tuples from this bolt. Tuples can be emitted at any time from the bolt -- in the `prepare`, `execute`, or `cleanup` methods, or even asynchronously in another thread. This `prepare` implementation simply saves the `OutputCollector` as an instance variable to be used later on in the `execute` method.
 
 The `execute` method receives a tuple from one of the bolt's inputs. The `ExclamationBolt` grabs the first field from the tuple and emits a new tuple with the string "!!!" appended to it. If you implement a bolt that subscribes to multiple input sources, you can find out which component the [Tuple](/javadoc/apidocs/org/apache/storm/tuple/Tuple.html) came from by using the `Tuple#getSourceComponent` method.
 
-There's a few other things going on in the `execute` method, namely that the input tuple is passed as the first argument to `emit` and the input tuple is acked on the final line. These are part of Storm's reliability API for guaranteeing no data loss and will be explained later in this tutorial. 
+There are a few other things going on in the `execute` method, namely that the input tuple is passed as the first argument to `emit` and the input tuple is acked on the final line. These are part of Storm's reliability API for guaranteeing no data loss and will be explained later in this tutorial. 
 
-The `cleanup` method is called when a Bolt is being shutdown and should cleanup any resources that were opened. There's no guarantee that this method will be called on the cluster: for example, if the machine the task is running on blows up, there's no way to invoke the method. The `cleanup` method is intended for when you run topologies in [local mode](Local-mode.html) (where a Storm cluster is simulated in process), and you want to be able to run and kill many topologies without suffer [...]
+The `cleanup` method is called when a Bolt is being shut down and should clean up any resources that were opened. There's no guarantee that this method will be called on the cluster: for example, if the machine the task is running on blows up, there's no way to invoke the method. The `cleanup` method is intended for when you run topologies in [local mode](Local-mode.html) (where a Storm cluster is simulated in-process), and you want to be able to run and kill many topologies without suff [...]
 
 The `declareOutputFields` method declares that the `ExclamationBolt` emits 1-tuples with one field called "word".
 
@@ -206,7 +206,7 @@ public static class ExclamationBolt extends BaseRichBolt {
 
 Let's see how to run the `ExclamationTopology` in local mode and see that it's working.
 
-Storm has two modes of operation: local mode and distributed mode. In local mode, Storm executes completely in process by simulating worker nodes with threads. Local mode is useful for testing and development of topologies. You can read more about running topologies in local mode on [Local mode](Local-mode.html).
+Storm has two modes of operation: local mode and distributed mode. In local mode, Storm executes completely in-process by simulating worker nodes with threads. Local mode is useful for testing and development of topologies. You can read more about running topologies in local mode on [Local mode](Local-mode.html).
 
 To run a topology in local mode run the command `storm local` instead of `storm jar`.
 
@@ -232,15 +232,15 @@ builder.setBolt("count", new WordCount(), 12)
 
 `SplitSentence` emits a tuple for each word in each sentence it receives, and `WordCount` keeps a map in memory from word to count. Each time `WordCount` receives a word, it updates its state and emits the new word count.
 
-There's a few different kinds of stream groupings.
+There are a few different kinds of stream groupings.
 
 The simplest kind of grouping is called a "shuffle grouping" which sends the tuple to a random task. A shuffle grouping is used in the `WordCountTopology` to send tuples from `RandomSentenceSpout` to the `SplitSentence` bolt. It has the effect of evenly distributing the work of processing the tuples across all of `SplitSentence` bolt's tasks.
 
-A more interesting kind of grouping is the "fields grouping". A fields grouping is used between the `SplitSentence` bolt and the `WordCount` bolt. It is critical for the functioning of the `WordCount` bolt that the same word always go to the same task. Otherwise, more than one task will see the same word, and they'll each emit incorrect values for the count since each has incomplete information. A fields grouping lets you group a stream by a subset of its fields. This causes equal values [...]
+A more interesting kind of grouping is the "fields grouping". A fields grouping is used between the `SplitSentence` bolt and the `WordCount` bolt. It is critical for the functioning of the `WordCount` bolt that the same word always goes to the same task. Otherwise, more than one task will see the same word, and they'll each emit incorrect values for the count since each has incomplete information. A fields grouping lets you group a stream by a subset of its fields. This causes equal valu [...]
 
 Fields groupings are the basis of implementing streaming joins and streaming aggregations as well as a plethora of other use cases. Underneath the hood, fields groupings are implemented using mod hashing.
 
-There's a few other kinds of stream groupings. You can read more about them on [Concepts](Concepts.html). 
+There are a few other kinds of stream groupings. You can read more about them on [Concepts](Concepts.html). 
 
 ## Defining Bolts in other languages
 
@@ -286,7 +286,7 @@ Storm guarantees that every message will be played through the topology at least
 
 ## Distributed RPC
 
-This tutorial showed how to do basic stream processing on top of Storm. There's lots more things you can do with Storm's primitives. One of the most interesting applications of Storm is Distributed RPC, where you parallelize the computation of intense functions on the fly. Read more about Distributed RPC [here](Distributed-RPC.html). 
+This tutorial showed how to do basic stream processing on top of Storm. There are lots more things you can do with Storm's primitives. One of the most interesting applications of Storm is Distributed RPC, where you parallelize the computation of intense functions on the fly. Read more about Distributed RPC [here](Distributed-RPC.html). 
 
 ## Conclusion