Posted to commits@sedona.apache.org by ji...@apache.org on 2023/02/10 05:28:45 UTC

[sedona] branch master updated: [DOCS] Fix spelling in Markdown and Python files (#758)

This is an automated email from the ASF dual-hosted git repository.

jiayu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/sedona.git


The following commit(s) were added to refs/heads/master by this push:
     new ceb1724b [DOCS] Fix spelling in Markdown and Python files (#758)
ceb1724b is described below

commit ceb1724bb3b3deafbe3dcfa4509530ecfd2cb6c3
Author: John Bampton <jb...@users.noreply.github.com>
AuthorDate: Fri Feb 10 15:28:39 2023 +1000

    [DOCS] Fix spelling in Markdown and Python files (#758)
---
 docs/api/sql/Function.md         | 2 +-
 docs/api/sql/Raster-loader.md    | 2 +-
 docs/api/sql/Raster-operators.md | 2 +-
 docs/api/viz/sql.md              | 2 +-
 docs/community/contributor.md    | 2 +-
 docs/community/publish.md        | 4 ++--
 docs/community/rule.md           | 2 +-
 docs/community/vote.md           | 2 +-
 docs/setup/install-r.md          | 2 +-
 docs/setup/maven-coordinates.md  | 6 +++---
 docs/setup/release-notes.md      | 4 ++--
 docs/tutorial/core-python.md     | 2 +-
 docs/tutorial/flink/sql.md       | 2 +-
 docs/tutorial/rdd.md             | 6 +++---
 docs/tutorial/sql-r.md           | 2 +-
 docs/tutorial/sql.md             | 2 +-
 docs/tutorial/viz.md             | 6 +++---
 python/tests/core/test_rdd.py    | 2 +-
 18 files changed, 26 insertions(+), 26 deletions(-)

diff --git a/docs/api/sql/Function.md b/docs/api/sql/Function.md
index 0ee584d2..08752278 100644
--- a/docs/api/sql/Function.md
+++ b/docs/api/sql/Function.md
@@ -839,7 +839,7 @@ Result:
 !!!note
     In Sedona up to and including version 1.2 the behaviour of ST_MakeValid was different.
 Be sure to check you code when upgrading. The previous implementation only worked for (multi)polygons and had a different interpretation of the second, boolean, argument.
-It would also sometimes return multiple geometries for a single geomtry input.
+It would also sometimes return multiple geometries for a single geometry input.
 
 ## ST_MinimumBoundingCircle
 
diff --git a/docs/api/sql/Raster-loader.md b/docs/api/sql/Raster-loader.md
index 39f65283..3c0425c2 100644
--- a/docs/api/sql/Raster-loader.md
+++ b/docs/api/sql/Raster-loader.md
@@ -33,7 +33,7 @@ There are three more optional parameters for reading GeoTiff:
 
 ```html
  |-- readfromCRS: Coordinate reference system of the geometry coordinates representing the location of the Geotiff. An example value of readfromCRS is EPSG:4326.
- |-- readToCRS: If you want to tranform the Geotiff location geometry coordinates to a different coordinate reference system, you can define the target coordinate reference system with this option.
+ |-- readToCRS: If you want to transform the Geotiff location geometry coordinates to a different coordinate reference system, you can define the target coordinate reference system with this option.
  |-- disableErrorInCRS: (Default value false) => Indicates whether to ignore errors in CRS transformation.
 ```
 
diff --git a/docs/api/sql/Raster-operators.md b/docs/api/sql/Raster-operators.md
index 1de345fb..dff8b2d4 100644
--- a/docs/api/sql/Raster-operators.md
+++ b/docs/api/sql/Raster-operators.md
@@ -92,7 +92,7 @@ val multiplyDF = spark.sql("select RS_Divide(band1, band2) as divideBands from d
 
 Introduction: Fetch a subset of region from given Geotiff image based on minimumX, minimumY, maximumX and maximumY index as well original height and width of image
 
-Format: `RS_FetchRegion (Band: Array[Double], coordinates: Array[Int], dimenstions: Array[Int])`
+Format: `RS_FetchRegion (Band: Array[Double], coordinates: Array[Int], dimensions: Array[Int])`
 
 Since: `v1.1.0`
 
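The `RS_FetchRegion` signature corrected above takes a flattened band plus coordinate and dimension arrays. A minimal pure-Python sketch of those semantics — not Sedona's actual implementation; the row-major layout and the `[minX, minY, maxX, maxY]` argument ordering are assumptions for illustration:

```python
def fetch_region(band, coordinates, dimensions):
    """Return the pixels of a rectangular subregion of a raster band.

    band        -- flattened pixel values, assumed row-major
    coordinates -- [min_x, min_y, max_x, max_y] pixel indices, inclusive
    dimensions  -- [height, width] of the original image
    """
    min_x, min_y, max_x, max_y = coordinates
    height, width = dimensions
    return [band[y * width + x]
            for y in range(min_y, max_y + 1)
            for x in range(min_x, max_x + 1)]

# A 4x4 band holding 0..15; the central 2x2 region is [5, 6, 9, 10].
print(fetch_region(list(range(16)), [1, 1, 2, 2], [4, 4]))
```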
diff --git a/docs/api/viz/sql.md b/docs/api/viz/sql.md
index df3209d0..cc72d0fb 100644
--- a/docs/api/viz/sql.md
+++ b/docs/api/viz/sql.md
@@ -41,7 +41,7 @@ FROM pixels
 
 #### Produce uniform colors - scatter plot
 
-If a mandatory color name is put as the third input argument, this function will directly ouput this color, without considering the weights. In this case, every pixel will possess the same color.
+If a mandatory color name is put as the third input argument, this function will directly output this color, without considering the weights. In this case, every pixel will possess the same color.
 
 Spark SQL example:
 ```SQL
diff --git a/docs/community/contributor.md b/docs/community/contributor.md
index af7f4cbe..a632e306 100644
--- a/docs/community/contributor.md
+++ b/docs/community/contributor.md
@@ -211,7 +211,7 @@ Once Sedona graduates, the PMC chair will make the request.
 
 Once the new PMC subscribes to the Sedona mailing lists using his/her ASF account, one of the PMC needs to add the new PMC to the Whimsy system (https://whimsy.apache.org/roster/pmc/sedona).
 
-### PMC annoucement
+### PMC announcement
 
 This is the email to announce the new committer to sedona-dev once the account has been created.
 
diff --git a/docs/community/publish.md b/docs/community/publish.md
index 7608862a..c9f3f261 100644
--- a/docs/community/publish.md
+++ b/docs/community/publish.md
@@ -229,7 +229,7 @@ No -1 votes
 
 The vote thread (Permalink from https://lists.apache.org/list.html):
 
-I will make an annoucement soon.
+I will make an announcement soon.
 
 ```
 
@@ -406,7 +406,7 @@ rm *.asc
 
 ## 9. Release Sedona Python and Zeppelin
 
-You must have the maintainer priviledge of `https://pypi.org/project/apache-sedona/` and `https://www.npmjs.com/package/apache-sedona`
+You must have the maintainer privilege of `https://pypi.org/project/apache-sedona/` and `https://www.npmjs.com/package/apache-sedona`
 
 ```bash
 #!/bin/bash
diff --git a/docs/community/rule.md b/docs/community/rule.md
index bd86a87b..4af30bd8 100644
--- a/docs/community/rule.md
+++ b/docs/community/rule.md
@@ -5,7 +5,7 @@ The project welcomes contributions. You can contribute to Sedona code or documen
 
 The following sections brief the workflow of how to complete a contribution.
 
-## Pick / Annouce a task using JIRA
+## Pick / Announce a task using JIRA
 
 It is important to confirm that your contribution is acceptable. You should create a JIRA ticket or pick an existing ticket. A new JIRA ticket will be automatically sent to `dev@sedona.apache.org`
 
diff --git a/docs/community/vote.md b/docs/community/vote.md
index 4aa32f82..11a7b15d 100644
--- a/docs/community/vote.md
+++ b/docs/community/vote.md
@@ -2,7 +2,7 @@
 
 This page is for Sedona community to vote a Sedona release. The script below is tested on MacOS.
 
-In order to vote a Sedona release, you must provide your checklist inlcuding the following minimum requirement:
+In order to vote a Sedona release, you must provide your checklist including the following minimum requirement:
 
 * Download links are valid
 * Checksums and PGP signatures are valid
diff --git a/docs/setup/install-r.md b/docs/setup/install-r.md
index 4d57fa09..48367eaa 100644
--- a/docs/setup/install-r.md
+++ b/docs/setup/install-r.md
@@ -48,7 +48,7 @@ At the moment `apache.sedona` consists of the following components:
 
 To ensure Sedona serialization routines, UDTs, and UDFs are properly
 registered when creating a Spark session, one simply needs to attach
-`apache.sedona` before instantiating a Spark conneciton. apache.sedona
+`apache.sedona` before instantiating a Spark connection. apache.sedona
 will take care of the rest. For example,
 
 ``` r
diff --git a/docs/setup/maven-coordinates.md b/docs/setup/maven-coordinates.md
index 75368c95..cbd48ded 100644
--- a/docs/setup/maven-coordinates.md
+++ b/docs/setup/maven-coordinates.md
@@ -8,9 +8,9 @@ Sedona Flink has four modules :`sedona-core, sedona-sql, sedona-python-adapter,
 ## Use Sedona fat jars
 
 !!!warning
-	For Scala/Java/Python/R users, this is the most common way to use Sedona in your environment. Do not use separate Sedona jars othwerwise you will get dependency conflicts. `sedona-python-adapter` already contains all you need.
+	For Scala/Java/Python/R users, this is the most common way to use Sedona in your environment. Do not use separate Sedona jars otherwise you will get dependency conflicts. `sedona-python-adapter` already contains all you need.
 
-The optional GeoTools library is required only if you want to use CRS transformation and ShapefileReader. This wrapper library is a re-distriution of GeoTools official jars. The only purpose of this library is to bring GeoTools jars from OSGEO repository to Maven Central. This libary is under GNU Lesser General Public License (LGPL) license so we cannot package it in Sedona official release.
+The optional GeoTools library is required only if you want to use CRS transformation and ShapefileReader. This wrapper library is a re-distribution of GeoTools official jars. The only purpose of this library is to bring GeoTools jars from OSGEO repository to Maven Central. This library is under GNU Lesser General Public License (LGPL) license so we cannot package it in Sedona official release.
 
 !!! abstract "Sedona with Apache Spark"
 
@@ -234,7 +234,7 @@ Under MIT License. Please make sure you exclude jts and jackson from this librar
 
 ### GeoTools 24.0+
 
-GeoTools library is required only if you want to use CRS transformation and ShapefileReader. This wrapper library is a re-distriution of GeoTools official jars. The only purpose of this library is to bring GeoTools jars from OSGEO repository to Maven Central. This libary is under GNU Lesser General Public License (LGPL) license so we cannot package it in Sedona official release.
+GeoTools library is required only if you want to use CRS transformation and ShapefileReader. This wrapper library is a re-distribution of GeoTools official jars. The only purpose of this library is to bring GeoTools jars from OSGEO repository to Maven Central. This library is under GNU Lesser General Public License (LGPL) license so we cannot package it in Sedona official release.
 
 ```xml
 <!-- https://mvnrepository.com/artifact/org.datasyslab/geotools-wrapper -->
diff --git a/docs/setup/release-notes.md b/docs/setup/release-notes.md
index b5a2b1ff..c20c4558 100644
--- a/docs/setup/release-notes.md
+++ b/docs/setup/release-notes.md
@@ -320,11 +320,11 @@ This version is a maintenance release on Sedona 1.0.0 line. It includes bug fixe
 
 ### Known issue
 
-In Sedona v1.0.1 and eariler versions, the Spark dependency in setup.py was configured to be ==< v3.1.0== [by mistake](https://github.com/apache/sedona/blob/8235924ac80939cbf2ce562b0209b71833ed9429/python/setup.py#L39). When you install  Sedona Python (apache-sedona v1.0.1) from Pypi, pip might unstall PySpark 3.1.1 and install PySpark 3.0.2 on your machine.
+In Sedona v1.0.1 and earlier versions, the Spark dependency in setup.py was configured to be ==< v3.1.0== [by mistake](https://github.com/apache/sedona/blob/8235924ac80939cbf2ce562b0209b71833ed9429/python/setup.py#L39). When you install Sedona Python (apache-sedona v1.0.1) from Pypi, pip might uninstall PySpark 3.1.1 and install PySpark 3.0.2 on your machine.
 
 Three ways to fix this:
 
-1. After install apache-sedona v1.0.1, unstall PySpark 3.0.2 and reinstall PySpark 3.1.1
+1. After installing apache-sedona v1.0.1, uninstall PySpark 3.0.2 and reinstall PySpark 3.1.1
 
 2. Ask pip not to install Sedona dependencies: `pip install --no-deps apache-sedona`
 
diff --git a/docs/tutorial/core-python.md b/docs/tutorial/core-python.md
index 15f5a7a6..d826471f 100644
--- a/docs/tutorial/core-python.md
+++ b/docs/tutorial/core-python.md
@@ -242,7 +242,7 @@ query_result = RangeQuery.SpatialRangeQuery(
 
 The output format of the spatial range query is another RDD which consists of GeoData objects.
 
-SpatialRangeQuery result can be used as RDD with map or other spark RDD funtions. Also it can be used as 
+SpatialRangeQuery result can be used as RDD with map or other spark RDD functions. Also it can be used as 
 Python objects when using collect method.
 Example:
 
diff --git a/docs/tutorial/flink/sql.md b/docs/tutorial/flink/sql.md
index 10157a24..909e9c70 100644
--- a/docs/tutorial/flink/sql.md
+++ b/docs/tutorial/flink/sql.md
@@ -122,7 +122,7 @@ The first EPSG code EPSG:4326 in `ST_Transform` is the source CRS of the geometr
 
 The second EPSG code EPSG:3857 in `ST_Transform` is the target CRS of the geometries. It is the most common meter-based CRS.
 
-This `ST_Transform` transform the CRS of these geomtries from EPSG:4326 to EPSG:3857. The details CRS information can be found on [EPSG.io](https://epsg.io/)
+This `ST_Transform` transforms the CRS of these geometries from EPSG:4326 to EPSG:3857. The detailed CRS information can be found on [EPSG.io](https://epsg.io/)
 
 !!!note
 	Read [SedonaSQL ST_Transform API](../../../api/flink/Function/#st_transform) to learn different spatial query predicates.
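To make the EPSG:4326-to-EPSG:3857 conversion above concrete, this is the spherical Web Mercator math such a transform performs for a single lon/lat pair. A sketch only: Sedona delegates CRS handling to its underlying CRS library, and real EPSG pipelines handle datum details this omits.

```python
import math

EARTH_RADIUS_M = 6378137.0  # WGS 84 semi-major axis used by EPSG:3857

def lonlat_to_web_mercator(lon_deg, lat_deg):
    """Project EPSG:4326 degrees to EPSG:3857 meters (spherical formula)."""
    x = EARTH_RADIUS_M * math.radians(lon_deg)
    y = EARTH_RADIUS_M * math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2))
    return x, y
```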
diff --git a/docs/tutorial/rdd.md b/docs/tutorial/rdd.md
index acc9fa66..3df342f9 100644
--- a/docs/tutorial/rdd.md
+++ b/docs/tutorial/rdd.md
@@ -216,7 +216,7 @@ objectRDD.CRSTransform(sourceCrsCode, targetCrsCode, false)
 `false` in CRSTransform(sourceCrsCode, targetCrsCode, false) means that it will not tolerate Datum shift. If you want it to be lenient, use `true` instead.
 
 !!!warning
-	CRS transformation should be done right after creating each SpatialRDD, otherwise it will lead to wrong query results. For instace, use something like this:
+	CRS transformation should be done right after creating each SpatialRDD, otherwise it will lead to wrong query results. For instance, use something like this:
 	```Scala
 	var objectRDD = new PointRDD(sc, pointRDDInputLocation, pointRDDOffset, pointRDDSplitter, carryOtherAttributes)
 	objectRDD.CRSTransform("epsg:4326", "epsg:3857", false)
@@ -410,7 +410,7 @@ val result = JoinQuery.SpatialJoinQuery(objectRDD, queryWindowRDD, usingIndex, s
 	FROM city, superhero
 	WHERE ST_Contains(city.geom, superhero.geom);
 	```
-	Find the super heros in each city
+	Find the superheroes in each city
 
 ### Use spatial partitioning
 
@@ -502,7 +502,7 @@ The output format of the distance join query is [here](#output-format_2).
 	FROM city, superhero
 	WHERE ST_Distance(city.geom, superhero.geom) <= 10;
 	```
-	Find the super heros within 10 miles of each city
+	Find the superheroes within 10 miles of each city
 	
 ## Save to permanent storage
 
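The city/superhero `ST_Contains` and `ST_Distance` examples in the hunks above boil down to a join with a spatial predicate. A nested-loop pure-Python sketch of that idea, with bounding boxes standing in for real geometries and none of the spatial partitioning and indexing that Sedona adds:

```python
def spatial_join(cities, heroes, predicate):
    """For each city, list the heroes whose point satisfies the predicate."""
    return {name: sorted(h for h, pt in heroes.items() if predicate(geom, pt))
            for name, geom in cities.items()}

def bbox_contains(box, pt):
    """Stand-in for ST_Contains: point inside a (min_x, min_y, max_x, max_y) box."""
    return box[0] <= pt[0] <= box[2] and box[1] <= pt[1] <= box[3]

cities = {"gotham": (0, 0, 10, 10), "metropolis": (20, 20, 30, 30)}
heroes = {"batman": (5, 5), "superman": (25, 25), "aquaman": (500, 500)}
print(spatial_join(cities, heroes, bbox_contains))
```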
diff --git a/docs/tutorial/sql-r.md b/docs/tutorial/sql-r.md
index 1af0cbb3..89e9cbe1 100644
--- a/docs/tutorial/sql-r.md
+++ b/docs/tutorial/sql-r.md
@@ -50,7 +50,7 @@ modified_polygon_sdf <- polygon_sdf %>%
 ```
 
 
-Notice that all of the above can open up many interesting possiblities. For
+Notice that all of the above can open up many interesting possibilities. For
 example, one can extract ML features from geospatial data in Spark
 dataframes, build a ML pipeline using `ml_*` family of functions in
 `sparklyr` to work with such features, and if the output of a ML model
diff --git a/docs/tutorial/sql.md b/docs/tutorial/sql.md
index 48015097..306255cb 100644
--- a/docs/tutorial/sql.md
+++ b/docs/tutorial/sql.md
@@ -178,7 +178,7 @@ The first EPSG code EPSG:4326 in `ST_Transform` is the source CRS of the geometr
 
 The second EPSG code EPSG:3857 in `ST_Transform` is the target CRS of the geometries. It is the most common meter-based CRS.
 
-This `ST_Transform` transform the CRS of these geomtries from EPSG:4326 to EPSG:3857. The details CRS information can be found on [EPSG.io](https://epsg.io/)
+This `ST_Transform` transforms the CRS of these geometries from EPSG:4326 to EPSG:3857. The detailed CRS information can be found on [EPSG.io](https://epsg.io/)
 
 The coordinates of polygons have been changed. The output will be like this:
 
diff --git a/docs/tutorial/viz.md b/docs/tutorial/viz.md
index 0469f060..9011626e 100644
--- a/docs/tutorial/viz.md
+++ b/docs/tutorial/viz.md
@@ -5,7 +5,7 @@ SedonaViz provides native support for general cartographic design by extending S
 SedonaViz offers Map Visualization SQL. This gives users a more flexible way to design beautiful map visualization effects including scatter plots and heat maps. SedonaViz RDD API is also available.
 
 !!!note
-	All SedonaViz SQL/DataFrame APIs are explained in [SedonaViz API](../../api/viz/sql). Please see [Viz exmaple project](https://github.com/apache/sedona/tree/master/examples/viz)
+	All SedonaViz SQL/DataFrame APIs are explained in [SedonaViz API](../../api/viz/sql). Please see [Viz example project](https://github.com/apache/sedona/tree/master/examples/viz)
 
 ## Why scalable map visualization?
 
@@ -14,7 +14,7 @@ Data visualization allows users to summarize, analyze and reason about data. Gua
 SedonaViz encapsulates the main steps of map visualization process, e.g., pixelize, aggregate, and render, into a set of massively parallelized GeoViz operators and the user can assemble any customized styles.
 
 ## Visualize SpatialRDD
-This tutorial mainly focuses on explaining SQL/DataFrame API. SedonaViz RDD example can be found in Please see [Viz exmaple project](https://github.com/apache/sedona/tree/master/examples/viz)
+This tutorial mainly focuses on explaining the SQL/DataFrame API. A SedonaViz RDD example can be found in the [Viz example project](https://github.com/apache/sedona/tree/master/examples/viz)
 
 ## Set up dependencies
 1. Read [Sedona Maven Central coordinates](../setup/maven-coordinates.md)
@@ -108,7 +108,7 @@ LATERAL VIEW explode(ST_Pixelize(ST_Transform(shape, 'epsg:4326','epsg:3857'), 2
 This will give you a 256*256 resolution image after you run ST_Render at the end of this tutorial.
 
 !!!warning
-	We highly suggest that you should use ST_Transform to transfrom coordiantes to a visualization-specific coordinate sysmte such as epsg:3857. Otherwise you map may look distorted.
+	We highly suggest that you should use ST_Transform to transform coordinates to a visualization-specific coordinate system such as epsg:3857. Otherwise your map may look distorted.
 	
 ### Aggregate pixels
 
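The warning above pairs with `ST_Pixelize`, which maps geometry coordinates onto a fixed pixel grid (256*256 in this tutorial). A rough sketch of that mapping, assuming a simple linear scaling over the map extent rather than SedonaViz's actual implementation:

```python
def pixelize(x, y, extent, res_x, res_y):
    """Map a coordinate inside `extent` = (min_x, min_y, max_x, max_y)
    to an integer pixel on a res_x by res_y grid."""
    min_x, min_y, max_x, max_y = extent
    px = int((x - min_x) / (max_x - min_x) * res_x)
    py = int((y - min_y) / (max_y - min_y) * res_y)
    # Clamp the far edges so max_x, max_y land on the last pixel.
    return min(px, res_x - 1), min(py, res_y - 1)

# The center of a world extent lands on the center pixel of a 256x256 grid.
print(pixelize(0, 0, (-180, -90, 180, 90), 256, 256))
```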
diff --git a/python/tests/core/test_rdd.py b/python/tests/core/test_rdd.py
index df519668..5560f4e0 100644
--- a/python/tests/core/test_rdd.py
+++ b/python/tests/core/test_rdd.py
@@ -335,7 +335,7 @@ class TestSpatialRDD(TestBase):
                 object_rdd, range_query_window, False, False
             )
 
-    def test_crs_tranformed_spatial_range_query_using_index(self):
+    def test_crs_transformed_spatial_range_query_using_index(self):
         object_rdd = PointRDD(
             sparkContext=self.sc,
             InputLocation=point_rdd_input_location,