Posted to commits@sedona.apache.org by ji...@apache.org on 2022/03/06 09:51:40 UTC

[incubator-sedona] branch master updated: [DOCS] Add Flink tutorial and prepare docs for the next release (#589)

This is an automated email from the ASF dual-hosted git repository.

jiayu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-sedona.git


The following commit(s) were added to refs/heads/master by this push:
     new d8f05bc  [DOCS] Add Flink tutorial and prepare docs for the next release (#589)
d8f05bc is described below

commit d8f05bca55d916b6ecb1dc16e2df230405e90f40
Author: Jia Yu <ji...@apache.org>
AuthorDate: Sun Mar 6 01:51:33 2022 -0800

    [DOCS] Add Flink tutorial and prepare docs for the next release (#589)
---
 docs/api/flink/Constructor.md     |  69 +++++++
 docs/api/flink/Function.md        | 102 ++++++++++
 docs/api/flink/Overview.md        |  12 ++
 docs/api/flink/Predicate.md       |  29 +++
 docs/setup/flink/install-scala.md |  13 ++
 docs/setup/flink/modules.md       |  15 ++
 docs/setup/flink/platform.md      |   7 +
 docs/setup/maven-coordinates.md   | 294 +++++++++++++++++------------
 docs/setup/modules.md             |  16 ++
 docs/setup/overview.md            |  40 ++--
 docs/setup/platform.md            |   2 +-
 docs/setup/release-notes.md       |  35 ++++
 docs/tutorial/flink/sql.md        | 383 ++++++++++++++++++++++++++++++++++++++
 mkdocs.yml                        | 120 +++++++-----
 14 files changed, 950 insertions(+), 187 deletions(-)

diff --git a/docs/api/flink/Constructor.md b/docs/api/flink/Constructor.md
new file mode 100644
index 0000000..275bdc9
--- /dev/null
+++ b/docs/api/flink/Constructor.md
@@ -0,0 +1,69 @@
+## ST_GeomFromWKT
+
+Introduction: Construct a Geometry from a WKT string
+
+Format:
+`ST_GeomFromWKT (Wkt:string)`
+
+Since: `v1.2.0`
+
+SQL example:
+```SQL
+SELECT ST_GeomFromWKT('POINT(40.7128 -74.0060)') AS geometry
+```
+
+## ST_GeomFromWKB
+
+Introduction: Construct a Geometry from a WKB string
+
+Format:
+`ST_GeomFromWKB (Wkb:string)`
+
+Since: `v1.2.0`
+
+SQL example:
+```SQL
+SELECT ST_GeomFromWKB(polygontable._c0) AS polygonshape
+FROM polygontable
+```
+
+## ST_PointFromText
+
+Introduction: Construct a Point from Text, delimited by Delimiter
+
+Format: `ST_PointFromText (Text:string, Delimiter:char)`
+
+Since: `v1.2.0`
+
+SQL example:
+```SQL
+SELECT ST_PointFromText('40.7128,-74.0060', ',') AS pointshape
+```
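For intuition, the parsing that `ST_PointFromText` performs can be sketched in plain Java. This is an illustration only: the class and method names are hypothetical, and Sedona actually returns a JTS Point object rather than raw doubles.

```java
import java.util.regex.Pattern;

// Hypothetical sketch of ST_PointFromText's parsing step: split the text on
// the delimiter and parse the two coordinates.
public class PointFromTextSketch {
    static double[] pointFromText(String text, char delimiter) {
        // Pattern.quote so delimiters that are regex metacharacters work too
        String[] parts = text.split(Pattern.quote(String.valueOf(delimiter)));
        if (parts.length != 2) {
            throw new IllegalArgumentException("Expected two coordinates, got: " + text);
        }
        return new double[] { Double.parseDouble(parts[0].trim()),
                              Double.parseDouble(parts[1].trim()) };
    }

    public static void main(String[] args) {
        double[] p = pointFromText("40.7128,-74.0060", ',');
        System.out.println("x=" + p[0] + " y=" + p[1]);
    }
}
```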
+
+## ST_PolygonFromText
+
+Introduction: Construct a Polygon from Text, delimited by Delimiter. Path must be closed
+
+Format: `ST_PolygonFromText (Text:string, Delimiter:char)`
+
+Since: `v1.2.0`
+
+SQL example:
+```SQL
+SELECT ST_PolygonFromText('-74.0428197,40.6867969,-74.0421975,40.6921336,-74.0508020,40.6912794,-74.0428197,40.6867969', ',') AS polygonshape
+```
+
+## ST_PolygonFromEnvelope
+
+Introduction: Construct a Polygon from MinX, MinY, MaxX, MaxY.
+
+Format: `ST_PolygonFromEnvelope (MinX:decimal, MinY:decimal, MaxX:decimal, MaxY:decimal)`
+
+Since: `v1.2.0`
+
+SQL example:
+```SQL
+SELECT *
+FROM pointdf
+WHERE ST_Contains(ST_PolygonFromEnvelope(1.0,100.0,1000.0,1100.0), pointdf.pointshape)
+```
diff --git a/docs/api/flink/Function.md b/docs/api/flink/Function.md
new file mode 100644
index 0000000..c559abf
--- /dev/null
+++ b/docs/api/flink/Function.md
@@ -0,0 +1,102 @@
+## ST_Distance
+
+Introduction: Return the Euclidean distance between A and B
+
+Format: `ST_Distance (A:geometry, B:geometry)`
+
+Since: `v1.2.0`
+
+SQL example:
+```SQL
+SELECT ST_Distance(polygondf.countyshape, polygondf.countyshape)
+FROM polygondf
+```
+
+## ST_Transform
+
+Introduction:
+
+Transform the Spatial Reference System / Coordinate Reference System of A, from SourceCRS to TargetCRS
+
+!!!note
+	By default, this function uses lat/lon order. You can use ==ST_FlipCoordinates== to swap X and Y.
+
+!!!note
+	If ==ST_Transform== throws an exception saying "Bursa wolf parameters required", you can disable the error notification by appending a boolean value at the end.
+
+Format: `ST_Transform (A:geometry, SourceCRS:string, TargetCRS:string, [Optional] DisableError)`
+
+Since: `v1.2.0`
+
+SQL example (simple):
+```SQL
+SELECT ST_Transform(polygondf.countyshape, 'epsg:4326','epsg:3857') 
+FROM polygondf
+```
+
+SQL example (with optional parameters):
+```SQL
+SELECT ST_Transform(polygondf.countyshape, 'epsg:4326','epsg:3857', false)
+FROM polygondf
+```
+
+!!!note
+	The detailed EPSG information can be searched on [EPSG.io](https://epsg.io/).
+
+## ST_Buffer
+
+Introduction: Returns a geometry/geography that represents all points whose distance from this Geometry/geography is less than or equal to distance.
+
+Format: `ST_Buffer (A:geometry, buffer: Double)`
+
+Since: `v1.2.0`
+
+SQL example:
+```SQL
+SELECT ST_Buffer(polygondf.countyshape, 1)
+FROM polygondf
+```
+
+## ST_FlipCoordinates
+
+Introduction: Returns a version of the given geometry with X and Y axis flipped.
+
+Format: `ST_FlipCoordinates(A:geometry)`
+
+Since: `v1.2.0`
+
+SQL example:
+```SQL
+SELECT ST_FlipCoordinates(df.geometry)
+FROM df
+```
+
+Input: `POINT (1 2)`
+
+Output: `POINT (2 1)`
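The effect on a simple WKT point can be sketched in plain Java (illustration only with hypothetical names; Sedona operates on geometry objects, not strings):

```java
// Hypothetical sketch of ST_FlipCoordinates' effect: swap the two numbers
// inside a 'POINT (x y)' WKT string.
public class FlipSketch {
    static String flipPoint(String wkt) {
        // Expects the exact form "POINT (x y)"
        String body = wkt.substring(wkt.indexOf('(') + 1, wkt.indexOf(')'));
        String[] xy = body.trim().split("\\s+");
        return "POINT (" + xy[1] + " " + xy[0] + ")";
    }

    public static void main(String[] args) {
        System.out.println(flipPoint("POINT (1 2)")); // POINT (2 1)
    }
}
```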
+
+## ST_GeoHash
+
+Introduction: Returns the GeoHash of the geometry with the given precision
+
+Format: `ST_GeoHash(geom: geometry, precision: int)`
+
+Since: `v1.2.0`
+
+Example: 
+
+Query:
+
+```SQL
+SELECT ST_GeoHash(ST_GeomFromText('POINT(21.427834 52.042576573)'), 5) AS geohash
+```
+
+Result:
+
+```
++-----------------------------+
+|geohash                      |
++-----------------------------+
+|u3r0p                        |
++-----------------------------+
+```
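The result `u3r0p` above can be reproduced with a minimal standard geohash encoder, sketched here in plain Java. This assumes the usual base-32 alphabet and longitude-first bit interleaving; it is an illustration, not Sedona's implementation, though both produce the standard geohash.

```java
// Minimal standard geohash encoder (illustration only).
public class GeohashSketch {
    private static final String BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz";

    static String encode(double lon, double lat, int precision) {
        double[] lonRange = {-180, 180};
        double[] latRange = {-90, 90};
        StringBuilder hash = new StringBuilder();
        boolean evenBit = true; // even bits refine longitude, odd bits latitude
        int bit = 0, idx = 0;
        while (hash.length() < precision) {
            double[] range = evenBit ? lonRange : latRange;
            double value = evenBit ? lon : lat;
            double mid = (range[0] + range[1]) / 2;
            idx <<= 1;
            if (value >= mid) { idx |= 1; range[0] = mid; } else { range[1] = mid; }
            evenBit = !evenBit;
            if (++bit == 5) { hash.append(BASE32.charAt(idx)); bit = 0; idx = 0; }
        }
        return hash.toString();
    }

    public static void main(String[] args) {
        System.out.println(encode(21.427834, 52.042576573, 5)); // u3r0p
    }
}
```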
\ No newline at end of file
diff --git a/docs/api/flink/Overview.md b/docs/api/flink/Overview.md
new file mode 100644
index 0000000..07b5fcc
--- /dev/null
+++ b/docs/api/flink/Overview.md
@@ -0,0 +1,12 @@
+# Introduction
+
+SedonaSQL supports SQL/MM Part 3 Spatial SQL Standard. Please read the programming guide: [Sedona with Flink SQL app](../../tutorial/flink/sql.md).
+
+Sedona includes SQL operators as follows.
+
+* Constructor: Construct a Geometry given an input string or coordinates
+	* Example: ST_GeomFromWKT (string). Create a Geometry from a WKT String.
+* Function: Execute a function on the given column or columns
+	* Example: ST_Distance (A, B). Given two geometries A and B, return the Euclidean distance between A and B.
+* Predicate: Execute a logical judgment on the given columns and return true or false
+	* Example: ST_Contains (A, B). Check if A fully contains B. Return "True" if yes, else return "False".
diff --git a/docs/api/flink/Predicate.md b/docs/api/flink/Predicate.md
new file mode 100644
index 0000000..4e4fce6
--- /dev/null
+++ b/docs/api/flink/Predicate.md
@@ -0,0 +1,29 @@
+## ST_Contains
+
+Introduction: Return true if A fully contains B
+
+Format: `ST_Contains (A:geometry, B:geometry)`
+
+Since: `v1.2.0`
+
+SQL example:
+```SQL
+SELECT * 
+FROM pointdf 
+WHERE ST_Contains(ST_PolygonFromEnvelope(1.0,100.0,1000.0,1100.0), pointdf.arealandmark)
+```
+
+## ST_Intersects
+
+Introduction: Return true if A intersects B
+
+Format: `ST_Intersects (A:geometry, B:geometry)`
+
+Since: `v1.2.0`
+
+SQL example:
+```SQL
+SELECT * 
+FROM pointdf 
+WHERE ST_Intersects(ST_PolygonFromEnvelope(1.0,100.0,1000.0,1100.0), pointdf.arealandmark)
+```
\ No newline at end of file
diff --git a/docs/setup/flink/install-scala.md b/docs/setup/flink/install-scala.md
new file mode 100644
index 0000000..e96a7b3
--- /dev/null
+++ b/docs/setup/flink/install-scala.md
@@ -0,0 +1,13 @@
+Before starting the Sedona journey, you need to make sure your Apache Flink cluster is ready.
+
+Then you can create a self-contained Scala / Java project. A self-contained project allows you to create multiple Scala / Java files and write complex logic in one place.
+
+To use Sedona in your self-contained Flink project, you just need to add Sedona as a dependency in your POM.xml or build.sbt.
+
+1. To add Sedona as a dependency, please read [Sedona Maven Central coordinates](/setup/maven-coordinates)
+2. Read [Sedona Flink guide](/tutorial/flink/sql) and use Sedona Template project to start: [Sedona Template Project](/tutorial/demo/)
+3. Compile your project using Maven. Make sure you obtain the fat jar which packages all dependencies.
+4. Submit your compiled fat jar to Flink cluster. Make sure you are in the root folder of Flink distribution. Then run the following command:
+```
+./bin/flink run /Path/To/YourJar.jar
+```
\ No newline at end of file
diff --git a/docs/setup/flink/modules.md b/docs/setup/flink/modules.md
new file mode 100644
index 0000000..ef7d371
--- /dev/null
+++ b/docs/setup/flink/modules.md
@@ -0,0 +1,15 @@
+# Sedona modules for Apache Flink
+
+| Name |  Introduction|
+|---|---|
+|Core|Spatial query algorithms, data readers/writers|
+|SQL|Spatial SQL function implementation|
+|Flink|Spatial Table and DataStream implementation|
+
+## API availability
+
+|            | **DataStream** | **Table** |
+|:----------:|:------------:|:-----------------:|
+| Scala/Java |✅|✅|
+|   Python   |no|no|
+|      R     |no|no|
\ No newline at end of file
diff --git a/docs/setup/flink/platform.md b/docs/setup/flink/platform.md
new file mode 100644
index 0000000..3c49901
--- /dev/null
+++ b/docs/setup/flink/platform.md
@@ -0,0 +1,7 @@
+Sedona Flink binary releases are compiled with Java 1.8 and Scala 2.12, and tested in the following environments:
+
+=== "Sedona Scala/Java"
+
+	|             | Flink 1.12 | Flink 1.13 | Flink 1.14 |
+	|:-----------:| :---------:|:---------:|:---------:|
+	| Scala 2.12 | ✅  |  ✅  | ✅ |
\ No newline at end of file
diff --git a/docs/setup/maven-coordinates.md b/docs/setup/maven-coordinates.md
index e2e3762..92d23cc 100644
--- a/docs/setup/maven-coordinates.md
+++ b/docs/setup/maven-coordinates.md
@@ -1,68 +1,106 @@
 # Maven Coordinates
 
-Sedona has four modules: `sedona-core, sedona-sql, sedona-viz, sedona-python-adapter`. They have different packing policies. You will need to use `sedona-python-adapter` for Scala, Java and Python API.  ==You may also need geotools-wrapper (see below)==. If you want to use SedonaViz, you will include one more jar: `sedona-viz`.
+Sedona Spark has four modules: `sedona-core, sedona-sql, sedona-viz, sedona-python-adapter`. `sedona-python-adapter` is a fat jar of `sedona-core, sedona-sql` and python adapter code. If you want to use SedonaViz, you will include one more jar: `sedona-viz`.
 
-## Use Sedona fat jars
+Sedona Flink has four modules: `sedona-core, sedona-sql, sedona-python-adapter, sedona-flink`. `sedona-python-adapter` is a fat jar of `sedona-core, sedona-sql`.
 
-=== "Spark 3.0 + Scala 2.12"
-
-	```xml
-	<dependency>
-	  <groupId>org.apache.sedona</groupId>
-	  <artifactId>sedona-python-adapter-3.0_2.12</artifactId>
-	  <version>{{ sedona.current_version }}</version>
-	</dependency>
-	<dependency>
-	  <groupId>org.apache.sedona</groupId>
-	  <artifactId>sedona-viz-3.0_2.12</artifactId>
-	  <version>{{ sedona.current_version }}</version>
-	</dependency>
-	```
-
-=== "Spark 2.4 + Scala 2.11"
-
-	```xml
-	<dependency>
-	  <groupId>org.apache.sedona</groupId>
-	  <artifactId>sedona-python-adapter-2.4_2.11</artifactId>
-	  <version>{{ sedona.current_version }}</version>
-	</dependency>
-	<dependency>
-	  <groupId>org.apache.sedona</groupId>
-	  <artifactId>sedona-viz-2.4_2.11</artifactId>
-	  <version>{{ sedona.current_version }}</version>
-	</dependency>
-	```
-	
-=== "Spark 2.4 + Scala 2.12"
-
-	```xml
-	<dependency>
-	  <groupId>org.apache.sedona</groupId>
-	  <artifactId>sedona-python-adapter-2.4_2.12</artifactId>
-	  <version>{{ sedona.current_version }}</version>
-	</dependency>
-	<dependency>
-	  <groupId>org.apache.sedona</groupId>
-	  <artifactId>sedona-viz-2.4_2.12</artifactId>
-	  <version>{{ sedona.current_version }}</version>
-	</dependency>
-	```
 
-### GeoTools 24.0+
+## Use Sedona fat jars
 
-GeoTools library is required only if you want to use CRS transformation and ShapefileReader. This wrapper library is a re-distriution of GeoTools official jars. The only purpose of this library is to bring GeoTools jars from OSGEO repository to Maven Central. This libary is under GNU Lesser General Public License (LGPL) license so we cannot package it in Sedona official release.
+This is the most common way to use Sedona in your environment. Do not use separate Sedona jars if you are not familiar with Maven.
 
-```xml
-<!-- https://mvnrepository.com/artifact/org.datasyslab/geotools-wrapper -->
-<dependency>
-    <groupId>org.datasyslab</groupId>
-    <artifactId>geotools-wrapper</artifactId>
-    <version>{{ sedona.current_geotools }}</version>
-</dependency>
-```
+The optional GeoTools library is required only if you want to use CRS transformation and ShapefileReader. This wrapper library is a re-distribution of GeoTools official jars. The only purpose of this library is to bring GeoTools jars from the OSGeo repository to Maven Central. This library is under the GNU Lesser General Public License (LGPL), so we cannot package it in the Sedona official release.
 
-### SernetCDF 0.1.0
+!!! abstract "Sedona with Apache Spark"
+
+	=== "Spark 3.0+ and Scala 2.12"
+	
+		```xml
+		<dependency>
+		  <groupId>org.apache.sedona</groupId>
+		  <artifactId>sedona-python-adapter-3.0_2.12</artifactId>
+		  <version>{{ sedona.current_version }}</version>
+		</dependency>
+		<dependency>
+		  <groupId>org.apache.sedona</groupId>
+		  <artifactId>sedona-viz-3.0_2.12</artifactId>
+		  <version>{{ sedona.current_version }}</version>
+		</dependency>
+		<!-- Optional: https://mvnrepository.com/artifact/org.datasyslab/geotools-wrapper -->
+		<dependency>
+		    <groupId>org.datasyslab</groupId>
+		    <artifactId>geotools-wrapper</artifactId>
+		    <version>{{ sedona.current_geotools }}</version>
+		</dependency>
+		```
+	
+	=== "Spark 2.4 and Scala 2.11"
+	
+		```xml
+		<dependency>
+		  <groupId>org.apache.sedona</groupId>
+		  <artifactId>sedona-python-adapter-2.4_2.11</artifactId>
+		  <version>{{ sedona.current_version }}</version>
+		</dependency>
+		<dependency>
+		  <groupId>org.apache.sedona</groupId>
+		  <artifactId>sedona-viz-2.4_2.11</artifactId>
+		  <version>{{ sedona.current_version }}</version>
+		</dependency>
+		<!-- Optional: https://mvnrepository.com/artifact/org.datasyslab/geotools-wrapper -->
+		<dependency>
+		    <groupId>org.datasyslab</groupId>
+		    <artifactId>geotools-wrapper</artifactId>
+		    <version>{{ sedona.current_geotools }}</version>
+		</dependency>
+		```
+		
+	=== "Spark 2.4 and Scala 2.12"
+	
+		```xml
+		<dependency>
+		  <groupId>org.apache.sedona</groupId>
+		  <artifactId>sedona-python-adapter-2.4_2.12</artifactId>
+		  <version>{{ sedona.current_version }}</version>
+		</dependency>
+		<dependency>
+		  <groupId>org.apache.sedona</groupId>
+		  <artifactId>sedona-viz-2.4_2.12</artifactId>
+		  <version>{{ sedona.current_version }}</version>
+		</dependency>
+		<!-- Optional: https://mvnrepository.com/artifact/org.datasyslab/geotools-wrapper -->
+		<dependency>
+		    <groupId>org.datasyslab</groupId>
+		    <artifactId>geotools-wrapper</artifactId>
+		    <version>{{ sedona.current_geotools }}</version>
+		</dependency>
+		```
+
+!!! abstract "Sedona with Apache Flink"
+
+	=== "Flink 1.12+ and Scala 2.12"
+	
+		```xml
+		<dependency>
+		  <groupId>org.apache.sedona</groupId>
+		  <artifactId>sedona-python-adapter-3.0_2.12</artifactId>
+		  <version>{{ sedona.current_version }}</version>
+		</dependency>
+		<dependency>
+		  <groupId>org.apache.sedona</groupId>
+		  <artifactId>sedona-flink_2.12</artifactId>
+		  <version>{{ sedona.current_version }}</version>
+		</dependency>
+		<!-- Optional: https://mvnrepository.com/artifact/org.datasyslab/geotools-wrapper -->
+		<dependency>
+		    <groupId>org.datasyslab</groupId>
+		    <artifactId>geotools-wrapper</artifactId>
+		    <version>{{ sedona.current_geotools }}</version>
+		</dependency>
+		```
+
+
+### SernetCDF 0.1.0
 
 For Scala / Java API, it is required only if you want to read HDF/NetCDF files.
 
@@ -83,65 +121,89 @@ Under Apache License 2.0.
 
 ==For Scala and Java users==, if by any chance you don't want to use an uber jar that includes every dependency, you can use the following jars instead. ==Otherwise, please do not continue reading this section.==
 
-=== "Spark 3.0 + Scala 2.12"
-
-	```xml
-	<dependency>
-	  <groupId>org.apache.sedona</groupId>
-	  <artifactId>sedona-core-3.0_2.12</artifactId>
-	  <version>{{ sedona.current_version }}</version>
-	</dependency>
-	<dependency>
-	  <groupId>org.apache.sedona</groupId>
-	  <artifactId>sedona-sql-3.0_2.12</artifactId>
-	  <version>{{ sedona.current_version }}</version>
-	</dependency>
-	<dependency>
-	  <groupId>org.apache.sedona</groupId>
-	  <artifactId>sedona-viz-3.0_2.12</artifactId>
-	  <version>{{ sedona.current_version }}</version>
-	</dependency>
-	```
-
-=== "Spark 2.4 + Scala 2.11"
-
-	```xml
-	<dependency>
-	  <groupId>org.apache.sedona</groupId>
-	  <artifactId>sedona-core-2.4_2.11</artifactId>
-	  <version>{{ sedona.current_version }}</version>
-	</dependency>
-	<dependency>
-	  <groupId>org.apache.sedona</groupId>
-	  <artifactId>sedona-sql-2.4_2.11</artifactId>
-	  <version>{{ sedona.current_version }}</version>
-	</dependency>
-	<dependency>
-	  <groupId>org.apache.sedona</groupId>
-	  <artifactId>sedona-viz-2.4_2.11</artifactId>
-	  <version>{{ sedona.current_version }}</version>
-	</dependency>
-	```
-
-=== "Spark 2.4 + Scala 2.12"
-
-	```xml
-	<dependency>
-	  <groupId>org.apache.sedona</groupId>
-	  <artifactId>sedona-core-2.4_2.12</artifactId>
-	  <version>{{ sedona.current_version }}</version>
-	</dependency>
-	<dependency>
-	  <groupId>org.apache.sedona</groupId>
-	  <artifactId>sedona-sql-2.4_2.12</artifactId>
-	  <version>{{ sedona.current_version }}</version>
-	</dependency>
-	<dependency>
-	  <groupId>org.apache.sedona</groupId>
-	  <artifactId>sedona-viz-2.4_2.12</artifactId>
-	  <version>{{ sedona.current_version }}</version>
-	</dependency>
-	```
+!!! abstract "Sedona with Apache Spark"
+
+	=== "Spark 3.0+ and Scala 2.12"
+	
+		```xml
+		<dependency>
+		  <groupId>org.apache.sedona</groupId>
+		  <artifactId>sedona-core-3.0_2.12</artifactId>
+		  <version>{{ sedona.current_version }}</version>
+		</dependency>
+		<dependency>
+		  <groupId>org.apache.sedona</groupId>
+		  <artifactId>sedona-sql-3.0_2.12</artifactId>
+		  <version>{{ sedona.current_version }}</version>
+		</dependency>
+		<dependency>
+		  <groupId>org.apache.sedona</groupId>
+		  <artifactId>sedona-viz-3.0_2.12</artifactId>
+		  <version>{{ sedona.current_version }}</version>
+		</dependency>
+		```
+	
+	=== "Spark 2.4 and Scala 2.11"
+	
+		```xml
+		<dependency>
+		  <groupId>org.apache.sedona</groupId>
+		  <artifactId>sedona-core-2.4_2.11</artifactId>
+		  <version>{{ sedona.current_version }}</version>
+		</dependency>
+		<dependency>
+		  <groupId>org.apache.sedona</groupId>
+		  <artifactId>sedona-sql-2.4_2.11</artifactId>
+		  <version>{{ sedona.current_version }}</version>
+		</dependency>
+		<dependency>
+		  <groupId>org.apache.sedona</groupId>
+		  <artifactId>sedona-viz-2.4_2.11</artifactId>
+		  <version>{{ sedona.current_version }}</version>
+		</dependency>
+		```
+	
+	=== "Spark 2.4 and Scala 2.12"
+	
+		```xml
+		<dependency>
+		  <groupId>org.apache.sedona</groupId>
+		  <artifactId>sedona-core-2.4_2.12</artifactId>
+		  <version>{{ sedona.current_version }}</version>
+		</dependency>
+		<dependency>
+		  <groupId>org.apache.sedona</groupId>
+		  <artifactId>sedona-sql-2.4_2.12</artifactId>
+		  <version>{{ sedona.current_version }}</version>
+		</dependency>
+		<dependency>
+		  <groupId>org.apache.sedona</groupId>
+		  <artifactId>sedona-viz-2.4_2.12</artifactId>
+		  <version>{{ sedona.current_version }}</version>
+		</dependency>
+		```
+
+!!! abstract "Sedona with Apache Flink"
+
+	=== "Flink 1.12+ and Scala 2.12"
+	
+		```xml
+		<dependency>
+		  <groupId>org.apache.sedona</groupId>
+		  <artifactId>sedona-core-3.0_2.12</artifactId>
+		  <version>{{ sedona.current_version }}</version>
+		</dependency>
+		<dependency>
+		  <groupId>org.apache.sedona</groupId>
+		  <artifactId>sedona-sql-3.0_2.12</artifactId>
+		  <version>{{ sedona.current_version }}</version>
+		</dependency>
+		<dependency>
+		  <groupId>org.apache.sedona</groupId>
+		  <artifactId>sedona-flink_2.12</artifactId>
+		  <version>{{ sedona.current_version }}</version>
+		</dependency>
+		```
 
 ### LocationTech JTS-core 1.18.0+
 
diff --git a/docs/setup/modules.md b/docs/setup/modules.md
new file mode 100644
index 0000000..82fd96d
--- /dev/null
+++ b/docs/setup/modules.md
@@ -0,0 +1,16 @@
+# Sedona modules for Apache Spark
+
+| Name  |  API |  Introduction|
+|---|---|---|
+|Core  | RDD  | SpatialRDDs and Query Operators. |
+|SQL  | SQL/DataFrame  |SQL interfaces for Sedona core.|
+|Viz |  RDD, SQL/DataFrame | Visualization for Spatial RDD and DataFrame|
+|Zeppelin |  Apache Zeppelin | Plugin for Apache Zeppelin 0.8.1+|
+
+## API availability
+
+|            | **Core/RDD** | **DataFrame/SQL** | **Viz RDD/SQL** |
+|:----------:|:------------:|:-----------------:|:---------------:|
+| Scala/Java |✅|✅|✅|
+|   Python   |✅|✅|SQL only|
+|      R     |✅|✅|✅|
\ No newline at end of file
diff --git a/docs/setup/overview.md b/docs/setup/overview.md
index 4121b6a..59a6208 100644
--- a/docs/setup/overview.md
+++ b/docs/setup/overview.md
@@ -1,33 +1,33 @@
-## Companies are using Sedona 
+# Who is using Sedona?
 
-[<img src="https://www.dataiku.com/static/img/partners/LOGO-Blue-DME-PNG-3.png" width="200">](https://www.bluedme.com/) &nbsp;&nbsp; 
-[<img src="https://images.ukfast.co.uk/comms/news/businesscloud/photos/14-08-2018/gyana.jpg" width="150">](https://www.gyana.co.uk/) &nbsp;&nbsp; 
-[<img src="https://149448277.v2.pressablecdn.com/wp-content/uploads/2021/03/guy-carpenter-logo-1.png" width="130">](https://guycarp.com/) &nbsp;&nbsp; 
+[<img src="https://www.dataiku.com/static/img/partners/LOGO-Blue-DME-PNG-3.png" width="100">](https://www.bluedme.com/) &nbsp;&nbsp; 
+[<img src="https://images.ukfast.co.uk/comms/news/businesscloud/photos/14-08-2018/gyana.jpg" width="100">](https://www.gyana.co.uk/) &nbsp;&nbsp; 
+[<img src="https://149448277.v2.pressablecdn.com/wp-content/uploads/2021/03/guy-carpenter-logo-1.png" width="80">](https://guycarp.com/) &nbsp;&nbsp;[<img src="../../image/meituan-bike.png" width="170">](http://t8.pub) and more...
 
+|Download statistics| **Maven** | **PyPI** | **CRAN** |
+|:-------------:|:------------------:|:--------------:|:---------:|
+| Apache Sedona |         80k/month        |[![Downloads](https://static.pepy.tech/personalized-badge/apache-sedona?period=month&units=international_system&left_color=black&right_color=brightgreen&left_text=downloads/month)](https://pepy.tech/project/apache-sedona) [![Downloads](https://static.pepy.tech/personalized-badge/apache-sedona?period=total&units=international_system&left_color=black&right_color=brightgreen&left_text=total%20downloads)](https://pepy.tech/project/apache-sedona)|[! [...]
+|    Archived GeoSpark releases   |300k/month|[![Downloads](https://static.pepy.tech/personalized-badge/geospark?period=month&units=international_system&left_color=black&right_color=brightgreen&left_text=downloads/month)](https://pepy.tech/project/geospark)[![Downloads](https://static.pepy.tech/personalized-badge/geospark?period=total&units=international_system&left_color=black&right_color=brightgreen&left_text=total%20downloads)](https://pepy.tech/project/geospark)|           |
 
-[<img src="../../image/meituan-bike.png" width="170">](http://t8.pub) and more...
+# What can Sedona do?
 
-Please make a Pull Request to add yourself!
+## Distributed spatial datasets
+- [x] Spatial RDD on Spark
+- [x] Spatial DataFrame/SQL on Spark
+- [x] Spatial DataStream on Flink
+- [x] Spatial Table/SQL on Flink
 
-## Sedona modules
-
-| Name  |  API |  Introduction|
-|---|---|---|
-|Core  | RDD  | SpatialRDDs and Query Operators. |
-|SQL  | SQL/DataFrame  |SQL interfaces for Sedona core.|
-|Viz |  RDD, SQL/DataFrame | Visualization for Spatial RDD and DataFrame|
-|Zeppelin |  Apache Zeppelin | Plugin for Apache Zeppelin 0.8.1+|
-
-## Features
-
-- [x] Spatial RDD
-- [x] Spatial DataFrame/SQL
+## Complex spatial objects
 - [x] Vector geometries / trajectories
 - [x] Raster images with Map Algebra
 - [x] Various input formats: CSV, TSV, WKT, WKB, GeoJSON, Shapefile, GeoTIFF, NetCDF/HDF
+
+## Distributed spatial queries
 - [x] Spatial query: range query, range join query, distance join query, K Nearest Neighbor query
 - [x] Spatial index: R-Tree, Quad-Tree
+
+## Rich spatial analytics tools 
 - [x] Coordinate Reference System / Spatial Reference System Transformation
 - [x] High resolution map generation: [Visualize Spatial DataFrame/RDD](../../tutorial/viz)
 - [x] Apache Zeppelin integration
-- [x] Support Scala, Java, Python, R
+- [x] Support Scala, Java, Python, R
\ No newline at end of file
diff --git a/docs/setup/platform.md b/docs/setup/platform.md
index f3a5f00..ed60958 100644
--- a/docs/setup/platform.md
+++ b/docs/setup/platform.md
@@ -22,6 +22,6 @@ Sedona binary releases are compiled by Java 1.8 and Scala 2.11/2.12 and tested i
 	|:-----------:| :---------:|:---------:|:---------:|:---------:|
 	| Scala 2.11  |  ✅  |  not tested  | not tested  | not tested  |
 	| Scala 2.12 | not tested  |  ✅  | ✅ |  ✅ |
-	
+
 !!!warning
 	Sedona Scala/Java/Python/R also work with Spark 2.3, Python 3.6 but we have no plan to officially support it.
\ No newline at end of file
diff --git a/docs/setup/release-notes.md b/docs/setup/release-notes.md
index e06bfa4..cf6ad34 100644
--- a/docs/setup/release-notes.md
+++ b/docs/setup/release-notes.md
@@ -1,3 +1,38 @@
+## Sedona 1.2.0
+
+This version is a major release on the Sedona 1.2.0 line. It includes bug fixes and a major new feature: Sedona with Apache Flink.
+
+### RDD
+
+Bug fix:
+
+* [SEDONA-18](https://issues.apache.org/jira/browse/SEDONA-18): Fix an error reading Shapefile
+* [SEDONA-73](https://issues.apache.org/jira/browse/SEDONA-73): Exclude scala-library from scala-collection-compat
+
+Improvement:
+
+* [SEDONA-77](https://issues.apache.org/jira/browse/SEDONA-77): Refactor format readers and spatial partitioning functions into standalone libraries so they can be used by Flink and others.
+
+### SQL
+
+New features:
+
+* [SEDONA-4](https://issues.apache.org/jira/browse/SEDONA-4): Handle nulls in SQL functions
+* [SEDONA-65](https://issues.apache.org/jira/browse/SEDONA-65): Create ST_Difference function
+* [SEDONA-68](https://issues.apache.org/jira/browse/SEDONA-68): Add ST_Collect function
+* [SEDONA-82](https://issues.apache.org/jira/browse/SEDONA-82): Create ST_SymmDifference function
+* [SEDONA-75](https://issues.apache.org/jira/browse/SEDONA-75): Add support for "3D" geometries: Preserve Z coordinates on geometries when serializing, ST_AsText, ST_Z, ST_3DDistance
+* [SEDONA-86](https://issues.apache.org/jira/browse/SEDONA-86): Support empty geometries in ST_AsBinary and ST_AsEWKB
+
+### Flink
+
+Major update:
+
+* [SEDONA-80](https://issues.apache.org/jira/browse/SEDONA-80): Geospatial stream processing support in Flink Table API
+* [SEDONA-85](https://issues.apache.org/jira/browse/SEDONA-85): ST_Geohash function in Flink
+* [SEDONA-87](https://issues.apache.org/jira/browse/SEDONA-87): Support Flink Table and DataStream conversion
+
+
 ## Sedona 1.1.1
 
 This version is a maintenance release on Sedona 1.1.X line. It includes bug fixes and a few new functions.
diff --git a/docs/tutorial/flink/sql.md b/docs/tutorial/flink/sql.md
new file mode 100644
index 0000000..bef96c8
--- /dev/null
+++ b/docs/tutorial/flink/sql.md
@@ -0,0 +1,383 @@
+This page outlines the steps to manage spatial data using SedonaSQL. ==The example code is written in Java but also works for Scala==.
+
+SedonaSQL supports SQL/MM Part 3 Spatial SQL Standard. It includes several kinds of SQL operators (constructors, functions, and predicates), all of which can be directly called through:
+```Java
+Table myTable = tableEnv.sqlQuery("YOUR_SQL");
+```
+
+Detailed SedonaSQL APIs are available here: [SedonaSQL API](/api/flink/Overview)
+
+## Set up dependencies
+
+1. Read [Sedona Maven Central coordinates](/setup/maven-coordinates)
+2. Add Sedona dependencies in build.sbt or pom.xml.
+3. Add [Flink dependencies](https://nightlies.apache.org/flink/flink-docs-master/docs/dev/configuration/overview/) in build.sbt or pom.xml.
+
+## Initiate Stream Environment
+Use the following code to initiate your `StreamExecutionEnvironment` at the beginning:
+```Java
+StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+EnvironmentSettings settings = EnvironmentSettings.newInstance().inStreamingMode().build();
+StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env, settings);
+```
+
+## Register SedonaSQL
+
+Add the following lines after your `StreamExecutionEnvironment` and `StreamTableEnvironment` declarations:
+
+```Java
+SedonaFlinkRegistrator.registerType(env);
+SedonaFlinkRegistrator.registerFunc(tableEnv);
+```
+
+!!!warning
+	Sedona has a suite of well-written geometry and index serializers. Forgetting to enable these serializers will lead to high memory consumption.
+
+These calls register the Sedona user-defined types and user-defined functions.
+
+## Create a Geometry type column
+
+All geometrical operations in SedonaSQL are on Geometry type objects. Therefore, before running any queries, you need to create a Geometry type column in your Table.
+
+Assume you have a Flink Table `tbl` like this:
+
+```
++----+--------------------------------+--------------------------------+
+| op |                   geom_polygon |                   name_polygon |
++----+--------------------------------+--------------------------------+
+| +I | POLYGON ((-0.5 -0.5, -0.5 0... |                       polygon0 |
+| +I | POLYGON ((0.5 0.5, 0.5 1.5,... |                       polygon1 |
+| +I | POLYGON ((1.5 1.5, 1.5 2.5,... |                       polygon2 |
+| +I | POLYGON ((2.5 2.5, 2.5 3.5,... |                       polygon3 |
+| +I | POLYGON ((3.5 3.5, 3.5 4.5,... |                       polygon4 |
+| +I | POLYGON ((4.5 4.5, 4.5 5.5,... |                       polygon5 |
+| +I | POLYGON ((5.5 5.5, 5.5 6.5,... |                       polygon6 |
+| +I | POLYGON ((6.5 6.5, 6.5 7.5,... |                       polygon7 |
+| +I | POLYGON ((7.5 7.5, 7.5 8.5,... |                       polygon8 |
+| +I | POLYGON ((8.5 8.5, 8.5 9.5,... |                       polygon9 |
++----+--------------------------------+--------------------------------+
+10 rows in set
+```
+
+You can create a Table with a Geometry type column as follows:
+
+```Java
+tableEnv.createTemporaryView("myTable", tbl);
+Table geomTbl = tableEnv.sqlQuery("SELECT ST_GeomFromWKT(geom_polygon) AS geom_polygon, name_polygon FROM myTable");
+geomTbl.execute().print();
+```
+
+The output will be:
+
+```
++----+--------------------------------+--------------------------------+
+| op |                   geom_polygon |                   name_polygon |
++----+--------------------------------+--------------------------------+
+| +I | POLYGON ((-0.5 -0.5, -0.5 0... |                       polygon0 |
+| +I | POLYGON ((0.5 0.5, 0.5 1.5,... |                       polygon1 |
+| +I | POLYGON ((1.5 1.5, 1.5 2.5,... |                       polygon2 |
+| +I | POLYGON ((2.5 2.5, 2.5 3.5,... |                       polygon3 |
+| +I | POLYGON ((3.5 3.5, 3.5 4.5,... |                       polygon4 |
+| +I | POLYGON ((4.5 4.5, 4.5 5.5,... |                       polygon5 |
+| +I | POLYGON ((5.5 5.5, 5.5 6.5,... |                       polygon6 |
+| +I | POLYGON ((6.5 6.5, 6.5 7.5,... |                       polygon7 |
+| +I | POLYGON ((7.5 7.5, 7.5 8.5,... |                       polygon8 |
+| +I | POLYGON ((8.5 8.5, 8.5 9.5,... |                       polygon9 |
++----+--------------------------------+--------------------------------+
+10 rows in set
+```
+
+Although the output looks the same as the input, the type of the column geom_polygon has actually been changed to the ==Geometry== type.
+
+To verify this, use the following code to print the schema of the Table:
+
+```Java
+geomTbl.printSchema()
+```
+
+The output will be like this:
+
+```
+(
+  `geom_polygon` RAW('org.locationtech.jts.geom.Geometry', '...'),
+  `name_polygon` STRING
+)
+```
+
+!!!note
+	SedonaSQL provides many functions to create a Geometry column. Please read [SedonaSQL constructor API](/api/flink/Constructor).
+
+## Transform the Coordinate Reference System
+
+Sedona doesn't control the coordinate unit (degree-based or meter-based) of the geometries in a Geometry column. All distances computed by SedonaSQL are in the same unit as the coordinates of those geometries.
+
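A practical consequence: if your geometries are stored in degree-based WGS84 coordinates, a planar distance such as `ST_Distance` is reported in degrees, not meters. The contrast can be sketched in plain Java, with no Flink or Sedona required (the spherical Earth radius of 6371 km used in the haversine comparison is an assumption of this sketch):

```Java
public class DistanceUnits {

    // Planar distance in the raw coordinate unit: for WGS84 lat/lon data
    // this is "degrees", which is what a planar distance computation yields.
    static double euclidean(double lat1, double lon1, double lat2, double lon2) {
        return Math.hypot(lat2 - lat1, lon2 - lon1);
    }

    // Great-circle (haversine) distance in kilometers, assuming a
    // spherical Earth of radius 6371 km.
    static double haversineKm(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.pow(Math.sin(dLat / 2), 2)
                + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                * Math.pow(Math.sin(dLon / 2), 2);
        return 6371.0 * 2 * Math.asin(Math.sqrt(a));
    }

    public static void main(String[] args) {
        // Two points from the example Table later on this page: (32, -118) and (33, -117)
        System.out.println(euclidean(32, -118, 33, -117));   // about 1.414 "degrees"
        System.out.println(haversineKm(32, -118, 33, -117)); // about 145 km
    }
}
```

In other words, reproject to a meter-based CRS first if you need distances in meters.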
+To convert the Coordinate Reference System of the Geometry column created before, use the following code:
+
+```Java
+Table geomTbl3857 = tableEnv.sqlQuery("SELECT ST_Transform(geom_polygon, 'epsg:4326', 'epsg:3857') AS geom_polygon, name_polygon FROM myTable");
+geomTbl3857.execute().print();
+```
+
+The first EPSG code, EPSG:4326, in `ST_Transform` is the source CRS of the geometries. It is WGS84, the most common degree-based CRS.
+
+The second EPSG code, EPSG:3857, is the target CRS of the geometries. It is the most common meter-based CRS.
+
+This `ST_Transform` call transforms the CRS of these geometries from EPSG:4326 to EPSG:3857. Detailed CRS information can be found on [EPSG.io](https://epsg.io/).
+
+!!!note
+	Read [SedonaSQL ST_Transform API](/api/flink/Function/#st_transform) to learn more about `ST_Transform`.
+
+For example, a Table that has coordinates in the US will look like this.
+
+Before the transformation:
+```
++----+--------------------------------+--------------------------------+
+| op |                     geom_point |                     name_point |
++----+--------------------------------+--------------------------------+
+| +I |                POINT (32 -118) |                          point |
+| +I |                POINT (33 -117) |                          point |
+| +I |                POINT (34 -116) |                          point |
+| +I |                POINT (35 -115) |                          point |
+| +I |                POINT (36 -114) |                          point |
+| +I |                POINT (37 -113) |                          point |
+| +I |                POINT (38 -112) |                          point |
+| +I |                POINT (39 -111) |                          point |
+| +I |                POINT (40 -110) |                          point |
+| +I |                POINT (41 -109) |                          point |
++----+--------------------------------+--------------------------------+
+```
+
+After the transformation:
+
+```
++----+--------------------------------+--------------------------------+
+| op |                            _c0 |                     name_point |
++----+--------------------------------+--------------------------------+
+| +I | POINT (-13135699.91360628 3... |                          point |
+| +I | POINT (-13024380.422813008 ... |                          point |
+| +I | POINT (-12913060.932019735 ... |                          point |
+| +I | POINT (-12801741.44122646 4... |                          point |
+| +I | POINT (-12690421.950433187 ... |                          point |
+| +I | POINT (-12579102.459639912 ... |                          point |
+| +I | POINT (-12467782.96884664 4... |                          point |
+| +I | POINT (-12356463.478053367 ... |                          point |
+| +I | POINT (-12245143.987260092 ... |                          point |
+| +I | POINT (-12133824.496466817 ... |                          point |
++----+--------------------------------+--------------------------------+
+```
+
+
+## Run spatial queries
+
+After creating a Geometry type column, you are able to run spatial queries.
+
+### Range query
+
+Use ==ST_Contains==, ==ST_Intersects== and so on to run a range query over a single column.
+
+The following example finds all geometries that are within the given polygon:
+
+```Java
+Table geomTable = tableEnv.sqlQuery(
+    "SELECT * FROM myTable "
+        + "WHERE ST_Contains(ST_PolygonFromEnvelope(1.0, 100.0, 1000.0, 1100.0), geom_polygon)");
+geomTable.execute().print();
+```
+
+!!!note
+	Read [SedonaSQL Predicate API](/api/flink/Predicate) to learn about the available spatial query predicates.
+	
+### KNN query
+
+Use ==ST_Distance== to calculate the distance and rank results by it.
+
+The following code returns the 5 nearest neighbors of the given polygon:
+
+```Java
+geomTable = tableEnv.sqlQuery(
+    "SELECT name_polygon, ST_Distance(ST_PolygonFromEnvelope(1.0, 100.0, 1000.0, 1100.0), geom_polygon) AS distance "
+        + "FROM myTable ORDER BY distance ASC LIMIT 5");
+geomTable.execute().print();
+```
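Conceptually this KNN query is just a distance computation followed by an ascending sort and a limit. A minimal stdlib-Java sketch of the same `ORDER BY distance ASC LIMIT k` semantics (the class and method names are illustrative, not a Sedona API):

```Java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class Knn {
    // Compute the distance of every row to the query point, sort ascending,
    // and keep the first k names: the semantics of ORDER BY distance ASC LIMIT k.
    static List<String> kNearest(String[] names, double[][] pts, double qx, double qy, int k) {
        List<Integer> idx = new ArrayList<>();
        for (int i = 0; i < names.length; i++) idx.add(i);
        idx.sort(Comparator.comparingDouble(i -> Math.hypot(pts[i][0] - qx, pts[i][1] - qy)));
        List<String> out = new ArrayList<>();
        for (int i = 0; i < Math.min(k, idx.size()); i++) out.add(names[idx.get(i)]);
        return out;
    }

    public static void main(String[] args) {
        String[] names = {"polygon0", "polygon1", "polygon5", "polygon9"};
        double[][] pts = {{0, 0}, {1, 1}, {5, 5}, {9, 9}};
        System.out.println(kNearest(names, pts, 0.2, 0.2, 2)); // [polygon0, polygon1]
    }
}
```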
+
+## Convert Spatial Table to Spatial DataStream
+
+### Get DataStream
+
+Use TableEnv's `toDataStream` function:
+
+```Java
+DataStream<Row> geomStream = tableEnv.toDataStream(geomTable);
+```
+
+### Retrieve Geometries
+
+Then get the Geometry from each Row object using a map function:
+
+```Java
+import org.locationtech.jts.geom.Geometry;
+
+DataStream<Geometry> geometries = geomStream.map(new MapFunction<Row, Geometry>() {
+            @Override
+            public Geometry map(Row value) throws Exception {
+                return (Geometry) value.getField(0);
+            }
+        });
+geometries.print();
+```
+
+The output will be
+
+```
+14> POLYGON ((1.5 1.5, 1.5 2.5, 2.5 2.5, 2.5 1.5, 1.5 1.5))
+2> POLYGON ((5.5 5.5, 5.5 6.5, 6.5 6.5, 6.5 5.5, 5.5 5.5))
+5> POLYGON ((8.5 8.5, 8.5 9.5, 9.5 9.5, 9.5 8.5, 8.5 8.5))
+16> POLYGON ((3.5 3.5, 3.5 4.5, 4.5 4.5, 4.5 3.5, 3.5 3.5))
+12> POLYGON ((-0.5 -0.5, -0.5 0.5, 0.5 0.5, 0.5 -0.5, -0.5 -0.5))
+13> POLYGON ((0.5 0.5, 0.5 1.5, 1.5 1.5, 1.5 0.5, 0.5 0.5))
+15> POLYGON ((2.5 2.5, 2.5 3.5, 3.5 3.5, 3.5 2.5, 2.5 2.5))
+3> POLYGON ((6.5 6.5, 6.5 7.5, 7.5 7.5, 7.5 6.5, 6.5 6.5))
+1> POLYGON ((4.5 4.5, 4.5 5.5, 5.5 5.5, 5.5 4.5, 4.5 4.5))
+4> POLYGON ((7.5 7.5, 7.5 8.5, 8.5 8.5, 8.5 7.5, 7.5 7.5))
+```
+
+### Store non-spatial attributes in Geometries
+
+You can concatenate other non-spatial attributes and store them in the Geometry's `userData` field so you can recover them later on. The `userData` field can hold any object type.
+
+```Java
+import org.locationtech.jts.geom.Geometry;
+
+DataStream<Geometry> geometries = geomStream.map(new MapFunction<Row, Geometry>() {
+            @Override
+            public Geometry map(Row value) throws Exception {
+                Geometry geom = (Geometry) value.getField(0);
+                geom.setUserData(value.getField(1));
+                return geom;
+            }
+        });
+geometries.print();
+```
+
+The `print` command will not print out the `userData` field, but you can retrieve it this way:
+
+```Java
+import org.locationtech.jts.geom.Geometry;
+
+geometries.map(new MapFunction<Geometry, String>() {
+            @Override
+            public String map(Geometry value) throws Exception
+            {
+                return (String) value.getUserData();
+            }
+        }).print();
+```
+
+The output will be
+
+```
+13> polygon9
+6> polygon2
+10> polygon6
+11> polygon7
+5> polygon1
+12> polygon8
+8> polygon4
+4> polygon0
+7> polygon3
+9> polygon5
+```
+	
+## Convert Spatial DataStream to Spatial Table
+
+### Create Geometries using Sedona FormatUtils
+
+* Create a Geometry from a WKT string
+
+```Java
+import org.apache.sedona.core.enums.FileDataSplitter;
+import org.apache.sedona.core.formatMapper.FormatUtils;
+import org.locationtech.jts.geom.Geometry;
+
+DataStream<Geometry> geometries = text.map(new MapFunction<String, Geometry>() {
+            @Override
+            public Geometry map(String value) throws Exception
+            {
+                FormatUtils formatUtils = new FormatUtils(FileDataSplitter.WKT, false);
+                return formatUtils.readGeometry(value);
+            }
+        });
+```
+
+* Create a Point from a String `1.1, 2.2`. Use `,` as the delimiter.
+
+```Java
+import org.apache.sedona.core.enums.GeometryType;
+import org.apache.sedona.core.formatMapper.FormatUtils;
+import org.locationtech.jts.geom.Geometry;
+
+DataStream<Geometry> geometries = text.map(new MapFunction<String, Geometry>() {
+            @Override
+            public Geometry map(String value) throws Exception
+            {
+                FormatUtils<Geometry> formatUtils = new FormatUtils(",", false, GeometryType.POINT);
+                return formatUtils.readGeometry(value);
+            }
+        });
+```
+
+* Create a Polygon from a String `1.1, 1.1, 10.1, 10.1`. This is a rectangle with (1.1, 1.1) and (10.1, 10.1) as its min/max corners.
+
+```Java
+import org.locationtech.jts.geom.Coordinate;
+import org.locationtech.jts.geom.Geometry;
+import org.locationtech.jts.geom.GeometryFactory;
+
+DataStream<Geometry> geometries = text.map(new MapFunction<String, Geometry>() {
+            @Override
+            public Geometry map(String value) throws Exception
+            {
+                // Write some code to get four double type values: minX, minY, maxX, maxY
+                ...
+                Coordinate[] coordinates = new Coordinate[5];
+                coordinates[0] = new Coordinate(minX, minY);
+                coordinates[1] = new Coordinate(minX, maxY);
+                coordinates[2] = new Coordinate(maxX, maxY);
+                coordinates[3] = new Coordinate(maxX, minY);
+                coordinates[4] = coordinates[0];
+                GeometryFactory geometryFactory = new GeometryFactory();
+                return geometryFactory.createPolygon(coordinates);
+            }
+        });
+```
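The elided step above, turning a String like `1.1, 1.1, 10.1, 10.1` into the four doubles, can be sketched in plain Java (the helper name `parseEnvelope` is illustrative, not a Sedona API):

```Java
public class EnvelopeParser {
    // Parse "minX, minY, maxX, maxY" into four doubles.
    // Returns {minX, minY, maxX, maxY}.
    static double[] parseEnvelope(String line) {
        String[] parts = line.split(",");
        if (parts.length != 4) {
            throw new IllegalArgumentException("Expected 4 values, got " + parts.length);
        }
        double[] values = new double[4];
        for (int i = 0; i < 4; i++) {
            values[i] = Double.parseDouble(parts[i].trim());
        }
        return values;
    }

    public static void main(String[] args) {
        double[] env = parseEnvelope("1.1, 1.1, 10.1, 10.1");
        System.out.println(env[0] + " " + env[3]); // 1.1 10.1
    }
}
```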
+
+### Create Row objects
+
+Wrap each geometry in a Flink Row and collect the Rows into a `geomStream`. Note that you can put other attributes in the Row as well. This example uses a constant value `myName` for all geometries.
+
+```Java
+import org.apache.sedona.core.enums.FileDataSplitter;
+import org.apache.sedona.core.formatMapper.FormatUtils;
+import org.apache.flink.types.Row;
+import org.locationtech.jts.geom.Geometry;
+
+DataStream<Row> geomStream = text.map(new MapFunction<String, Row>() {
+            @Override
+            public Row map(String value) throws Exception
+            {
+                FormatUtils formatUtils = new FormatUtils(FileDataSplitter.WKT, false);
+                return Row.of(formatUtils.readGeometry(value), "myName");
+            }
+        });
+```
+
+### Get Spatial Table
+
+Use TableEnv's fromDataStream function, with two column names `geom` and `geom_name`.
+```Java
+import static org.apache.flink.table.api.Expressions.$;
+
+Table geomTable = tableEnv.fromDataStream(geomStream, $("geom"), $("geom_name"));
+```
diff --git a/mkdocs.yml b/mkdocs.yml
index 0bc1eab..63f5534 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -1,64 +1,83 @@
 site_name: Apache Sedona™ (incubating) 
-site_description: Apache Sedona (incubating) is a cluster computing system for processing large-scale spatial data. Sedona extends Apache Spark / SparkSQL with a set of out-of-the-box Spatial Resilient Distributed Datasets / SpatialSQL that efficiently load, process, and analyze large-scale spatial data across machines.
+site_description: Apache Sedona (incubating) is a cluster computing system for processing large-scale spatial data. Sedona extends Apache Spark and Apache Flink with a set of out-of-the-box distributed Spatial Datasets and Spatial SQL that efficiently load, process, and analyze large-scale spatial data across machines.
 nav:
     - Home: index.md
     - Setup:
       - Overview: setup/overview.md
-      - Supported platforms: setup/platform.md
-      - Maven Central coordinate: setup/maven-coordinates.md      
-      - Installation:
+      - Supported platforms: 
+        - Sedona with Apache Spark:
+          - Modules: setup/modules.md
+          - Language wrappers: setup/platform.md
+        - Sedona with Apache Flink:
+          - Modules: setup/flink/modules.md
+          - Language wrappers: setup/flink/platform.md
+      - Maven Central coordinate: setup/maven-coordinates.md
+      - Install with Apache Spark:
         - Install Sedona Scala/Java: setup/install-scala.md
         - Install Sedona Python: setup/install-python.md
         - Install Sedona R: setup/install-r.md
         - Install Sedona-Zeppelin: setup/zeppelin.md
         - Install on Databricks: setup/databricks.md
+        - Set up Spark cluster: setup/cluster.md
+      - Install with Apache Flink:
+        - Install Sedona Scala/Java: setup/flink/install-scala.md
       - Release notes: setup/release-notes.md      
-      - Set up Spark cluser: setup/cluster.md
       - Compile the code: setup/compile.md
       - Publish the code: setup/publish.md
     - Download: download.md
     - Programming Guides:
-      - Spatial RDD app:
-        - Scala/Java: tutorial/rdd.md
-        - Python: tutorial/core-python.md
-        - R: tutorial/rdd-r.md
-      - Spatial SQL app:
-        - Scala/Java: tutorial/sql.md
-        - Pure SQL: tutorial/sql-pure-sql.md
-        - Python: tutorial/sql-python.md
-        - R: tutorial/sql-r.md
-        - Raster data - Map Algebra: tutorial/raster.md
-      - Map visualization SQL app:
-        - Scala/Java: tutorial/viz.md
-        - Use Apache Zeppelin: tutorial/zeppelin.md
-        - R: tutorial/viz-r.md
-        - Gallery: tutorial/viz-gallery.md
+      - Sedona with Apache Spark:
+        - Spatial SQL app:
+          - Scala/Java: tutorial/sql.md
+          - Pure SQL: tutorial/sql-pure-sql.md
+          - Python: tutorial/sql-python.md
+          - R: tutorial/sql-r.md
+          - Raster data - Map Algebra: tutorial/raster.md      
+        - Spatial RDD app:
+          - Scala/Java: tutorial/rdd.md
+          - Python: tutorial/core-python.md
+          - R: tutorial/rdd-r.md
+        - Map visualization SQL app:
+          - Scala/Java: tutorial/viz.md
+          - Use Apache Zeppelin: tutorial/zeppelin.md
+          - R: tutorial/viz-r.md
+          - Gallery: tutorial/viz-gallery.md
+        - Performance tuning:
+          - Benchmark: tutorial/benchmark.md            
+          - Tune RDD application: tutorial/Advanced-Tutorial-Tune-your-Application.md
+      - Sedona with Apache Flink:
+        - Spatial SQL app:
+          - Scala/Java: tutorial/flink/sql.md
       - Examples:
-        - Scala/Java: tutorial/demo.md
-        - Python: tutorial/jupyter-notebook.md
-      - Performance tuning:
-        - Benchmark: tutorial/benchmark.md            
-        - Tune RDD application: tutorial/Advanced-Tutorial-Tune-your-Application.md
+          - Scala/Java: tutorial/demo.md
+          - Python: tutorial/jupyter-notebook.md
     - API Docs:
-      - RDD (core):
-        - Scala/Java doc: api/java-api.md
-        - Python doc: api/python-api.md
-        - R doc: api/r-api.md
-      - SQL:
-        - Quick start: api/sql/Overview.md
-        - Vector data:
-            - Constructor: api/sql/Constructor.md
-            - Function: api/sql/Function.md
-            - Predicate: api/sql/Predicate.md
-            - Aggregate function: api/sql/AggregateFunction.md
-            - Join query (optimizer): api/sql/Optimizer.md
-        - Raster data:
-            - Raster input and output: api/sql/Raster-loader.md
-            - Raster operators: api/sql/Raster-operators.md
-        - Parameter: api/sql/Parameter.md
-      - Viz:
-        - DataFrame/SQL: api/viz/sql.md
-        - RDD: api/viz/java-api.md
+      - Sedona with Apache Spark:
+        - SQL:
+          - Quick start: api/sql/Overview.md
+          - Vector data:
+              - Constructor: api/sql/Constructor.md
+              - Function: api/sql/Function.md
+              - Predicate: api/sql/Predicate.md
+              - Aggregate function: api/sql/AggregateFunction.md
+              - Join query (optimizer): api/sql/Optimizer.md
+          - Raster data:
+              - Raster input and output: api/sql/Raster-loader.md
+              - Raster operators: api/sql/Raster-operators.md
+          - Parameter: api/sql/Parameter.md
+        - RDD (core):
+          - Scala/Java doc: api/java-api.md
+          - Python doc: api/python-api.md
+          - R doc: api/r-api.md
+        - Viz:
+          - DataFrame/SQL: api/viz/sql.md
+          - RDD: api/viz/java-api.md
+      - Sedona with Apache Flink:
+        - SQL:
+          - Overview: api/flink/Overview.md
+          - Constructor: api/flink/Constructor.md
+          - Function: api/flink/Function.md
+          - Predicate: api/flink/Predicate.md
     - Community:
       - Community: community/contact.md 
       - Contributing rule: community/rule.md
@@ -137,11 +156,11 @@ extra:
     - icon: fontawesome/brands/twitter
       link: 'https://twitter.com/ApacheSedona'
   sedona:
-    current_version: 1.1.1-incubating
-    current_git_tag: sedona-1.1.1-incubating-rc1
-    current_rc: 1.1.1-incubating-rc1
-    current_snapshot: 1.2.0-incubating-SNAPSHOT
-    next_version: 1.2.0-incubating
+    current_version: 1.2.0-incubating
+    current_git_tag: sedona-1.2.0-incubating-rc1
+    current_rc: 1.2.0-incubating-rc1
+    current_snapshot: 1.2.1-incubating-SNAPSHOT
+    next_version: 1.2.1-incubating
     current_geotools: 1.1.0-25.2
 copyright: Apache Sedona, Apache Incubator, Apache, the Apache feather logo, and the Apache Incubator project logo are trademarks or registered trademarks of The Apache Software Foundation. Copyright © 2021 The Apache Software Foundation
 markdown_extensions:
@@ -164,7 +183,8 @@ markdown_extensions:
   - pymdownx.mark
   - pymdownx.smartsymbols
   - pymdownx.superfences
-  - pymdownx.tabbed
+  - pymdownx.tabbed:
+      alternate_style: true 
   - pymdownx.tasklist:
       custom_checkbox: true
   - pymdownx.tilde