Posted to commits@sedona.apache.org by ji...@apache.org on 2023/03/15 08:49:06 UTC

[sedona] branch prepare-1.4.0-doc updated: Push a part of the doc

This is an automated email from the ASF dual-hosted git repository.

jiayu pushed a commit to branch prepare-1.4.0-doc
in repository https://gitbox.apache.org/repos/asf/sedona.git


The following commit(s) were added to refs/heads/prepare-1.4.0-doc by this push:
     new c3d64ba0 Push a part of the doc
c3d64ba0 is described below

commit c3d64ba06e57bb427b0ef65f180668b9ca3f0e4d
Author: Jia Yu <ji...@apache.org>
AuthorDate: Wed Mar 15 01:48:58 2023 -0700

    Push a part of the doc
---
 docs/api/sql/Optimizer.md       |  11 +-
 docs/setup/install-python.md    |  10 +-
 docs/setup/maven-coordinates.md | 106 ++---
 docs/setup/release-notes.md     | 114 +++++
 docs/tutorial/core-python.md    | 734 -----------------------------
 docs/tutorial/rdd.md            | 989 ++++++++++++++++++++++++++++++----------
 docs/tutorial/sql-python.md     | 562 -----------------------
 docs/tutorial/sql.md            | 811 +++++++++++++++++++++++++++-----
 mkdocs.yml                      |   8 +-
 9 files changed, 1620 insertions(+), 1725 deletions(-)

diff --git a/docs/api/sql/Optimizer.md b/docs/api/sql/Optimizer.md
index a6dd2c7a..7034e44c 100644
--- a/docs/api/sql/Optimizer.md
+++ b/docs/api/sql/Optimizer.md
@@ -76,7 +76,8 @@ DistanceJoin pointshape1#12: geometry, pointshape2#33: geometry, 2.0, true
 Introduction: Perform a range join or distance join but broadcast one of the sides of the join.
 This maintains the partitioning of the non-broadcast side and doesn't require a shuffle.
 Sedona uses broadcast join only if the correct side has a broadcast hint.
-The supported join type - broadcast side combinations are
+The supported join type - broadcast side combinations are:
+
 * Inner - either side, preferring to broadcast left if both sides have the hint
 * Left semi - broadcast right
 * Left anti - broadcast right
@@ -149,6 +150,14 @@ Sedona supports spatial predicate push-down for GeoParquet files. When spatial f
 to determine if all data in the file will be discarded by the spatial predicate. This optimization could reduce the number of files scanned
 when the queried GeoParquet dataset was partitioned by spatial proximity.
 
+To maximize the performance of Sedona GeoParquet filter pushdown, we suggest that you sort the data by their geohash values (see [ST_GeoHash](../../api/sql/Function/#st_geohash)) and then save it as a GeoParquet file. An example is as follows:
+
+```sql
+SELECT col1, col2, geom, ST_GeoHash(geom, 5) as geohash
+FROM spatialDf
+ORDER BY geohash
+```
+
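+The same idea as a minimal PySpark sketch (a hypothetical illustration, not part of the original example; it assumes a SparkSession `spark` with Sedona registered, a temp view named `spatialDf`, and an example output path):
+
+```python
+# Sort by geohash, then save the result with Sedona's GeoParquet data source
+sorted_df = spark.sql("""
+    SELECT col1, col2, geom, ST_GeoHash(geom, 5) AS geohash
+    FROM spatialDf
+    ORDER BY geohash
+""")
+sorted_df.write.format("geoparquet").save("/path/to/sorted_dataset")  # example path
+```
+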
 The following figure is the visualization of a GeoParquet dataset. `bbox`es of all GeoParquet files were plotted as blue rectangles and the query window was plotted as a red rectangle. Sedona will only scan 1 of the 6 files to
 answer queries such as `SELECT * FROM geoparquet_dataset WHERE ST_Intersects(geom, <query window>)`, thus only part of the data covered by the light green rectangle needs to be scanned.
 
diff --git a/docs/setup/install-python.md b/docs/setup/install-python.md
index a298ef0e..f7a09f17 100644
--- a/docs/setup/install-python.md
+++ b/docs/setup/install-python.md
@@ -31,13 +31,13 @@ cd python
 python3 setup.py install
 ```
 
-### Prepare python-adapter jar
+### Prepare sedona-spark-shaded jar
 
-Sedona Python needs one additional jar file called `sedona-python-adapter` to work properly. Please make sure you use the correct version for Spark and Scala. For Spark 3.0 + Scala 2.12, it is called `sedona-python-adapter-3.0_2.12-{{ sedona.current_version }}.jar`
+Sedona Python needs one additional jar file called `sedona-spark-shaded` to work properly. Please make sure you use the correct version for Spark and Scala. For Spark 3.0 + Scala 2.12, it is called `sedona-spark-shaded-3.0_2.12-{{ sedona.current_version }}.jar`
 
 You can get it using one of the following methods:
 
-1. Compile from the source within main project directory and copy it (in `python-adapter/target` folder) to SPARK_HOME/jars/ folder ([more details](../compile))
+1. Compile from the source within main project directory and copy it (in `spark-shaded/target` folder) to SPARK_HOME/jars/ folder ([more details](../compile))
 
 2. Download from [GitHub release](https://github.com/apache/sedona/releases) and copy it to SPARK_HOME/jars/ folder
 3. Call the [Maven Central coordinate](../maven-coordinates) in your python program. For example, in PySparkSQL
@@ -48,7 +48,7 @@ spark = SparkSession. \
     config("spark.serializer", KryoSerializer.getName). \
     config("spark.kryo.registrator", SedonaKryoRegistrator.getName). \
     config('spark.jars.packages',
-           'org.apache.sedona:sedona-python-adapter-3.0_2.12:{{ sedona.current_version }},'
+           'org.apache.sedona:sedona-spark-shaded-3.0_2.12:{{ sedona.current_version }},'
            'org.datasyslab:geotools-wrapper:{{ sedona.current_geotools }}'). \
     getOrCreate()
 ```
@@ -58,7 +58,7 @@ spark = SparkSession. \
 
 ### Setup environment variables
 
-If you manually copy the python-adapter jar to `SPARK_HOME/jars/` folder, you need to setup two environment variables
+If you manually copy the sedona-spark-shaded jar to the `SPARK_HOME/jars/` folder, you need to set up two environment variables
 
 * SPARK_HOME. For example, run the command in your terminal
 
diff --git a/docs/setup/maven-coordinates.md b/docs/setup/maven-coordinates.md
index cbd48ded..90f8faf7 100644
--- a/docs/setup/maven-coordinates.md
+++ b/docs/setup/maven-coordinates.md
@@ -1,16 +1,15 @@
 # Maven Coordinates
 
-Sedona Spark has four modules: `sedona-core, sedona-sql, sedona-viz, sedona-python-adapter`. `sedona-python-adapter` is a fat jar of `sedona-core, sedona-sql` and python adapter code. If you want to use SedonaViz, you will include one more jar: `sedona-viz`.
 
-Sedona Flink has four modules :`sedona-core, sedona-sql, sedona-python-adapter, sedona-flink`. `sedona-python-adapter` is a fat jar of `sedona-core, sedona-sql`.
-
-
-## Use Sedona fat jars
+## Use Sedona shaded (fat) jars
 
 !!!warning
-	For Scala/Java/Python/R users, this is the most common way to use Sedona in your environment. Do not use separate Sedona jars otherwise you will get dependency conflicts. `sedona-python-adapter` already contains all you need.
+	For Scala/Java/Python users, this is the most common way to use Sedona in your environment. Do not use separate Sedona jars unless you are sure that you do not need shaded jars.
+	
+!!!warning
+	For R users, this is the only way to use Sedona in your environment.
 
-The optional GeoTools library is required only if you want to use CRS transformation and ShapefileReader. This wrapper library is a re-distribution of GeoTools official jars. The only purpose of this library is to bring GeoTools jars from OSGEO repository to Maven Central. This library is under GNU Lesser General Public License (LGPL) license so we cannot package it in Sedona official release.
+The optional GeoTools library is required if you want to use CRS transformation, ShapefileReader or GeoTiff reader. This wrapper library is a re-distribution of GeoTools official jars. The only purpose of this library is to bring GeoTools jars from OSGEO repository to Maven Central. This library is under GNU Lesser General Public License (LGPL) license so we cannot package it in Sedona official release.
 
 !!! abstract "Sedona with Apache Spark"
 
@@ -19,7 +18,7 @@ The optional GeoTools library is required only if you want to use CRS transforma
 		```xml
 		<dependency>
 		  <groupId>org.apache.sedona</groupId>
-		  <artifactId>sedona-python-adapter-3.0_2.12</artifactId>
+		  <artifactId>sedona-spark-shaded-3.0_2.12</artifactId>
 		  <version>{{ sedona.current_version }}</version>
 		</dependency>
 		<dependency>
@@ -40,7 +39,7 @@ The optional GeoTools library is required only if you want to use CRS transforma
 		```xml
 		<dependency>
 		  <groupId>org.apache.sedona</groupId>
-		  <artifactId>sedona-python-adapter-3.0_2.13</artifactId>
+		  <artifactId>sedona-spark-shaded-3.0_2.13</artifactId>
 		  <version>{{ sedona.current_version }}</version>
 		</dependency>
 		<dependency>
@@ -64,12 +63,7 @@ The optional GeoTools library is required only if you want to use CRS transforma
 		```xml
 		<dependency>
 		  <groupId>org.apache.sedona</groupId>
-		  <artifactId>sedona-python-adapter-3.0_2.12</artifactId>
-		  <version>{{ sedona.current_version }}</version>
-		</dependency>
-		<dependency>
-		  <groupId>org.apache.sedona</groupId>
-		  <artifactId>sedona-flink_2.12</artifactId>
+		  <artifactId>sedona-flink-shaded_2.12</artifactId>
 		  <version>{{ sedona.current_version }}</version>
 		</dependency>
 		<!-- Optional: https://mvnrepository.com/artifact/org.datasyslab/geotools-wrapper -->
@@ -127,9 +121,12 @@ Under BSD 3-clause (compatible with Apache 2.0 license)
 		```
 
 
-## Use Sedona and third-party jars separately
+## Use Sedona unshaded jars
+
+!!!warning
+	For Scala, Java, and Python users, please use the following jars only if you meet all of these conditions: (1) you know how to exclude transitive dependencies in a complex application, (2) your environment has internet access, and (3) you are using some sort of Maven package resolver (a pom.xml, a build.sbt, or Spark's `spark.jars.packages`), which directly takes coordinates of the form `GroupID:ArtifactID:Version`. If these conditions do not apply to you, the following jars are not for you.
 
-==For Scala and Java users==, if by any chance you don't want to use an uber jar that includes every dependency, you can use the following jars instead. ==Otherwise, please do not continue reading this section.==
+The optional GeoTools library is required if you want to use CRS transformation, ShapefileReader or GeoTiff reader. This wrapper library is a re-distribution of GeoTools official jars. The only purpose of this library is to bring GeoTools jars from OSGEO repository to Maven Central. This library is under GNU Lesser General Public License (LGPL) license so we cannot package it in Sedona official release.
 
 !!! abstract "Sedona with Apache Spark"
 
@@ -151,6 +148,17 @@ Under BSD 3-clause (compatible with Apache 2.0 license)
 		  <artifactId>sedona-viz-3.0_2.12</artifactId>
 		  <version>{{ sedona.current_version }}</version>
 		</dependency>
+		<!-- Required if you use Sedona Python -->
+		<dependency>
+		  <groupId>org.apache.sedona</groupId>
+		  <artifactId>sedona-python-adapter-3.0_2.12</artifactId>
+		  <version>{{ sedona.current_version }}</version>
+		</dependency>			
+		<dependency>
+		    <groupId>org.datasyslab</groupId>
+		    <artifactId>geotools-wrapper</artifactId>
+		    <version>{{ sedona.current_geotools }}</version>
+		</dependency>	
 		```
 	=== "Spark 3.0+ and Scala 2.13"
 	
@@ -170,6 +178,17 @@ Under BSD 3-clause (compatible with Apache 2.0 license)
 		  <artifactId>sedona-viz-3.0_2.13</artifactId>
 		  <version>{{ sedona.current_version }}</version>
 		</dependency>
+		<!-- Required if you use Sedona Python -->
+		<dependency>
+		  <groupId>org.apache.sedona</groupId>
+		  <artifactId>sedona-python-adapter-3.0_2.13</artifactId>
+		  <version>{{ sedona.current_version }}</version>
+		</dependency>	
+		<dependency>
+		    <groupId>org.datasyslab</groupId>
+		    <artifactId>geotools-wrapper</artifactId>
+		    <version>{{ sedona.current_geotools }}</version>
+		</dependency>		
 		```
 		
 	
@@ -194,56 +213,13 @@ Under BSD 3-clause (compatible with Apache 2.0 license)
 		  <artifactId>sedona-flink-3.0_2.12</artifactId>
 		  <version>{{ sedona.current_version }}</version>
 		</dependency>
+		<dependency>
+		    <groupId>org.datasyslab</groupId>
+		    <artifactId>geotools-wrapper</artifactId>
+		    <version>{{ sedona.current_geotools }}</version>
+		</dependency>		
 		```
 
-### LocationTech JTS-core 1.18.0+
-
-Under Eclipse Public License 2.0 ("EPL") or the Eclipse Distribution License 1.0 (a BSD Style License)
-
-```xml
-<!-- https://mvnrepository.com/artifact/org.locationtech.jts/jts-core -->
-<dependency>
-    <groupId>org.locationtech.jts</groupId>
-    <artifactId>jts-core</artifactId>
-    <version>1.18.0</version>
-</dependency>
-```
-
-### jts2geojson 0.16.1+
-
-Under MIT License. Please make sure you exclude jts and jackson from this library.
-
-```xml
-<!-- https://mvnrepository.com/artifact/org.wololo/jts2geojson -->
-<dependency>
-    <groupId>org.wololo</groupId>
-    <artifactId>jts2geojson</artifactId>
-    <version>0.16.1</version>
-    <exclusions>
-        <exclusion>
-            <groupId>org.locationtech.jts</groupId>
-            <artifactId>jts-core</artifactId>
-        </exclusion>
-        <exclusion>
-            <groupId>com.fasterxml.jackson.core</groupId>
-            <artifactId>*</artifactId>
-        </exclusion>
-    </exclusions>
-</dependency>
-```
-
-### GeoTools 24.0+
-
-GeoTools library is required only if you want to use CRS transformation and ShapefileReader. This wrapper library is a re-distriution of GeoTools official jars. The only purpose of this library is to bring GeoTools jars from OSGEO repository to Maven Central. This library is under GNU Lesser General Public License (LGPL) license so we cannot package it in Sedona official release.
-
-```xml
-<!-- https://mvnrepository.com/artifact/org.datasyslab/geotools-wrapper -->
-<dependency>
-    <groupId>org.datasyslab</groupId>
-    <artifactId>geotools-wrapper</artifactId>
-    <version>{{ sedona.current_geotools }}</version>
-</dependency>
-```
 
 ### netCDF-Java 5.4.2
 
diff --git a/docs/setup/release-notes.md b/docs/setup/release-notes.md
index 588f89fd..1d7033db 100644
--- a/docs/setup/release-notes.md
+++ b/docs/setup/release-notes.md
@@ -1,6 +1,120 @@
 !!!warning
 	Support of Spark 2.X and Scala 2.11 was removed in Sedona 1.3.0+ although some parts of the source code might still be compatible. Sedona 1.3.0+ releases binary for both Scala 2.12 and 2.13.
 
+## Sedona 1.4.0
+
+Sedona 1.4.0 is compiled against Spark 3.3 / Flink 1.12 and Java 8.
+
+### Highlights
+
+* [X] **Sedona Spark** Push down spatial predicates to GeoParquet files to reduce memory consumption by 10X: see [explanation](../../api/sql/Optimizer/#geoparquet)
+* [X] **Sedona Spark & Flink** Serialize and deserialize geometries 3 - 7X faster
+* [X] **Sedona Spark** Automatically use broadcast index spatial join for small datasets
+* [X] **Sedona Spark** New RasterUDT added to Sedona GeoTiff reader.
+* [X] **Sedona Spark** A number of bug fixes and improvements to the Sedona R module.
+
+### Behavior change
+
+* **Sedona Spark & Flink** The packaging strategy has changed. See [Maven Coordinates](../maven-coordinates). Please update your Sedona dependencies if needed.
+* **Sedona Spark** The join optimization strategy has changed. Sedona no longer optimizes a spatial join when a spatial predicate is used together with an equi-join predicate. By default, it prefers the equi-join whenever possible. SedonaConf adds a config option called `sedona.join.optimizationmode`, which can be configured as one of the following values:
+	* `all`: optimize all joins having a spatial predicate in their join conditions. This was the behavior of Apache Sedona prior to 1.4.0.
+	* `none`: disable spatial join optimization.
+	* `nonequi`: only enable spatial join optimization on non-equi joins. This is the default mode.
+
+When `sedona.join.optimizationmode` is configured as `nonequi`, Sedona won't optimize join queries such as `SELECT * FROM A, B WHERE A.x = B.x AND ST_Contains(A.geom, B.geom)`, since it is an equi-join with the equi-condition `A.x = B.x`. It will still optimize queries that only have a spatial predicate, such as `SELECT * FROM A, B WHERE ST_Contains(A.geom, B.geom)`.
+
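+A sketch of how this option might be set (a hypothetical example, assuming the option is read from the Spark session configuration):
+
+```python
+from pyspark.sql import SparkSession
+
+# Hypothetical sketch: restore the pre-1.4.0 behavior of optimizing all spatial joins
+spark = SparkSession.builder \
+    .config("sedona.join.optimizationmode", "all") \
+    .getOrCreate()
+```
+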
+### Bug
+
+<ul>
+<li>[<a href='https://issues.apache.org/jira/browse/SEDONA-218'>SEDONA-218</a>] -         Flaky test caused by improper handling of null struct values in Adapter.toDf
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/SEDONA-221'>SEDONA-221</a>] -         Outer join throws NPE for null geometries
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/SEDONA-222'>SEDONA-222</a>] -         GeoParquet reader does not work in non-local mode
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/SEDONA-224'>SEDONA-224</a>] -         java.lang.NoSuchMethodError when loading GeoParquet files using Spark 3.0.x ~ 3.2.x
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/SEDONA-225'>SEDONA-225</a>] -         Cannot count dataframes loaded from GeoParquet files
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/SEDONA-227'>SEDONA-227</a>] -         Python SerDe Performance Degradation
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/SEDONA-230'>SEDONA-230</a>] -         rdd.saveAsGeoJSON should generate feature properties with field names
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/SEDONA-233'>SEDONA-233</a>] -         Incorrect results for several joins in a single stage
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/SEDONA-236'>SEDONA-236</a>] -         Flakey python tests in tests.serialization.test_[de]serializers
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/SEDONA-242'>SEDONA-242</a>] -         Update jars dependencies in Sedona R to Sedona 1.4.0 version
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/SEDONA-250'>SEDONA-250</a>] -         R Deprecate use of Spark 2.4
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/SEDONA-252'>SEDONA-252</a>] -         Fix disabled RS_Base64 test
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/SEDONA-255'>SEDONA-255</a>] -         R – Translation issue for ST_Point and ST_PolygonFromEnvelope
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/SEDONA-258'>SEDONA-258</a>] -         Cannot directly assign raw spatial RDD to CircleRDD using Python binding
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/SEDONA-259'>SEDONA-259</a>] -         Adapter.toSpatialRdd in Python binding does not have valid implementation for specifying custom field names for user data
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/SEDONA-261'>SEDONA-261</a>] -         Cannot run distance join using broadcast index join when the distance expression references to attributes from the right-side relation
+</li>
+</ul>
+        
+### New Feature
+
+<ul>
+<li>[<a href='https://issues.apache.org/jira/browse/SEDONA-156'>SEDONA-156</a>] -         predicate pushdown support for GeoParquet
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/SEDONA-215'>SEDONA-215</a>] -         Add ST_ConcaveHull
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/SEDONA-216'>SEDONA-216</a>] -         Upgrade jts version to 1.19.0
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/SEDONA-235'>SEDONA-235</a>] -         Create ST_S2CellIds in Sedona
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/SEDONA-246'>SEDONA-246</a>] -         R GeoTiff read/write
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/SEDONA-254'>SEDONA-254</a>] -         R – Add raster type
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/SEDONA-262'>SEDONA-262</a>] -         Don't optimize equi-join by default, add an option to configure when to optimize spatial joins
+</li>
+</ul>
+        
+### Improvement
+
+<ul>
+<li>[<a href='https://issues.apache.org/jira/browse/SEDONA-205'>SEDONA-205</a>] -         Use BinaryType in GeometryUDT in Sedona Spark
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/SEDONA-207'>SEDONA-207</a>] -         Faster serialization/deserialization of geometry objects
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/SEDONA-212'>SEDONA-212</a>] -         Move shading to separate maven modules
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/SEDONA-217'>SEDONA-217</a>] -         Automatically broadcast small datasets
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/SEDONA-220'>SEDONA-220</a>] -         Upgrade Ubuntu build image from 18.04 to 20.04
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/SEDONA-226'>SEDONA-226</a>] -         Support reading and writing GeoParquet file metadata
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/SEDONA-228'>SEDONA-228</a>] -         Standardize logging dependencies
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/SEDONA-234'>SEDONA-234</a>] -         ST_Point inconsistencies
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/SEDONA-243'>SEDONA-243</a>] -         Improve Sedona R file readers: GeoParquet and Shapefile
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/SEDONA-244'>SEDONA-244</a>] -         Align R read/write functions with the Sparklyr framework
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/SEDONA-249'>SEDONA-249</a>] -         Add jvm flags for running tests on Java 17 
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/SEDONA-251'>SEDONA-251</a>] -         Add raster type to Sedona
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/SEDONA-253'>SEDONA-253</a>] -         Upgrade geotools to version 28.2
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/SEDONA-260'>SEDONA-260</a>] -         More intuitive configuration of partition and index-build side of spatial joins in Sedona SQL
+</li>
+</ul>
+
 ## Sedona 1.3.1
 
 This version is a minor release on the Sedona 1.3.0 line. It fixes a few critical bugs in 1.3.0. We suggest all 1.3.0 users migrate to this version.
diff --git a/docs/tutorial/core-python.md b/docs/tutorial/core-python.md
deleted file mode 100644
index ec2ae086..00000000
--- a/docs/tutorial/core-python.md
+++ /dev/null
@@ -1,734 +0,0 @@
-# Spatial RDD Applications in Python
-
-## Introduction
-<div style="text-align: justify">
-Sedona provides a Python wrapper on Sedona core Java/Scala library.
-Sedona SpatialRDDs (and other classes when it was necessary) have implemented meta classes which allow 
-to use overloaded functions, methods and constructors to be the most similar to Java/Scala API as possible. 
-</div>
-
-Apache Sedona core provides five special SpatialRDDs:
-
-<li> PointRDD </li>
-<li> PolygonRDD </li>
-<li> LineStringRDD </li>
-<li> CircleRDD </li>
-<li> RectangleRDD </li>
-<div style="text-align: justify">
-<p>
-All of them can be imported from <b> sedona.core.SpatialRDD </b> module
-
-<b> sedona </b> has written serializers which convert Sedona SpatialRDD to Python objects.
-Converting will produce GeoData objects which have 2 attributes:
-</p>
-</div>
-<li> geom: shapely.geometry.BaseGeometry </li>
-<li> userData: str </li>
-
-geom attribute holds geometry representation as shapely objects. 
-userData is string representation of other attributes separated by "\t"
-</br>
-
-GeoData has one method to get user data.
-<li> getUserData() -> str </li>
-
-!!!note
-	This tutorial is based on [Sedona Core Jupyter Notebook example](../jupyter-notebook). You can interact with Sedona Python Jupyter notebook immediately on Binder. Click [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/apache/sedona/HEAD?filepath=binder) and wait for a few minutes. Then select a notebook and enjoy!
-
-## Installation
-
-Please read [Quick start](../../setup/install-python) to install Sedona Python.
-
-## Apache Sedona Serializers
-Sedona has a suite of well-written geometry and index serializers. Forgetting to enable these serializers will lead to high memory consumption.
-
-```python
-conf.set("spark.serializer", KryoSerializer.getName)
-conf.set("spark.kryo.registrator", SedonaKryoRegistrator.getName)
-sc = SparkContext(conf=conf)
-```
-    
-## Create a SpatialRDD
-
-### Create a typed SpatialRDD
-Apache Sedona core provides three special SpatialRDDs:
-<li> PointRDD </li>
-<li> PolygonRDD </li> 
-<li> LineStringRDD </li>
-<li> CircleRDD </li>
-<li> RectangleRDD </li>
-<br>
-
-They can be loaded from CSV, TSV, WKT, WKB, Shapefiles, GeoJSON formats.
-To pass the format to SpatialRDD constructor please use <b> FileDataSplitter </b> enumeration. 
-
-sedona SpatialRDDs (and other classes when it was necessary) have implemented meta classes which allow 
-to use overloaded functions how Scala/Java Apache Sedona API allows. ex. 
-
-
-```python
-from pyspark import StorageLevel
-from sedona.core.SpatialRDD import PointRDD
-from sedona.core.enums import FileDataSplitter
-
-input_location = "checkin.csv"
-offset = 0  # The point long/lat starts from Column 0
-splitter = FileDataSplitter.CSV # FileDataSplitter enumeration
-carry_other_attributes = True  # Carry Column 2 (hotel, gas, bar...)
-level = StorageLevel.MEMORY_ONLY # Storage level from pyspark
-s_epsg = "epsg:4326" # Source epsg code
-t_epsg = "epsg:5070" # target epsg code
-
-point_rdd = PointRDD(sc, input_location, offset, splitter, carry_other_attributes)
-
-point_rdd = PointRDD(sc, input_location, splitter, carry_other_attributes, level, s_epsg, t_epsg)
-
-point_rdd = PointRDD(
-    sparkContext=sc,
-    InputLocation=input_location,
-    Offset=offset,
-    splitter=splitter,
-    carryInputData=carry_other_attributes
-)
-```
-
-
-#### From SparkSQL DataFrame
-To create spatialRDD from other formats you can use adapter between Spark DataFrame and SpatialRDD
-
-<li> Load data in SedonaSQL. </li>
-
-```python
-csv_point_input_location= "/tests/resources/county_small.tsv"
-
-df = spark.read.\
-    format("csv").\
-    option("delimiter", "\t").\
-    option("header", "false").\
-    load(csv_point_input_location)
-
-df.createOrReplaceTempView("counties")
-
-```
-
-<li> Create a Geometry type column in SedonaSQL </li>
-
-```python
-spatial_df = spark.sql(
-    """
-        SELECT ST_GeomFromWKT(_c0) as geom, _c6 as county_name
-        FROM counties
-    """
-)
-spatial_df.printSchema()
-```
-
-```
-root
- |-- geom: geometry (nullable = false)
- |-- county_name: string (nullable = true)
-```
-
-<li> Use SedonaSQL DataFrame-RDD Adapter to convert a DataFrame to an SpatialRDD </li>
-Note that, you have to name your column geometry
-
-```python
-from sedona.utils.adapter import Adapter
-
-spatial_rdd = Adapter.toSpatialRdd(spatial_df)
-spatial_rdd.analyze()
-
-spatial_rdd.boundaryEnvelope
-```
-
-```
-<sedona.core.geom_types.Envelope object at 0x7f1e5f29fe10>
-```
-
-or pass Geometry column name as a second argument
-
-```python
-spatial_rdd = Adapter.toSpatialRdd(spatial_df, "geom")
-```
-
-For WKT/WKB/GeoJSON data, please use ==ST_GeomFromWKT / ST_GeomFromWKB / ST_GeomFromGeoJSON== instead.
-    
-## Read other attributes in an SpatialRDD
-
-Each SpatialRDD can carry non-spatial attributes such as price, age and name as long as the user sets ==carryOtherAttributes== as [TRUE](#create-a-spatialrdd).
-
-The other attributes are combined together to a string and stored in ==UserData== field of each geometry.
-
-To retrieve the UserData field, use the following code:
-```python
-rdd_with_other_attributes = object_rdd.rawSpatialRDD.map(lambda x: x.getUserData())
-```
-
-## Write a Spatial Range Query
-
-```python
-from sedona.core.geom.envelope import Envelope
-from sedona.core.spatialOperator import RangeQuery
-
-range_query_window = Envelope(-90.01, -80.01, 30.01, 40.01)
-consider_boundary_intersection = False  ## Only return gemeotries fully covered by the window
-using_index = False
-query_result = RangeQuery.SpatialRangeQuery(spatial_rdd, range_query_window, consider_boundary_intersection, using_index)
-```
-
-!!!note
-    Please use RangeQueryRaw from the same module
-    if you want to avoid jvm python serde while converting to Spatial DataFrame
-    It takes the same parameters as RangeQuery but returns reference to jvm rdd which
-    can be converted to dataframe without python - jvm serde using Adapter.
-    
-    Example:
-    ```python
-    from sedona.core.geom.envelope import Envelope
-    from sedona.core.spatialOperator import RangeQueryRaw
-    from sedona.utils.adapter import Adapter
-    
-    range_query_window = Envelope(-90.01, -80.01, 30.01, 40.01)
-    consider_boundary_intersection = False  ## Only return gemeotries fully covered by the window
-    using_index = False
-    query_result = RangeQueryRaw.SpatialRangeQuery(spatial_rdd, range_query_window, consider_boundary_intersection, using_index)
-    gdf = Adapter.toDf(query_result, spark, ["col1", ..., "coln"])
-
-    ```
-
-### Range query window
-
-Besides the rectangle (Envelope) type range query window, Apache Sedona range query window can be 
-<li> Point </li> 
-<li> Polygon </li>
-<li> LineString </li>
-</br>
-
-To create shapely geometries please follow [Shapely official docs](https://shapely.readthedocs.io/en/stable/manual.html)
-
-
-### Use spatial indexes
-
-Sedona provides two types of spatial indexes,
-<li> Quad-Tree </li>
-<li> R-Tree </li>
-Once you specify an index type, 
-Sedona will build a local tree index on each of the SpatialRDD partition.
-
-To utilize a spatial index in a spatial range query, use the following code:
-
-```python
-from sedona.core.geom.envelope import Envelope
-from sedona.core.enums import IndexType
-from sedona.core.spatialOperator import RangeQuery
-
-range_query_window = Envelope(-90.01, -80.01, 30.01, 40.01)
-consider_boundary_intersection = False ## Only return gemeotries fully covered by the window
-
-build_on_spatial_partitioned_rdd = False ## Set to TRUE only if run join query
-spatial_rdd.buildIndex(IndexType.QUADTREE, build_on_spatial_partitioned_rdd)
-
-using_index = True
-
-query_result = RangeQuery.SpatialRangeQuery(
-    spatial_rdd,
-    range_query_window,
-    consider_boundary_intersection,
-    using_index
-)
-```
-
-### Output format
-
-The output format of the spatial range query is another RDD which consists of GeoData objects.
-
-SpatialRangeQuery result can be used as RDD with map or other spark RDD functions. Also it can be used as 
-Python objects when using collect method.
-Example:
-
-```python
-query_result.map(lambda x: x.geom.length).collect()
-```
-
-```
-[
- 1.5900840000000045,
- 1.5906639999999896,
- 1.1110299999999995,
- 1.1096700000000084,
- 1.1415619999999933,
- 1.1386399999999952,
- 1.1415619999999933,
- 1.1418860000000137,
- 1.1392780000000045,
- ...
-]
-```
-
-Or transformed to GeoPandas GeoDataFrame
-
-```python
-import geopandas as gpd
-gpd.GeoDataFrame(
-    query_result.map(lambda x: [x.geom, x.userData]).collect(),
-    columns=["geom", "user_data"],
-    geometry="geom"
-)
-```
-
-## Write a Spatial KNN Query
-
-A spatial K Nearnest Neighbor query takes as input a K, a query point and an SpatialRDD and finds the K geometries in the RDD which are the closest to he query point.
-
-Assume you now have an SpatialRDD (typed or generic). You can use the following code to issue an Spatial KNN Query on it.
-
-```python
-from sedona.core.spatialOperator import KNNQuery
-from shapely.geometry import Point
-
-point = Point(-84.01, 34.01)
-k = 1000 ## K Nearest Neighbors
-using_index = False
-result = KNNQuery.SpatialKnnQuery(object_rdd, point, k, using_index)
-```
-
-### Query center geometry
-
-Besides the Point type, Apache Sedona KNN query center can be 
-<li> Polygon </li>
-<li> LineString </li>
-
-To create Polygon or Linestring object please follow [Shapely official docs](https://shapely.readthedocs.io/en/stable/manual.html)
-### Use spatial indexes
-
-To utilize a spatial index in a spatial KNN query, use the following code:
-
-```python
-from sedona.core.spatialOperator import KNNQuery
-from sedona.core.enums import IndexType
-from shapely.geometry import Point
-
-point = Point(-84.01, 34.01)
-k = 5 ## K Nearest Neighbors
-
-build_on_spatial_partitioned_rdd = False ## Set to TRUE only if run join query
-spatial_rdd.buildIndex(IndexType.RTREE, build_on_spatial_partitioned_rdd)
-
-using_index = True
-result = KNNQuery.SpatialKnnQuery(spatial_rdd, point, k, using_index)
-```
-
-!!!warning
-    Only R-Tree index supports Spatial KNN query
-
-### Output format
-
-The output format of the spatial KNN query is a list of GeoData objects. 
-The list has K GeoData objects.
-
-Example:
-```python
->> result
-
-[GeoData, GeoData, GeoData, GeoData, GeoData]
-```
-
-
-## Write a Spatial Join Query
-
-A spatial join query takes as input two Spatial RDD A and B. For each geometry in A, finds the geometries (from B) covered/intersected by it. A and B can be any geometry type and are not necessary to have the same geometry type.
-
-Assume you now have two SpatialRDDs (typed or generic). You can use the following code to issue an Spatial Join Query on them.
-
-```python
-from sedona.core.enums import GridType
-from sedona.core.spatialOperator import JoinQuery
-
-consider_boundary_intersection = False ## Only return geometries fully covered by each query window in queryWindowRDD
-using_index = False
-
-object_rdd.analyze()
-
-object_rdd.spatialPartitioning(GridType.KDBTREE)
-query_window_rdd.spatialPartitioning(object_rdd.getPartitioner())
-
-result = JoinQuery.SpatialJoinQuery(object_rdd, query_window_rdd, using_index, consider_boundary_intersection)
-```
-
-Result of SpatialJoinQuery is RDD which consists of GeoData instance and list of GeoData instances which spatially intersects or 
-are covered by GeoData. 
-
-```python
-result.collect())
-```
-
-```
-[
-    [GeoData, [GeoData, GeoData, GeoData, GeoData]],
-    [GeoData, [GeoData, GeoData, GeoData]],
-    [GeoData, [GeoData]],
-    [GeoData, [GeoData, GeoData]],
-    ...
-    [GeoData, [GeoData, GeoData]]
-]
-
-```
-
-### Use spatial partitioning
-
-Apache Sedona spatial partitioning method can significantly speed up the join query. Three spatial partitioning methods are available: KDB-Tree, Quad-Tree and R-Tree. Two SpatialRDD must be partitioned by the same way.
-
-If you first partition SpatialRDD A, then you must use the partitioner of A to partition B.
-
-```python
-object_rdd.spatialPartitioning(GridType.KDBTREE)
-query_window_rdd.spatialPartitioning(object_rdd.getPartitioner())
-```
-
-Or 
-
-```python
-query_window_rdd.spatialPartitioning(GridType.KDBTREE)
-object_rdd.spatialPartitioning(query_window_rdd.getPartitioner())
-```
-
-
-### Use spatial indexes
-
-To utilize a spatial index in a spatial join query, use the following code:
-
-```python
-from sedona.core.enums import GridType
-from sedona.core.enums import IndexType
-from sedona.core.spatialOperator import JoinQuery
-
-object_rdd.spatialPartitioning(GridType.KDBTREE)
-query_window_rdd.spatialPartitioning(object_rdd.getPartitioner())
-
-build_on_spatial_partitioned_rdd = True ## Set to TRUE only if run join query
-using_index = True
-query_window_rdd.buildIndex(IndexType.QUADTREE, build_on_spatial_partitioned_rdd)
-
-result = JoinQuery.SpatialJoinQueryFlat(object_rdd, query_window_rdd, using_index, True)
-```
-
-The index should be built on either one of two SpatialRDDs. In general, you should build it on the larger SpatialRDD.
-
-### Output format
-
-The output format of the spatial join query is a PairRDD. In this PairRDD, each object is a pair of two GeoData objects.
-The left one is the GeoData from object_rdd and the right one is the GeoData from the query_window_rdd.
-
-```
-Point,Polygon
-Point,Polygon
-Point,Polygon
-Polygon,Polygon
-LineString,LineString
-Polygon,LineString
-...
-```
-
-example 
-```python
-result.collect()
-```
-
-```
-[
- [GeoData, GeoData],
- [GeoData, GeoData],
- [GeoData, GeoData],
- [GeoData, GeoData],
- ...
- [GeoData, GeoData],
- [GeoData, GeoData]
-]
-```
-
-Each object on the left is covered/intersected by the object on the right.
-
-## Write a Distance Join Query
-
-!!!warning
-    RDD distance joins are only reliable for points. For other geometry types, please use Spatial SQL.
-
-A distance join query takes two spatial RDD assuming that we have two SpatialRDD's:
-<li> object_rdd </li>
-<li> spatial_rdd </li>
-
-And finds the geometries (from spatial_rdd) are within given distance to it. spatial_rdd and object_rdd
-can be any geometry type (point, line, polygon) and are not necessary to have the same geometry type
- 
-You can use the following code to issue an Distance Join Query on them.
-
-```python
-from sedona.core.SpatialRDD import CircleRDD
-from sedona.core.enums import GridType
-from sedona.core.spatialOperator import JoinQuery
-
-object_rdd.analyze()
-
-circle_rdd = CircleRDD(object_rdd, 0.1) ## Create a CircleRDD using the given distance
-circle_rdd.analyze()
-
-circle_rdd.spatialPartitioning(GridType.KDBTREE)
-spatial_rdd.spatialPartitioning(circle_rdd.getPartitioner())
-
-consider_boundary_intersection = False ## Only return gemeotries fully covered by each query window in queryWindowRDD
-using_index = False
-
-result = JoinQuery.DistanceJoinQueryFlat(spatial_rdd, circle_rdd, using_index, consider_boundary_intersection)
-```
-
-!!!note
-    Please use JoinQueryRaw from the same module for methods 
-    
-    - spatialJoin
-    
-    - DistanceJoinQueryFlat
-
-    - SpatialJoinQueryFlat
-
-    For better performance while converting to dataframe with adapter. 
-    That approach allows to avoid costly serialization between Python 
-    and jvm and in result operating on python object instead of native geometries.
-    
-    Example:
-    ```python
-    from sedona.core.SpatialRDD import CircleRDD
-    from sedona.core.enums import GridType
-    from sedona.core.spatialOperator import JoinQueryRaw
-    
-    object_rdd.analyze()
-    
-    circle_rdd = CircleRDD(object_rdd, 0.1) ## Create a CircleRDD using the given distance
-    circle_rdd.analyze()
-    
-    circle_rdd.spatialPartitioning(GridType.KDBTREE)
-    spatial_rdd.spatialPartitioning(circle_rdd.getPartitioner())
-    
-    consider_boundary_intersection = False ## Only return gemeotries fully covered by each query window in queryWindowRDD
-    using_index = False
-    
-    result = JoinQueryRaw.DistanceJoinQueryFlat(spatial_rdd, circle_rdd, using_index, consider_boundary_intersection)
-    
-    gdf = Adapter.toDf(result, ["left_col1", ..., "lefcoln"], ["rightcol1", ..., "rightcol2"], spark)
-    ```
-
-### Output format
-
-Result for this query is RDD which holds two GeoData objects within list of lists.
-Example:
-```python
-result.collect()
-```
-
-```
-[[GeoData, GeoData], [GeoData, GeoData] ...]
-```
-
-It is possible to do some RDD operation on result data ex. Getting polygon centroid.
-```python
-result.map(lambda x: x[0].geom.centroid).collect()
-```
-
-```
-[
- <shapely.geometry.point.Point at 0x7efee2d28128>,
- <shapely.geometry.point.Point at 0x7efee2d280b8>,
- <shapely.geometry.point.Point at 0x7efee2d28fd0>,
- <shapely.geometry.point.Point at 0x7efee2d28080>,
- ...
-]
-```
-    
-## Save to permanent storage
-
-You can always save an SpatialRDD back to some permanent storage such as HDFS and Amazon S3. You can save distributed SpatialRDD to WKT, GeoJSON and object files.
-
-!!!note
-    Non-spatial attributes such as price, age and name will also be stored to permanent storage.
-
-### Save an SpatialRDD (not indexed)
-
-Typed SpatialRDD and generic SpatialRDD can be saved to permanent storage.
-
-#### Save to distributed WKT text file
-
-Use the following code to save an SpatialRDD as a distributed WKT text file:
-
-```python
-object_rdd.rawSpatialRDD.saveAsTextFile("hdfs://PATH")
-object_rdd.saveAsWKT("hdfs://PATH")
-```
-
-#### Save to distributed WKB text file
-
-Use the following code to save an SpatialRDD as a distributed WKB text file:
-
-```python
-object_rdd.saveAsWKB("hdfs://PATH")
-```
-
-#### Save to distributed GeoJSON text file
-
-Use the following code to save an SpatialRDD as a distributed GeoJSON text file:
-
-```python
-object_rdd.saveAsGeoJSON("hdfs://PATH")
-```
-
-
-#### Save to distributed object file
-
-Use the following code to save an SpatialRDD as a distributed object file:
-
-```python
-object_rdd.rawJvmSpatialRDD.saveAsObjectFile("hdfs://PATH")
-```
-
-!!!note
-    Each object in a distributed object file is a byte array (not human-readable). This byte array is the serialized format of a Geometry or a SpatialIndex.
-
-### Save an SpatialRDD (indexed)
-
-Indexed typed SpatialRDD and generic SpatialRDD can be saved to permanent storage. However, the indexed SpatialRDD has to be stored as a distributed object file.
-
-#### Save to distributed object file
-
-Use the following code to save an SpatialRDD as a distributed object file:
-
-```python
-object_rdd.indexedRawRDD.saveAsObjectFile("hdfs://PATH")
-```
-
-### Save an SpatialRDD (spatialPartitioned W/O indexed)
-
-A spatial partitioned RDD can be saved to permanent storage but Spark is not able to maintain the same RDD partition Id of the original RDD. This will lead to wrong join query results. We are working on some solutions. Stay tuned!
-
-### Reload a saved SpatialRDD
-
-You can easily reload an SpatialRDD that has been saved to ==a distributed object file==.
-
-#### Load to a typed SpatialRDD
-
-Use the following code to reload the PointRDD/PolygonRDD/LineStringRDD:
-
-```python
-from sedona.core.formatMapper.disc_utils import load_spatial_rdd_from_disc, GeoType
-
-polygon_rdd = load_spatial_rdd_from_disc(sc, "hdfs://PATH", GeoType.POLYGON)
-point_rdd = load_spatial_rdd_from_disc(sc, "hdfs://PATH", GeoType.POINT)
-linestring_rdd = load_spatial_rdd_from_disc(sc, "hdfs://PATH", GeoType.LINESTRING)
-```
-
-#### Load to a generic SpatialRDD
-
-Use the following code to reload the SpatialRDD:
-
-```python
-saved_rdd = load_spatial_rdd_from_disc(sc, "hdfs://PATH", GeoType.GEOMETRY)
-```
-
-Use the following code to reload the indexed SpatialRDD:
-```python
-saved_rdd = SpatialRDD()
-saved_rdd.indexedRawRDD = load_spatial_index_rdd_from_disc(sc, "hdfs://PATH")
-```
-
-## Read from other Geometry files
-
-All below methods will return SpatialRDD object which can be used with Spatial functions such as Spatial Join etc.
-
-### Read from WKT file
-```python
-from sedona.core.formatMapper import WktReader
-
-WktReader.readToGeometryRDD(sc, wkt_geometries_location, 0, True, False)
-```
-```
-<sedona.core.SpatialRDD.spatial_rdd.SpatialRDD at 0x7f8fd2fbf250>
-```
-
-### Read from WKB file
-```python
-from sedona.core.formatMapper import WkbReader
-
-WkbReader.readToGeometryRDD(sc, wkb_geometries_location, 0, True, False)
-```
-```
-<sedona.core.SpatialRDD.spatial_rdd.SpatialRDD at 0x7f8fd2eece50>
-```
-### Read from GeoJson file
-
-```python
-from sedona.core.formatMapper import GeoJsonReader
-
-GeoJsonReader.readToGeometryRDD(sc, geo_json_file_location)
-```
-```
-<sedona.core.SpatialRDD.spatial_rdd.SpatialRDD at 0x7f8fd2eecb90>
-```
-### Read from Shapefile
-
-```python
-from sedona.core.formatMapper.shapefileParser import ShapefileReader
-
-ShapefileReader.readToGeometryRDD(sc, shape_file_location)
-```
-```
-<sedona.core.SpatialRDD.spatial_rdd.SpatialRDD at 0x7f8fd2ee0710>
-```
-
-### Tips
-When you use Sedona functions such as
-
-- JoinQuery.spatialJoin
-
-- JoinQuery.DistanceJoinQueryFlat
-
-- JoinQuery.SpatialJoinQueryFlat
-
-- RangeQuery.SpatialRangeQuery
-
-For better performance when converting to dataframe you can use
-JoinQueryRaw and RangeQueryRaw from the same module and adapter to convert 
-to Spatial DataFrame. 
-
-Example, JoinQueryRaw:
-
-```python
-from sedona.core.SpatialRDD import CircleRDD
-from sedona.core.enums import GridType
-from sedona.core.spatialOperator import JoinQueryRaw
-
-object_rdd.analyze()
-
-circle_rdd = CircleRDD(object_rdd, 0.1) ## Create a CircleRDD using the given distance
-circle_rdd.analyze()
-
-circle_rdd.spatialPartitioning(GridType.KDBTREE)
-spatial_rdd.spatialPartitioning(circle_rdd.getPartitioner())
-
-consider_boundary_intersection = False ## Only return gemeotries fully covered by each query window in queryWindowRDD
-using_index = False
-
-result = JoinQueryRaw.DistanceJoinQueryFlat(spatial_rdd, circle_rdd, using_index, consider_boundary_intersection)
-
-gdf = Adapter.toDf(result, ["left_col1", ..., "lefcoln"], ["rightcol1", ..., "rightcol2"], spark)
-```
-
-and RangeQueryRaw
-
-```python
-from sedona.core.geom.envelope import Envelope
-from sedona.core.spatialOperator import RangeQueryRaw
-from sedona.utils.adapter import Adapter
-
-range_query_window = Envelope(-90.01, -80.01, 30.01, 40.01)
-consider_boundary_intersection = False  ## Only return gemeotries fully covered by the window
-using_index = False
-query_result = RangeQueryRaw.SpatialRangeQuery(spatial_rdd, range_query_window, consider_boundary_intersection, using_index)
-gdf = Adapter.toDf(query_result, spark, ["col1", ..., "coln"])
-```
\ No newline at end of file
diff --git a/docs/tutorial/rdd.md b/docs/tutorial/rdd.md
index f97a5a2d..99880f38 100644
--- a/docs/tutorial/rdd.md
+++ b/docs/tutorial/rdd.md
@@ -1,104 +1,83 @@
 
-The page outlines the steps to create Spatial RDDs and run spatial queries using Sedona-core. ==The example code is written in Scala but also works for Java==.
+This page outlines the steps to create Spatial RDDs and run spatial queries using Sedona-core.
 
 ## Set up dependencies
 
-1. Read [Sedona Maven Central coordinates](../setup/maven-coordinates.md)
-2. Select ==the minimum dependencies==: Add Apache Spark (only the Spark core) and Sedona (core).
-3. Add the dependencies in build.sbt or pom.xml.
+=== "Scala/Java"
 
-!!!note
-	To enjoy the full functions of Sedona, we suggest you include ==the full dependencies==: [Apache Spark core](https://mvnrepository.com/artifact/org.apache.spark/spark-core_2.11), [Apache SparkSQL](https://mvnrepository.com/artifact/org.apache.spark/spark-sql), Sedona-core, Sedona-SQL, Sedona-Viz. Please see [RDD example project](../demo/)
+	1. Read [Sedona Maven Central coordinates](../setup/maven-coordinates.md) and add Sedona dependencies in build.sbt or pom.xml.
+	2. Add [Apache Spark core](https://mvnrepository.com/artifact/org.apache.spark/spark-core_2.11), [Apache SparkSQL](https://mvnrepository.com/artifact/org.apache.spark/spark-sql) in build.sbt or pom.xml.
+	3. Please see [RDD example project](../demo/)
 
-## Initiate SparkContext
+=== "Python"
 
-```scala
-val conf = new SparkConf()
-conf.setAppName("SedonaRunnableExample") // Change this to a proper name
-conf.setMaster("local[*]") // Delete this if run in cluster mode
-// Enable Sedona custom Kryo serializer
-conf.set("spark.serializer", classOf[KryoSerializer].getName) // org.apache.spark.serializer.KryoSerializer
-conf.set("spark.kryo.registrator", classOf[SedonaKryoRegistrator].getName) // org.apache.sedona.core.serde.SedonaKryoRegistrator
-val sc = new SparkContext(conf)
-```
+	1. Please read [Quick start](../../setup/install-python) to install Sedona Python.
+	2. This tutorial is based on [Sedona Core Jupyter Notebook example](../jupyter-notebook). Click [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/apache/sedona/HEAD?filepath=binder) to interact with the Sedona Python Jupyter notebook immediately on Binder.
 
-!!!warning
-	Sedona has a suite of well-written geometry and index serializers. Forgetting to enable these serializers will lead to high memory consumption.
+## Initiate SparkContext
 
-If you add ==the Sedona full dependencies== as suggested above, please use the following two lines to enable Sedona Kryo serializer instead:
-```scala
-conf.set("spark.serializer", classOf[KryoSerializer].getName) // org.apache.spark.serializer.KryoSerializer
-conf.set("spark.kryo.registrator", classOf[SedonaVizKryoRegistrator].getName) // org.apache.sedona.viz.core.Serde.SedonaVizKryoRegistrator
-```
+=== "Scala"
 
-## Create a SpatialRDD
+	```scala
+	val conf = new SparkConf()
+	conf.setAppName("SedonaRunnableExample") // Change this to a proper name
+	conf.setMaster("local[*]") // Delete this if run in cluster mode
+	// Enable Sedona custom Kryo serializer
+	conf.set("spark.serializer", classOf[KryoSerializer].getName) // org.apache.spark.serializer.KryoSerializer
+	conf.set("spark.kryo.registrator", classOf[SedonaKryoRegistrator].getName) // org.apache.sedona.core.serde.SedonaKryoRegistrator
+	val sc = new SparkContext(conf)
+	```
+	
+	If you add ==the Sedona full dependencies== as suggested above, please use the following two lines to enable Sedona Kryo serializer instead:
+	```scala
+	conf.set("spark.serializer", classOf[KryoSerializer].getName) // org.apache.spark.serializer.KryoSerializer
+	conf.set("spark.kryo.registrator", classOf[SedonaVizKryoRegistrator].getName) // org.apache.sedona.viz.core.Serde.SedonaVizKryoRegistrator
+	```
 
-### Create a typed SpatialRDD
-Sedona-core provides three special SpatialRDDs: ==PointRDD, PolygonRDD, and LineStringRDD==. They can be loaded from CSV, TSV, WKT, WKB, Shapefiles, GeoJSON and NetCDF/HDF format.
+=== "Java"
 
-#### PointRDD from CSV/TSV
-Suppose we have a `checkin.csv` CSV file at Path `/Download/checkin.csv` as follows:
-```
--88.331492,32.324142,hotel
--88.175933,32.360763,gas
--88.388954,32.357073,bar
--88.221102,32.35078,restaurant
-```
-This file has three columns and corresponding ==offsets==(Column IDs) are 0, 1, 2.
-Use the following code to create a PointRDD
+	```java
+	SparkConf conf = new SparkConf();
+	conf.setAppName("SedonaRunnableExample"); // Change this to a proper name
+	conf.setMaster("local[*]"); // Delete this if run in cluster mode
+	// Enable Sedona custom Kryo serializer
+	conf.set("spark.serializer", KryoSerializer.class.getName()); // org.apache.spark.serializer.KryoSerializer
+	conf.set("spark.kryo.registrator", SedonaKryoRegistrator.class.getName()); // org.apache.sedona.core.serde.SedonaKryoRegistrator
+	SparkContext sc = new SparkContext(conf);
+	```
+	
+	If you use SedonaViz with SedonaRDD, please use the following two lines to enable Sedona Kryo serializer instead:
+	```java
+	conf.set("spark.serializer", KryoSerializer.class.getName()); // org.apache.spark.serializer.KryoSerializer
+	conf.set("spark.kryo.registrator", SedonaVizKryoRegistrator.class.getName()); // org.apache.sedona.viz.core.Serde.SedonaVizKryoRegistrator
+	```
 
-```scala
-val pointRDDInputLocation = "/Download/checkin.csv"
-val pointRDDOffset = 0 // The point long/lat starts from Column 0
-val pointRDDSplitter = FileDataSplitter.CSV
-val carryOtherAttributes = true // Carry Column 2 (hotel, gas, bar...)
-var objectRDD = new PointRDD(sc, pointRDDInputLocation, pointRDDOffset, pointRDDSplitter, carryOtherAttributes)
-```
+=== "Python"
 
-If the data file is in TSV format, just simply use the following line to replace the old FileDataSplitter:
-```scala
-val pointRDDSplitter = FileDataSplitter.TSV
+```python
+from pyspark import SparkConf, SparkContext
+from sedona.utils import KryoSerializer, SedonaKryoRegistrator
+
+conf = SparkConf()
+conf.set("spark.serializer", KryoSerializer.getName)
+conf.set("spark.kryo.registrator", SedonaKryoRegistrator.getName)
+sc = SparkContext(conf=conf)
 ```
 
-#### PolygonRDD/LineStringRDD from CSV/TSV
-In general, polygon and line string data is stored in WKT, WKB, GeoJSON and Shapefile formats instead of CSV/TSV because the geometries in a file may have different lengths. However, if all polygons / line strings in your CSV/TSV possess the same length, you can create PolygonRDD and LineStringRDD from these files.
+!!!warning
+	Sedona has a suite of well-written geometry and index serializers. Forgetting to enable these serializers will lead to high memory consumption.
 
-Suppose we have a `checkinshape.csv` CSV file at Path `/Download/checkinshape.csv` as follows:
-```
--88.331492,32.324142,-88.331492,32.324142,-88.331492,32.324142,-88.331492,32.324142,-88.331492,32.324142,hotel
--88.175933,32.360763,-88.175933,32.360763,-88.175933,32.360763,-88.175933,32.360763,-88.175933,32.360763,gas
--88.388954,32.357073,-88.388954,32.357073,-88.388954,32.357073,-88.388954,32.357073,-88.388954,32.357073,bar
--88.221102,32.35078,-88.221102,32.35078,-88.221102,32.35078,-88.221102,32.35078,-88.221102,32.35078,restaurant
-```
+## Create a SpatialRDD
 
 This file has 11 columns and the corresponding offsets (Column IDs) are 0 - 10. Columns 0 - 9 are 5 coordinates (longitude/latitude pairs). In this file, all geometries have the same number of coordinates. The geometries can be polygons or line strings.
+### Create a typed SpatialRDD
+Sedona-core provides three special SpatialRDDs: PointRDD, PolygonRDD, and LineStringRDD.
 
 !!!warning
-	For polygon data, the last coordinate must be the same as the first coordinate because a polygon is a closed linear ring.
-	
-Use the following code to create a PolygonRDD.
-```scala
-val polygonRDDInputLocation = "/Download/checkinshape.csv"
-val polygonRDDStartOffset = 0 // The coordinates start from Column 0
-val polygonRDDEndOffset = 9 // The coordinates end at Column 9
-val polygonRDDSplitter = FileDataSplitter.CSV
-val carryOtherAttributes = true // Carry Column 10 (hotel, gas, bar...)
-var objectRDD = new PolygonRDD(sc, polygonRDDInputLocation, polygonRDDStartOffset, polygonRDDEndOffset, polygonRDDSplitter, carryOtherAttributes)
-```
-
-If the data file is in TSV format, just simply use the following line to replace the old FileDataSplitter:
-```scala
-val polygonRDDSplitter = FileDataSplitter.TSV
-```
-
-The way to create a LineStringRDD is the same as PolygonRDD.
+	Typed SpatialRDD has been deprecated for a long time. We do NOT recommend it anymore.
 
 ### Create a generic SpatialRDD
 
 A generic SpatialRDD is not typed to a certain geometry type and is open to more scenarios. It allows an input data file to contain mixed types of geometries. For instance, a WKT file may contain three types of geometries: ==LineString==, ==Polygon== and ==MultiPolygon==.
 
 #### From WKT/WKB
-Geometries in a WKT and WKB file always occupy a single column no matter how many coordinates they have. Therefore, creating a typed SpatialRDD is easy.
+
+Geometries in a WKT and WKB file always occupy a single column no matter how many coordinates they have. Sedona provides `WktReader` and `WkbReader` to create a generic SpatialRDD.
 
 Suppose we have a `checkin.tsv` WKT TSV file at Path `/Download/checkin.tsv` as follows:
 ```
@@ -111,13 +90,37 @@ This file has two columns and corresponding ==offsets==(Column IDs) are 0, 1. Co
 
 Use the following code to create a SpatialRDD
 
-```scala
-val inputLocation = "/Download/checkin.tsv"
-val wktColumn = 0 // The WKT string starts from Column 0
-val allowTopologyInvalidGeometries = true // Optional
-val skipSyntaxInvalidGeometries = false // Optional
-val spatialRDD = WktReader.readToGeometryRDD(sparkSession.sparkContext, inputLocation, wktColumn, allowTopologyInvalidGeometries, skipSyntaxInvalidGeometries)
-```
+=== "Scala"
+
+	```scala
+	val inputLocation = "/Download/checkin.tsv"
+	val wktColumn = 0 // The WKT string starts from Column 0
+	val allowTopologyInvalidGeometries = true // Optional
+	val skipSyntaxInvalidGeometries = false // Optional
+	val spatialRDD = WktReader.readToGeometryRDD(sparkSession.sparkContext, inputLocation, wktColumn, allowTopologyInvalidGeometries, skipSyntaxInvalidGeometries)
+	```
+
+=== "Java"
+
+	```java
+	String inputLocation = "/Download/checkin.tsv";
+	int wktColumn = 0; // The WKT string starts from Column 0
+	boolean allowTopologyInvalidGeometries = true; // Optional
+	boolean skipSyntaxInvalidGeometries = false; // Optional
+	SpatialRDD spatialRDD = WktReader.readToGeometryRDD(sparkSession.sparkContext(), inputLocation, wktColumn, allowTopologyInvalidGeometries, skipSyntaxInvalidGeometries);
+	```
+
+=== "Python"
+
+	```python
+	from sedona.core.formatMapper import WktReader
+	from sedona.core.formatMapper import WkbReader
+	
+	WktReader.readToGeometryRDD(sc, wkt_geometries_location, 0, True, False)
+	
+	WkbReader.readToGeometryRDD(sc, wkb_geometries_location, 0, True, False)
+	```
+
 
 #### From GeoJSON
 
@@ -134,25 +137,65 @@ Suppose we have a `polygon.json` GeoJSON file at Path `/Download/polygon.json` a
 ```
 
 Use the following code to create a generic SpatialRDD:
-```scala
-val inputLocation = "/Download/polygon.json"
-val allowTopologyInvalidGeometries = true // Optional
-val skipSyntaxInvalidGeometries = false // Optional
-val spatialRDD = GeoJsonReader.readToGeometryRDD(sparkSession.sparkContext, inputLocation, allowTopologyInvalidGeometries, skipSyntaxInvalidGeometries)
-```
+
+=== "Scala"
+
+	```scala
+	val inputLocation = "/Download/polygon.json"
+	val allowTopologyInvalidGeometries = true // Optional
+	val skipSyntaxInvalidGeometries = false // Optional
+	val spatialRDD = GeoJsonReader.readToGeometryRDD(sparkSession.sparkContext, inputLocation, allowTopologyInvalidGeometries, skipSyntaxInvalidGeometries)
+	```
+
+=== "Java"
+
+	```java
+	String inputLocation = "/Download/polygon.json";
+	boolean allowTopologyInvalidGeometries = true; // Optional
+	boolean skipSyntaxInvalidGeometries = false; // Optional
+	SpatialRDD spatialRDD = GeoJsonReader.readToGeometryRDD(sparkSession.sparkContext(), inputLocation, allowTopologyInvalidGeometries, skipSyntaxInvalidGeometries);
+	```
+	
+=== "Python"
+
+	```python
+	from sedona.core.formatMapper import GeoJsonReader
+	
+	GeoJsonReader.readToGeometryRDD(sc, geo_json_file_location)
+	```
 
 !!!warning
 	The way that Sedona reads JSON file is different from SparkSQL
 	
 #### From Shapefile
 
-```scala
-val shapefileInputLocation="/Download/myshapefile"
-val spatialRDD = ShapefileReader.readToGeometryRDD(sparkSession.sparkContext, shapefileInputLocation)
-```
+=== "Scala"
+
+	```scala
+	val shapefileInputLocation="/Download/myshapefile"
+	val spatialRDD = ShapefileReader.readToGeometryRDD(sparkSession.sparkContext, shapefileInputLocation)
+	```
+
+=== "Java"
+
+	```java
+	String shapefileInputLocation="/Download/myshapefile"
+	SpatialRDD spatialRDD = ShapefileReader.readToGeometryRDD(sparkSession.sparkContext, shapefileInputLocation)
+	```
+
+=== "Python"
+
+	```python
+	from sedona.core.formatMapper.shapefileParser import ShapefileReader
+	
+	ShapefileReader.readToGeometryRDD(sc, shape_file_location)
+	```
+
+
 
 !!!note
 	The file extensions of .shp, .shx, .dbf must be in lowercase. Assume you have a shape file called ==myShapefile==, the file structure should be like this:
+	
 	```
 	- shapefile1
 	- shapefile2
@@ -165,19 +208,24 @@ val spatialRDD = ShapefileReader.readToGeometryRDD(sparkSession.sparkContext, sh
 	```
 
 If the file you are reading contains non-ASCII characters you'll need to explicitly set the encoding
-via `sedona.global.charset` system property before the call to `ShapefileReader.readToGeometryRDD`.
+via the `sedona.global.charset` system property before creating your Spark context.
 
 Example:
 
 ```scala
 System.setProperty("sedona.global.charset", "utf8")
+
+val sc = new SparkContext(...)
 ```
 
-#### From SparkSQL DataFrame
+#### From SedonaSQL DataFrame
 
-To create a generic SpatialRDD from CSV, TSV, WKT, WKB and GeoJSON input formats, you can use SedonaSQL. Make sure you include ==the full dependencies== of Sedona. Read [SedonaSQL API](../../api/sql/Overview).
+!!!note
+	For more details about SedonaSQL, please read the SedonaSQL tutorial.
 
-We use [checkin.csv CSV file](#pointrdd-from-csvtsv) as the example. You can create a generic SpatialRDD using the following steps:
+To create a generic SpatialRDD from CSV, TSV, WKT, WKB and GeoJSON input formats, you can use SedonaSQL.
+
+We use the checkin.csv CSV file as an example. You can create a generic SpatialRDD using the following steps:
 
 1. Load data in SedonaSQL.
 ```scala
@@ -207,48 +255,92 @@ Sedona doesn't control the coordinate unit (degree-based or meter-based) of all
 
 To convert Coordinate Reference System of an SpatialRDD, use the following code:
 
-```scala
-val sourceCrsCode = "epsg:4326" // WGS84, the most common degree-based CRS
-val targetCrsCode = "epsg:3857" // The most common meter-based CRS
-objectRDD.CRSTransform(sourceCrsCode, targetCrsCode, false)
-```
+=== "Scala"
+
+	```scala
+	val sourceCrsCode = "epsg:4326" // WGS84, the most common degree-based CRS
+	val targetCrsCode = "epsg:3857" // The most common meter-based CRS
+	objectRDD.CRSTransform(sourceCrsCode, targetCrsCode, false)
+	```
+
+=== "Java"
+
+	```java
+	String sourceCrsCode = "epsg:4326" // WGS84, the most common degree-based CRS
+	String targetCrsCode = "epsg:3857" // The most common meter-based CRS
+	objectRDD.CRSTransform(sourceCrsCode, targetCrsCode, false)
+	```
+
+=== "Python"
+
+	```python
+	sourceCrsCode = "epsg:4326" // WGS84, the most common degree-based CRS
+	targetCrsCode = "epsg:3857" // The most common meter-based CRS
+	objectRDD.CRSTransform(sourceCrsCode, targetCrsCode, False)
+	```
 
 `false` in CRSTransform(sourceCrsCode, targetCrsCode, false) means that it will not tolerate Datum shift. If you want it to be lenient, use `true` instead.
 
 !!!warning
 	CRS transformation should be done right after creating each SpatialRDD, otherwise it will lead to wrong query results. For instance, use something like this:
+
+
+=== "Scala"
+
 	```scala
-	var objectRDD = new PointRDD(sc, pointRDDInputLocation, pointRDDOffset, pointRDDSplitter, carryOtherAttributes)
+	val objectRDD = WktReader.readToGeometryRDD(sparkSession.sparkContext, inputLocation, wktColumn, allowTopologyInvalidGeometries, skipSyntaxInvalidGeometries)
+	objectRDD.CRSTransform("epsg:4326", "epsg:3857", false)
+	```
+
+=== "Java"
+
+	```java
+	SpatialRDD objectRDD = WktReader.readToGeometryRDD(sparkSession.sparkContext, inputLocation, wktColumn, allowTopologyInvalidGeometries, skipSyntaxInvalidGeometries)
 	objectRDD.CRSTransform("epsg:4326", "epsg:3857", false)
 	```
 
+=== "Python"
+
+	```python
+	objectRDD = WktReader.readToGeometryRDD(sparkSession.sparkContext, inputLocation, wktColumn, allowTopologyInvalidGeometries, skipSyntaxInvalidGeometries)
+	objectRDD.CRSTransform("epsg:4326", "epsg:3857", False)
+	```
+
 Detailed CRS information can be found on [EPSG.io](https://epsg.io/)
 
 ## Read other attributes in an SpatialRDD
 
-Each SpatialRDD can carry non-spatial attributes such as price, age and name as long as the user sets ==carryOtherAttributes== as [TRUE](#create-a-spatialrdd).
+Each SpatialRDD can carry non-spatial attributes such as price, age and name.
 
 The other attributes are combined into a string and stored in the ==UserData== field of each geometry.
 
 To retrieve the UserData field, use the following code:
-```scala
-val rddWithOtherAttributes = objectRDD.rawSpatialRDD.rdd.map[String](f=>f.getUserData.asInstanceOf[String])
-```
+
+=== "Scala"
+
+	```scala
+	val rddWithOtherAttributes = objectRDD.rawSpatialRDD.rdd.map[String](f=>f.getUserData.asInstanceOf[String])
+	```
+
+=== "Java"
+
+	```java
+	SpatialRDD<Geometry> spatialRDD = Adapter.toSpatialRdd(spatialDf, "arealandmark");
+	spatialRDD.rawSpatialRDD.map(obj -> {return obj.getUserData();});
+	```
+
+=== "Python"
+
+	```python
+	rdd_with_other_attributes = object_rdd.rawSpatialRDD.map(lambda x: x.getUserData())
+	```
 
 ## Write a Spatial Range Query
 
 A spatial range query takes as input a range query window and a SpatialRDD and returns all geometries that have the specified relationship with the query window.
 
-
 Assume you now have a SpatialRDD (typed or generic). You can use the following code to issue a Spatial Range Query on it.
 
-```scala
-val rangeQueryWindow = new Envelope(-90.01, -80.01, 30.01, 40.01)
-val spatialPredicate = SpatialPredicate.COVERED_BY // Only return gemeotries fully covered by the window
-val usingIndex = false
-var queryResult = RangeQuery.SpatialRangeQuery(spatialRDD, rangeQueryWindow, spatialPredicate, usingIndex)
-```
-
 ==spatialPredicate== can be set to `SpatialPredicate.INTERSECTS` to return all geometries that intersect with the query window. Supported spatial predicates are:
 
 * `CONTAINS`: geometry is completely inside the query window
@@ -269,41 +361,110 @@ var queryResult = RangeQuery.SpatialRangeQuery(spatialRDD, rangeQueryWindow, spa
 	WHERE ST_Intersects(checkin.location, queryWindow)
 	```
 
+=== "Scala"
+
+	```scala
+	val rangeQueryWindow = new Envelope(-90.01, -80.01, 30.01, 40.01)
+	val spatialPredicate = SpatialPredicate.COVERED_BY // Only return geometries fully covered by the window
+	val usingIndex = false
+	var queryResult = RangeQuery.SpatialRangeQuery(spatialRDD, rangeQueryWindow, spatialPredicate, usingIndex)
+	```
+
+=== "Java"
+
+	```java
+	Envelope rangeQueryWindow = new Envelope(-90.01, -80.01, 30.01, 40.01)
+	SpatialPredicate spatialPredicate = SpatialPredicate.COVERED_BY // Only return geometries fully covered by the window
+	boolean usingIndex = false
+	JavaRDD queryResult = RangeQuery.SpatialRangeQuery(spatialRDD, rangeQueryWindow, spatialPredicate, usingIndex)
+	```
+
+=== "Python"
+
+	```python
+	from sedona.core.geom.envelope import Envelope
+	from sedona.core.spatialOperator import RangeQuery
+	
+	range_query_window = Envelope(-90.01, -80.01, 30.01, 40.01)
+	consider_boundary_intersection = False  ## Only return geometries fully covered by the window
+	using_index = False
+	query_result = RangeQuery.SpatialRangeQuery(spatial_rdd, range_query_window, consider_boundary_intersection, using_index)
+	```
+
+!!!note
    Sedona Python users: Please use RangeQueryRaw from the same module if you want to avoid Python-JVM serialization while converting to a Spatial DataFrame. It takes the same parameters as RangeQuery but returns a reference to the JVM RDD, which can be converted to a DataFrame with Adapter without Python-JVM serialization.
+    
+    Example:
+    ```python
+    from sedona.core.geom.envelope import Envelope
+    from sedona.core.spatialOperator import RangeQueryRaw
+    from sedona.utils.adapter import Adapter
+    
+    range_query_window = Envelope(-90.01, -80.01, 30.01, 40.01)
    consider_boundary_intersection = False  ## Only return geometries fully covered by the window
+    using_index = False
+    query_result = RangeQueryRaw.SpatialRangeQuery(spatial_rdd, range_query_window, consider_boundary_intersection, using_index)
+    gdf = Adapter.toDf(query_result, spark, ["col1", ..., "coln"])
+    ```
+
+
 ### Range query window
 
 Besides the rectangle (Envelope) type range query window, Sedona range query window can be Point/Polygon/LineString.
 
-The code to create a point is as follows:
+The code to create a point, a polygon (4 vertices) and a linestring (4 vertices) is as follows:
 
-```scala
-val geometryFactory = new GeometryFactory()
-val pointObject = geometryFactory.createPoint(new Coordinate(-84.01, 34.01))
-```
+=== "Scala"
 
-The code to create a polygon (with 4 vertexes) is as follows:
+	```scala
+	val geometryFactory = new GeometryFactory()
+	val pointObject = geometryFactory.createPoint(new Coordinate(-84.01, 34.01))
+	
+	// Polygon: the last coordinate is the same as the first coordinate in order to compose a closed ring
+	val polygonCoordinates = new Array[Coordinate](5)
+	polygonCoordinates(0) = new Coordinate(0, 0)
+	polygonCoordinates(1) = new Coordinate(0, 4)
+	polygonCoordinates(2) = new Coordinate(4, 4)
+	polygonCoordinates(3) = new Coordinate(4, 0)
+	polygonCoordinates(4) = polygonCoordinates(0)
+	val polygonObject = geometryFactory.createPolygon(polygonCoordinates)
+	
+	// LineString with 4 vertices
+	val lineCoordinates = new Array[Coordinate](4)
+	lineCoordinates(0) = new Coordinate(0, 0)
+	lineCoordinates(1) = new Coordinate(0, 4)
+	lineCoordinates(2) = new Coordinate(4, 4)
+	lineCoordinates(3) = new Coordinate(4, 0)
+	val linestringObject = geometryFactory.createLineString(lineCoordinates)
+	```
 
-```scala
-val geometryFactory = new GeometryFactory()
-val coordinates = new Array[Coordinate](5)
-coordinates(0) = new Coordinate(0,0)
-coordinates(1) = new Coordinate(0,4)
-coordinates(2) = new Coordinate(4,4)
-coordinates(3) = new Coordinate(4,0)
-coordinates(4) = coordinates(0) // The last coordinate is the same as the first coordinate in order to compose a closed ring
-val polygonObject = geometryFactory.createPolygon(coordinates)
-```
+=== "Java"
 
-The code to create a line string (with 4 vertexes) is as follows:
+	```java
+	GeometryFactory geometryFactory = new GeometryFactory()
+	Point pointObject = geometryFactory.createPoint(new Coordinate(-84.01, 34.01))
+	
+	// Polygon: the last coordinate is the same as the first coordinate in order to compose a closed ring
+	Coordinate[] polygonCoordinates = new Coordinate[5]
+	polygonCoordinates[0] = new Coordinate(0, 0)
+	polygonCoordinates[1] = new Coordinate(0, 4)
+	polygonCoordinates[2] = new Coordinate(4, 4)
+	polygonCoordinates[3] = new Coordinate(4, 0)
+	polygonCoordinates[4] = polygonCoordinates[0]
+	Polygon polygonObject = geometryFactory.createPolygon(polygonCoordinates)
+	
+	// LineString with 4 vertices
+	Coordinate[] lineCoordinates = new Coordinate[4]
+	lineCoordinates[0] = new Coordinate(0, 0)
+	lineCoordinates[1] = new Coordinate(0, 4)
+	lineCoordinates[2] = new Coordinate(4, 4)
+	lineCoordinates[3] = new Coordinate(4, 0)
+	LineString linestringObject = geometryFactory.createLineString(lineCoordinates)
+	```
 
-```scala
-val geometryFactory = new GeometryFactory()
-val coordinates = new Array[Coordinate](4)
-coordinates(0) = new Coordinate(0,0)
-coordinates(1) = new Coordinate(0,4)
-coordinates(2) = new Coordinate(4,4)
-coordinates(3) = new Coordinate(4,0)
-val linestringObject = geometryFactory.createLineString(coordinates)
-```
+=== "Python"
+
+	A Shapely geometry can be used as a query window. To create Shapely geometries, please follow the [Shapely official docs](https://shapely.readthedocs.io/en/stable/manual.html).
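+	
+	For instance, a minimal sketch using Shapely to build query windows (the variable names are illustrative):
+	
+	```python
+	from shapely.geometry import Point, Polygon
+	
+	# Either of these Shapely objects can be passed as the query window
+	point_window = Point(-84.01, 34.01)
+	polygon_window = Polygon([(0, 0), (0, 4), (4, 4), (4, 0), (0, 0)])
+	```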
 
 ### Use spatial indexes
 
@@ -311,23 +472,101 @@ Sedona provides two types of spatial indexes, Quad-Tree and R-Tree. Once you spe
 
 To utilize a spatial index in a spatial range query, use the following code:
 
-```scala
-val rangeQueryWindow = new Envelope(-90.01, -80.01, 30.01, 40.01)
-val spatialPredicate = SpatialPredicate.COVERED_BY // Only return gemeotries fully covered by the window
+=== "Scala"
 
-val buildOnSpatialPartitionedRDD = false // Set to TRUE only if run join query
-spatialRDD.buildIndex(IndexType.QUADTREE, buildOnSpatialPartitionedRDD)
+	```scala
+	val rangeQueryWindow = new Envelope(-90.01, -80.01, 30.01, 40.01)
+	val spatialPredicate = SpatialPredicate.COVERED_BY // Only return geometries fully covered by the window
+	
+	val buildOnSpatialPartitionedRDD = false // Set to TRUE only if run join query
+	spatialRDD.buildIndex(IndexType.QUADTREE, buildOnSpatialPartitionedRDD)
+	
+	val usingIndex = true
+	var queryResult = RangeQuery.SpatialRangeQuery(spatialRDD, rangeQueryWindow, spatialPredicate, usingIndex)
+	```
 
-val usingIndex = true
-var queryResult = RangeQuery.SpatialRangeQuery(spatialRDD, rangeQueryWindow, spatialPredicate, usingIndex)
-```
+=== "Java"
+
+	```java
+	Envelope rangeQueryWindow = new Envelope(-90.01, -80.01, 30.01, 40.01)
+	SpatialPredicate spatialPredicate = SpatialPredicate.COVERED_BY // Only return geometries fully covered by the window
+	
+	boolean buildOnSpatialPartitionedRDD = false // Set to TRUE only if run join query
+	spatialRDD.buildIndex(IndexType.QUADTREE, buildOnSpatialPartitionedRDD)
+	
+	boolean usingIndex = true
+	JavaRDD queryResult = RangeQuery.SpatialRangeQuery(spatialRDD, rangeQueryWindow, spatialPredicate, usingIndex)
+	```
+
+=== "Python"
+
+	```python
+	from sedona.core.geom.envelope import Envelope
+	from sedona.core.enums import IndexType
+	from sedona.core.spatialOperator import RangeQuery
+	
+	range_query_window = Envelope(-90.01, -80.01, 30.01, 40.01)
+	consider_boundary_intersection = False ## Only return geometries fully covered by the window
+	
+	build_on_spatial_partitioned_rdd = False ## Set to TRUE only if run join query
+	spatial_rdd.buildIndex(IndexType.QUADTREE, build_on_spatial_partitioned_rdd)
+	
+	using_index = True
+	
+	query_result = RangeQuery.SpatialRangeQuery(
+	    spatial_rdd,
+	    range_query_window,
+	    consider_boundary_intersection,
+	    using_index
+	)
+	```
 
 !!!tip
 	Using an index might not be the best choice all the time because building an index also takes time. A spatial index is very useful when your data consists of complex polygons and line strings.
 
 ### Output format
 
-The output format of the spatial range query is another SpatialRDD.
+=== "Scala/Java"
+
+	The output format of the spatial range query is another SpatialRDD.
+
+=== "Python"
+
+	The output format of the spatial range query is another RDD which consists of GeoData objects.
+	
+	The SpatialRangeQuery result can be used as an RDD with map or other Spark RDD functions. It can also be 
+	collected into Python objects using the collect method.
+	Example:
+	
+	```python
+	query_result.map(lambda x: x.geom.length).collect()
+	```
+	
+	```
+	[
+	 1.5900840000000045,
+	 1.5906639999999896,
+	 1.1110299999999995,
+	 1.1096700000000084,
+	 1.1415619999999933,
+	 1.1386399999999952,
+	 1.1415619999999933,
+	 1.1418860000000137,
+	 1.1392780000000045,
+	 ...
+	]
+	```
+	
+	Or it can be transformed to a GeoPandas GeoDataFrame:
+	
+	```python
+	import geopandas as gpd
+	gpd.GeoDataFrame(
+	    query_result.map(lambda x: [x.geom, x.userData]).collect(),
+	    columns=["geom", "user_data"],
+	    geometry="geom"
+	)
+	```
 
 ## Write a Spatial KNN Query
 
@@ -335,13 +574,37 @@ A spatial K Nearnest Neighbor query takes as input a K, a query point and an Spa
 
 Assume you now have a SpatialRDD (typed or generic). You can use the following code to issue a Spatial KNN Query on it.
 
-```scala
-val geometryFactory = new GeometryFactory()
-val pointObject = geometryFactory.createPoint(new Coordinate(-84.01, 34.01))
-val K = 1000 // K Nearest Neighbors
-val usingIndex = false
-val result = KNNQuery.SpatialKnnQuery(objectRDD, pointObject, K, usingIndex)
-```
+=== "Scala"
+
+	```scala
+	val geometryFactory = new GeometryFactory()
+	val pointObject = geometryFactory.createPoint(new Coordinate(-84.01, 34.01))
+	val K = 1000 // K Nearest Neighbors
+	val usingIndex = false
+	val result = KNNQuery.SpatialKnnQuery(objectRDD, pointObject, K, usingIndex)
+	```
+
+=== "Java"
+
+	```java
+	GeometryFactory geometryFactory = new GeometryFactory()
+	Point pointObject = geometryFactory.createPoint(new Coordinate(-84.01, 34.01))
+	int K = 1000 // K Nearest Neighbors
+	boolean usingIndex = false
+	JavaRDD result = KNNQuery.SpatialKnnQuery(objectRDD, pointObject, K, usingIndex)
+	```
+
+=== "Python"
+
+	```python
+	from sedona.core.spatialOperator import KNNQuery
+	from shapely.geometry import Point
+	
+	point = Point(-84.01, 34.01)
+	k = 1000 ## K Nearest Neighbors
+	using_index = False
+	result = KNNQuery.SpatialKnnQuery(object_rdd, point, k, using_index)
+	```
 
 !!!note
 	Spatial KNN query that returns 5 Nearest Neighbors is equal to the following statement in Spatial SQL
@@ -356,7 +619,13 @@ val result = KNNQuery.SpatialKnnQuery(objectRDD, pointObject, K, usingIndex)
 
 Besides the Point type, Sedona KNN query center can be Polygon and LineString.
 
-To learn how to create Polygon and LineString object, see [Range query window](#range-query-window).
+=== "Scala/Java"
+
+	To learn how to create Polygon and LineString object, see [Range query window](#range-query-window).
+
+=== "Python"
+
+	To create a Polygon or LineString object, please follow the [Shapely official docs](https://shapely.readthedocs.io/en/stable/manual.html).
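+	
+	For instance, a minimal sketch (the variable name is illustrative):
+	
+	```python
+	from shapely.geometry import LineString
+	
+	# A LineString used as the KNN query center
+	query_center = LineString([(30, 10), (10, 30), (40, 40)])
+	```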
 
 
 
@@ -365,43 +634,126 @@ To learn how to create Polygon and LineString object, see [Range query window](#
 
 To utilize a spatial index in a spatial KNN query, use the following code:
 
-```scala
-val geometryFactory = new GeometryFactory()
-val pointObject = geometryFactory.createPoint(new Coordinate(-84.01, 34.01))
-val K = 1000 // K Nearest Neighbors
+=== "Scala"
 
+	```scala
+	val geometryFactory = new GeometryFactory()
+	val pointObject = geometryFactory.createPoint(new Coordinate(-84.01, 34.01))
+	val K = 1000 // K Nearest Neighbors
+	
+	
+	val buildOnSpatialPartitionedRDD = false // Set to TRUE only if run join query
+	objectRDD.buildIndex(IndexType.RTREE, buildOnSpatialPartitionedRDD)
+	
+	val usingIndex = true
+	val result = KNNQuery.SpatialKnnQuery(objectRDD, pointObject, K, usingIndex)
+	```
 
-val buildOnSpatialPartitionedRDD = false // Set to TRUE only if run join query
-objectRDD.buildIndex(IndexType.RTREE, buildOnSpatialPartitionedRDD)
+=== "Java"
+
+	```java
+	GeometryFactory geometryFactory = new GeometryFactory()
+	Point pointObject = geometryFactory.createPoint(new Coordinate(-84.01, 34.01))
+	int K = 1000 // K Nearest Neighbors
+	
+	
+	boolean buildOnSpatialPartitionedRDD = false // Set to TRUE only if run join query
+	objectRDD.buildIndex(IndexType.RTREE, buildOnSpatialPartitionedRDD)
+	
+	boolean usingIndex = true
+	JavaRDD result = KNNQuery.SpatialKnnQuery(objectRDD, pointObject, K, usingIndex)
+	```
+
+=== "Python"
+
+	```python
+	from sedona.core.spatialOperator import KNNQuery
+	from sedona.core.enums import IndexType
+	from shapely.geometry import Point
+	
+	point = Point(-84.01, 34.01)
+	k = 5 ## K Nearest Neighbors
+	
+	build_on_spatial_partitioned_rdd = False ## Set to TRUE only if run join query
+	spatial_rdd.buildIndex(IndexType.RTREE, build_on_spatial_partitioned_rdd)
+	
+	using_index = True
+	result = KNNQuery.SpatialKnnQuery(spatial_rdd, point, k, using_index)
+	```
 
-val usingIndex = true
-val result = KNNQuery.SpatialKnnQuery(objectRDD, pointObject, K, usingIndex)
-```
 
 !!!warning
 	Only R-Tree index supports Spatial KNN query
 
 ### Output format
 
-The output format of the spatial KNN query is a list of geometries. The list has K geometry objects.
+=== "Scala/Java"
+
+	The output format of the spatial KNN query is a list of geometries. The list has K geometry objects.
+
+=== "Python"
+
+	The output format of the spatial KNN query is a list of GeoData objects. 
+	The list has K GeoData objects.
+	
+	Example:
+	```python
+	>> result
+	
+	[GeoData, GeoData, GeoData, GeoData, GeoData]
+	```
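+	
+	Each GeoData object exposes the geometry and its user data (as in the range query output above), so a minimal sketch to unpack the result might be:
+	
+	```python
+	# Convert the K results into (Shapely geometry, user data) tuples
+	neighbors = [(gd.geom, gd.userData) for gd in result]
+	```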
 
 ## Write a Spatial Join Query
 
+
 A spatial join query takes as input two SpatialRDDs A and B. For each geometry in A, it finds the geometries (from B) covered/intersected by it. A and B can be of any geometry type and are not required to have the same geometry type.
 
 Assume you now have two SpatialRDDs (typed or generic). You can use the following code to issue a Spatial Join Query on them.
 
-```scala
-val spatialPredicate = SpatialPredicate.COVERED_BY // Only return gemeotries fully covered by each query window in queryWindowRDD
-val usingIndex = false
+=== "Scala"
 
-objectRDD.analyze()
+	```scala
+	val spatialPredicate = SpatialPredicate.COVERED_BY // Only return geometries fully covered by each query window in queryWindowRDD
+	val usingIndex = false
+	
+	objectRDD.analyze()
+	
+	objectRDD.spatialPartitioning(GridType.KDBTREE)
+	queryWindowRDD.spatialPartitioning(objectRDD.getPartitioner)
+	
+	val result = JoinQuery.SpatialJoinQuery(objectRDD, queryWindowRDD, usingIndex, spatialPredicate)
+	```
 
-objectRDD.spatialPartitioning(GridType.KDBTREE)
-queryWindowRDD.spatialPartitioning(objectRDD.getPartitioner)
+=== "Java"
 
-val result = JoinQuery.SpatialJoinQuery(objectRDD, queryWindowRDD, usingIndex, spatialPredicate)
-```
+	```java
+	SpatialPredicate spatialPredicate = SpatialPredicate.COVERED_BY // Only return geometries fully covered by each query window in queryWindowRDD
+	boolean usingIndex = false
+	
+	objectRDD.analyze()
+	
+	objectRDD.spatialPartitioning(GridType.KDBTREE)
+	queryWindowRDD.spatialPartitioning(objectRDD.getPartitioner)
+	
+	JavaPairRDD result = JoinQuery.SpatialJoinQuery(objectRDD, queryWindowRDD, usingIndex, spatialPredicate)
+	```
+
+=== "Python"
+
+	```python
+	from sedona.core.enums import GridType
+	from sedona.core.spatialOperator import JoinQuery
+	
+	consider_boundary_intersection = False ## Only return geometries fully covered by each query window in queryWindowRDD
+	using_index = False
+	
+	object_rdd.analyze()
+	
+	object_rdd.spatialPartitioning(GridType.KDBTREE)
+	query_window_rdd.spatialPartitioning(object_rdd.getPartitioner())
+	
+	result = JoinQuery.SpatialJoinQuery(object_rdd, query_window_rdd, using_index, consider_boundary_intersection)
+	```
 
 !!!note
 	Spatial join query is equal to the following query in Spatial SQL:
@@ -418,51 +770,156 @@ Sedona spatial partitioning method can significantly speed up the join query. Th
 
 If you first partition SpatialRDD A, then you must use the partitioner of A to partition B.
 
-```scala
-objectRDD.spatialPartitioning(GridType.KDBTREE)
-queryWindowRDD.spatialPartitioning(objectRDD.getPartitioner)
-```
+=== "Scala/Java"
+
+	```scala
+	objectRDD.spatialPartitioning(GridType.KDBTREE)
+	queryWindowRDD.spatialPartitioning(objectRDD.getPartitioner)
+	```
+
+=== "Python"
+
+	```python
+	object_rdd.spatialPartitioning(GridType.KDBTREE)
+	query_window_rdd.spatialPartitioning(object_rdd.getPartitioner())
+	```
 
 Or 
 
-```scala
-queryWindowRDD.spatialPartitioning(GridType.KDBTREE)
-objectRDD.spatialPartitioning(queryWindowRDD.getPartitioner)
-```
+=== "Scala/Java"
+
+	```scala
+	queryWindowRDD.spatialPartitioning(GridType.KDBTREE)
+	objectRDD.spatialPartitioning(queryWindowRDD.getPartitioner)
+	```
+
+=== "Python"
+
+	```python
+	query_window_rdd.spatialPartitioning(GridType.KDBTREE)
+	object_rdd.spatialPartitioning(query_window_rdd.getPartitioner())
+	```
 
 
 ### Use spatial indexes
 
 To utilize a spatial index in a spatial join query, use the following code:
 
-```scala
-objectRDD.spatialPartitioning(joinQueryPartitioningType)
-queryWindowRDD.spatialPartitioning(objectRDD.getPartitioner)
+=== "Scala"
 
-val buildOnSpatialPartitionedRDD = true // Set to TRUE only if run join query
-val usingIndex = true
-queryWindowRDD.buildIndex(IndexType.QUADTREE, buildOnSpatialPartitionedRDD)
+	```scala
+	objectRDD.spatialPartitioning(joinQueryPartitioningType)
+	queryWindowRDD.spatialPartitioning(objectRDD.getPartitioner)
+	
+	val buildOnSpatialPartitionedRDD = true // Set to TRUE only if run join query
+	val usingIndex = true
+	queryWindowRDD.buildIndex(IndexType.QUADTREE, buildOnSpatialPartitionedRDD)
+	
+	val result = JoinQuery.SpatialJoinQueryFlat(objectRDD, queryWindowRDD, usingIndex, spatialPredicate)
+	```
 
-val result = JoinQuery.SpatialJoinQueryFlat(objectRDD, queryWindowRDD, usingIndex, spatialPredicate)
-```
+=== "Java"
+
+	```java
+	objectRDD.spatialPartitioning(joinQueryPartitioningType)
+	queryWindowRDD.spatialPartitioning(objectRDD.getPartitioner)
+	
+	boolean buildOnSpatialPartitionedRDD = true // Set to TRUE only if run join query
+	boolean usingIndex = true
+	queryWindowRDD.buildIndex(IndexType.QUADTREE, buildOnSpatialPartitionedRDD)
+	
+	JavaPairRDD result = JoinQuery.SpatialJoinQueryFlat(objectRDD, queryWindowRDD, usingIndex, spatialPredicate)
+	```
+
+=== "Python"
+
+	```python
+	from sedona.core.enums import GridType
+	from sedona.core.enums import IndexType
+	from sedona.core.spatialOperator import JoinQuery
+	
+	object_rdd.spatialPartitioning(GridType.KDBTREE)
+	query_window_rdd.spatialPartitioning(object_rdd.getPartitioner())
+	
+	build_on_spatial_partitioned_rdd = True ## Set to TRUE only if run join query
+	using_index = True
+	query_window_rdd.buildIndex(IndexType.QUADTREE, build_on_spatial_partitioned_rdd)
+	
+	result = JoinQuery.SpatialJoinQueryFlat(object_rdd, query_window_rdd, using_index, True)
+	```
 
 The index should be built on either one of two SpatialRDDs. In general, you should build it on the larger SpatialRDD.
 
 ### Output format
 
-The output format of the spatial join query is a PairRDD. In this PairRDD, each object is a pair of two geometries. The left one is the geometry from objectRDD and the right one is the geometry from the queryWindowRDD.
+=== "Scala/Java"
 
-```
-Point,Polygon
-Point,Polygon
-Point,Polygon
-Polygon,Polygon
-LineString,LineString
-Polygon,LineString
-...
-```
+	The output format of the spatial join query is a PairRDD. In this PairRDD, each object is a pair of two geometries. The left one is the geometry from objectRDD and the right one is the geometry from the queryWindowRDD.
+	
+	```
+	Point,Polygon
+	Point,Polygon
+	Point,Polygon
+	Polygon,Polygon
+	LineString,LineString
+	Polygon,LineString
+	...
+	```
+	
+	Each object on the left is covered/intersected by the object on the right.
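+	
+	For instance, a minimal sketch (assuming the flat variant `JoinQuery.SpatialJoinQueryFlat` was used, so each record is a pair of geometries):
+	
+	```scala
+	// Convert each (query window geometry, object geometry) pair into WKT strings
+	val pairsAsWkt = result.rdd.map(pair => (pair._1.toText, pair._2.toText))
+	```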
+
+=== "Python"
+
+	The result of this query is an RDD which holds pairs of GeoData objects within a list of lists.
+	Example:
+	```python
+	result.collect()
+	```
+	
+	```
+	[[GeoData, GeoData], [GeoData, GeoData] ...]
+	```
+	
+	It is possible to perform RDD operations on the result data, e.g., getting the polygon centroid.
+	```python
+	result.map(lambda x: x[0].geom.centroid).collect()
+	```
+
+!!!note
+    Sedona Python users: For better performance while converting to a DataFrame with Adapter, please use JoinQueryRaw from the same module for the following methods:
+    
+    - spatialJoin
+    
+    - DistanceJoinQueryFlat
+
+    - SpatialJoinQueryFlat
+
+    This approach avoids costly serialization between Python and the JVM 
+    by keeping the results as native JVM geometries instead of Python objects.
+    
+    Example:
+    ```python
+    from sedona.core.SpatialRDD import CircleRDD
+    from sedona.core.enums import GridType
+    from sedona.core.spatialOperator import JoinQueryRaw
+    from sedona.utils.adapter import Adapter
+    
+    object_rdd.analyze()
+    
+    circle_rdd = CircleRDD(object_rdd, 0.1) ## Create a CircleRDD using the given distance
+    circle_rdd.analyze()
+    
+    circle_rdd.spatialPartitioning(GridType.KDBTREE)
+    spatial_rdd.spatialPartitioning(circle_rdd.getPartitioner())
+    
+    consider_boundary_intersection = False ## Only return geometries fully covered by each query window in queryWindowRDD
+    using_index = False
+    
+    result = JoinQueryRaw.DistanceJoinQueryFlat(spatial_rdd, circle_rdd, using_index, consider_boundary_intersection)
+    
+    gdf = Adapter.toDf(result, ["left_col1", ..., "left_coln"], ["right_col1", ..., "right_coln"], spark)
+    ```
 
-Each object on the left is covered/intersected by the object on the right.
 
 ## Write a Distance Join Query
 
@@ -473,19 +930,58 @@ A distance join query takes as input two Spatial RDD A and B and a distance. For
 
 Assume you now have two SpatialRDDs (typed or generic). You can use the following code to issue a Distance Join Query on them.
 
-```scala
-objectRddA.analyze()
+=== "Scala"
 
-val circleRDD = new CircleRDD(objectRddA, 0.1) // Create a CircleRDD using the given distance
+	```scala
+	objectRddA.analyze()
+	
+	val circleRDD = new CircleRDD(objectRddA, 0.1) // Create a CircleRDD using the given distance
+	
+	circleRDD.spatialPartitioning(GridType.KDBTREE)
+	objectRddB.spatialPartitioning(circleRDD.getPartitioner)
+	
+	val spatialPredicate = SpatialPredicate.COVERED_BY // Only return geometries fully covered by each query window in queryWindowRDD
+	val usingIndex = false
+	
+	val result = JoinQuery.DistanceJoinQueryFlat(objectRddB, circleRDD, usingIndex, spatialPredicate)
+	```
 
-circleRDD.spatialPartitioning(GridType.KDBTREE)
-objectRddB.spatialPartitioning(circleRDD.getPartitioner)
+=== "Java"
+
+	```java
+	objectRddA.analyze()
+	
+	CircleRDD circleRDD = new CircleRDD(objectRddA, 0.1) // Create a CircleRDD using the given distance
+	
+	circleRDD.spatialPartitioning(GridType.KDBTREE)
+	objectRddB.spatialPartitioning(circleRDD.getPartitioner)
+	
+	SpatialPredicate spatialPredicate = SpatialPredicate.COVERED_BY // Only return geometries fully covered by each query window in queryWindowRDD
+	boolean usingIndex = false
+	
+	JavaPairRDD result = JoinQuery.DistanceJoinQueryFlat(objectRddB, circleRDD, usingIndex, spatialPredicate)
+	```
 
-val spatialPredicate = SpatialPredicate.COVERED_BY // Only return gemeotries fully covered by each query window in queryWindowRDD
-val usingIndex = false
+=== "Python"
 
-val result = JoinQuery.DistanceJoinQueryFlat(objectRddB, circleRDD, usingIndex, spatialPredicate)
-```
+	```python
+	from sedona.core.SpatialRDD import CircleRDD
+	from sedona.core.enums import GridType
+	from sedona.core.spatialOperator import JoinQuery
+	
+	object_rdd.analyze()
+	
+	circle_rdd = CircleRDD(object_rdd, 0.1) ## Create a CircleRDD using the given distance
+	circle_rdd.analyze()
+	
+	circle_rdd.spatialPartitioning(GridType.KDBTREE)
+	spatial_rdd.spatialPartitioning(circle_rdd.getPartitioner())
+	
+	consider_boundary_intersection = False ## Only return geometries fully covered by each query window in queryWindowRDD
+	using_index = False
+	
+	result = JoinQuery.DistanceJoinQueryFlat(spatial_rdd, circle_rdd, using_index, consider_boundary_intersection)
+	```
 
 Distance join can only accept `COVERED_BY` and `INTERSECTS` as spatial predicates. The rest of the join query is the same as the spatial join query.
 
@@ -545,9 +1041,17 @@ objectRDD.saveAsGeoJSON("hdfs://PATH")
 
 Use the following code to save an SpatialRDD as a distributed object file:
 
-```scala
-objectRDD.rawSpatialRDD.saveAsObjectFile("hdfs://PATH")
-```
+=== "Scala/Java"
+
+	```scala
+	objectRDD.rawSpatialRDD.saveAsObjectFile("hdfs://PATH")
+	```
+
+=== "Python"
+
+	```python
+	object_rdd.rawJvmSpatialRDD.saveAsObjectFile("hdfs://PATH")
+	```
 
 !!!note
 	Each object in a distributed object file is a byte array (not human-readable). This byte array is the serialized format of a Geometry or a SpatialIndex.
@@ -574,27 +1078,52 @@ You can easily reload an SpatialRDD that has been saved to ==a distributed objec
 
 #### Load to a typed SpatialRDD
 
-Use the following code to reload the PointRDD/PolygonRDD/LineStringRDD:
+!!!warning
+	Typed SpatialRDD has been deprecated for a long time. We do NOT recommend it anymore.
 
-```scala
-var savedRDD = new PointRDD(sc.objectFile[Point]("hdfs://PATH"))
+#### Load to a generic SpatialRDD
 
-var savedRDD = new PointRDD(sc.objectFile[Polygon]("hdfs://PATH"))
+Use the following code to reload the SpatialRDD:
 
-var savedRDD = new PointRDD(sc.objectFile[LineString]("hdfs://PATH"))
-```
+=== "Scala"
 
-#### Load to a generic SpatialRDD
+	```scala
+	var savedRDD = new SpatialRDD[Geometry]
+	savedRDD.rawSpatialRDD = sc.objectFile[Geometry]("hdfs://PATH")
+	```
 
-Use the following code to reload the SpatialRDD:
+=== "Java"
 
-```scala
-var savedRDD = new SpatialRDD[Geometry]
-savedRDD.rawSpatialRDD = sc.objectFile[Geometry]("hdfs://PATH")
-```
+	```java
+	SpatialRDD<Geometry> savedRDD = new SpatialRDD<Geometry>()
+	savedRDD.rawSpatialRDD = sc.objectFile("hdfs://PATH")
+	```
+
+=== "Python"
+
+	```python
+	saved_rdd = load_spatial_rdd_from_disc(sc, "hdfs://PATH", GeoType.GEOMETRY)
+	```
 
 Use the following code to reload the indexed SpatialRDD:
-```scala
-var savedRDD = new SpatialRDD[Geometry]
-savedRDD.indexedRawRDD = sc.objectFile[SpatialIndex]("hdfs://PATH")
-```
+
+=== "Scala"
+
+	```scala
+	var savedRDD = new SpatialRDD[Geometry]
+	savedRDD.indexedRawRDD = sc.objectFile[SpatialIndex]("hdfs://PATH")
+	```
+
+=== "Java"
+
+	```java
+	SpatialRDD<Geometry> savedRDD = new SpatialRDD<Geometry>()
+	savedRDD.indexedRawRDD = sc.objectFile("hdfs://PATH")
+	```
+
+=== "Python"
+
+	```python
+	saved_rdd = SpatialRDD()
+	saved_rdd.indexedRawRDD = load_spatial_index_rdd_from_disc(sc, "hdfs://PATH")
+	```
\ No newline at end of file
diff --git a/docs/tutorial/sql-python.md b/docs/tutorial/sql-python.md
deleted file mode 100644
index af472571..00000000
--- a/docs/tutorial/sql-python.md
+++ /dev/null
@@ -1,562 +0,0 @@
-# Spatial SQL Application in Python
-
-## Introduction
-
-This package is an extension to Apache Spark SQL package. It allow to use 
-spatial functions on dataframes.
-
-SedonaSQL supports SQL/MM Part3 Spatial SQL Standard. 
-It includes four kinds of SQL operators as follows.
-All these operators can be directly called through:
-
-```python
-spark.sql("YOUR_SQL")
-```
-
-!!!note
-	This tutorial is based on [Sedona SQL Jupyter Notebook example](../jupyter-notebook). You can interact with Sedona Python Jupyter notebook immediately on Binder. Click [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/apache/sedona/HEAD?filepath=binder) and wait for a few minutes. Then select a notebook and enjoy!
-	
-## Installation
-
-Please read [Quick start](../../setup/install-python) to install Sedona Python.
-
-## Register package
-Before writing any code with Sedona please use the following code.
-
-```python
-from sedona.register import SedonaRegistrator
-
-SedonaRegistrator.registerAll(spark)
-```
-
-You can also register functions by passing `--conf spark.sql.extensions=org.apache.sedona.sql.SedonaSqlExtensions` to `spark-submit` or `spark-shell`.
-
-## Writing Application
-
-Use KryoSerializer.getName and SedonaKryoRegistrator.getName class properties to reduce memory impact.
-
-```python
-spark = SparkSession.\
-    builder.\
-    master("local[*]").\
-    appName("Sedona App").\
-    config("spark.serializer", KryoSerializer.getName).\
-    config("spark.kryo.registrator", SedonaKryoRegistrator.getName) .\
-    getOrCreate()
-```
-
-To turn on SedonaSQL function inside pyspark code use SedonaRegistrator.registerAll method on existing pyspark.sql.SparkSession instance ex.
-
-`SedonaRegistrator.registerAll(spark)`
-
-After that all the functions from SedonaSQL are available,
-moreover using collect or toPandas methods on Spark DataFrame 
-returns Shapely BaseGeometry objects. 
-
-Based on GeoPandas DataFrame,
-Pandas DataFrame with shapely objects or Sequence with 
-shapely objects, Spark DataFrame can be created using 
-spark.createDataFrame method. To specify Schema with 
-geometry inside please use `GeometryType()` instance 
-(look at examples section to see that in practice).
-
-
-### Examples
-
-### SedonaSQL
-
-
-All SedonaSQL functions (list depends on SedonaSQL version) are available in Python API.
-For details please refer to API/SedonaSQL page.
-
-For example use SedonaSQL for Spatial Join.
-
-```python3
-
-counties = spark.\
-    read.\
-    option("delimiter", "|").\
-    option("header", "true").\
-    csv("counties.csv")
-
-counties.createOrReplaceTempView("county")
-
-counties_geom = spark.sql(
-      "SELECT county_code, st_geomFromWKT(geom) as geometry from county"
-)
-
-counties_geom.show(5)
-
-```
-```
-+-----------+--------------------+
-|county_code|            geometry|
-+-----------+--------------------+
-|       1815|POLYGON ((21.6942...|
-|       1410|POLYGON ((22.7238...|
-|       1418|POLYGON ((21.1100...|
-|       1425|POLYGON ((20.9891...|
-|       1427|POLYGON ((19.5087...|
-+-----------+--------------------+
-```
-```python3
-import geopandas as gpd
-
-points = gpd.read_file("gis_osm_pois_free_1.shp")
-
-points_geom = spark.createDataFrame(
-    points[["fclass", "geometry"]]
-)
-
-points_geom.show(5, False)
-```
-```
-+---------+-----------------------------+
-|fclass   |geometry                     |
-+---------+-----------------------------+
-|camp_site|POINT (15.3393145 52.3504247)|
-|chalet   |POINT (14.8709625 52.691693) |
-|motel    |POINT (15.0946636 52.3130396)|
-|atm      |POINT (15.0732014 52.3141083)|
-|hotel    |POINT (15.0696777 52.3143013)|
-+---------+-----------------------------+
-```
-
-```python3
-
-points_geom.createOrReplaceTempView("pois")
-counties_geom.createOrReplaceTempView("counties")
-
-spatial_join_result = spark.sql(
-    """
-        SELECT c.county_code, p.fclass
-        FROM pois AS p, counties AS c
-        WHERE ST_Intersects(p.geometry, c.geometry)
-    """
-)
-
-spatial_join_result.explain()
-
-```
-```
-== Physical Plan ==
-*(2) Project [county_code#230, fclass#239]
-+- RangeJoin geometry#240: geometry, geometry#236: geometry, true
-   :- Scan ExistingRDD[fclass#239,geometry#240]
-   +- Project [county_code#230, st_geomfromwkt(geom#232) AS geometry#236]
-      +- *(1) FileScan csv [county_code#230,geom#232] Batched: false, Format: CSV, Location: InMemoryFileIndex[file:/projects/sedona/counties.csv], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<county_code:string,geom:string>
-```
-Calculating Number of Pois within counties per fclass.
-
-```python3
-pois_per_county = spatial_join_result.groupBy("county_code", "fclass"). \
-    count()
-
-pois_per_county.show(5, False)
-
-```
-```
-+-----------+---------+-----+
-|county_code|fclass   |count|
-+-----------+---------+-----+
-|0805       |atm      |6    |
-|0805       |bench    |75   |
-|0803       |museum   |9    |
-|0802       |fast_food|5    |
-|0862       |atm      |20   |
-+-----------+---------+-----+
-```
-
-## Integration with GeoPandas and Shapely
-
-
-sedona has implemented serializers and deserializers which allows to convert Sedona Geometry objects into Shapely BaseGeometry objects. Based on that it is possible to load the data with geopandas from file (look at Fiona possible drivers) and create Spark DataFrame based on GeoDataFrame object.
-
-Example, loading the data from shapefile using geopandas read_file method and create Spark DataFrame based on GeoDataFrame:
-
-```python
-
-import geopandas as gpd
-from pyspark.sql import SparkSession
-
-from sedona.register import SedonaRegistrator
-
-spark = SparkSession.builder.\
-      getOrCreate()
-
-SedonaRegistrator.registerAll(spark)
-
-gdf = gpd.read_file("gis_osm_pois_free_1.shp")
-
-spark.createDataFrame(
-  gdf
-).show()
-
-```
-
-```
-
-+---------+----+-----------+--------------------+--------------------+
-|   osm_id|code|     fclass|                name|            geometry|
-+---------+----+-----------+--------------------+--------------------+
-| 26860257|2422|  camp_site|            de Kroon|POINT (15.3393145...|
-| 26860294|2406|     chalet|      Leśne Ustronie|POINT (14.8709625...|
-| 29947493|2402|      motel|                null|POINT (15.0946636...|
-| 29947498|2602|        atm|                null|POINT (15.0732014...|
-| 29947499|2401|      hotel|                null|POINT (15.0696777...|
-| 29947505|2401|      hotel|                null|POINT (15.0155749...|
-+---------+----+-----------+--------------------+--------------------+
-
-```
-
-Reading data with Spark and converting to GeoPandas
-
-```python
-
-import geopandas as gpd
-from pyspark.sql import SparkSession
-
-from sedona.register import SedonaRegistrator
-
-spark = SparkSession.builder.\
-    getOrCreate()
-
-SedonaRegistrator.registerAll(spark)
-
-counties = spark.\
-    read.\
-    option("delimiter", "|").\
-    option("header", "true").\
-    csv("counties.csv")
-
-counties.createOrReplaceTempView("county")
-
-counties_geom = spark.sql(
-    "SELECT *, st_geomFromWKT(geom) as geometry from county"
-)
-
-df = counties_geom.toPandas()
-gdf = gpd.GeoDataFrame(df, geometry="geometry")
-
-gdf.plot(
-    figsize=(10, 8),
-    column="value",
-    legend=True,
-    cmap='YlOrBr',
-    scheme='quantiles',
-    edgecolor='lightgray'
-)
-
-```
-<br>
-<br>
-
-![poland_image](https://user-images.githubusercontent.com/22958216/67603296-c08b4680-f778-11e9-8cde-d2e14ffbba3b.png)
-
-<br>
-<br>
-
-### DataFrame Style API
-
-Sedona functions can be called used a DataFrame style API similar to PySpark's own functions.
-The functions are spread across four different modules: `sedona.sql.st_constructors`, `sedona.sql.st_functions`, `sedona.sql.st_predicates`, and `sedona.sql.st_aggregates`.
-All of the functions can take columns or strings as arguments and will return a column representing the sedona function call.
-This makes them integratable with `DataFrame.select`, `DataFrame.join`, and all of the PySpark functions found in the `pyspark.sql.functions` module.
-
-As an example of the flexibility:
-
-```python3
-from pyspark.sql import functions as f
-
-from sedona.sql import st_constructors as stc
-
-df = spark.sql("SELECT array(0.0, 1.0, 2.0) AS values")
-
-min_value = f.array_min("values")
-max_value = f.array_max("values")
-
-df = df.select(stc.ST_Point(min_value, max_value).alias("point"))
-```
-
-The above code will generate the following dataframe:
-```
-+-----------+
-|point      |
-+-----------+
-|POINT (0 2)|
-+-----------+
-```
-
-Some functions will take native python values and infer them as literals.
-For example:
-
-```python3
-df = df.select(stc.ST_Point(1.0, 3.0).alias("point"))
-```
-
-This will generate a dataframe with a constant point in a column:
-```
-+-----------+
-|point      |
-+-----------+
-|POINT (1 3)|
-+-----------+
-```
-
-For a description of what values a function may take please refer to their specific docstrings.
-
-The following rules are followed when passing values to the sedona functions:
-1. `Column` type arguments are passed straight through and are always accepted.
-1. `str` type arguments are always assumed to be names of columns and are wrapped in a `Column` to support that.
-If an actual string literal needs to be passed then it will need to be wrapped in a `Column` using `pyspark.sql.functions.lit`.
-1. Any other types of arguments are checked on a per function basis.
-Generally, arguments that could reasonably support a python native type are accepted and passed through.
-Check the specific docstring of the function to be sure.
-1. Shapely `Geometry` objects are not currently accepted in any of the functions.
-
-## Creating Spark DataFrame based on shapely objects
-
-### Supported Shapely objects
-
-| shapely object  | Available          |
-|-----------------|--------------------|
-| Point           | :heavy_check_mark: |
-| MultiPoint      | :heavy_check_mark: |
-| LineString      | :heavy_check_mark: |
-| MultiLinestring | :heavy_check_mark: |
-| Polygon         | :heavy_check_mark: |
-| MultiPolygon    | :heavy_check_mark: |
-
-To create Spark DataFrame based on mentioned Geometry types, please use <b> GeometryType </b> from  <b> sedona.sql.types </b> module. Converting works for list or tuple with shapely objects.
-
-Schema for target table with integer id and geometry type can be defined as follow:
-
-```python
-
-from pyspark.sql.types import IntegerType, StructField, StructType
-
-from sedona.sql.types import GeometryType
-
-schema = StructType(
-    [
-        StructField("id", IntegerType(), False),
-        StructField("geom", GeometryType(), False)
-    ]
-)
-
-```
-
-Also Spark DataFrame with geometry type can be converted to list of shapely objects with <b> collect </b> method.
-
-## Example usage for Shapely objects
-
-### Point
-
-```python
-from shapely.geometry import Point
-
-data = [
-    [1, Point(21.0, 52.0)],
-    [1, Point(23.0, 42.0)],
-    [1, Point(26.0, 32.0)]
-]
-
-
-gdf = spark.createDataFrame(
-    data,
-    schema
-)
-
-gdf.show()
-
-```
-
-```
-+---+-------------+
-| id|         geom|
-+---+-------------+
-|  1|POINT (21 52)|
-|  1|POINT (23 42)|
-|  1|POINT (26 32)|
-+---+-------------+
-```
-
-```python
-gdf.printSchema()
-```
-
-```
-root
- |-- id: integer (nullable = false)
- |-- geom: geometry (nullable = false)
-```
-
-### MultiPoint
-
-```python3
-
-from shapely.geometry import MultiPoint
-
-data = [
-    [1, MultiPoint([[19.511463, 51.765158], [19.446408, 51.779752]])]
-]
-
-gdf = spark.createDataFrame(
-    data,
-    schema
-).show(1, False)
-
-```
-
-```
-
-+---+---------------------------------------------------------+
-|id |geom                                                     |
-+---+---------------------------------------------------------+
-|1  |MULTIPOINT ((19.511463 51.765158), (19.446408 51.779752))|
-+---+---------------------------------------------------------+
-
-
-```
-
-### LineString
-
-```python3
-
-from shapely.geometry import LineString
-
-line = [(40, 40), (30, 30), (40, 20), (30, 10)]
-
-data = [
-    [1, LineString(line)]
-]
-
-gdf = spark.createDataFrame(
-    data,
-    schema
-)
-
-gdf.show(1, False)
-
-```
-
-```
-
-+---+--------------------------------+
-|id |geom                            |
-+---+--------------------------------+
-|1  |LINESTRING (10 10, 20 20, 10 40)|
-+---+--------------------------------+
-
-```
-
-### MultiLineString
-
-```python3
-
-from shapely.geometry import MultiLineString
-
-line1 = [(10, 10), (20, 20), (10, 40)]
-line2 = [(40, 40), (30, 30), (40, 20), (30, 10)]
-
-data = [
-    [1, MultiLineString([line1, line2])]
-]
-
-gdf = spark.createDataFrame(
-    data,
-    schema
-)
-
-gdf.show(1, False)
-
-```
-
-```
-
-+---+---------------------------------------------------------------------+
-|id |geom                                                                 |
-+---+---------------------------------------------------------------------+
-|1  |MULTILINESTRING ((10 10, 20 20, 10 40), (40 40, 30 30, 40 20, 30 10))|
-+---+---------------------------------------------------------------------+
-
-```
-
-### Polygon
-
-```python3
-
-from shapely.geometry import Polygon
-
-polygon = Polygon(
-    [
-         [19.51121, 51.76426],
-         [19.51056, 51.76583],
-         [19.51216, 51.76599],
-         [19.51280, 51.76448],
-         [19.51121, 51.76426]
-    ]
-)
-
-data = [
-    [1, polygon]
-]
-
-gdf = spark.createDataFrame(
-    data,
-    schema
-)
-
-gdf.show(1, False)
-
-```
-
-
-```
-
-+---+--------------------------------------------------------------------------------------------------------+
-|id |geom                                                                                                    |
-+---+--------------------------------------------------------------------------------------------------------+
-|1  |POLYGON ((19.51121 51.76426, 19.51056 51.76583, 19.51216 51.76599, 19.5128 51.76448, 19.51121 51.76426))|
-+---+--------------------------------------------------------------------------------------------------------+
-
-```
-
-### MultiPolygon
-
-```python3
-
-from shapely.geometry import MultiPolygon
-
-exterior_p1 = [(0, 0), (0, 2), (2, 2), (2, 0), (0, 0)]
-interior_p1 = [(1, 1), (1, 1.5), (1.5, 1.5), (1.5, 1), (1, 1)]
-
-exterior_p2 = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
-
-polygons = [
-    Polygon(exterior_p1, [interior_p1]),
-    Polygon(exterior_p2)
-]
-
-data = [
-    [1, MultiPolygon(polygons)]
-]
-
-gdf = spark.createDataFrame(
-    data,
-    schema
-)
-
-gdf.show(1, False)
-
-```
-
-```
-
-+---+----------------------------------------------------------------------------------------------------------+
-|id |geom                                                                                                      |
-+---+----------------------------------------------------------------------------------------------------------+
-|1  |MULTIPOLYGON (((0 0, 0 2, 2 2, 2 0, 0 0), (1 1, 1.5 1, 1.5 1.5, 1 1.5, 1 1)), ((0 0, 0 1, 1 1, 1 0, 0 0)))|
-+---+----------------------------------------------------------------------------------------------------------+
-
-```
diff --git a/docs/tutorial/sql.md b/docs/tutorial/sql.md
index 0219c3e8..6c589cb3 100644
--- a/docs/tutorial/sql.md
+++ b/docs/tutorial/sql.md
@@ -1,50 +1,125 @@
-The page outlines the steps to manage spatial data using SedonaSQL. ==The example code is written in Scala but also works for Java==.
+# Write a Sedona SQL application
+
+This page outlines the steps to manage spatial data using SedonaSQL.
+
 
 SedonaSQL supports SQL/MM Part3 Spatial SQL Standard. It includes four kinds of SQL operators as follows. All these operators can be directly called through:
-```scala
-var myDataFrame = sparkSession.sql("YOUR_SQL")
-```
 
-Detailed SedonaSQL APIs are available here: [SedonaSQL API](../api/sql/Overview.md)
+=== "Scala"
+
+	```scala
+	var myDataFrame = sparkSession.sql("YOUR_SQL")
+	myDataFrame.createOrReplaceTempView("spatialDf")
+	```
+
+=== "Java"
+
+	```java
+	Dataset<Row> myDataFrame = sparkSession.sql("YOUR_SQL")
+	myDataFrame.createOrReplaceTempView("spatialDf")
+	```
+	
+=== "Python"
+
+	```python
+	myDataFrame = sparkSession.sql("YOUR_SQL")
+	myDataFrame.createOrReplaceTempView("spatialDf")
+	```
+
+Detailed SedonaSQL APIs are available here: [SedonaSQL API](../api/sql/Overview.md). You can find example county data (i.e., `county_small.tsv`) in [Sedona GitHub repo](https://github.com/apache/sedona/tree/master/core/src/test/resources).
 
 ## Set up dependencies
 
-1. Read [Sedona Maven Central coordinates](../setup/maven-coordinates.md)
-2. Select ==the minimum dependencies==: Add [Apache Spark core](https://mvnrepository.com/artifact/org.apache.spark/spark-core_2.11), [Apache SparkSQL](https://mvnrepository.com/artifact/org.apache.spark/spark-sql), Sedona-core and Sedona-SQL
-3. Add the dependencies in build.sbt or pom.xml.
+=== "Scala/Java"
 
-!!!note
-	To enjoy the full functions of Sedona, we suggest you include ==the full dependencies==: [Apache Spark core](https://mvnrepository.com/artifact/org.apache.spark/spark-core_2.11), [Apache SparkSQL](https://mvnrepository.com/artifact/org.apache.spark/spark-sql), Sedona-core, Sedona-SQL, Sedona-Viz. Please see [SQL example project](../demo/)
+	1. Read [Sedona Maven Central coordinates](../setup/maven-coordinates.md) and add Sedona dependencies in build.sbt or pom.xml.
+	2. Add [Apache Spark core](https://mvnrepository.com/artifact/org.apache.spark/spark-core_2.11), [Apache SparkSQL](https://mvnrepository.com/artifact/org.apache.spark/spark-sql) in build.sbt or pom.xml.
+	3. Please see [SQL example project](../demo/)
+
+=== "Python"
 
+	1. Please read [Quick start](../../setup/install-python) to install Sedona Python.
+	2. This tutorial is based on [Sedona SQL Jupyter Notebook example](../jupyter-notebook). Click [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/apache/sedona/HEAD?filepath=binder) to interact with the Sedona Python Jupyter notebook immediately on Binder.
 
 ## Initiate SparkSession
 Use the following code to initiate your SparkSession at the beginning:
-```scala
-var sparkSession = SparkSession.builder()
-.master("local[*]") // Delete this if run in cluster mode
-.appName("readTestScala") // Change this to a proper name
-// Enable Sedona custom Kryo serializer
-.config("spark.serializer", classOf[KryoSerializer].getName) // org.apache.spark.serializer.KryoSerializer
-.config("spark.kryo.registrator", classOf[SedonaKryoRegistrator].getName)
-.getOrCreate() // org.apache.sedona.core.serde.SedonaKryoRegistrator
-```
+
+=== "Scala"
+
+	```scala
+	var sparkSession = SparkSession.builder()
+	.master("local[*]") // Delete this if run in cluster mode
+	.appName("readTestScala") // Change this to a proper name
+	// Enable Sedona custom Kryo serializer
+	.config("spark.serializer", classOf[KryoSerializer].getName) // org.apache.spark.serializer.KryoSerializer
+	.config("spark.kryo.registrator", classOf[SedonaKryoRegistrator].getName)
+	.getOrCreate() // org.apache.sedona.core.serde.SedonaKryoRegistrator
+	```
+	If you use SedonaViz together with SedonaSQL, please use the following two lines to enable Sedona Kryo serializer instead:
+	```scala
+	.config("spark.serializer", classOf[KryoSerializer].getName) // org.apache.spark.serializer.KryoSerializer
+	.config("spark.kryo.registrator", classOf[SedonaVizKryoRegistrator].getName) // org.apache.sedona.viz.core.Serde.SedonaVizKryoRegistrator
+	```
+
+=== "Java"
+
+	```java
+	SparkSession sparkSession = SparkSession.builder()
+	.master("local[*]") // Delete this if run in cluster mode
+	.appName("readTestScala") // Change this to a proper name
+	// Enable Sedona custom Kryo serializer
+	.config("spark.serializer", KryoSerializer.class.getName) // org.apache.spark.serializer.KryoSerializer
+	.config("spark.kryo.registrator", SedonaKryoRegistrator.class.getName)
+	.getOrCreate() // org.apache.sedona.core.serde.SedonaKryoRegistrator
+	```
+	If you use SedonaViz together with SedonaSQL, please use the following two lines to enable Sedona Kryo serializer instead:
+	```java
+	.config("spark.serializer", KryoSerializer.class.getName()) // org.apache.spark.serializer.KryoSerializer
+	.config("spark.kryo.registrator", SedonaVizKryoRegistrator.class.getName()) // org.apache.sedona.viz.core.Serde.SedonaVizKryoRegistrator
+	```
+	
+=== "Python"
+
+	```python
+	from pyspark.sql import SparkSession
+	from sedona.utils import KryoSerializer, SedonaKryoRegistrator
+	
+	sparkSession = SparkSession. \
+	    builder. \
+	    appName('appName'). \
+	    config("spark.serializer", KryoSerializer.getName). \
+	    config("spark.kryo.registrator", SedonaKryoRegistrator.getName). \
+	    config('spark.jars.packages',
+	           'org.apache.sedona:sedona-spark-shaded-3.0_2.12:{{ sedona.current_version }},'
+	           'org.datasyslab:geotools-wrapper:{{ sedona.current_geotools }}'). \
+	    getOrCreate()
+	```
 
 !!!warning
-	Sedona has a suite of well-written geometry and index serializers. Forgetting to enable these serializers will lead to high memory consumption.
+	Sedona has a suite of well-written geometry and index serializers. Forgetting to enable these serializers will lead to high memory consumption and slow performance.
+
 
-If you add ==the Sedona full dependencies== as suggested above, please use the following two lines to enable Sedona Kryo serializer instead:
-```scala
-.config("spark.serializer", classOf[KryoSerializer].getName) // org.apache.spark.serializer.KryoSerializer
-.config("spark.kryo.registrator", classOf[SedonaVizKryoRegistrator].getName) // org.apache.sedona.viz.core.Serde.SedonaVizKryoRegistrator
-```
 
 ## Register SedonaSQL
 
 Add the following line after your SparkSession declaration
 
-```scala
-SedonaSQLRegistrator.registerAll(sparkSession)
-```
+=== "Scala"
+
+	```scala
+	SedonaSQLRegistrator.registerAll(sparkSession)
+	```
+
+=== "Java"
+
+	```java
+	SedonaSQLRegistrator.registerAll(sparkSession)
+	```
+	
+=== "Python"
+
+	```python
+	from sedona.register import SedonaRegistrator
+	
+	SedonaRegistrator.registerAll(spark)
+	```
 
 This function will register Sedona User Defined Types, User Defined Functions and the optimized join query strategy.
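 
 For example, once registration succeeds, a simple constructor call such as the following should work (a quick sanity check; the alias is illustrative):
 
 ```sql
 SELECT ST_Point(1.0, 2.0) AS geom
 ```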
 
@@ -64,11 +139,26 @@ The file may have many other columns.
 
 Use the following code to load the data and create a raw DataFrame:
 
-```scala
-var rawDf = sparkSession.read.format("csv").option("delimiter", "\t").option("header", "false").load("/Download/usa-county.tsv")
-rawDf.createOrReplaceTempView("rawdf")
-rawDf.show()
-```
+=== "Scala/Java"
+	```scala
+	var rawDf = sparkSession.read.format("csv").option("delimiter", "\t").option("header", "false").load("/Download/usa-county.tsv")
+	rawDf.createOrReplaceTempView("rawdf")
+	rawDf.show()
+	```
+
+=== "Java"
+	```java
+	Dataset<Row> rawDf = sparkSession.read().format("csv").option("delimiter", "\t").option("header", "false").load("/Download/usa-county.tsv");
+	rawDf.createOrReplaceTempView("rawdf");
+	rawDf.show();
+	```
+
+=== "Python"
+	```python
+	rawDf = sparkSession.read.format("csv").option("delimiter", "\t").option("header", "false").load("/Download/usa-county.tsv")
+	rawDf.createOrReplaceTempView("rawdf")
+	rawDf.show()
+	```
 
 The output will be like this:
 
@@ -85,15 +175,8 @@ The output will be like this:
 
 All geometrical operations in SedonaSQL are on Geometry type objects. Therefore, before any kind of queries, you need to create a Geometry type column on a DataFrame.
 
-
-```scala
-var spatialDf = sparkSession.sql(
-  """
-    |SELECT ST_GeomFromWKT(_c0) AS countyshape, _c1, _c2
-    |FROM rawdf
-  """.stripMargin)
-spatialDf.createOrReplaceTempView("spatialdf")
-spatialDf.show()
+```sql
+SELECT ST_GeomFromWKT(_c0) AS countyshape, _c1, _c2
+FROM rawdf
 ```
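+
+For reference, a minimal Python sketch of running this statement and registering the result as a temporary view (Scala and Java follow the same `sparkSession.sql` pattern; the rest of this tutorial assumes a `spatialdf` view exists):
+
+```python
+spatialDf = sparkSession.sql("SELECT ST_GeomFromWKT(_c0) AS countyshape, _c1, _c2 FROM rawdf")
+spatialDf.createOrReplaceTempView("spatialdf")
+spatialDf.show()
+```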
 
 You can select many other attributes to compose this `spatialDf`. The output will be something like this:
@@ -140,10 +223,27 @@ Shapefile and GeoJSON must be loaded by SpatialRDD and converted to DataFrame us
 
 Since v`1.3.0`, Sedona natively supports loading GeoParquet files. Sedona will infer geometry fields using the "geo" metadata in GeoParquet files.
 
-```scala
-val df = sparkSession.read.format("geoparquet").load(geoparquetdatalocation1)
-df.printSchema()
-```
+=== "Scala/Java"
+
+	```scala
+	val df = sparkSession.read.format("geoparquet").load(geoparquetdatalocation1)
+	df.printSchema()
+	```
+
+=== "Java"
+
+	```java
+	Dataset<Row> df = sparkSession.read.format("geoparquet").load(geoparquetdatalocation1)
+	df.printSchema()
+	```
+
+=== "Python"
+
+	```python
+	df = sparkSession.read.format("geoparquet").load(geoparquetdatalocation1)
+	df.printSchema()
+	```
+
 The output will be as follows:
 
 ```
@@ -164,14 +264,9 @@ Sedona doesn't control the coordinate unit (degree-based or meter-based) of all
 
 To convert Coordinate Reference System of the Geometry column created before, use the following code:
 
-```scala
-spatialDf = sparkSession.sql(
-  """
-    |SELECT ST_Transform(countyshape, "epsg:4326", "epsg:3857") AS newcountyshape, _c1, _c2, _c3, _c4, _c5, _c6, _c7
-    |FROM spatialdf
-  """.stripMargin)
-spatialDf.createOrReplaceTempView("spatialdf")
-spatialDf.show()
+```sql
+SELECT ST_Transform(countyshape, "epsg:4326", "epsg:3857") AS newcountyshape, _c1, _c2, _c3, _c4, _c5, _c6, _c7
+FROM spatialdf
 ```
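+
+As before, run the statement and replace the temporary view so that later examples can refer to the `newcountyshape` column (a minimal Python sketch):
+
+```python
+spatialDf = sparkSession.sql("""
+SELECT ST_Transform(countyshape, "epsg:4326", "epsg:3857") AS newcountyshape, _c1, _c2, _c3, _c4, _c5, _c6, _c7
+FROM spatialdf
+""")
+spatialDf.createOrReplaceTempView("spatialdf")
+spatialDf.show()
+```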
 
 The first EPSG code EPSG:4326 in `ST_Transform` is the source CRS of the geometries. It is WGS84, the most common degree-based CRS.
@@ -204,35 +299,27 @@ Use ==ST_Contains==, ==ST_Intersects==, ==ST_Within== to run a range query over
 
 The following example finds all counties that are within the given polygon:
 
-```scala
-spatialDf = sparkSession.sql(
-  """
-    |SELECT *
-    |FROM spatialdf
-    |WHERE ST_Contains (ST_PolygonFromEnvelope(1.0,100.0,1000.0,1100.0), newcountyshape)
-  """.stripMargin)
-spatialDf.createOrReplaceTempView("spatialdf")
-spatialDf.show()
+```sql
+SELECT *
+FROM spatialdf
+WHERE ST_Contains(ST_PolygonFromEnvelope(1.0,100.0,1000.0,1100.0), newcountyshape)
 ```
 
+
 !!!note
 	Read [SedonaSQL constructor API](../api/sql/Constructor.md) to learn how to create a Geometry type query window
+
 ### KNN query
 
 Use ==ST_Distance== to calculate the distance and rank the distance.
 
 The following code returns the 5 nearest neighbors of the given polygon.
 
-```scala
-spatialDf = sparkSession.sql(
-  """
-    |SELECT countyname, ST_Distance(ST_PolygonFromEnvelope(1.0,100.0,1000.0,1100.0), newcountyshape) AS distance
-    |FROM spatialdf
-    |ORDER BY distance DESC
-    |LIMIT 5
-  """.stripMargin)
-spatialDf.createOrReplaceTempView("spatialdf")
-spatialDf.show()
+```sql
+SELECT countyname, ST_Distance(ST_PolygonFromEnvelope(1.0,100.0,1000.0,1100.0), newcountyshape) AS distance
+FROM spatialdf
+ORDER BY distance ASC
+LIMIT 5
 ```
 
 ### Join query
@@ -249,12 +336,10 @@ To save a Spatial DataFrame to some permanent storage such as Hive tables and HD
 
 
 Use the following code to convert the Geometry column in a DataFrame back to a WKT string column:
-```scala
-var stringDf = sparkSession.sql(
-  """
-    |SELECT ST_AsText(countyshape)
-    |FROM polygondf
-  """.stripMargin)
+
+```sql
+SELECT ST_AsText(countyshape)
+FROM polygondf
 ```
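+
+Once the geometry column has been converted to WKT strings, the DataFrame can be written with any regular Spark writer. A minimal Python sketch, where the output path is only a placeholder:
+
+```python
+stringDf = sparkSession.sql("SELECT ST_AsText(countyshape) FROM polygondf")
+# "/path/to/output" is a placeholder; use any HDFS/S3/local path, or saveAsTable for Hive
+stringDf.write.mode("overwrite").csv("/path/to/output")
+```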
 
 !!!note
@@ -269,15 +354,42 @@ Since v`1.3.0`, Sedona natively supports writing GeoParquet file. GeoParquet can
 df.write.format("geoparquet").save(geoparquetoutputlocation + "/GeoParquet_File_Name.parquet")
 ```
 
+## Sort then Save GeoParquet
+
+To maximize the performance of Sedona GeoParquet filter pushdown, we suggest that you sort the data by their geohash values (see [ST_GeoHash](../../api/sql/Function/#st_geohash)) and then save as a GeoParquet file. An example is as follows:
+
+```sql
+SELECT col1, col2, geom, ST_GeoHash(geom, 5) AS geohash
+FROM spatialDf
+ORDER BY geohash
+```
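+
+Putting the two steps together, a minimal Python sketch (the `spatialDf` view, the column names, and the output path are placeholders for your own data):
+
+```python
+sorted_df = sparkSession.sql("""
+SELECT col1, col2, geom, ST_GeoHash(geom, 5) AS geohash
+FROM spatialDf
+ORDER BY geohash
+""")
+# The output path is a placeholder
+sorted_df.write.format("geoparquet").save("/path/to/sorted_geoparquet")
+```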
+
+
 ## Convert between DataFrame and SpatialRDD
 
 ### DataFrame to SpatialRDD
 
 Use SedonaSQL DataFrame-RDD Adapter to convert a DataFrame to a SpatialRDD. Please read [Adapter Scaladoc](../../api/javadoc/sql/org/apache/sedona/sql/utils/index.html)
 
-```scala
-var spatialRDD = Adapter.toSpatialRdd(spatialDf, "usacounty")
-```
+=== "Scala"
+
+	```scala
+	var spatialRDD = Adapter.toSpatialRdd(spatialDf, "usacounty")
+	```
+	
+=== "Java"
+
+	```java
+	SpatialRDD spatialRDD = Adapter.toSpatialRdd(spatialDf, "usacounty");
+	```
+
+=== "Python"
+
+	```python
+	from sedona.utils.adapter import Adapter
+
+	spatialRDD = Adapter.toSpatialRdd(spatialDf, "usacounty")
+	```
 
 "usacounty" is the name of the geometry column
 
@@ -288,9 +400,25 @@ var spatialRDD = Adapter.toSpatialRdd(spatialDf, "usacounty")
 
 Use SedonaSQL DataFrame-RDD Adapter to convert a SpatialRDD to a DataFrame. Please read [Adapter Scaladoc](../../api/javadoc/sql/org/apache/sedona/sql/utils/index.html)
 
-```scala
-var spatialDf = Adapter.toDf(spatialRDD, sparkSession)
-```
+=== "Scala"
+
+	```scala
+	var spatialDf = Adapter.toDf(spatialRDD, sparkSession)
+	```
+
+=== "Java"
+
+	```java
+	Dataset<Row> spatialDf = Adapter.toDf(spatialRDD, sparkSession);
+	```
+	
+=== "Python"
+
+	```python
+	from sedona.utils.adapter import Adapter
+	
+	spatialDf = Adapter.toDf(spatialRDD, sparkSession)
+	```
 
 All other attributes such as price and age will also be brought to the DataFrame as long as you specify ==carryOtherAttributes== (see [Read other attributes in an SpatialRDD](../rdd#read-other-attributes-in-an-spatialrdd)).
 
@@ -299,30 +427,67 @@ types. Note that string schemas and not all data types are supported&mdash;pleas
 [Adapter Scaladoc](../../api/javadoc/sql/org/apache/sedona/sql/utils/index.html) to confirm what is supported for your use
 case. At least one column for the user data must be provided.
 
-```scala
-val schema = StructType(Array(
-  StructField("county", GeometryUDT, nullable = true),
-  StructField("name", StringType, nullable = true),
-  StructField("price", DoubleType, nullable = true),
-  StructField("age", IntegerType, nullable = true)
-))
-val spatialDf = Adapter.toDf(spatialRDD, schema, sparkSession)
-```
+=== "Scala"
+
+	```scala
+	val schema = StructType(Array(
+	  StructField("county", GeometryUDT, nullable = true),
+	  StructField("name", StringType, nullable = true),
+	  StructField("price", DoubleType, nullable = true),
+	  StructField("age", IntegerType, nullable = true)
+	))
+	val spatialDf = Adapter.toDf(spatialRDD, schema, sparkSession)
+	```
 
 ### SpatialPairRDD to DataFrame
 
 PairRDD is the result of a spatial join query or distance join query. SedonaSQL DataFrame-RDD Adapter can convert the result to a DataFrame, but you need to provide the names of the other attributes.
 
-```scala
-var joinResultDf = Adapter.toDf(joinResultPairRDD, Seq("left_attribute1", "left_attribute2"), Seq("right_attribute1", "right_attribute2"), sparkSession)
-```
+=== "Scala"
+
+	```scala
+	var joinResultDf = Adapter.toDf(joinResultPairRDD, Seq("left_attribute1", "left_attribute2"), Seq("right_attribute1", "right_attribute2"), sparkSession)
+	```
+
+=== "Java"
+
+	```java
+	import scala.collection.JavaConverters;
+	
+	List<String> leftFields = new ArrayList<>(Arrays.asList("c1", "c2", "c3"));
+	List<String> rightFields = new ArrayList<>(Arrays.asList("c4", "c5", "c6"));
+	Dataset<Row> joinResultDf = Adapter.toDf(joinResultPairRDD, JavaConverters.asScalaBuffer(leftFields).toSeq(), JavaConverters.asScalaBuffer(rightFields).toSeq(), sparkSession);
+	```
+
+=== "Python"
+
+	```python
+	from sedona.utils.adapter import Adapter
 
+	joinResultDf = Adapter.toDf(jvm_sedona_rdd, ["poi_from_id", "poi_from_name"], ["poi_to_id", "poi_to_name"], spark)
+	```
 or you can use the attribute names directly from the input RDD
 
-```scala
-import scala.collection.JavaConversions._
-var joinResultDf = Adapter.toDf(joinResultPairRDD, leftRdd.fieldNames, rightRdd.fieldNames, sparkSession)
-```
+=== "Scala"
+
+	```scala
+	import scala.collection.JavaConversions._
+	var joinResultDf = Adapter.toDf(joinResultPairRDD, leftRdd.fieldNames, rightRdd.fieldNames, sparkSession)
+	```
+
+=== "Java"
+
+	```java
+	import scala.collection.JavaConverters;
+	
+	Dataset<Row> joinResultDf = Adapter.toDf(joinResultPairRDD, JavaConverters.asScalaBuffer(leftRdd.fieldNames).toSeq(), JavaConverters.asScalaBuffer(rightRdd.fieldNames).toSeq(), sparkSession);
+	```
+
+=== "Python"
+
+	```python
+	from sedona.utils.adapter import Adapter
+
+	joinResultDf = Adapter.toDf(result_pair_rdd, leftRdd.fieldNames, rightRdd.fieldNames, spark)
+	```
 
 All other attributes such as price and age will also be brought to the DataFrame as long as you specify ==carryOtherAttributes== (see [Read other attributes in an SpatialRDD](../rdd#read-other-attributes-in-an-spatialrdd)).
 
@@ -331,38 +496,438 @@ types. Note that string schemas and not all data types are supported&mdash;pleas
 [Adapter Scaladoc](../../api/javadoc/sql/org/apache/sedona/sql/utils/index.html) to confirm what is supported for your use
 case. Columns for the left and right user data must be provided.
 
-```scala
-val schema = StructType(Array(
-  StructField("leftGeometry", GeometryUDT, nullable = true),
-  StructField("name", StringType, nullable = true),
-  StructField("price", DoubleType, nullable = true),
-  StructField("age", IntegerType, nullable = true),
-  StructField("rightGeometry", GeometryUDT, nullable = true),
-  StructField("category", StringType, nullable = true)
-))
-val joinResultDf = Adapter.toDf(joinResultPairRDD, schema, sparkSession)
-```
+=== "Scala"
+
+	```scala
+	val schema = StructType(Array(
+	  StructField("leftGeometry", GeometryUDT, nullable = true),
+	  StructField("name", StringType, nullable = true),
+	  StructField("price", DoubleType, nullable = true),
+	  StructField("age", IntegerType, nullable = true),
+	  StructField("rightGeometry", GeometryUDT, nullable = true),
+	  StructField("category", StringType, nullable = true)
+	))
+	val joinResultDf = Adapter.toDf(joinResultPairRDD, schema, sparkSession)
+	```
 
 ## DataFrame Style API
 
 Sedona functions can be used in a DataFrame style API similar to Spark functions.
+
 The following objects contain the exposed functions: `org.apache.spark.sql.sedona_sql.expressions.st_functions`, `org.apache.spark.sql.sedona_sql.expressions.st_constructors`, `org.apache.spark.sql.sedona_sql.expressions.st_predicates`, and `org.apache.spark.sql.sedona_sql.expressions.st_aggregates`.
-Every functions can take all `Column` arguments.
-Additionally, overloaded forms can commonly take a mix of `String` and other Scala types (such as `Double`) as arguments.
+
+Every function can take all `Column` arguments. Additionally, overloaded forms can commonly take a mix of `String` and other Scala types (such as `Double`) as arguments.
+
 In general the following rules apply (although check the documentation of specific functions for any exceptions):
-1. Every function returns a `Column` so that it can be used interchangeably with Spark functions as well as `DataFrame` methods such as `DataFrame.select` or `DataFrame.join`.
-1. Every function has a form that takes all `Column` arguments.
-These are the most versatile of the forms.
-1. Most functions have a form that takes a mix of `String` arguments with other Scala types.
+
+=== "Scala"
+	1. Every function returns a `Column` so that it can be used interchangeably with Spark functions as well as `DataFrame` methods such as `DataFrame.select` or `DataFrame.join`.
+	2. Every function has a form that takes all `Column` arguments.
+	These are the most versatile of the forms.
+	3. Most functions have a form that takes a mix of `String` arguments with other Scala types.
+
+=== "Python"
+
+	1. `Column` type arguments are passed straight through and are always accepted.
+	2. `str` type arguments are always assumed to be names of columns and are wrapped in a `Column` to support that.
+	If an actual string literal needs to be passed then it will need to be wrapped in a `Column` using `pyspark.sql.functions.lit`.
+	3. Any other types of arguments are checked on a per function basis.
+	Generally, arguments that could reasonably support a python native type are accepted and passed through.
+	Check the specific docstring of the function to be sure.
+	4. Shapely `Geometry` objects are not currently accepted in any of the functions.
+
 The exact mixture of argument types allowed is function specific.
 However, in these instances, all `String` arguments are assumed to be the names of columns and will be wrapped in a `Column` automatically.
 Non-`String` arguments are assumed to be literals that are passed to the sedona function. If you need to pass a `String` literal then you should use the all `Column` form of the sedona function and wrap the `String` literal in a `Column` with the `lit` Spark function.
 
 A short example of using this API (uses the `array_min` and `array_max` Spark functions):
 
-```scala
-val values_df = spark.sql("SELECT array(0.0, 1.0, 2.0) AS values")
-val min_value = array_min("values")
-val max_value = array_max("values")
-val point_df = values_df.select(ST_Point(min_value, max_value).as("point"))
+=== "Scala"
+
+	```scala
+	val values_df = spark.sql("SELECT array(0.0, 1.0, 2.0) AS values")
+	val min_value = array_min("values")
+	val max_value = array_max("values")
+	val point_df = values_df.select(ST_Point(min_value, max_value).as("point"))
+	```
+
+=== "Python"
+
+	```python3
+	from pyspark.sql import functions as f
+	
+	from sedona.sql import st_constructors as stc
+	
+	df = spark.sql("SELECT array(0.0, 1.0, 2.0) AS values")
+	
+	min_value = f.array_min("values")
+	max_value = f.array_max("values")
+	
+	df = df.select(stc.ST_Point(min_value, max_value).alias("point"))
+	```
+	
+The above code will generate the following dataframe:
+```
++-----------+
+|point      |
++-----------+
+|POINT (0 2)|
++-----------+
+```
+
+Some functions will take native python values and infer them as literals.
+For example:
+
+```python3
+df = df.select(stc.ST_Point(1.0, 3.0).alias("point"))
+```
+
+This will generate a dataframe with a constant point in a column:
+```
++-----------+
+|point      |
++-----------+
+|POINT (1 3)|
++-----------+
+```
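+
+Conversely, to pass an actual string literal (for example a WKT value) rather than a column name, wrap it in `lit`. A minimal sketch, assuming `ST_GeomFromWKT` is available in `sedona.sql.st_constructors`:
+
+```python3
+from pyspark.sql.functions import lit
+
+# Without lit(), the string would be interpreted as a column name
+df = df.select(stc.ST_GeomFromWKT(lit("POINT (1 3)")).alias("point"))
+```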
+
+## Interoperate with GeoPandas
+
+Sedona Python has implemented serializers and deserializers that convert Sedona Geometry objects into Shapely BaseGeometry objects. Based on that, it is possible to load data with GeoPandas from a file (see the drivers supported by Fiona) and create a Spark DataFrame from a GeoDataFrame object.
+
+### From GeoPandas to Sedona DataFrame
+
+Load data from a shapefile using the GeoPandas `read_file` method and create a Spark DataFrame from the resulting GeoDataFrame:
+
+```python
+
+import geopandas as gpd
+from pyspark.sql import SparkSession
+
+from sedona.register import SedonaRegistrator
+
+spark = SparkSession.builder.\
+      getOrCreate()
+
+SedonaRegistrator.registerAll(spark)
+
+gdf = gpd.read_file("gis_osm_pois_free_1.shp")
+
+spark.createDataFrame(
+  gdf
+).show()
+
 ```
+
+The code above shows the following output:
+
+```
+
++---------+----+-----------+--------------------+--------------------+
+|   osm_id|code|     fclass|                name|            geometry|
++---------+----+-----------+--------------------+--------------------+
+| 26860257|2422|  camp_site|            de Kroon|POINT (15.3393145...|
+| 26860294|2406|     chalet|      Leśne Ustronie|POINT (14.8709625...|
+| 29947493|2402|      motel|                null|POINT (15.0946636...|
+| 29947498|2602|        atm|                null|POINT (15.0732014...|
+| 29947499|2401|      hotel|                null|POINT (15.0696777...|
+| 29947505|2401|      hotel|                null|POINT (15.0155749...|
++---------+----+-----------+--------------------+--------------------+
+
+```
+
+### From Sedona DataFrame to GeoPandas
+
+Read data with Spark and convert it to GeoPandas:
+
+```python
+
+import geopandas as gpd
+from pyspark.sql import SparkSession
+
+from sedona.register import SedonaRegistrator
+
+spark = SparkSession.builder.\
+    getOrCreate()
+
+SedonaRegistrator.registerAll(spark)
+
+counties = spark.\
+    read.\
+    option("delimiter", "|").\
+    option("header", "true").\
+    csv("counties.csv")
+
+counties.createOrReplaceTempView("county")
+
+counties_geom = spark.sql(
+    "SELECT *, st_geomFromWKT(geom) as geometry from county"
+)
+
+df = counties_geom.toPandas()
+gdf = gpd.GeoDataFrame(df, geometry="geometry")
+
+gdf.plot(
+    figsize=(10, 8),
+    column="value",
+    legend=True,
+    cmap='YlOrBr',
+    scheme='quantiles',
+    edgecolor='lightgray'
+)
+
+```
+<br>
+<br>
+
+![poland_image](https://user-images.githubusercontent.com/22958216/67603296-c08b4680-f778-11e9-8cde-d2e14ffbba3b.png)
+
+<br>
+<br>
+
+
+## Interoperate with shapely objects
+
+### Supported Shapely objects
+
+| shapely object  | Available          |
+|-----------------|--------------------|
+| Point           | :heavy_check_mark: |
+| MultiPoint      | :heavy_check_mark: |
+| LineString      | :heavy_check_mark: |
+| MultiLineString | :heavy_check_mark: |
+| Polygon         | :heavy_check_mark: |
+| MultiPolygon    | :heavy_check_mark: |
+
+To create a Spark DataFrame from the Shapely geometry types listed above, use <b>GeometryType</b> from the <b>sedona.sql.types</b> module. The conversion works for lists or tuples of Shapely objects.
+
+A schema for the target table, with an integer id and a geometry column, can be defined as follows:
+
+```python
+
+from pyspark.sql.types import IntegerType, StructField, StructType
+
+from sedona.sql.types import GeometryType
+
+schema = StructType(
+    [
+        StructField("id", IntegerType(), False),
+        StructField("geom", GeometryType(), False)
+    ]
+)
+
+```
+
+A Spark DataFrame with a geometry column can also be converted back to a list of Shapely objects with the <b>collect</b> method (see the sketch after the Point example below).
+
+### Point example
+
+```python
+from shapely.geometry import Point
+
+data = [
+    [1, Point(21.0, 52.0)],
+    [1, Point(23.0, 42.0)],
+    [1, Point(26.0, 32.0)]
+]
+
+
+gdf = spark.createDataFrame(
+    data,
+    schema
+)
+
+gdf.show()
+
+```
+
+```
++---+-------------+
+| id|         geom|
++---+-------------+
+|  1|POINT (21 52)|
+|  1|POINT (23 42)|
+|  1|POINT (26 32)|
++---+-------------+
+```
+
+```python
+gdf.printSchema()
+```
+
+```
+root
+ |-- id: integer (nullable = false)
+ |-- geom: geometry (nullable = false)
+```
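+
+Continuing the example above, the geometry column can be collected back into Shapely objects (a minimal sketch):
+
+```python
+# Each collected Row carries a Shapely geometry in the "geom" field
+points = [row.geom for row in gdf.collect()]
+print(type(points[0]))  # expected: a Shapely Point
+```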
+
+### MultiPoint example
+
+```python3
+
+from shapely.geometry import MultiPoint
+
+data = [
+    [1, MultiPoint([[19.511463, 51.765158], [19.446408, 51.779752]])]
+]
+
+gdf = spark.createDataFrame(
+    data,
+    schema
+)
+
+gdf.show(1, False)
+
+```
+
+```
+
++---+---------------------------------------------------------+
+|id |geom                                                     |
++---+---------------------------------------------------------+
+|1  |MULTIPOINT ((19.511463 51.765158), (19.446408 51.779752))|
++---+---------------------------------------------------------+
+
+
+```
+
+### LineString example
+
+```python3
+
+from shapely.geometry import LineString
+
+line = [(10, 10), (20, 20), (10, 40)]
+
+data = [
+    [1, LineString(line)]
+]
+
+gdf = spark.createDataFrame(
+    data,
+    schema
+)
+
+gdf.show(1, False)
+
+```
+
+```
+
++---+--------------------------------+
+|id |geom                            |
++---+--------------------------------+
+|1  |LINESTRING (10 10, 20 20, 10 40)|
++---+--------------------------------+
+
+```
+
+### MultiLineString example
+
+```python3
+
+from shapely.geometry import MultiLineString
+
+line1 = [(10, 10), (20, 20), (10, 40)]
+line2 = [(40, 40), (30, 30), (40, 20), (30, 10)]
+
+data = [
+    [1, MultiLineString([line1, line2])]
+]
+
+gdf = spark.createDataFrame(
+    data,
+    schema
+)
+
+gdf.show(1, False)
+
+```
+
+```
+
++---+---------------------------------------------------------------------+
+|id |geom                                                                 |
++---+---------------------------------------------------------------------+
+|1  |MULTILINESTRING ((10 10, 20 20, 10 40), (40 40, 30 30, 40 20, 30 10))|
++---+---------------------------------------------------------------------+
+
+```
+
+### Polygon example
+
+```python3
+
+from shapely.geometry import Polygon
+
+polygon = Polygon(
+    [
+         [19.51121, 51.76426],
+         [19.51056, 51.76583],
+         [19.51216, 51.76599],
+         [19.51280, 51.76448],
+         [19.51121, 51.76426]
+    ]
+)
+
+data = [
+    [1, polygon]
+]
+
+gdf = spark.createDataFrame(
+    data,
+    schema
+)
+
+gdf.show(1, False)
+
+```
+
+
+```
+
++---+--------------------------------------------------------------------------------------------------------+
+|id |geom                                                                                                    |
++---+--------------------------------------------------------------------------------------------------------+
+|1  |POLYGON ((19.51121 51.76426, 19.51056 51.76583, 19.51216 51.76599, 19.5128 51.76448, 19.51121 51.76426))|
++---+--------------------------------------------------------------------------------------------------------+
+
+```
+
+### MultiPolygon example
+
+```python3
+
+from shapely.geometry import MultiPolygon, Polygon
+
+exterior_p1 = [(0, 0), (0, 2), (2, 2), (2, 0), (0, 0)]
+interior_p1 = [(1, 1), (1, 1.5), (1.5, 1.5), (1.5, 1), (1, 1)]
+
+exterior_p2 = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
+
+polygons = [
+    Polygon(exterior_p1, [interior_p1]),
+    Polygon(exterior_p2)
+]
+
+data = [
+    [1, MultiPolygon(polygons)]
+]
+
+gdf = spark.createDataFrame(
+    data,
+    schema
+)
+
+gdf.show(1, False)
+
+```
+
+```
+
++---+----------------------------------------------------------------------------------------------------------+
+|id |geom                                                                                                      |
++---+----------------------------------------------------------------------------------------------------------+
+|1  |MULTIPOLYGON (((0 0, 0 2, 2 2, 2 0, 0 0), (1 1, 1.5 1, 1.5 1.5, 1 1.5, 1 1)), ((0 0, 0 1, 1 1, 1 0, 0 0)))|
++---+----------------------------------------------------------------------------------------------------------+
+
+```
+
diff --git a/mkdocs.yml b/mkdocs.yml
index 2834d5dc..643cdc9c 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -27,14 +27,12 @@ nav:
     - Programming Guides:
       - Sedona with Apache Spark:
         - Spatial SQL app:
-          - Scala/Java: tutorial/sql.md
+          - Scala/Java/Python: tutorial/sql.md
           - Pure SQL: tutorial/sql-pure-sql.md
-          - Python: tutorial/sql-python.md
           - R: tutorial/sql-r.md
-          - Raster data - Map Algebra: tutorial/raster.md      
+          - Raster data processing: tutorial/raster.md      
         - Spatial RDD app:
-          - Scala/Java: tutorial/rdd.md
-          - Python: tutorial/core-python.md
+          - Scala/Java/Python: tutorial/rdd.md
           - R: tutorial/rdd-r.md
         - Map visualization SQL app:
           - Scala/Java: tutorial/viz.md