Posted to commits@sedona.apache.org by ji...@apache.org on 2023/02/12 22:48:10 UTC

[sedona] branch master updated: [DOCS] Markdown: Standardize code block linguist languages (#763)

This is an automated email from the ASF dual-hosted git repository.

jiayu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/sedona.git


The following commit(s) were added to refs/heads/master by this push:
     new b0b74295 [DOCS] Markdown: Standardize code block linguist languages (#763)
b0b74295 is described below

commit b0b74295a1899c94fdadf2497ea671111fc4f235
Author: John Bampton <jb...@users.noreply.github.com>
AuthorDate: Mon Feb 13 08:48:06 2023 +1000

    [DOCS] Markdown: Standardize code block linguist languages (#763)
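
    For illustration (a minimal sketch, not a hunk taken from this patch): each
    change lowercases the info string on a Markdown code fence so that the
    language identifiers are consistent across the docs. A fence written as

        ```SQL
        SELECT ST_Area(geom) FROM df
        ```

    becomes

        ```sql
        SELECT ST_Area(geom) FROM df
        ```

    The enclosed SQL itself is unchanged.
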
---
 docs/api/flink/Aggregator.md                       |   4 +-
 docs/api/flink/Constructor.md                      |  32 ++--
 docs/api/flink/Function.md                         | 108 +++++++-------
 docs/api/flink/Predicate.md                        |  16 +-
 docs/api/sql/AggregateFunction.md                  |   6 +-
 docs/api/sql/Constructor.md                        |  46 +++---
 docs/api/sql/Function.md                           | 164 ++++++++++-----------
 docs/api/sql/Optimizer.md                          |  16 +-
 docs/api/sql/Overview.md                           |   8 +-
 docs/api/sql/Parameter.md                          |   6 +-
 docs/api/sql/Predicate.md                          |  24 +--
 docs/api/sql/Raster-loader.md                      |  24 +--
 docs/api/sql/Raster-operators.md                   |  44 +++---
 docs/api/viz/sql.md                                |  16 +-
 docs/setup/databricks.md                           |   4 +-
 docs/setup/install-python.md                       |   2 +-
 docs/setup/install-r.md                            |   8 +-
 .../Advanced-Tutorial-Tune-your-Application.md     |   6 +-
 docs/tutorial/core-python.md                       |   2 +-
 docs/tutorial/flink/sql.md                         |  34 ++---
 docs/tutorial/rdd-r.md                             |   4 +-
 docs/tutorial/rdd.md                               |  80 +++++-----
 docs/tutorial/sql-r.md                             |   6 +-
 docs/tutorial/sql.md                               |  40 ++---
 docs/tutorial/viz-r.md                             |   2 +-
 25 files changed, 351 insertions(+), 351 deletions(-)

diff --git a/docs/api/flink/Aggregator.md b/docs/api/flink/Aggregator.md
index e886ded6..8e4e9554 100644
--- a/docs/api/flink/Aggregator.md
+++ b/docs/api/flink/Aggregator.md
@@ -7,7 +7,7 @@ Format: `ST_Envelope_Aggr (A:geometryColumn)`
 Since: `v1.3.0`
 
 SQL example:
-```SQL
+```sql
 SELECT ST_Envelope_Aggr(pointdf.arealandmark)
 FROM pointdf
 ```
@@ -21,7 +21,7 @@ Format: `ST_Union_Aggr (A:geometryColumn)`
 Since: `v1.3.0`
 
 SQL example:
-```SQL
+```sql
 SELECT ST_Union_Aggr(polygondf.polygonshape)
 FROM polygondf
 ```
\ No newline at end of file
diff --git a/docs/api/flink/Constructor.md b/docs/api/flink/Constructor.md
index 22cff1ac..f24313e8 100644
--- a/docs/api/flink/Constructor.md
+++ b/docs/api/flink/Constructor.md
@@ -7,7 +7,7 @@ Format: `ST_GeomFromGeoHash(geohash: string, precision: int)`
 Since: `v1.2.1`
 
 SQL example:
-```SQL
+```sql
 SELECT ST_GeomFromGeoHash('s00twy01mt', 4) AS geom
 ```
 
@@ -20,7 +20,7 @@ Format: `ST_GeomFromGeoJSON (GeoJson:string)`
 Since: `v1.2.0`
 
 SQL example:
-```SQL
+```sql
 SELECT ST_GeomFromGeoJSON(polygontable._c0) AS polygonshape
 FROM polygontable
 ```
@@ -35,7 +35,7 @@ Format:
 Since: `v1.3.0`
 
 SQL example:
-```SQL
+```sql
 SELECT ST_GeomFromGML('<gml:LineString srsName="EPSG:4269"><gml:coordinates>-71.16028,42.258729 -71.160837,42.259112 -71.161143,42.25932</gml:coordinates></gml:LineString>') AS geometry
 ```
 
@@ -49,7 +49,7 @@ Format:
 Since: `v1.3.0`
 
 SQL example:
-```SQL
+```sql
 SELECT ST_GeomFromKML('<LineString><coordinates>-71.1663,42.2614 -71.1667,42.2616</coordinates></LineString>') AS geometry
 ```
 
@@ -63,7 +63,7 @@ Format:
 Since: `v1.2.1`
 
 SQL example:
-```SQL
+```sql
 SELECT ST_GeomFromText('POINT(40.7128 -74.0060)') AS geometry
 ```
 
@@ -78,7 +78,7 @@ Format:
 Since: `v1.2.0`
 
 SQL example:
-```SQL
+```sql
 SELECT ST_GeomFromWKB(polygontable._c0) AS polygonshape
 FROM polygontable
 ```
@@ -89,7 +89,7 @@ Format:
 Since: `v1.2.1`
 
 SQL example:
-```SQL
+```sql
 SELECT ST_GeomFromWKB(polygontable._c0) AS polygonshape
 FROM polygontable
 ```
@@ -104,7 +104,7 @@ Format:
 Since: `v1.2.0`
 
 SQL example:
-```SQL
+```sql
 SELECT ST_GeomFromWKT('POINT(40.7128 -74.0060)') AS geometry
 ```
 
@@ -117,7 +117,7 @@ Format: `ST_LineFromText (Text:string, Delimiter:char)`
 Since: `v1.2.1`
 
 SQL example:
-```SQL
+```sql
 SELECT ST_LineFromText('Linestring(1 2, 3 4)') AS line
 ```
 
@@ -130,7 +130,7 @@ Format: `ST_LineStringFromText (Text:string, Delimiter:char)`
 Since: `v1.2.1`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_LineStringFromText('Linestring(1 2, 3 4)') AS line
 ```
 
@@ -143,7 +143,7 @@ Format: `ST_MLineFromText (Text:string, Srid: int)`
 Since: `1.3.1`
 
 SQL example:
-```SQL
+```sql
 SELECT ST_MLineFromText('MULTILINESTRING((1 2, 3 4), (4 5, 6 7))') AS multiLine
 SELECT ST_MLineFromText('MULTILINESTRING((1 2, 3 4), (4 5, 6 7))', 4269) AS multiLine
 ```
@@ -157,7 +157,7 @@ Format: `ST_MPolyFromText (Text:string, Srid: int)`
 Since: `1.3.1`
 
 SQL example:
-```SQL
+```sql
 SELECT ST_MPolyFromText('MULTIPOLYGON(((-70.916 42.1002,-70.9468 42.0946,-70.9765 42.0872 )))') AS multiPolygon
 SELECT ST_MPolyFromText('MULTIPOLYGON(((-70.916 42.1002,-70.9468 42.0946,-70.9765 42.0872 )))', 4269) AS multiPolygon
 ```
@@ -171,7 +171,7 @@ Format: `ST_Point (X:decimal, Y:decimal)`
 Since: `v1.2.1`
 
 SQL example:
-```SQL
+```sql
 SELECT ST_Point(x, y) AS pointshape
 FROM pointtable
 ```
@@ -185,7 +185,7 @@ Format: `ST_PointFromText (Text:string, Delimiter:char)`
 Since: `v1.2.0`
 
 SQL example:
-```SQL
+```sql
 SELECT ST_PointFromText('40.7128,-74.0060', ',') AS pointshape
 ```
 
@@ -198,7 +198,7 @@ Format: `ST_PolygonFromEnvelope (MinX:decimal, MinY:decimal, MaxX:decimal, MaxY:
 Since: `v1.2.0`
 
 SQL example:
-```SQL
+```sql
 SELECT *
 FROM pointdf
 WHERE ST_Contains(ST_PolygonFromEnvelope(1.0,100.0,1000.0,1100.0), pointdf.pointshape)
@@ -213,6 +213,6 @@ Format: `ST_PolygonFromText (Text:string, Delimiter:char)`
 Since: `v1.2.0`
 
 SQL example:
-```SQL
+```sql
 SELECT ST_PolygonFromText('-74.0428197,40.6867969,-74.0421975,40.6921336,-74.0508020,40.6912794,-74.0428197,40.6867969', ',') AS polygonshape
 ```
diff --git a/docs/api/flink/Function.md b/docs/api/flink/Function.md
index cd656206..239800ee 100644
--- a/docs/api/flink/Function.md
+++ b/docs/api/flink/Function.md
@@ -8,7 +8,7 @@ Since: `v1.3.0`
 
 Example:
 
-```SQL
+```sql
 SELECT ST_3DDistance(polygondf.countyshape, polygondf.countyshape)
 FROM polygondf
 ```
@@ -24,7 +24,7 @@ Format: `ST_AddPoint(geom: geometry, point: geometry)`
 Since: `v1.3.0`
 
 Example:
-```SQL
+```sql
 SELECT ST_AddPoint(ST_GeomFromText("LINESTRING(0 0, 1 1, 1 0)"), ST_GeomFromText("Point(21 52)"), 1)
 
 SELECT ST_AddPoint(ST_GeomFromText("Linestring(0 0, 1 1, 1 0)"), ST_GeomFromText("Point(21 52)"))
@@ -46,7 +46,7 @@ Since: `v1.3.0`
 
 Example:
 
-```SQL
+```sql
 SELECT ST_Area(polygondf.countyshape)
 FROM polygondf
 ```
@@ -61,7 +61,7 @@ Since: `v1.3.0`
 
 Example:
 
-```SQL
+```sql
 SELECT ST_AsBinary(polygondf.countyshape)
 FROM polygondf
 ```
@@ -79,7 +79,7 @@ Since: `v1.3.0`
 
 Example:
 
-```SQL
+```sql
 SELECT ST_AsEWKB(polygondf.countyshape)
 FROM polygondf
 ```
@@ -97,7 +97,7 @@ Format: `ST_AsEWKT (A:geometry)`
 Since: `v1.2.1`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_AsEWKT(polygondf.countyshape)
 FROM polygondf
 ```
@@ -111,7 +111,7 @@ Format: `ST_AsGeoJSON (A:geometry)`
 Since: `v1.3.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_AsGeoJSON(polygondf.countyshape)
 FROM polygondf
 ```
@@ -125,7 +125,7 @@ Format: `ST_AsGML (A:geometry)`
 Since: `v1.3.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_AsGML(polygondf.countyshape)
 FROM polygondf
 ```
@@ -139,7 +139,7 @@ Format: `ST_AsKML (A:geometry)`
 Since: `v1.3.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_AsKML(polygondf.countyshape)
 FROM polygondf
 ```
@@ -154,7 +154,7 @@ Since: `v1.3.0`
 
 Example:
 
-```SQL
+```sql
 SELECT ST_AsText(polygondf.countyshape)
 FROM polygondf
 ```
@@ -169,7 +169,7 @@ Since: `v1.3.0`
 
 Example:
 
-```SQL
+```sql
 SELECT ST_Azimuth(ST_POINT(0.0, 25.0), ST_POINT(0.0, 0.0))
 ```
 
@@ -184,7 +184,7 @@ Format: `ST_Boundary(geom: geometry)`
 Since: `v1.3.0`
 
 Example:
-```SQL
+```sql
 SELECT ST_Boundary(ST_GeomFromText('POLYGON ((1 1, 0 0, -1 1, 1 1))'))
 ```
 
@@ -199,7 +199,7 @@ Format: `ST_Buffer (A:geometry, buffer: Double)`
 Since: `v1.2.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_Buffer(polygondf.countyshape, 1)
 FROM polygondf
 ```
@@ -214,7 +214,7 @@ Since: `v1.2.1`
 
 Example:
 
-```SQL
+```sql
 SELECT ST_BuildArea(ST_Collect(smallDf, bigDf)) AS geom
 FROM smallDf, bigDf
 ```
@@ -235,7 +235,7 @@ Since: `v1.4.0`
 
 Example:
 
-```SQL
+```sql
 SELECT ST_ConcaveHull(polygondf.countyshape, pctConvex)
 FROM polygondf
 ```
@@ -253,7 +253,7 @@ Format: `ST_Distance (A:geometry, B:geometry)`
 Since: `v1.2.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_Distance(polygondf.countyshape, polygondf.countyshape)
 FROM polygondf
 ```
@@ -268,7 +268,7 @@ Since: `v1.3.0`
 
 Example:
 
-```SQL
+```sql
 SELECT ST_Envelope(polygondf.countyshape)
 FROM polygondf
 ```
@@ -283,7 +283,7 @@ Since: `v1.2.1`
 
 Examples:
 
-```SQL
+```sql
 SELECT ST_ExteriorRing(df.geometry)
 FROM df
 ```
@@ -301,7 +301,7 @@ Format: `ST_FlipCoordinates(A:geometry)`
 Since: `v1.2.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_FlipCoordinates(df.geometry)
 FROM df
 ```
@@ -320,7 +320,7 @@ Since: `v1.2.1`
 
 Example:
 
-```SQL
+```sql
 SELECT ST_Force_2D(df.geometry) AS geom
 FROM df
 ```
@@ -341,7 +341,7 @@ Example:
 
 Query:
 
-```SQL
+```sql
 SELECT ST_GeoHash(ST_GeomFromText('POINT(21.427834 52.042576573)'), 5) AS geohash
 ```
 
@@ -364,7 +364,7 @@ Format: `ST_GeometryN(geom: geometry, n: Int)`
 Since: `v1.3.0`
 
 Example:
-```SQL
+```sql
 SELECT ST_GeometryN(ST_GeomFromText('MULTIPOINT((1 2), (3 4), (5 6), (8 9))'), 1)
 ```
 
@@ -379,7 +379,7 @@ Format: `ST_InteriorRingN(geom: geometry, n: Int)`
 Since: `v1.3.0`
 
 Example:
-```SQL
+```sql
 SELECT ST_InteriorRingN(ST_GeomFromText('POLYGON((0 0, 0 5, 5 5, 5 0, 0 0), (1 1, 2 1, 2 2, 1 2, 1 1), (1 3, 2 3, 2 4, 1 4, 1 3), (3 3, 4 3, 4 4, 3 4, 3 3))'), 0)
 ```
 
@@ -395,7 +395,7 @@ Since: `v1.3.0`
 
 Example:
 
-```SQL
+```sql
 SELECT ST_IsClosed(ST_GeomFromText('LINESTRING(0 0, 1 1, 1 0)'))
 ```
 
@@ -409,7 +409,7 @@ Since: `v1.2.1`
 
 Spark SQL example:
 
-```SQL
+```sql
 SELECT ST_IsEmpty(polygondf.countyshape)
 FROM polygondf
 ```
@@ -424,7 +424,7 @@ Since: `v1.3.0`
 
 Example:
 
-```SQL
+```sql
 SELECT ST_IsRing(ST_GeomFromText("LINESTRING(0 0, 0 1, 1 1, 1 0, 0 0)"))
 ```
 
@@ -440,7 +440,7 @@ Since: `v1.3.0`
 
 Example:
 
-```SQL
+```sql
 SELECT ST_IsSimple(polygondf.countyshape)
 FROM polygondf
 ```
@@ -455,7 +455,7 @@ Since: `v1.3.0`
 
 Example:
 
-```SQL
+```sql
 SELECT ST_IsValid(polygondf.countyshape)
 FROM polygondf
 ```
@@ -470,7 +470,7 @@ Since: `v1.3.0`
 
 Example:
 
-```SQL
+```sql
 SELECT ST_Length(polygondf.countyshape)
 FROM polygondf
 ```
@@ -485,7 +485,7 @@ Since: `v1.3.0`
 
 Example:
 
-```SQL
+```sql
 SELECT ST_LineFromMultiPoint(df.geometry) AS geom
 FROM df
 ```
@@ -506,7 +506,7 @@ Since: `v1.3.0`
 
 Example:
 
-```SQL
+```sql
 SELECT ST_AsEWKT(ST_Normalize(ST_GeomFromWKT('POLYGON((0 1, 1 1, 1 0, 0 0, 0 1))'))) AS geom
 ```
 
@@ -529,7 +529,7 @@ Since: `v1.3.0`
 Format: `ST_NPoints (A:geometry)`
 
 Example:
-```SQL
+```sql
 SELECT ST_NPoints(polygondf.countyshape)
 FROM polygondf
 ```
@@ -544,7 +544,7 @@ Since: `v1.3.1`
 
 Spark SQL example with z co-ordinate:
 
-```SQL
+```sql
 SELECT ST_NDims(ST_GeomFromEWKT('POINT(1 1 2)'))
 ```
 
@@ -552,7 +552,7 @@ Output: `3`
 
 Spark SQL example with x,y co-ordinate:
 
-```SQL
+```sql
 SELECT ST_NDims(ST_GeomFromText('POINT(1 1)'))
 ```
 
@@ -567,7 +567,7 @@ Format: `ST_NumGeometries (A:geometry)`
 Since: `v1.3.0`
 
 Example:
-```SQL
+```sql
 SELECT ST_NumGeometries(df.geometry)
 FROM df
 ```
@@ -581,7 +581,7 @@ Format: `ST_NumInteriorRings(geom: geometry)`
 Since: `v1.3.0`
 
 Example:
-```SQL
+```sql
 SELECT ST_NumInteriorRings(ST_GeomFromText('POLYGON ((0 0, 0 5, 5 5, 5 0, 0 0), (1 1, 2 1, 2 2, 1 2, 1 1))'))
 ```
 
@@ -597,7 +597,7 @@ Since: `v1.2.1`
 
 Examples:
 
-```SQL
+```sql
 SELECT ST_PointN(df.geometry, 2)
 FROM df
 ```
@@ -624,7 +624,7 @@ Since: `v1.2.1`
 
 Examples:
 
-```SQL
+```sql
 SELECT ST_PointOnSurface(df.geometry)
 FROM df
 ```
@@ -655,7 +655,7 @@ Since: `v1.2.1`
 
 Example:
 
-```SQL
+```sql
 SELECT ST_Reverse(df.geometry) AS geom
 FROM df
 ```
@@ -675,7 +675,7 @@ Format: `ST_RemovePoint(geom: geometry)`
 Since: `v1.3.0`
 
 Example:
-```SQL
+```sql
 SELECT ST_RemovePoint(ST_GeomFromText("LINESTRING(0 0, 1 1, 1 0)"), 1)
 ```
 
@@ -691,7 +691,7 @@ Since: `v1.3.0`
 
 Example:
 
-```SQL
+```sql
 SELECT ST_SetPoint(ST_GeomFromText('LINESTRING (0 0, 0 1, 1 1)'), 2, ST_GeomFromText('POINT (1 0)')) AS geom
 ```
 
@@ -714,7 +714,7 @@ Format: `ST_SetSRID (A:geometry, srid: integer)`
 Since: `v1.3.0`
 
 Example:
-```SQL
+```sql
 SELECT ST_SetSRID(polygondf.countyshape, 3021)
 FROM polygondf
 ```
@@ -728,7 +728,7 @@ Format: `ST_SRID (A:geometry)`
 Since: `v1.3.0`
 
 Example:
-```SQL
+```sql
 SELECT ST_SRID(polygondf.countyshape)
 FROM polygondf
 ```
@@ -751,13 +751,13 @@ Format: `ST_Transform (A:geometry, SourceCRS:string, TargetCRS:string ,[Optional
 Since: `v1.2.0`
 
 Spark SQL example (simple):
-```SQL
+```sql
 SELECT ST_Transform(polygondf.countyshape, 'epsg:4326','epsg:3857') 
 FROM polygondf
 ```
 
 Spark SQL example (with optional parameters):
-```SQL
+```sql
 SELECT ST_Transform(polygondf.countyshape, 'epsg:4326','epsg:3857', false)
 FROM polygondf
 ```
@@ -774,7 +774,7 @@ Format: `ST_X(pointA: Point)`
 Since: `v1.3.0`
 
 Example:
-```SQL
+```sql
 SELECT ST_X(ST_POINT(0.0 25.0))
 ```
 
@@ -790,7 +790,7 @@ Since: `v1.2.1`
 
 Example:
 
-```SQL
+```sql
 SELECT ST_XMax(df.geometry) AS xmax
 FROM df
 ```
@@ -809,7 +809,7 @@ Since: `v1.2.1`
 
 Example:
 
-```SQL
+```sql
 SELECT ST_XMin(df.geometry) AS xmin
 FROM df
 ```
@@ -827,7 +827,7 @@ Format: `ST_Y(pointA: Point)`
 Since: `v1.3.0`
 
 Example:
-```SQL
+```sql
 SELECT ST_Y(ST_POINT(0.0 25.0))
 ```
 
@@ -842,7 +842,7 @@ Format: `ST_YMax (A:geometry)`
 Since: `v1.2.1`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_YMax(ST_GeomFromText('POLYGON((0 0 1, 1 1 1, 1 2 1, 1 1 1, 0 0 1))'))
 ```
 
@@ -857,7 +857,7 @@ Format: `ST_Y_Min (A:geometry)`
 Since: `v1.2.1`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_YMin(ST_GeomFromText('POLYGON((0 0 1, 1 1 1, 1 2 1, 1 1 1, 0 0 1))'))
 ```
 
@@ -872,7 +872,7 @@ Format: `ST_Z(pointA: Point)`
 Since: `v1.3.0`
 
 Example:
-```SQL
+```sql
 SELECT ST_Z(ST_POINT(0.0 25.0 11.0))
 ```
 
@@ -887,7 +887,7 @@ Format: `ST_ZMax(geom: geometry)`
 Since: `v1.3.1`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_ZMax(ST_GeomFromText('POLYGON((0 0 1, 1 1 1, 1 2 1, 1 1 1, 0 0 1))'))
 ```
 
@@ -902,7 +902,7 @@ Format: `ST_ZMin(geom: geometry)`
 Since: `v1.3.1`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_ZMin(ST_GeomFromText('LINESTRING(1 3 4, 5 6 7)'))
 ```
 
diff --git a/docs/api/flink/Predicate.md b/docs/api/flink/Predicate.md
index 86ce3679..8794ddc6 100644
--- a/docs/api/flink/Predicate.md
+++ b/docs/api/flink/Predicate.md
@@ -7,7 +7,7 @@ Format: `ST_Contains (A:geometry, B:geometry)`
 Since: `v1.2.0`
 
 SQL example:
-```SQL
+```sql
 SELECT * 
 FROM pointdf 
 WHERE ST_Contains(ST_PolygonFromEnvelope(1.0,100.0,1000.0,1100.0), pointdf.arealandmark)
@@ -22,7 +22,7 @@ Format: `ST_Disjoint (A:geometry, B:geometry)`
 Since: `v1.2.1`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT *
 FROM pointdf 
 WHERE ST_Disjoint(ST_PolygonFromEnvelope(1.0,100.0,1000.0,1100.0), pointdf.arealandmark)
@@ -37,7 +37,7 @@ Format: `ST_Intersects (A:geometry, B:geometry)`
 Since: `v1.2.0`
 
 SQL example:
-```SQL
+```sql
 SELECT * 
 FROM pointdf 
 WHERE ST_Intersects(ST_PolygonFromEnvelope(1.0,100.0,1000.0,1100.0), pointdf.arealandmark)
@@ -52,7 +52,7 @@ Format: `ST_Within (A:geometry, B:geometry)`
 Since: `v1.3.0`
 
 SQL example:
-```SQL
+```sql
 SELECT * 
 FROM pointdf 
 WHERE ST_Within(pointdf.arealandmark, ST_PolygonFromEnvelope(1.0,100.0,1000.0,1100.0))
@@ -66,14 +66,14 @@ Format: `ST_OrderingEquals(A: geometry, B: geometry)`
 Since: `v1.2.1`
 
 SQL example 1:
-```SQL
+```sql
 SELECT ST_OrderingEquals(ST_GeomFromWKT('POLYGON((2 0, 0 2, -2 0, 2 0))'), ST_GeomFromWKT('POLYGON((2 0, 0 2, -2 0, 2 0))'))
 ```
 
 Output: `true`
 
 SQL example 2:
-```SQL
+```sql
 SELECT ST_OrderingEquals(ST_GeomFromWKT('POLYGON((2 0, 0 2, -2 0, 2 0))'), ST_GeomFromWKT('POLYGON((0 2, -2 0, 2 0, 0 2))'))
 ```
 
@@ -88,7 +88,7 @@ Format: `ST_Covers (A:geometry, B:geometry)`
 Since: `v1.3.0`
 
 SQL example:
-```SQL
+```sql
 SELECT * 
 FROM pointdf 
 WHERE ST_Covers(ST_PolygonFromEnvelope(1.0,100.0,1000.0,1100.0), pointdf.arealandmark)
@@ -103,7 +103,7 @@ Format: `ST_CoveredBy (A:geometry, B:geometry)`
 Since: `v1.3.0`
 
 SQL example:
-```SQL
+```sql
 SELECT * 
 FROM pointdf 
 WHERE ST_CoveredBy(pointdf.arealandmark, ST_PolygonFromEnvelope(1.0,100.0,1000.0,1100.0))
diff --git a/docs/api/sql/AggregateFunction.md b/docs/api/sql/AggregateFunction.md
index dff767c9..e4e636e4 100644
--- a/docs/api/sql/AggregateFunction.md
+++ b/docs/api/sql/AggregateFunction.md
@@ -7,7 +7,7 @@ Format: `ST_Envelope_Aggr (A:geometryColumn)`
 Since: `v1.0.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_Envelope_Aggr(pointdf.arealandmark)
 FROM pointdf
 ```
@@ -21,7 +21,7 @@ Format: `ST_Intersection_Aggr (A:geometryColumn)`
 Since: `v1.0.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_Intersection_Aggr(polygondf.polygonshape)
 FROM polygondf
 ```
@@ -35,7 +35,7 @@ Format: `ST_Union_Aggr (A:geometryColumn)`
 Since: `v1.0.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_Union_Aggr(polygondf.polygonshape)
 FROM polygondf
 ```
\ No newline at end of file
diff --git a/docs/api/sql/Constructor.md b/docs/api/sql/Constructor.md
index 292c1176..91776b16 100644
--- a/docs/api/sql/Constructor.md
+++ b/docs/api/sql/Constructor.md
@@ -5,7 +5,7 @@ Since: `v1.0.0`
 
 SparkSQL example:
 
-```Scala
+```scala
 var spatialRDD = new SpatialRDD[Geometry]
 spatialRDD.rawSpatialRDD = ShapefileReader.readToGeometryRDD(sparkSession.sparkContext, shapefileInputLocation)
 var rawSpatialDf = Adapter.toDf(spatialRDD,sparkSession)
@@ -39,7 +39,7 @@ via `sedona.global.charset` system property before the call to `ShapefileReader.
 
 Example:
 
-```Scala
+```scala
 System.setProperty("sedona.global.charset", "utf8")
 ```
 
@@ -52,7 +52,7 @@ Format: `ST_GeomFromGeoHash(geohash: string, precision: int)`
 Since: `v1.1.1`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_GeomFromGeoHash('s00twy01mt', 4) AS geom
 ```
 
@@ -75,7 +75,7 @@ Format: `ST_GeomFromGeoJSON (GeoJson:string)`
 Since: `v1.0.0`
 
 Spark SQL example:
-```Scala
+```scala
 var polygonJsonDf = sparkSession.read.format("csv").option("delimiter","\t").option("header","false").load(geoJsonGeomInputLocation)
 polygonJsonDf.createOrReplaceTempView("polygontable")
 polygonJsonDf.show()
@@ -100,7 +100,7 @@ Format:
 Since: `v1.3.0`
 
 SQL example:
-```SQL
+```sql
 SELECT ST_GeomFromGML('<gml:LineString srsName="EPSG:4269"><gml:coordinates>-71.16028,42.258729 -71.160837,42.259112 -71.161143,42.25932</gml:coordinates></gml:LineString>') AS geometry
 ```
 
@@ -114,7 +114,7 @@ Format:
 Since: `v1.3.0`
 
 SQL example:
-```SQL
+```sql
 SELECT ST_GeomFromKML('<LineString><coordinates>-71.1663,42.2614 -71.1667,42.2616</coordinates></LineString>') AS geometry
 ```
 
@@ -131,7 +131,7 @@ Since: `v1.0.0`
 The optional srid parameter was added in `v1.3.1`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_GeomFromText('POINT(40.7128 -74.0060)') AS geometry
 ```
 
@@ -146,7 +146,7 @@ Format:
 Since: `v1.0.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_GeomFromWKB(polygontable._c0) AS polygonshape
 FROM polygontable
 ```
@@ -164,12 +164,12 @@ Since: `v1.0.0`
 The optional srid parameter was added in `v1.3.1`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_GeomFromWKT(polygontable._c0) AS polygonshape
 FROM polygontable
 ```
 
-```SQL
+```sql
 SELECT ST_GeomFromWKT('POINT(40.7128 -74.0060)') AS geometry
 ```
 
@@ -183,12 +183,12 @@ Format:
 Since: `v1.2.1`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_LineFromText(linetable._c0) AS lineshape
 FROM linetable
 ```
 
-```SQL
+```sql
 SELECT ST_LineFromText('Linestring(1 2, 3 4)') AS line
 ```
 
@@ -201,12 +201,12 @@ Format: `ST_LineStringFromText (Text:string, Delimiter:char)`
 Since: `v1.0.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_LineStringFromText(linestringtable._c0,',') AS linestringshape
 FROM linestringtable
 ```
 
-```SQL
+```sql
 SELECT ST_LineStringFromText('-74.0428197,40.6867969,-74.0421975,40.6921336,-74.0508020,40.6912794', ',') AS linestringshape
 ```
 ## ST_MLineFromText
@@ -220,7 +220,7 @@ Format:
 Since: `v1.3.1`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_MLineFromText('MULTILINESTRING((1 2, 3 4), (4 5, 6 7))') AS multiLine;
 SELECT ST_MLineFromText('MULTILINESTRING((1 2, 3 4), (4 5, 6 7))',4269) AS multiLine;
 ```
@@ -236,7 +236,7 @@ Format:
 Since: `v1.3.1`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_MPolyFromText('MULTIPOLYGON(((-70.916 42.1002,-70.9468 42.0946,-70.9765 42.0872 )))') AS multiPolygon
 SELECT ST_MPolyFromText('MULTIPOLYGON(((-70.916 42.1002,-70.9468 42.0946,-70.9765 42.0872 )))',4269) AS multiPolygon
 
@@ -254,7 +254,7 @@ In `v1.4.0` an optional Z parameter was removed to be more consistent with other
 If you are upgrading from an older version of Sedona - please use ST_PointZ to create 3D points.
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_Point(CAST(pointtable._c0 AS Decimal(24,20)), CAST(pointtable._c1 AS Decimal(24,20))) AS pointshape
 FROM pointtable
 ```
@@ -269,7 +269,7 @@ Format: `ST_PointZ (X:decimal, Y:decimal, Z:decimal, srid:integer)`
 Since: `v1.4.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_PointZ(1.0, 2.0, 3.0) AS pointshape
 ```
 
@@ -282,12 +282,12 @@ Format: `ST_PointFromText (Text:string, Delimiter:char)`
 Since: `v1.0.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_PointFromText(pointtable._c0,',') AS pointshape
 FROM pointtable
 ```
 
-```SQL
+```sql
 SELECT ST_PointFromText('40.7128,-74.0060', ',') AS pointshape
 ```
 
@@ -300,7 +300,7 @@ Format: `ST_PolygonFromEnvelope (MinX:decimal, MinY:decimal, MaxX:decimal, MaxY:
 Since: `v1.0.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT *
 FROM pointdf
 WHERE ST_Contains(ST_PolygonFromEnvelope(1.0,100.0,1000.0,1100.0), pointdf.pointshape)
@@ -315,11 +315,11 @@ Format: `ST_PolygonFromText (Text:string, Delimiter:char)`
 Since: `v1.0.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_PolygonFromText(polygontable._c0,',') AS polygonshape
 FROM polygontable
 ```
 
-```SQL
+```sql
 SELECT ST_PolygonFromText('-74.0428197,40.6867969,-74.0421975,40.6921336,-74.0508020,40.6912794,-74.0428197,40.6867969', ',') AS polygonshape
 ```
diff --git a/docs/api/sql/Function.md b/docs/api/sql/Function.md
index 08752278..09297fda 100644
--- a/docs/api/sql/Function.md
+++ b/docs/api/sql/Function.md
@@ -7,7 +7,7 @@ Format: `ST_3DDistance (A:geometry, B:geometry)`
 Since: `v1.2.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_3DDistance(polygondf.countyshape, polygondf.countyshape)
 FROM polygondf
 ```
@@ -23,7 +23,7 @@ Format: `ST_AddPoint(geom: geometry, point: geometry)`
 Since: `v1.0.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_AddPoint(ST_GeomFromText("LINESTRING(0 0, 1 1, 1 0)"), ST_GeomFromText("Point(21 52)"), 1)
 
 SELECT ST_AddPoint(ST_GeomFromText("Linestring(0 0, 1 1, 1 0)"), ST_GeomFromText("Point(21 52)"))
@@ -44,7 +44,7 @@ Format: `ST_Area (A:geometry)`
 Since: `v1.0.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_Area(polygondf.countyshape)
 FROM polygondf
 ```
@@ -58,7 +58,7 @@ Format: `ST_AsBinary (A:geometry)`
 Since: `v1.1.1`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_AsBinary(polygondf.countyshape)
 FROM polygondf
 ```
@@ -76,7 +76,7 @@ Format: `ST_AsEWKB (A:geometry)`
 Since: `v1.1.1`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_AsEWKB(polygondf.countyshape)
 FROM polygondf
 ```
@@ -94,7 +94,7 @@ Format: `ST_AsEWKT (A:geometry)`
 Since: `v1.2.1`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_AsEWKT(polygondf.countyshape)
 FROM polygondf
 ```
@@ -108,7 +108,7 @@ Format: `ST_AsGeoJSON (A:geometry)`
 Since: `v1.0.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_AsGeoJSON(polygondf.countyshape)
 FROM polygondf
 ```
@@ -122,7 +122,7 @@ Format: `ST_AsGML (A:geometry)`
 Since: `v1.3.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_AsGML(polygondf.countyshape)
 FROM polygondf
 ```
@@ -136,7 +136,7 @@ Format: `ST_AsKML (A:geometry)`
 Since: `v1.3.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_AsKML(polygondf.countyshape)
 FROM polygondf
 ```
@@ -150,7 +150,7 @@ Format: `ST_AsText (A:geometry)`
 Since: `v1.0.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_AsText(polygondf.countyshape)
 FROM polygondf
 ```
@@ -164,7 +164,7 @@ Format: `ST_Azimuth(pointA: Point, pointB: Point)`
 Since: `v1.0.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_Azimuth(ST_POINT(0.0, 25.0), ST_POINT(0.0, 0.0))
 ```
 
@@ -179,7 +179,7 @@ Format: `ST_Boundary(geom: geometry)`
 Since: `v1.0.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_Boundary(ST_GeomFromText('POLYGON((1 1,0 0, -1 1, 1 1))'))
 ```
 
@@ -194,7 +194,7 @@ Format: `ST_Buffer (A:geometry, buffer: Double)`
 Since: `v1.0.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_Buffer(polygondf.countyshape, 1)
 FROM polygondf
 ```
@@ -209,7 +209,7 @@ Since: `v1.2.1`
 
 Example:
 
-```SQL
+```sql
 SELECT ST_BuildArea(
     ST_GeomFromText('MULTILINESTRING((0 0, 20 0, 20 20, 0 20, 0 0),(2 2, 18 2, 18 18, 2 18, 2 2))')
 ) AS geom
@@ -235,7 +235,7 @@ Format: `ST_Centroid (A:geometry)`
 Since: `v1.0.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_Centroid(polygondf.countyshape)
 FROM polygondf
 ```
@@ -254,7 +254,7 @@ Since: `v1.2.0`
 
 Example:
 
-```SQL
+```sql
 SELECT ST_Collect(
     ST_GeomFromText('POINT(21.427834 52.042576573)'),
     ST_GeomFromText('POINT(45.342524 56.342354355)')
@@ -273,7 +273,7 @@ Result:
 
 Example:
 
-```SQL
+```sql
 SELECT ST_Collect(
     Array(
         ST_GeomFromText('POINT(21.427834 52.042576573)'),
@@ -311,7 +311,7 @@ Since: `v1.2.1`
 
 Example:
 
-```SQL
+```sql
 WITH test_data as (
     ST_GeomFromText(
         'GEOMETRYCOLLECTION(POINT(40 10), POLYGON((0 0, 0 5, 5 5, 5 0, 0 0)))'
@@ -343,7 +343,7 @@ Format: `ST_ConcaveHull (A:geometry, pctConvex:float, allowHoles:Boolean)`
 Since: `v1.4.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_ConcaveHull(polygondf.countyshape, pctConvex)
 FROM polygondf
 ```
@@ -357,7 +357,7 @@ Format: `ST_ConvexHull (A:geometry)`
 Since: `v1.0.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_ConvexHull(polygondf.countyshape)
 FROM polygondf
 ```
@@ -372,7 +372,7 @@ Since: `v1.2.0`
 
 Example:
 
-```SQL
+```sql
 SELECT ST_Difference(ST_GeomFromWKT('POLYGON ((-3 -3, 3 -3, 3 3, -3 3, -3 -3))'), ST_GeomFromWKT('POLYGON ((0 -4, 4 -4, 4 4, 0 4, 0 -4))'))
 ```
 
@@ -391,7 +391,7 @@ Format: `ST_Distance (A:geometry, B:geometry)`
 Since: `v1.0.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_Distance(polygondf.countyshape, polygondf.countyshape)
 FROM polygondf
 ```
@@ -406,7 +406,7 @@ Format: `ST_Dump(geom: geometry)`
 Since: `v1.0.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_Dump(ST_GeomFromText('MULTIPOINT ((10 40), (40 30), (20 20), (30 10))'))
 ```
 
@@ -421,7 +421,7 @@ Format: `ST_DumpPoints(geom: geometry)`
 Since: `v1.0.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_DumpPoints(ST_GeomFromText('LINESTRING (0 0, 1 1, 1 0)'))
 ```
 
@@ -436,7 +436,7 @@ Format: `ST_EndPoint(geom: geometry)`
 Since: `v1.0.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_EndPoint(ST_GeomFromText('LINESTRING(100 150,50 60, 70 80, 160 170)'))
 ```
 
@@ -452,7 +452,7 @@ Since: `v1.0.0`
 
 Spark SQL example:
 
-```SQL
+```sql
 SELECT ST_Envelope(polygondf.countyshape)
 FROM polygondf
 ```
@@ -466,7 +466,7 @@ Format: `ST_ExteriorRing(geom: geometry)`
 Since: `v1.0.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_ExteriorRing(ST_GeomFromText('POLYGON((0 0 1, 1 1 1, 1 2 1, 1 1 1, 0 0 1))'))
 ```
 
@@ -481,7 +481,7 @@ Format: `ST_FlipCoordinates(A:geometry)`
 Since: `v1.0.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_FlipCoordinates(df.geometry)
 FROM df
 ```
@@ -500,7 +500,7 @@ Since: `v1.2.1`
 
 Example:
 
-```SQL
+```sql
 SELECT ST_AsText(
     ST_Force_2D(ST_GeomFromText('POLYGON((0 0 2,0 5 2,5 0 2,0 0 2),(1 1 2,3 1 2,1 3 2,1 1 2))'))
 ) AS geom
@@ -528,7 +528,7 @@ Example:
 
 Query:
 
-```SQL
+```sql
 SELECT ST_GeoHash(ST_GeomFromText('POINT(21.427834 52.042576573)'), 5) AS geohash
 ```
 
@@ -551,7 +551,7 @@ Format: `ST_GeometryN(geom: geometry, n: Int)`
 Since: `v1.0.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_GeometryN(ST_GeomFromText('MULTIPOINT((1 2), (3 4), (5 6), (8 9))'), 1)
 ```
 
@@ -566,7 +566,7 @@ Format: `ST_GeometryType (A:geometry)`
 Since: `v1.0.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_GeometryType(polygondf.countyshape)
 FROM polygondf
 ```
@@ -580,7 +580,7 @@ Format: `ST_InteriorRingN(geom: geometry, n: Int)`
 Since: `v1.0.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_InteriorRingN(ST_GeomFromText('POLYGON((0 0, 0 5, 5 5, 5 0, 0 0), (1 1, 2 1, 2 2, 1 2, 1 1), (1 3, 2 3, 2 4, 1 4, 1 3), (3 3, 4 3, 4 4, 3 4, 3 3))'), 0)
 ```
 
@@ -596,7 +596,7 @@ Since: `v1.0.0`
 
 Spark SQL example:
 
-```SQL
+```sql
 SELECT ST_Intersection(polygondf.countyshape, polygondf.countyshape)
 FROM polygondf
 ```
@@ -610,7 +610,7 @@ Format: `ST_IsClosed(geom: geometry)`
 Since: `v1.0.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_IsClosed(ST_GeomFromText('LINESTRING(0 0, 1 1, 1 0)'))
 ```
 
@@ -626,7 +626,7 @@ Since: `v1.2.1`
 
 Spark SQL example:
 
-```SQL
+```sql
 SELECT ST_IsEmpty(polygondf.countyshape)
 FROM polygondf
 ```
@@ -640,7 +640,7 @@ Format: `ST_IsRing(geom: geometry)`
 Since: `v1.0.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_IsRing(ST_GeomFromText("LINESTRING(0 0, 0 1, 1 1, 1 0, 0 0)"))
 ```
 
@@ -656,7 +656,7 @@ Since: `v1.0.0`
 
 Spark SQL example:
 
-```SQL
+```sql
 SELECT ST_IsSimple(polygondf.countyshape)
 FROM polygondf
 ```
@@ -671,7 +671,7 @@ Since: `v1.0.0`
 
 Spark SQL example:
 
-```SQL
+```sql
 SELECT ST_IsValid(polygondf.countyshape)
 FROM polygondf
 ```
@@ -685,7 +685,7 @@ Format: ST_Length (A:geometry)
 Since: `v1.0.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_Length(polygondf.countyshape)
 FROM polygondf
 ```
@@ -700,7 +700,7 @@ Since: `v1.3.0`
 
 Example:
 
-```SQL
+```sql
 SELECT ST_AsText(
     ST_LineFromMultiPoint(ST_GeomFromText('MULTIPOINT((10 40), (40 30), (20 20), (30 10))'))
 ) AS geom
@@ -725,7 +725,7 @@ Format: `ST_LineInterpolatePoint (geom: geometry, fraction: Double)`
 Since: `v1.0.1`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_LineInterpolatePoint(ST_GeomFromWKT('LINESTRING(25 50, 100 125, 150 190)'), 0.2) as Interpolated
 ```
 
@@ -749,7 +749,7 @@ Format: `ST_LineMerge (A:geometry)`
 
 Since: `v1.0.0`
 
-```SQL
+```sql
 SELECT ST_LineMerge(geometry)
 FROM df
 ```
@@ -763,7 +763,7 @@ Format: `ST_LineSubstring (geom: geometry, startfraction: Double, endfraction: D
 Since: `v1.0.1`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_LineSubstring(ST_GeomFromWKT('LINESTRING(25 50, 100 125, 150 190)'), 0.333, 0.666) as Substring
 ```
 
@@ -787,7 +787,7 @@ Since: `v1.1.0`
 Example:
 
 Query:
-```SQL
+```sql
 SELECT
     ST_MakePolygon(
         ST_GeomFromText('LINESTRING(7 -1, 7 6, 9 6, 9 1, 7 -1)'),
@@ -821,7 +821,7 @@ Since: `v1.0.0`
 
 Spark SQL example:
 
-```SQL
+```sql
 WITH linestring AS (
     SELECT ST_GeomFromWKT('LINESTRING(1 1, 1 1)') AS geom
 ) SELECT ST_MakeValid(geom), ST_MakeValid(geom, true) FROM linestring
@@ -850,7 +850,7 @@ Format: `ST_MinimumBoundingCircle(geom: geometry, [Optional] quadrantSegments:in
 Since: `v1.0.1`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_MinimumBoundingCircle(ST_GeomFromText('POLYGON((1 1,0 0, -1 1, 1 1))'))
 ```
 
@@ -863,7 +863,7 @@ Format: `ST_MinimumBoundingRadius(geom: geometry)`
 Since: `v1.0.1`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_MinimumBoundingRadius(ST_GeomFromText('POLYGON((1 1,0 0, -1 1, 1 1))'))
 ```
 
@@ -880,7 +880,7 @@ Since: `v1.2.0`
 
 Example:
 
-```SQL
+```sql
 SELECT ST_Multi(
     ST_GeomFromText('POINT(1 1)')
 ) AS geom
@@ -905,7 +905,7 @@ Since: `v1.3.1`
 
 Spark SQL example with z co-ordinate:
 
-```SQL
+```sql
 SELECT ST_NDims(ST_GeomFromEWKT('POINT(1 1 2)'))
 ```
 
@@ -913,7 +913,7 @@ Output: `3`
 
 Spark SQL example with x,y co-ordinate:
 
-```SQL
+```sql
 SELECT ST_NDims(ST_GeomFromText('POINT(1 1)'))
 ```
 
@@ -931,7 +931,7 @@ Since: `v1.3.0`
 
 Example:
 
-```SQL
+```sql
 SELECT ST_AsEWKT(ST_Normalize(ST_GeomFromWKT('POLYGON((0 1, 1 1, 1 0, 0 0, 0 1))'))) AS geom
 ```
 
@@ -953,7 +953,7 @@ Since: `v1.0.0`
 
 Format: `ST_NPoints (A:geometry)`
 
-```SQL
+```sql
 SELECT ST_NPoints(polygondf.countyshape)
 FROM polygondf
 ```
@@ -966,7 +966,7 @@ Format: `ST_NumGeometries (A:geometry)`
 
 Since: `v1.0.0`
 
-```SQL
+```sql
 SELECT ST_NumGeometries(df.geometry)
 FROM df
 ```
@@ -980,7 +980,7 @@ Format: `ST_NumInteriorRings(geom: geometry)`
 Since: `v1.0.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_NumInteriorRings(ST_GeomFromText('POLYGON ((0 0, 0 5, 5 5, 5 0, 0 0), (1 1, 2 1, 2 2, 1 2, 1 1))'))
 ```
 
@@ -995,7 +995,7 @@ Format: `ST_PointN(geom: geometry, n: integer)`
 Since: `v1.2.1`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_PointN(ST_GeomFromText("LINESTRING(0 0, 1 2, 2 4, 3 6)"), 2) AS geom
 ```
 
@@ -1052,7 +1052,7 @@ Since: `v1.0.0`
 
 Spark SQL example:
 
-```SQL
+```sql
 SELECT ST_PrecisionReduce(polygondf.countyshape, 9)
 FROM polygondf
 ```
@@ -1069,7 +1069,7 @@ Format: `ST_RemovePoint(geom: geometry)`
 Since: `v1.0.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_RemovePoint(ST_GeomFromText("LINESTRING(0 0, 1 1, 1 0)"), 1)
 ```
 
@@ -1085,7 +1085,7 @@ Since: `v1.2.1`
 
 Example:
 
-```SQL
+```sql
 SELECT ST_AsText(
     ST_Reverse(ST_GeomFromText('LINESTRING(0 0, 1 2, 2 4, 3 6)'))
 ) AS geom
@@ -1111,7 +1111,7 @@ Since: `v1.3.0`
 
 Example:
 
-```SQL
+```sql
 SELECT ST_SetPoint(ST_GeomFromText('LINESTRING (0 0, 0 1, 1 1)'), 2, ST_GeomFromText('POINT (1 0)')) AS geom
 ```
 
@@ -1134,7 +1134,7 @@ Format: `ST_SetSRID (A:geometry, srid: Integer)`
 Since: `v1.1.1`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_SetSRID(polygondf.countyshape, 3021)
 FROM polygondf
 ```
@@ -1148,7 +1148,7 @@ Since: `v1.0.0`
 
 Format: `ST_SimplifyPreserveTopology (A:geometry, distanceTolerance: Double)`
 
-```SQL
+```sql
 SELECT ST_SimplifyPreserveTopology(polygondf.countyshape, 10.0)
 FROM polygondf
 ```
@@ -1168,7 +1168,7 @@ Since: `v1.4.0`
 Format: `ST_Split (input: geometry, blade: geometry)`
 
 Spark SQL Example:
-```SQL
+```sql
 SELECT ST_Split(
     ST_GeomFromWKT('LINESTRING (0 0, 1.5 1.5, 2 2)'),
     ST_GeomFromWKT('MULTIPOINT (0.5 0.5, 1 1)'))
@@ -1185,7 +1185,7 @@ Format: `ST_SRID (A:geometry)`
 Since: `v1.1.1`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_SRID(polygondf.countyshape)
 FROM polygondf
 ```
@@ -1199,7 +1199,7 @@ Format: `ST_StartPoint(geom: geometry)`
 Since: `v1.0.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_StartPoint(ST_GeomFromText('LINESTRING(100 150,50 60, 70 80, 160 170)'))
 ```
 
@@ -1214,7 +1214,7 @@ Format: `ST_SubDivide(geom: geometry, maxVertices: int)`
 Since: `v1.1.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_SubDivide(ST_GeomFromText("POLYGON((35 10, 45 45, 15 40, 10 20, 35 10), (20 30, 35 35, 30 20, 20 30))"), 5)
 
 ```
@@ -1241,7 +1241,7 @@ Output:
 
 Spark SQL example:
 
-```SQL
+```sql
 SELECT ST_SubDivide(ST_GeomFromText("LINESTRING(0 0, 85 85, 100 100, 120 120, 21 21, 10 10, 5 5)"), 5)
 ```
 
@@ -1269,7 +1269,7 @@ Since: `v1.1.0`
 Example:
 
 Query:
-```SQL
+```sql
 SELECT ST_SubDivideExplode(ST_GeomFromText("LINESTRING(0 0, 85 85, 100 100, 120 120, 21 21, 10 10, 5 5)"), 5)
 ```
 
@@ -1301,7 +1301,7 @@ Table:
 ```
 
 Query
-```SQL
+```sql
 select geom from geometries LATERAL VIEW ST_SubdivideExplode(geometry, 5) AS geom
 ```
 
@@ -1331,7 +1331,7 @@ Since: `v1.2.0`
 
 Example:
 
-```SQL
+```sql
 SELECT ST_SymDifference(ST_GeomFromWKT('POLYGON ((-3 -3, 3 -3, 3 3, -3 3, -3 -3))'), ST_GeomFromWKT('POLYGON ((-2 -3, 4 -3, 4 3, -2 3, -2 -3))'))
 ```
 
@@ -1359,13 +1359,13 @@ Format: `ST_Transform (A:geometry, SourceCRS:string, TargetCRS:string ,[Optional
 Since: `v1.0.0`
 
 Spark SQL example (simple):
-```SQL
+```sql
 SELECT ST_Transform(polygondf.countyshape, 'epsg:4326','epsg:3857')
 FROM polygondf
 ```
 
 Spark SQL example (with optional parameters):
-```SQL
+```sql
 SELECT ST_Transform(polygondf.countyshape, 'epsg:4326','epsg:3857', false)
 FROM polygondf
 ```
@@ -1384,7 +1384,7 @@ Since: `v1.2.0`
 
 Example:
 
-```SQL
+```sql
 SELECT ST_Union(ST_GeomFromWKT('POLYGON ((-3 -3, 3 -3, 3 3, -3 3, -3 -3))'), ST_GeomFromWKT('POLYGON ((1 -2, 5 0, 1 2, 1 -2))'))
 ```
 
@@ -1403,7 +1403,7 @@ Format: `ST_X(pointA: Point)`
 Since: `v1.0.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_X(ST_POINT(0.0 25.0))
 ```
 
@@ -1419,7 +1419,7 @@ Since: `v1.2.1`
 
 Example:
 
-```SQL
+```sql
 SELECT ST_XMax(df.geometry) AS xmax
 FROM df
 ```
@@ -1438,7 +1438,7 @@ Since: `v1.2.1`
 
 Example:
 
-```SQL
+```sql
 SELECT ST_XMin(df.geometry) AS xmin
 FROM df
 ```
@@ -1456,7 +1456,7 @@ Format: `ST_Y(pointA: Point)`
 Since: `v1.0.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_Y(ST_POINT(0.0 25.0))
 ```
 
@@ -1471,7 +1471,7 @@ Format: `ST_YMax (A:geometry)`
 Since: `v1.2.1`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_YMax(ST_GeomFromText('POLYGON((0 0 1, 1 1 1, 1 2 1, 1 1 1, 0 0 1))'))
 ```
 
@@ -1486,7 +1486,7 @@ Format: `ST_Y_Min (A:geometry)`
 Since: `v1.2.1`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_YMin(ST_GeomFromText('POLYGON((0 0 1, 1 1 1, 1 2 1, 1 1 1, 0 0 1))'))
 ```
 
@@ -1501,7 +1501,7 @@ Format: `ST_Z(pointA: Point)`
 Since: `v1.2.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_Z(ST_POINT(0.0 25.0 11.0))
 ```
 
@@ -1516,7 +1516,7 @@ Format: `ST_ZMax(geom: geometry)`
 Since: `v1.3.1`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_ZMax(ST_GeomFromText('POLYGON((0 0 1, 1 1 1, 1 2 1, 1 1 1, 0 0 1))'))
 ```
 
@@ -1531,7 +1531,7 @@ Format: `ST_ZMin(geom: geometry)`
 Since: `v1.3.1`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_ZMin(ST_GeomFromText('LINESTRING(1 3 4, 5 6 7)'))
 ```
 
diff --git a/docs/api/sql/Optimizer.md b/docs/api/sql/Optimizer.md
index 84c2e351..a6dd2c7a 100644
--- a/docs/api/sql/Optimizer.md
+++ b/docs/api/sql/Optimizer.md
@@ -9,19 +9,19 @@ Introduction: Find geometries from A and geometries from B such that each geomet
 
 Spark SQL Example:
 
-```SQL
+```sql
 SELECT *
 FROM polygondf, pointdf
 WHERE ST_Contains(polygondf.polygonshape,pointdf.pointshape)
 ```
 
-```SQL
+```sql
 SELECT *
 FROM polygondf, pointdf
 WHERE ST_Intersects(polygondf.polygonshape,pointdf.pointshape)
 ```
 
-```SQL
+```sql
 SELECT *
 FROM pointdf, polygondf
 WHERE ST_Within(pointdf.pointshape, polygondf.polygonshape)
@@ -46,14 +46,14 @@ Introduction: Find geometries from A and geometries from B such that the interna
 Spark SQL Example:
 
 *Only consider ==fully within a certain distance==*
-```SQL
+```sql
 SELECT *
 FROM pointdf1, pointdf2
 WHERE ST_Distance(pointdf1.pointshape1,pointdf2.pointshape2) < 2
 ```
 
 *Consider ==intersects within a certain distance==*
-```SQL
+```sql
 SELECT *
 FROM pointdf1, pointdf2
 WHERE ST_Distance(pointdf1.pointshape1,pointdf2.pointshape2) <= 2
@@ -83,7 +83,7 @@ The supported join type - broadcast side combinations are
 * Left outer - broadcast right
 * Right outer - broadcast left
 
-```Scala
+```scala
 pointDf.alias("pointDf").join(broadcast(polygonDf).alias("polygonDf"), expr("ST_Contains(polygonDf.polygonshape, pointDf.pointshape)"))
 ```
 
@@ -100,7 +100,7 @@ BroadcastIndexJoin pointshape#52: geometry, BuildRight, BuildRight, false ST_Con
 
 This also works for distance joins:
 
-```Scala
+```scala
 pointDf1.alias("pointDf1").join(broadcast(pointDf2).alias("pointDf2"), expr("ST_Distance(pointDf1.pointshape, pointDf2.pointshape) <= 2"))
 ```
 
@@ -123,7 +123,7 @@ Introduction: Given a join query and a predicate in the same WHERE clause, first
 
 Spark SQL Example:
 
-```SQL
+```sql
 SELECT *
 FROM polygondf, pointdf 
 WHERE ST_Contains(polygondf.polygonshape,pointdf.pointshape)
diff --git a/docs/api/sql/Overview.md b/docs/api/sql/Overview.md
index 11ec7607..855d517a 100644
--- a/docs/api/sql/Overview.md
+++ b/docs/api/sql/Overview.md
@@ -2,12 +2,12 @@
 
 ## Function list
 SedonaSQL supports SQL/MM Part3 Spatial SQL Standard. It includes four kinds of SQL operators as follows. All these operators can be directly called through:
-```Scala
+```scala
 var myDataFrame = sparkSession.sql("YOUR_SQL")
 ```
 
 Alternatively, `expr` and `selectExpr` can be used:
-```Scala
+```scala
 myDataFrame.withColumn("geometry", expr("ST_*")).selectExpr("ST_*")
 ```
 
@@ -34,14 +34,14 @@ The detailed explanation is here [Write a SQL/DataFrame application](../../tutor
 
 1. Add Sedona-core and Sedona-SQL into your project POM.xml or build.sbt
 2. Declare your Spark Session
-```Scala
+```scala
 sparkSession = SparkSession.builder().
       config("spark.serializer","org.apache.spark.serializer.KryoSerializer").
       config("spark.kryo.registrator", "org.apache.sedona.core.serde.SedonaKryoRegistrator").
       master("local[*]").appName("mySedonaSQLdemo").getOrCreate()
 ```
 3. Add the following line after your SparkSession declaration:
-```Scala
+```scala
 import org.apache.sedona.sql.utils.SedonaSQLRegistrator
 SedonaSQLRegistrator.registerAll(sparkSession)
 ```
diff --git a/docs/api/sql/Parameter.md b/docs/api/sql/Parameter.md
index 8e534064..a80bf26a 100644
--- a/docs/api/sql/Parameter.md
+++ b/docs/api/sql/Parameter.md
@@ -2,7 +2,7 @@
 SedonaSQL supports many parameters. To change their values,
 
 1. Set it through SparkConf:
-```Scala
+```scala
 sparkSession = SparkSession.builder().
       config("spark.serializer","org.apache.spark.serializer.KryoSerializer").
       config("spark.kryo.registrator", "org.apache.sedona.core.serde.SedonaKryoRegistrator").
@@ -10,12 +10,12 @@ sparkSession = SparkSession.builder().
       master("local[*]").appName("mySedonaSQLdemo").getOrCreate()
 ```
 2. Check your current SedonaSQL configuration:
-```Scala
+```scala
 val sedonaConf = new SedonaConf(sparkSession.conf)
 println(sedonaConf)
 ```
 3. Sedona parameters can be changed at runtime:
-```Scala
+```scala
 sparkSession.conf.set("sedona.global.index","false")
 ```
 ## Explanation
diff --git a/docs/api/sql/Predicate.md b/docs/api/sql/Predicate.md
index e4e2714d..4e4b9c3c 100644
--- a/docs/api/sql/Predicate.md
+++ b/docs/api/sql/Predicate.md
@@ -7,7 +7,7 @@ Format: `ST_Contains (A:geometry, B:geometry)`
 Since: `v1.0.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT * 
 FROM pointdf 
 WHERE ST_Contains(ST_PolygonFromEnvelope(1.0,100.0,1000.0,1100.0), pointdf.arealandmark)
@@ -22,7 +22,7 @@ Format: `ST_Crosses (A:geometry, B:geometry)`
 Since: `v1.0.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT * 
 FROM pointdf 
 WHERE ST_Crosses(pointdf.arealandmark, ST_PolygonFromEnvelope(1.0,100.0,1000.0,1100.0))
@@ -37,7 +37,7 @@ Format: `ST_Disjoint (A:geometry, B:geometry)`
 Since: `v1.2.1`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT *
 FROM geom
 WHERE ST_Disjoint(geom.geom_a, geom.geom_b)
@@ -52,7 +52,7 @@ Format: `ST_Equals (A:geometry, B:geometry)`
 Since: `v1.0.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT * 
 FROM pointdf 
 WHERE ST_Equals(pointdf.arealandmark, ST_PolygonFromEnvelope(1.0,100.0,1000.0,1100.0))
@@ -67,7 +67,7 @@ Format: `ST_Intersects (A:geometry, B:geometry)`
 Since: `v1.0.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT * 
 FROM pointdf 
 WHERE ST_Intersects(ST_PolygonFromEnvelope(1.0,100.0,1000.0,1100.0), pointdf.arealandmark)
@@ -81,14 +81,14 @@ Format: `ST_OrderingEquals(A: geometry, B: geometry)`
 Since: `v1.2.1`
 
 Spark SQL example 1:
-```SQL
+```sql
 SELECT ST_OrderingEquals(ST_GeomFromWKT('POLYGON((2 0, 0 2, -2 0, 2 0))'), ST_GeomFromWKT('POLYGON((2 0, 0 2, -2 0, 2 0))'))
 ```
 
 Output: `true`
 
 Spark SQL example 2:
-```SQL
+```sql
 SELECT ST_OrderingEquals(ST_GeomFromWKT('POLYGON((2 0, 0 2, -2 0, 2 0))'), ST_GeomFromWKT('POLYGON((0 2, -2 0, 2 0, 0 2))'))
 ```
 
@@ -103,7 +103,7 @@ Format: `ST_Overlaps (A:geometry, B:geometry)`
 Since: `v1.0.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT *
 FROM geom
 WHERE ST_Overlaps(geom.geom_a, geom.geom_b)
@@ -117,7 +117,7 @@ Format: `ST_Touches (A:geometry, B:geometry)`
 
 Since: `v1.0.0`
 
-```SQL
+```sql
 SELECT * 
 FROM pointdf 
 WHERE ST_Touches(pointdf.arealandmark, ST_PolygonFromEnvelope(1.0,100.0,1000.0,1100.0))
@@ -132,7 +132,7 @@ Format: `ST_Within (A:geometry, B:geometry)`
 Since: `v1.0.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT * 
 FROM pointdf 
 WHERE ST_Within(pointdf.arealandmark, ST_PolygonFromEnvelope(1.0,100.0,1000.0,1100.0))
@@ -147,7 +147,7 @@ Format: `ST_Covers (A:geometry, B:geometry)`
 Since: `v1.3.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT * 
 FROM pointdf 
 WHERE ST_Covers(ST_PolygonFromEnvelope(1.0,100.0,1000.0,1100.0), pointdf.arealandmark)
@@ -162,7 +162,7 @@ Format: `ST_CoveredBy (A:geometry, B:geometry)`
 Since: `v1.3.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT * 
 FROM pointdf 
 WHERE ST_CoveredBy(pointdf.arealandmark, ST_PolygonFromEnvelope(1.0,100.0,1000.0,1100.0))
diff --git a/docs/api/sql/Raster-loader.md b/docs/api/sql/Raster-loader.md
index 3c0425c2..d4c68447 100644
--- a/docs/api/sql/Raster-loader.md
+++ b/docs/api/sql/Raster-loader.md
@@ -11,7 +11,7 @@ The input path could be a path to a single GeoTiff image or a directory of GeoTi
  You can optionally append an option to drop invalid images. The geometry bound of each image is automatically loaded
 as a Sedona geometry and is transformed to WGS84 (EPSG:4326) reference system.
 
-```Scala
+```scala
 var geotiffDF = sparkSession.read.format("geotiff").option("dropInvalid", true).load("YOUR_PATH")
 geotiffDF.printSchema()
 ```
@@ -39,7 +39,7 @@ There are three more optional parameters for reading GeoTiff:
 
 An example with all GeoTiff read options:
 
-```Scala
+```scala
 var geotiffDF = sparkSession.read.format("geotiff").option("dropInvalid", true).option("readFromCRS", "EPSG:4499").option("readToCRS", "EPSG:4326").option("disableErrorInCRS", true).load("YOUR_PATH")
 geotiffDF.printSchema()
 ```
@@ -59,7 +59,7 @@ Output:
 
 You can also select sub-attributes individually to construct a new DataFrame
 
-```Scala
+```scala
 geotiffDF = geotiffDF.selectExpr("image.origin as origin","ST_GeomFromWkt(image.geometry) as Geom", "image.height as height", "image.width as width", "image.data as data", "image.nBands as bands")
 geotiffDF.createOrReplaceTempView("GeotiffDataframe")
 geotiffDF.show()
@@ -111,27 +111,27 @@ or
 
 Field names can be renamed, but schema should exactly match with one of the above two schemas. The output path could be a path to a directory where GeoTiff images will be saved. If the directory already exists, `write` should be called in `overwrite` mode.
 
-```Scala
+```scala
 var dfToWrite = sparkSession.read.format("geotiff").option("dropInvalid", true).option("readToCRS", "EPSG:4326").load("PATH_TO_INPUT_GEOTIFF_IMAGES")
 dfToWrite.write.format("geotiff").save("DESTINATION_PATH")
 ```
 
 You can override an existing path with the following approach:
 
-```Scala
+```scala
 dfToWrite.write.mode("overwrite").format("geotiff").save("DESTINATION_PATH")
 ```
 
 You can also extract the columns nested within `image` column and write the dataframe as GeoTiff image.
 
-```Scala
+```scala
 dfToWrite = dfToWrite.selectExpr("image.origin as origin","image.geometry as geometry", "image.height as height", "image.width as width", "image.data as data", "image.nBands as nBands")
 dfToWrite.write.mode("overwrite").format("geotiff").save("DESTINATION_PATH")
 ```
 
 If you want the saved GeoTiff images not to be distributed into multiple partitions, you can call coalesce to merge all files in a single partition.
 
-```Scala
+```scala
 dfToWrite.coalesce(1).write.mode("overwrite").format("geotiff").save("DESTINATION_PATH")
 ```
 
@@ -150,7 +150,7 @@ In case, you rename the columns of GeoTiff dataframe, you can set the correspond
 
 An example:
 
-```Scala
+```scala
 dfToWrite = sparkSession.read.format("geotiff").option("dropInvalid", true).option("readToCRS", "EPSG:4326").load("PATH_TO_INPUT_GEOTIFF_IMAGES")
 dfToWrite = dfToWrite.selectExpr("image.origin as source","ST_GeomFromWkt(image.geometry) as geom", "image.height as height", "image.width as width", "image.data as data", "image.nBands as bands")
 dfToWrite.write.mode("overwrite").format("geotiff").option("writeToCRS", "EPSG:4326").option("fieldOrigin", "source").option("fieldGeometry", "geom").option("fieldNBands", "bands").save("DESTINATION_PATH")
@@ -166,7 +166,7 @@ Since: `v1.1.0`
 
 Spark SQL example:
 
-```Scala
+```scala
 SELECT RS_Array(height * width, 0.0)
 ```
 
@@ -180,7 +180,7 @@ optional: alphaBand: Array[Double])`
 Since: `v1.1.0`
 
 Spark SQL example:
-```Scala
+```scala
 val BandDF = spark.sql("select RS_Base64(h, w, band1, band2, RS_Array(h*w, 0)) as baseString from dataframe")
 BandDF.show()
 ```
@@ -214,7 +214,7 @@ Since: `v1.1.0`
 
 Spark SQL example:
 
-```Scala
+```scala
 val BandDF = spark.sql("select RS_GetBand(data, 2, Band) as targetBand from GeotiffDataframe")
 BandDF.show()
 ```
@@ -238,7 +238,7 @@ Format: `RS_HTML(base64:String, optional: width_in_px:String)`
 
 Spark SQL example:
 
-```Scala
+```scala
 df.selectExpr("RS_HTML(encodedstring, '300') as htmlstring" ).show()
 ```
 
diff --git a/docs/api/sql/Raster-operators.md b/docs/api/sql/Raster-operators.md
index dff8b2d4..cda16c00 100644
--- a/docs/api/sql/Raster-operators.md
+++ b/docs/api/sql/Raster-operators.md
@@ -7,7 +7,7 @@ Format: `RS_Add (Band1: Array[Double], Band2: Array[Double])`
 Since: `v1.1.0`
 
 Spark SQL example:
-```Scala
+```scala
 
 val sumDF = spark.sql("select RS_Add(band1, band2) as sumOfBands from dataframe")
 
@@ -22,7 +22,7 @@ Format: `RS_Append(data: Array[Double], newBand: Array[Double], nBands: Int)`
 Since: `v1.2.1`
 
 Spark SQL example:
-```Scala
+```scala
 
 val dfAppended = spark.sql("select RS_Append(data, normalizedDifference, nBands) as dataEdited from dataframe")
 
@@ -37,7 +37,7 @@ Format: `RS_BitwiseAND (Band1: Array[Double], Band2: Array[Double])`
 Since: `v1.1.0`
 
 Spark SQL example:
-```Scala
+```scala
 
 val biwiseandDF = spark.sql("select RS_BitwiseAND(band1, band2) as andvalue from dataframe")
 
@@ -52,7 +52,7 @@ Format: `RS_BitwiseOR (Band1: Array[Double], Band2: Array[Double])`
 Since: `v1.1.0`
 
 Spark SQL example:
-```Scala
+```scala
 
 val biwiseorDF = spark.sql("select RS_BitwiseOR(band1, band2) as or from dataframe")
 
@@ -67,7 +67,7 @@ Format: `RS_Count (Band1: Array[Double], Target: Double)`
 Since: `v1.1.0`
 
 Spark SQL example:
-```Scala
+```scala
 
 val countDF = spark.sql("select RS_Count(band1, target) as count from dataframe")
 
@@ -82,7 +82,7 @@ Format: `RS_Divide (Band1: Array[Double], Band2: Array[Double])`
 Since: `v1.1.0`
 
 Spark SQL example:
-```Scala
+```scala
 
 val multiplyDF = spark.sql("select RS_Divide(band1, band2) as divideBands from dataframe")
 
@@ -97,7 +97,7 @@ Format: `RS_FetchRegion (Band: Array[Double], coordinates: Array[Int], dimension
 Since: `v1.1.0`
 
 Spark SQL example:
-```Scala
+```scala
 
 val region = spark.sql("select RS_FetchRegion(Band,Array(0, 0, 1, 2),Array(3, 3)) as Region from dataframe")
 ```
@@ -111,7 +111,7 @@ Format: `RS_GreaterThan (Band: Array[Double], Target: Double)`
 Since: `v1.1.0`
 
 Spark SQL example:
-```Scala
+```scala
 
 val greaterDF = spark.sql("select RS_GreaterThan(band, target) as maskedvalues from dataframe")
 
@@ -126,7 +126,7 @@ Format: `RS_GreaterThanEqual (Band: Array[Double], Target: Double)`
 Since: `v1.1.0`
 
 Spark SQL example:
-```Scala
+```scala
 
 val greaterEqualDF = spark.sql("select RS_GreaterThanEqual(band, target) as maskedvalues from dataframe")
 
@@ -141,7 +141,7 @@ Format: `RS_LessThan (Band: Array[Double], Target: Double)`
 Since: `v1.1.0`
 
 Spark SQL example:
-```Scala
+```scala
 
 val lessDF = spark.sql("select RS_LessThan(band, target) as maskedvalues from dataframe")
 
@@ -156,7 +156,7 @@ Format: `RS_LessThanEqual (Band: Array[Double], Target: Double)`
 Since: `v1.1.0`
 
 Spark SQL example:
-```Scala
+```scala
 
 val lessEqualDF = spark.sql("select RS_LessThanEqual(band, target) as maskedvalues from dataframe")
 
@@ -171,7 +171,7 @@ Format: `RS_LogicalDifference (Band1: Array[Double], Band2: Array[Double])`
 Since: `v1.1.0`
 
 Spark SQL example:
-```Scala
+```scala
 
 val logicalDifference = spark.sql("select RS_LogicalDifference(band1, band2) as logdifference from dataframe")
 
@@ -186,7 +186,7 @@ Format: `RS_LogicalOver (Band1: Array[Double], Band2: Array[Double])`
 Since: `v1.1.0`
 
 Spark SQL example:
-```Scala
+```scala
 
 val logicalOver = spark.sql("select RS_LogicalOver(band1, band2) as logover from dataframe")
 
@@ -201,7 +201,7 @@ Format: `RS_Mean (Band: Array[Double])`
 Since: `v1.1.0`
 
 Spark SQL example:
-```Scala
+```scala
 
 val meanDF = spark.sql("select RS_Mean(band) as mean from dataframe")
 
@@ -216,7 +216,7 @@ Format: `RS_Mode (Band: Array[Double])`
 Since: `v1.1.0`
 
 Spark SQL example:
-```Scala
+```scala
 
 val modeDF = spark.sql("select RS_Mode(band) as mode from dataframe")
 
@@ -231,7 +231,7 @@ Format: `RS_Modulo (Band: Array[Double], Target: Double)`
 Since: `v1.1.0`
 
 Spark SQL example:
-```Scala
+```scala
 
 val moduloDF = spark.sql("select RS_Modulo(band, target) as modulo from dataframe")
 
@@ -246,7 +246,7 @@ Format: `RS_Multiply (Band1: Array[Double], Band2: Array[Double])`
 Since: `v1.1.0`
 
 Spark SQL example:
-```Scala
+```scala
 
 val multiplyDF = spark.sql("select RS_Multiply(band1, band2) as multiplyBands from dataframe")
 
@@ -261,7 +261,7 @@ Format: `RS_MultiplyFactor (Band1: Array[Double], Factor: Int)`
 Since: `v1.1.0`
 
 Spark SQL example:
-```Scala
+```scala
 
 val multiplyFactorDF = spark.sql("select RS_MultiplyFactor(band1, 2) as multiplyfactor from dataframe")
 
@@ -274,7 +274,7 @@ Introduction: Normalize the value in the array to [0, 255]
 Since: `v1.1.0`
 
 Spark SQL example
-```SQL
+```sql
 SELECT RS_Normalize(band)
 ```
 
@@ -287,7 +287,7 @@ Format: `RS_NormalizedDifference (Band1: Array[Double], Band2: Array[Double])`
 Since: `v1.1.0`
 
 Spark SQL example:
-```Scala
+```scala
 
 val normalizedDF = spark.sql("select RS_NormalizedDifference(band1, band2) as normdifference from dataframe")
 
@@ -302,7 +302,7 @@ Format: `RS_SquareRoot (Band: Array[Double])`
 Since: `v1.1.0`
 
 Spark SQL example:
-```Scala
+```scala
 
 val rootDF = spark.sql("select RS_SquareRoot(band) as squareroot from dataframe")
 
@@ -317,7 +317,7 @@ Format: `RS_Subtract (Band1: Array[Double], Band2: Array[Double])`
 Since: `v1.1.0`
 
 Spark SQL example:
-```Scala
+```scala
 
 val subtractDF = spark.sql("select RS_Subtract(band1, band2) as differenceOfOfBands from dataframe")
 
diff --git a/docs/api/viz/sql.md b/docs/api/viz/sql.md
index cc72d0fb..ae4cdc66 100644
--- a/docs/api/viz/sql.md
+++ b/docs/api/viz/sql.md
@@ -4,14 +4,14 @@ The detailed explanation is here: [Visualize Spatial DataFrame/RDD](../../tutori
 
 1. Add Sedona-core, Sedona-SQL,Sedona-Viz into your project POM.xml or build.sbt
 2. Declare your Spark Session
-```Scala
+```scala
 sparkSession = SparkSession.builder().
 config("spark.serializer","org.apache.spark.serializer.KryoSerializer").
 config("spark.kryo.registrator", "org.apache.sedona.viz.core.Serde.SedonaVizKryoRegistrator").
 master("local[*]").appName("mySedonaVizDemo").getOrCreate()
 ```
 3. Add the following lines after your SparkSession declaration:
-```Scala
+```scala
 SedonaSQLRegistrator.registerAll(sparkSession)
 SedonaVizRegistrator.registerAll(sparkSession)
 ```
@@ -34,7 +34,7 @@ Since: `v1.0.0`
 This function will normalize the weight according to the max weight among all pixels. Different pixels obtain different colors.
 
 Spark SQL example:
-```SQL
+```sql
 SELECT pixels.px, ST_Colorize(pixels.weight, 999) AS color
 FROM pixels
 ```
@@ -44,7 +44,7 @@ FROM pixels
 If a mandatory color name is put as the third input argument, this function will directly output this color, without considering the weights. In this case, every pixel will possess the same color.
 
 Spark SQL example:
-```SQL
+```sql
 SELECT pixels.px, ST_Colorize(pixels.weight, 999, 'red') AS color
 FROM pixels
 ```
@@ -68,7 +68,7 @@ Format: `ST_EncodeImage (A:image)`
 Since: `v1.0.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_EncodeImage(images.img)
 FROM images
 ```
@@ -84,7 +84,7 @@ Format: `ST_Pixelize (A:geometry, ResolutionX:int, ResolutionY:int, Boundary:geo
 Since: `v1.0.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_Pixelize(shape, 256, 256, (SELECT ST_Envelope_Aggr(shape) FROM pointtable))
 FROM polygondf
 ```
@@ -101,7 +101,7 @@ Format: `ST_TileName (A:pixel, ZoomLevel:int)`
 Since: `v1.0.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT ST_TileName(pixels.px, 3)
 FROM pixels
 ```
@@ -117,7 +117,7 @@ Format: `ST_Render (A:pixel, B:color, C:Integer - optional zoom level)`
 Since: `v1.0.0`
 
 Spark SQL example:
-```SQL
+```sql
 SELECT tilename, ST_Render(pixels.px, pixels.color) AS tileimg
 FROM pixels
 GROUP BY tilename
diff --git a/docs/setup/databricks.md b/docs/setup/databricks.md
index 439503e7..bf97395e 100644
--- a/docs/setup/databricks.md
+++ b/docs/setup/databricks.md
@@ -48,13 +48,13 @@ __Sedona `1.1.1-incubating` is overall the recommended version to use. It is gen
 After you have installed the libraries and started the cluster, you can initialize the Sedona `ST_*` functions and types by running the following from your code:
 
 (scala)
-```Scala
+```scala
 import org.apache.sedona.sql.utils.SedonaSQLRegistrator
 SedonaSQLRegistrator.registerAll(spark)
 ```
 
 (or python)
-```Python
+```python
 from sedona.register.geo_registrator import SedonaRegistrator
 SedonaRegistrator.registerAll(spark)
 ```
diff --git a/docs/setup/install-python.md b/docs/setup/install-python.md
index 9e653fa4..a298ef0e 100644
--- a/docs/setup/install-python.md
+++ b/docs/setup/install-python.md
@@ -70,6 +70,6 @@ export SPARK_HOME=~/Downloads/spark-3.0.1-bin-hadoop2.7
 
 ```bash
 export PYTHONPATH=$SPARK_HOME/python
-``` 
+```
 
 You can then play with [Sedona Python Jupyter notebook](../../tutorial/jupyter-notebook/).
\ No newline at end of file
diff --git a/docs/setup/install-r.md b/docs/setup/install-r.md
index 48367eaa..b2375857 100644
--- a/docs/setup/install-r.md
+++ b/docs/setup/install-r.md
@@ -51,7 +51,7 @@ registered when creating a Spark session, one simply needs to attach
 `apache.sedona` before instantiating a Spark connection. apache.sedona
 will take care of the rest. For example,
 
-``` r
+```r
 library(sparklyr)
 library(apache.sedona)
 
@@ -61,7 +61,7 @@ sc <- spark_connect(master = "yarn", spark_home = spark_home)
 
 will create a Sedona-capable Spark connection in YARN client mode, and
 
-``` r
+```r
 library(sparklyr)
 library(apache.sedona)
 
@@ -75,7 +75,7 @@ In `sparklyr`, one can easily inspect the Spark connection object to
 sanity-check it has been properly initialized with all Sedona-related
 dependencies, e.g.,
 
-``` r
+```r
 print(sc$extensions$packages)
 ```
 
@@ -89,7 +89,7 @@ print(sc$extensions$packages)
 
 and
 
-``` r
+```r
 spark_session(sc) %>%
   invoke("%>%", list("conf"), list("get", "spark.kryo.registrator")) %>%
   print()
diff --git a/docs/tutorial/Advanced-Tutorial-Tune-your-Application.md b/docs/tutorial/Advanced-Tutorial-Tune-your-Application.md
index cfd42a57..82cf57ca 100644
--- a/docs/tutorial/Advanced-Tutorial-Tune-your-Application.md
+++ b/docs/tutorial/Advanced-Tutorial-Tune-your-Application.md
@@ -14,11 +14,11 @@ The third level (i.e., 0.8.1) tells that this version only contains bug fixes, s
 Sedona provides a number of constructors for each SpatialRDD (PointRDD, PolygonRDD and LineStringRDD). In general, you have two options to start with.
 
 1. Initialize a SpatialRDD from your data source such as HDFS and S3. A typical example is as follows:
-```Java
+```java
 public PointRDD(JavaSparkContext sparkContext, String InputLocation, Integer Offset, FileDataSplitter splitter, boolean carryInputData, Integer partitions, StorageLevel newLevel)
 ```
 2. Initialize a SpatialRDD from an existing RDD. A typical example is as follows:
-```Java
+```java
 public PointRDD(JavaRDD<Point> rawSpatialRDD, StorageLevel newLevel)
 ```
 	
@@ -26,7 +26,7 @@ You may notice that these constructors all take as input a "StorageLevel" parame
 
 However, in some cases, you may already know your datasets well. If so, you can provide this information manually by calling this kind of SpatialRDD constructor:
 
-```Java
+```java
 public PointRDD(JavaSparkContext sparkContext, String InputLocation, Integer Offset, FileDataSplitter splitter, boolean carryInputData, Integer partitions, Envelope datasetBoundary, Integer approximateTotalCount)
 ```
 Manually providing the dataset boundary and approximate total count helps Sedona avoid several slow "Action"s during initialization.
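 
 As a hedged illustration in Scala (the boundary, partition count, and total count below are hypothetical placeholders; substitute statistics you already know about your dataset):
 
 ```scala
 import org.locationtech.jts.geom.Envelope
 
 // Hypothetical values: reuse the boundary and approximate count you already know
 val datasetBoundary = new Envelope(-180.0, 180.0, -90.0, 90.0)
 val approximateTotalCount = 10000
 val objectRDD = new PointRDD(sc, "/Download/checkin.csv", 0, FileDataSplitter.CSV, true, 4, datasetBoundary, approximateTotalCount)
 ```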
diff --git a/docs/tutorial/core-python.md b/docs/tutorial/core-python.md
index d826471f..ec2ae086 100644
--- a/docs/tutorial/core-python.md
+++ b/docs/tutorial/core-python.md
@@ -162,7 +162,7 @@ The other attributes are combined together to a string and stored in ==UserData=
 To retrieve the UserData field, use the following code:
 ```python
 rdd_with_other_attributes = object_rdd.rawSpatialRDD.map(lambda x: x.getUserData())
-``` 
+```
 
 ## Write a Spatial Range Query
 
diff --git a/docs/tutorial/flink/sql.md b/docs/tutorial/flink/sql.md
index 909e9c70..facdfaec 100644
--- a/docs/tutorial/flink/sql.md
+++ b/docs/tutorial/flink/sql.md
@@ -1,7 +1,7 @@
 This page outlines the steps to manage spatial data using SedonaSQL. ==The example code is written in Java but also works for Scala==.
 
 SedonaSQL supports the SQL/MM Part 3 Spatial SQL Standard. It includes four kinds of SQL operators as follows. All these operators can be directly called through:
-```Java
+```java
 Table myTable = tableEnv.sqlQuery("YOUR_SQL");
 ```
 
@@ -15,7 +15,7 @@ Detailed SedonaSQL APIs are available here: [SedonaSQL API](../../../api/flink/O
 
 ## Initiate Stream Environment
 Use the following code to initiate your `StreamExecutionEnvironment` at the beginning:
-```Java
+```java
 StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
 EnvironmentSettings settings = EnvironmentSettings.newInstance().inStreamingMode().build();
 StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env, settings);
@@ -25,7 +25,7 @@ StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env, settings);
 
 Add the following lines after your `StreamExecutionEnvironment` and `StreamTableEnvironment` declaration:
 
-```Java
+```java
 SedonaFlinkRegistrator.registerType(env);
 SedonaFlinkRegistrator.registerFunc(tableEnv);
 ```
@@ -61,7 +61,7 @@ Assume you have a Flink Table `tbl` like this:
 
 You can create a Table with a Geometry type column as follows:
 
-```Java
+```java
 tableEnv.createTemporaryView("myTable", tbl);
 Table geomTbl = tableEnv.sqlQuery("SELECT ST_GeomFromWKT(geom_polygon) as geom_polygon, name_polygon FROM myTable");
 geomTbl.execute().print();
@@ -91,7 +91,7 @@ Although it looks same with the input, actually the type of column geom_polygon
 
 To verify this, use the following code to print the schema of the Table:
 
-```Java
+```java
 geomTbl.printSchema();
 ```
 
@@ -113,7 +113,7 @@ Sedona doesn't control the coordinate unit (degree-based or meter-based) of all
 
 To convert the Coordinate Reference System of the Geometry column created earlier, use the following code:
 
-```Java
+```java
 Table geomTbl3857 = tableEnv.sqlQuery("SELECT ST_Transform(geom_polygon, 'epsg:4326', 'epsg:3857') AS geom_polygon, name_polygon FROM myTable");
 geomTbl3857.execute().print();
 ```
@@ -177,7 +177,7 @@ Use ==ST_Contains==, ==ST_Intersects== and so on to run a range query over a sin
 
 The following example finds all counties that are within the given polygon:
 
-```Java
+```java
 geomTable = tableEnv.sqlQuery(
   "
     SELECT *
@@ -196,7 +196,7 @@ Use ==ST_Distance== to calculate the distance and rank the distance.
 
 The following code returns the 5 nearest neighbors of the given polygon.
 
-```Java
+```java
 geomTable = tableEnv.sqlQuery(
   "
     SELECT countyname, ST_Distance(ST_PolygonFromEnvelope(1.0,100.0,1000.0,1100.0), newcountyshape) AS distance
@@ -213,7 +213,7 @@ geomTable.execute().print()
 
 Use TableEnv's `toDataStream` function:
 
-```Java
+```java
 DataStream<Row> geomStream = tableEnv.toDataStream(geomTable);
 ```
 
@@ -221,7 +221,7 @@ DataStream<Row> geomStream = tableEnv.toDataStream(geomTable)
 
 Then get the Geometry from each Row object using a map function:
 
-```Java
+```java
 import org.locationtech.jts.geom.Geometry;
 
 DataStream<Geometry> geometries = geomStream.map(new MapFunction<Row, Geometry>() {
@@ -252,7 +252,7 @@ The output will be
 
 You can concatenate other non-spatial attributes and store them in the Geometry's `userData` field so you can recover them later on. The `userData` field can be any object type.
 
-```Java
+```java
 import org.locationtech.jts.geom.Geometry;
 
 DataStream<Geometry> geometries = geomStream.map(new MapFunction<Row, Geometry>() {
@@ -268,7 +268,7 @@ geometries.print();
 
 The `print` command will not print out the `userData` field, but you can get it this way:
 
-```Java
+```java
 import org.locationtech.jts.geom.Geometry;
 
 geometries.map(new MapFunction<Geometry, String>() {
@@ -301,7 +301,7 @@ The output will be
 
 * Create a Geometry from a WKT string
 
-```Java
+```java
 import org.apache.sedona.core.formatMapper.FormatUtils;
 import org.locationtech.jts.geom.Geometry;
 
@@ -317,7 +317,7 @@ DataStream<Geometry> geometries = text.map(new MapFunction<String, Geometry>() {
 
 * Create a Point from a String `1.1, 2.2`. Use `,` as the delimiter.
 
-```Java
+```java
 import org.apache.sedona.core.formatMapper.FormatUtils;
 import org.locationtech.jts.geom.Geometry;
 
@@ -333,7 +333,7 @@ DataStream<Geometry> geometries = text.map(new MapFunction<String, Geometry>() {
 
 * Create a Polygon from a String `1.1, 1.1, 10.1, 10.1`. This is a rectangle with (1.1, 1.1) and (10.1, 10.1) as its min/max corners.
 
-```Java
+```java
 import org.apache.sedona.core.formatMapper.FormatUtils;
 import org.locationtech.jts.geom.GeometryFactory;
 import org.locationtech.jts.geom.Geometry;
@@ -360,7 +360,7 @@ DataStream<Geometry> geometries = text.map(new MapFunction<String, Geometry>() {
 
 Put each geometry in a Flink Row and emit it to a `geomStream`. Note that you can put other attributes in the Row as well. This example uses a constant value `myName` for all geometries.
 
-```Java
+```java
 import org.apache.sedona.core.formatMapper.FormatUtils;
 import org.locationtech.jts.geom.Geometry;
 import org.apache.flink.types.Row;
@@ -378,6 +378,6 @@ DataStream<Row> geomStream = text.map(new MapFunction<String, Row>() {
 ### Get Spatial Table
 
 Use TableEnv's `fromDataStream` function with two column names, `geom` and `geom_name`:
-```Java
+```java
 Table geomTable = tableEnv.fromDataStream(geomStream, "geom", "geom_name");
 ```
diff --git a/docs/tutorial/rdd-r.md b/docs/tutorial/rdd-r.md
index 17254b8f..eef1ec08 100644
--- a/docs/tutorial/rdd-r.md
+++ b/docs/tutorial/rdd-r.md
@@ -24,7 +24,7 @@ For example, the following code will import data from
 [arealm-small.csv](https://github.com/apache/sedona/blob/master/binder/data/arealm-small.csv)
 into a `SpatialRDD`:
 
-``` r
+```r
 pt_rdd <- sedona_read_dsv_to_typed_rdd(
   sc,
   location = "arealm-small.csv",
@@ -61,7 +61,7 @@ Binary), and GeoJSON formats. See `?apache.sedona::sedona_read_wkt`,
 One can also run `to_spatial_rdd()` to extract a SpatialRDD from a Spark
 SQL query, e.g.,
 
-``` r
+```r
 library(sparklyr)
 library(apache.sedona)
 library(dplyr)
diff --git a/docs/tutorial/rdd.md b/docs/tutorial/rdd.md
index 3df342f9..f97a5a2d 100644
--- a/docs/tutorial/rdd.md
+++ b/docs/tutorial/rdd.md
@@ -12,7 +12,7 @@ The page outlines the steps to create Spatial RDDs and run spatial queries using
 
 ## Initiate SparkContext
 
-```Scala
+```scala
 val conf = new SparkConf()
 conf.setAppName("SedonaRunnableExample") // Change this to a proper name
 conf.setMaster("local[*]") // Delete this if run in cluster mode
@@ -26,7 +26,7 @@ val sc = new SparkContext(conf)
 	Sedona has a suite of well-written geometry and index serializers. Forgetting to enable these serializers will lead to high memory consumption.
 
 If you add ==the Sedona full dependencies== as suggested above, please use the following two lines to enable Sedona Kryo serializer instead:
-```Scala
+```scala
 conf.set("spark.serializer", classOf[KryoSerializer].getName) // org.apache.spark.serializer.KryoSerializer
 conf.set("spark.kryo.registrator", classOf[SedonaVizKryoRegistrator].getName) // org.apache.sedona.viz.core.Serde.SedonaVizKryoRegistrator
 ```
@@ -47,7 +47,7 @@ Suppose we have a `checkin.csv` CSV file at Path `/Download/checkin.csv` as foll
 This file has three columns, and the corresponding ==offsets== (Column IDs) are 0, 1, 2.
 Use the following code to create a PointRDD:
 
-```Scala
+```scala
 val pointRDDInputLocation = "/Download/checkin.csv"
 val pointRDDOffset = 0 // The point long/lat starts from Column 0
 val pointRDDSplitter = FileDataSplitter.CSV
@@ -56,7 +56,7 @@ var objectRDD = new PointRDD(sc, pointRDDInputLocation, pointRDDOffset, pointRDD
 ```
 
 If the data file is in TSV format, simply use the following line to replace the old FileDataSplitter:
-```Scala
+```scala
 val pointRDDSplitter = FileDataSplitter.TSV
 ```
 
@@ -77,7 +77,7 @@ This file has 11 columns and corresponding offsets (Column IDs) are 0 - 10. Colu
 	For polygon data, the last coordinate must be the same as the first coordinate because a polygon is a closed linear ring.
 	
 Use the following code to create a PolygonRDD.
-```Scala
+```scala
 val polygonRDDInputLocation = "/Download/checkinshape.csv"
 val polygonRDDStartOffset = 0 // The coordinates start from Column 0
 val polygonRDDEndOffset = 9 // The coordinates end at Column 9
@@ -87,7 +87,7 @@ var objectRDD = new PolygonRDD(sc, polygonRDDInputLocation, polygonRDDStartOffse
 ```
 
 If the data file is in TSV format, simply use the following line to replace the old FileDataSplitter:
-```Scala
+```scala
 val polygonRDDSplitter = FileDataSplitter.TSV
 ```
 
@@ -111,7 +111,7 @@ This file has two columns and corresponding ==offsets==(Column IDs) are 0, 1. Co
 
 Use the following code to create a SpatialRDD:
 
-```Scala
+```scala
 val inputLocation = "/Download/checkin.tsv"
 val wktColumn = 0 // The WKT string starts from Column 0
 val allowTopologyInvalidGeometries = true // Optional
@@ -134,7 +134,7 @@ Suppose we have a `polygon.json` GeoJSON file at Path `/Download/polygon.json` a
 ```
 
 Use the following code to create a generic SpatialRDD:
-```Scala
+```scala
 val inputLocation = "/Download/polygon.json"
 val allowTopologyInvalidGeometries = true // Optional
 val skipSyntaxInvalidGeometries = false // Optional
@@ -146,7 +146,7 @@ val spatialRDD = GeoJsonReader.readToGeometryRDD(sparkSession.sparkContext, inpu
 	
 #### From Shapefile
 
-```Scala
+```scala
 val shapefileInputLocation="/Download/myshapefile"
 val spatialRDD = ShapefileReader.readToGeometryRDD(sparkSession.sparkContext, shapefileInputLocation)
 ```
@@ -169,7 +169,7 @@ via `sedona.global.charset` system property before the call to `ShapefileReader.
 
 Example:
 
-```Scala
+```scala
 System.setProperty("sedona.global.charset", "utf8")
 ```
 
@@ -180,12 +180,12 @@ To create a generic SpatialRDD from CSV, TSV, WKT, WKB and GeoJSON input formats
 We use the [checkin.csv CSV file](#pointrdd-from-csvtsv) as an example. You can create a generic SpatialRDD using the following steps:
 
 1. Load data in SedonaSQL.
-```Scala
+```scala
 var df = sparkSession.read.format("csv").option("header", "false").load(csvPointInputLocation)
 df.createOrReplaceTempView("inputtable")
 ```
 2. Create a Geometry type column in SedonaSQL
-```Scala
+```scala
 var spatialDf = sparkSession.sql(
 	"""
    		|SELECT ST_Point(CAST(inputtable._c0 AS Decimal(24,20)),CAST(inputtable._c1 AS Decimal(24,20))) AS checkin
@@ -193,7 +193,7 @@ var spatialDf = sparkSession.sql(
    	""".stripMargin)
 ```
 3. Use the SedonaSQL DataFrame-RDD Adapter to convert the DataFrame to a SpatialRDD
-```Scala
+```scala
 var spatialRDD = Adapter.toSpatialRdd(spatialDf, "checkin")
 ```
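 
 If you plan to run spatial partitioning or join queries on the result, a typical follow-up (the same call used in the join examples later on this page) is:
 
 ```scala
 // Collect metadata (boundary and approximate count) used by spatial partitioning
 spatialRDD.analyze()
 ```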
 
@@ -207,7 +207,7 @@ Sedona doesn't control the coordinate unit (degree-based or meter-based) of all
 
 To convert the Coordinate Reference System of a SpatialRDD, use the following code:
 
-```Scala
+```scala
 val sourceCrsCode = "epsg:4326" // WGS84, the most common degree-based CRS
 val targetCrsCode = "epsg:3857" // The most common meter-based CRS
 objectRDD.CRSTransform(sourceCrsCode, targetCrsCode, false)
@@ -217,7 +217,7 @@ objectRDD.CRSTransform(sourceCrsCode, targetCrsCode, false)
 
 !!!warning
 	CRS transformation should be done right after creating each SpatialRDD, otherwise it will lead to wrong query results. For instance, use something like this:
-	```Scala
+	```scala
 	var objectRDD = new PointRDD(sc, pointRDDInputLocation, pointRDDOffset, pointRDDSplitter, carryOtherAttributes)
 	objectRDD.CRSTransform("epsg:4326", "epsg:3857", false)
 	```
@@ -231,9 +231,9 @@ Each SpatialRDD can carry non-spatial attributes such as price, age and name as
 The other attributes are combined into a string and stored in the ==UserData== field of each geometry.
 
 To retrieve the UserData field, use the following code:
-```Scala
+```scala
 val rddWithOtherAttributes = objectRDD.rawSpatialRDD.rdd.map[String](f=>f.getUserData.asInstanceOf[String])
-``` 
+```
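 
 Sedona stores the remaining attributes concatenated in this single string; a hedged sketch of splitting it back into individual fields, assuming tab-separated attributes (adjust the delimiter to whatever your input used):
 
 ```scala
 // Assumption: the attributes were concatenated with tab characters
 val attributeFields = rddWithOtherAttributes.map(_.split("\t"))
 ```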
 
 ## Write a Spatial Range Query
 
@@ -242,7 +242,7 @@ A spatial range query takes as input a range query window and an SpatialRDD and
 
 Assume you now have a SpatialRDD (typed or generic). You can use the following code to issue a Spatial Range Query on it.
 
-```Scala
+```scala
 val rangeQueryWindow = new Envelope(-90.01, -80.01, 30.01, 40.01)
 val spatialPredicate = SpatialPredicate.COVERED_BY // Only return geometries fully covered by the window
 val usingIndex = false
@@ -263,7 +263,7 @@ var queryResult = RangeQuery.SpatialRangeQuery(spatialRDD, rangeQueryWindow, spa
 
 !!!note
 	A spatial range query is equivalent to a SELECT query with a spatial predicate as the search condition in Spatial SQL. An example query is as follows:
-	```SQL
+	```sql
 	SELECT *
 	FROM checkin
 	WHERE ST_Intersects(checkin.location, queryWindow)
@@ -275,14 +275,14 @@ Besides the rectangle (Envelope) type range query window, Sedona range query win
 
 The code to create a point is as follows:
 
-```Scala
+```scala
 val geometryFactory = new GeometryFactory()
 val pointObject = geometryFactory.createPoint(new Coordinate(-84.01, 34.01))
 ```
 
 The code to create a polygon (with 4 vertices) is as follows:
 
-```Scala
+```scala
 val geometryFactory = new GeometryFactory()
 val coordinates = new Array[Coordinate](5)
 coordinates(0) = new Coordinate(0,0)
@@ -295,7 +295,7 @@ val polygonObject = geometryFactory.createPolygon(coordinates)
 
 The code to create a line string (with 4 vertices) is as follows:
 
-```Scala
+```scala
 val geometryFactory = new GeometryFactory()
 val coordinates = new Array[Coordinate](4)
 coordinates(0) = new Coordinate(0,0)
@@ -311,7 +311,7 @@ Sedona provides two types of spatial indexes, Quad-Tree and R-Tree. Once you spe
 
 To utilize a spatial index in a spatial range query, use the following code:
 
-```Scala
+```scala
 val rangeQueryWindow = new Envelope(-90.01, -80.01, 30.01, 40.01)
 val spatialPredicate = SpatialPredicate.COVERED_BY // Only return geometries fully covered by the window
 
@@ -335,7 +335,7 @@ A spatial K Nearnest Neighbor query takes as input a K, a query point and an Spa
 
 Assume you now have a SpatialRDD (typed or generic). You can use the following code to issue a Spatial KNN Query on it.
 
-```Scala
+```scala
 val geometryFactory = new GeometryFactory()
 val pointObject = geometryFactory.createPoint(new Coordinate(-84.01, 34.01))
 val K = 1000 // K Nearest Neighbors
@@ -345,7 +345,7 @@ val result = KNNQuery.SpatialKnnQuery(objectRDD, pointObject, K, usingIndex)
 
 !!!note
 	A spatial KNN query that returns the 5 nearest neighbors is equivalent to the following statement in Spatial SQL:
-	```SQL
+	```sql
 	SELECT ck.name, ck.rating, ST_Distance(ck.location, myLocation) AS distance
 	FROM checkins ck
 	ORDER BY distance ASC
@@ -365,7 +365,7 @@ To learn how to create Polygon and LineString object, see [Range query window](#
 
 To utilize a spatial index in a spatial KNN query, use the following code:
 
-```Scala
+```scala
 val geometryFactory = new GeometryFactory()
 val pointObject = geometryFactory.createPoint(new Coordinate(-84.01, 34.01))
 val K = 1000 // K Nearest Neighbors
@@ -391,7 +391,7 @@ A spatial join query takes as input two Spatial RDD A and B. For each geometry i
 
 Assume you now have two SpatialRDDs (typed or generic). You can use the following code to issue a Spatial Join Query on them.
 
-```Scala
+```scala
 val spatialPredicate = SpatialPredicate.COVERED_BY // Only return geometries fully covered by each query window in queryWindowRDD
 val usingIndex = false
 
@@ -405,7 +405,7 @@ val result = JoinQuery.SpatialJoinQuery(objectRDD, queryWindowRDD, usingIndex, s
 
 !!!note
 	A spatial join query is equivalent to the following query in Spatial SQL:
-	```SQL
+	```sql
 	SELECT superhero.name
 	FROM city, superhero
 	WHERE ST_Contains(city.geom, superhero.geom);
@@ -418,14 +418,14 @@ Sedona spatial partitioning method can significantly speed up the join query. Th
 
 If you first partition SpatialRDD A, then you must use the partitioner of A to partition B.
 
-```Scala
+```scala
 objectRDD.spatialPartitioning(GridType.KDBTREE)
 queryWindowRDD.spatialPartitioning(objectRDD.getPartitioner)
 ```
 
 Or 
 
-```Scala
+```scala
 queryWindowRDD.spatialPartitioning(GridType.KDBTREE)
 objectRDD.spatialPartitioning(queryWindowRDD.getPartitioner)
 ```
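 
 Once both RDDs share the same partitioner, run the join exactly as shown above; a brief recap sketch:
 
 ```scala
 // With matching partitioners in place, the join call itself is unchanged
 val result = JoinQuery.SpatialJoinQuery(objectRDD, queryWindowRDD, usingIndex, spatialPredicate)
 ```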
@@ -435,7 +435,7 @@ objectRDD.spatialPartitioning(queryWindowRDD.getPartitioner)
 
 To utilize a spatial index in a spatial join query, use the following code:
 
-```Scala
+```scala
 objectRDD.spatialPartitioning(joinQueryPartitioningType)
 queryWindowRDD.spatialPartitioning(objectRDD.getPartitioner)
 
@@ -473,7 +473,7 @@ A distance join query takes as input two Spatial RDD A and B and a distance. For
 
 Assume you now have two SpatialRDDs (typed or generic). You can use the following code to issue a Distance Join Query on them.
 
-```Scala
+```scala
 objectRddA.analyze()
 
 val circleRDD = new CircleRDD(objectRddA, 0.1) // Create a CircleRDD using the given distance
@@ -497,7 +497,7 @@ The output format of the distance join query is [here](#output-format_2).
 
 !!!note
 	A distance join query is equivalent to the following query in Spatial SQL:
-	```SQL
+	```sql
 	SELECT superhero.name
 	FROM city, superhero
 	WHERE ST_Distance(city.geom, superhero.geom) <= 10;
@@ -519,7 +519,7 @@ Typed SpatialRDD and generic SpatialRDD can be saved to permanent storage.
 
 Use the following code to save a SpatialRDD as a distributed WKT text file:
 
-```Scala
+```scala
 objectRDD.rawSpatialRDD.saveAsTextFile("hdfs://PATH")
 objectRDD.saveAsWKT("hdfs://PATH")
 ```
@@ -528,7 +528,7 @@ objectRDD.saveAsWKT("hdfs://PATH")
 
 Use the following code to save a SpatialRDD as a distributed WKB text file:
 
-```Scala
+```scala
 objectRDD.saveAsWKB("hdfs://PATH")
 ```
 
@@ -536,7 +536,7 @@ objectRDD.saveAsWKB("hdfs://PATH")
 
 Use the following code to save a SpatialRDD as a distributed GeoJSON text file:
 
-```Scala
+```scala
 objectRDD.saveAsGeoJSON("hdfs://PATH")
 ```
 
@@ -545,7 +545,7 @@ objectRDD.saveAsGeoJSON("hdfs://PATH")
 
 Use the following code to save a SpatialRDD as a distributed object file:
 
-```Scala
+```scala
 objectRDD.rawSpatialRDD.saveAsObjectFile("hdfs://PATH")
 ```
 
@@ -576,7 +576,7 @@ You can easily reload an SpatialRDD that has been saved to ==a distributed objec
 
 Use the following code to reload the PointRDD/PolygonRDD/LineStringRDD:
 
-```Scala
+```scala
 var savedRDD = new PointRDD(sc.objectFile[Point]("hdfs://PATH"))
 
 var savedRDD = new PolygonRDD(sc.objectFile[Polygon]("hdfs://PATH"))
@@ -588,13 +588,13 @@ var savedRDD = new PointRDD(sc.objectFile[LineString]("hdfs://PATH"))
 
 Use the following code to reload the SpatialRDD:
 
-```Scala
+```scala
 var savedRDD = new SpatialRDD[Geometry]
 savedRDD.rawSpatialRDD = sc.objectFile[Geometry]("hdfs://PATH")
 ```
 
 Use the following code to reload the indexed SpatialRDD:
-```Scala
+```scala
 var savedRDD = new SpatialRDD[Geometry]
 savedRDD.indexedRawRDD = sc.objectFile[SpatialIndex]("hdfs://PATH")
 ```
diff --git a/docs/tutorial/sql-r.md b/docs/tutorial/sql-r.md
index 89e9cbe1..4ac72201 100644
--- a/docs/tutorial/sql-r.md
+++ b/docs/tutorial/sql-r.md
@@ -5,7 +5,7 @@ In `apache.sedona` , `sdf_register()`, a S3 generic from `sparklyr`
 converting a lower-level object to a Spark dataframe, can be applied to
 `SpatialRDD` objects:
 
-``` r
+```r
 library(sparklyr)
 library(apache.sedona)
 
@@ -30,7 +30,7 @@ Sedona can inter-operate seamlessly with other functions supported in
 `sparklyr`’s dbplyr SQL translation environment. For example, the code below
 finds the average area of all polygons in `polygon_sdf`:
 
-``` r
+```r
 mean_area_sdf <- polygon_sdf %>%
   dplyr::summarize(mean_area = mean(ST_Area(geometry)))
 print(mean_area_sdf)
@@ -44,7 +44,7 @@ print(mean_area_sdf)
 Once spatial objects are imported into Spark dataframes, they can also
 be easily integrated with other non-spatial attributes, e.g.,
 
-``` r
+```r
 modified_polygon_sdf <- polygon_sdf %>%
   dplyr::mutate(type = "polygon")
 ```
diff --git a/docs/tutorial/sql.md b/docs/tutorial/sql.md
index 306255cb..0219c3e8 100644
--- a/docs/tutorial/sql.md
+++ b/docs/tutorial/sql.md
@@ -1,7 +1,7 @@
 This page outlines the steps to manage spatial data using SedonaSQL. ==The example code is written in Scala but also works for Java==.
 
 SedonaSQL supports the SQL/MM Part 3 Spatial SQL Standard. It includes four kinds of SQL operators as follows. All these operators can be directly called through:
-```Scala
+```scala
 var myDataFrame = sparkSession.sql("YOUR_SQL")
 ```
 
@@ -19,7 +19,7 @@ Detailed SedonaSQL APIs are available here: [SedonaSQL API](../api/sql/Overview.
 
 ## Initiate SparkSession
 Use the following code to initiate your SparkSession at the beginning:
-```Scala
+```scala
 var sparkSession = SparkSession.builder()
 .master("local[*]") // Delete this if run in cluster mode
 .appName("readTestScala") // Change this to a proper name
@@ -33,7 +33,7 @@ var sparkSession = SparkSession.builder()
 	Sedona has a suite of well-written geometry and index serializers. Forgetting to enable these serializers will lead to high memory consumption.
 
 If you add ==the Sedona full dependencies== as suggested above, please use the following two lines to enable Sedona Kryo serializer instead:
-```Scala
+```scala
 .config("spark.serializer", classOf[KryoSerializer].getName) // org.apache.spark.serializer.KryoSerializer
 .config("spark.kryo.registrator", classOf[SedonaVizKryoRegistrator].getName) // org.apache.sedona.viz.core.Serde.SedonaVizKryoRegistrator
 ```
@@ -42,7 +42,7 @@ If you add ==the Sedona full dependencies== as suggested above, please use the f
 
 Add the following line after your SparkSession declaration
 
-```Scala
+```scala
 SedonaSQLRegistrator.registerAll(sparkSession)
 ```
 
@@ -64,7 +64,7 @@ The file may have many other columns.
 
 Use the following code to load the data and create a raw DataFrame:
 
-```Scala
+```scala
 var rawDf = sparkSession.read.format("csv").option("delimiter", "\t").option("header", "false").load("/Download/usa-county.tsv")
 rawDf.createOrReplaceTempView("rawdf")
 rawDf.show()
@@ -86,7 +86,7 @@ The output will be like this:
 All geometrical operations in SedonaSQL operate on Geometry type objects. Therefore, before running any queries, you need to create a Geometry type column in your DataFrame.
 
 
-```Scala
+```scala
 var spatialDf = sparkSession.sql(
   """
     |SELECT ST_GeomFromWKT(_c0) AS countyshape, _c1, _c2
@@ -111,7 +111,7 @@ Although it looks same with the input, but actually the type of column countysha
 
 To verify this, use the following code to print the schema of the DataFrame:
 
-```Scala
+```scala
 spatialDf.printSchema()
 ```
 
@@ -140,7 +140,7 @@ Shapefile and GeoJSON must be loaded by SpatialRDD and converted to DataFrame us
 
 Since v`1.3.0`, Sedona natively supports loading GeoParquet files. Sedona will infer geometry fields using the "geo" metadata in GeoParquet files.
 
-```Scala
+```scala
 val df = sparkSession.read.format("geoparquet").load(geoparquetdatalocation1)
 df.printSchema()
 ```
@@ -164,7 +164,7 @@ Sedona doesn't control the coordinate unit (degree-based or meter-based) of all
 
 To convert the Coordinate Reference System of the Geometry column created earlier, use the following code:
 
-```Scala
+```scala
 spatialDf = sparkSession.sql(
   """
     |SELECT ST_Transform(countyshape, "epsg:4326", "epsg:3857") AS newcountyshape, _c1, _c2, _c3, _c4, _c5, _c6, _c7
@@ -204,7 +204,7 @@ Use ==ST_Contains==, ==ST_Intersects==, ==ST_Within== to run a range query over
 
 The following example finds all counties that are within the given polygon:
 
-```Scala
+```scala
 spatialDf = sparkSession.sql(
   """
     |SELECT *
@@ -223,7 +223,7 @@ Use ==ST_Distance== to calculate the distance and rank the distance.
 
 The following code returns the 5 nearest neighbors of the given polygon.
 
-```Scala
+```scala
 spatialDf = sparkSession.sql(
   """
     |SELECT countyname, ST_Distance(ST_PolygonFromEnvelope(1.0,100.0,1000.0,1100.0), newcountyshape) AS distance
@@ -249,7 +249,7 @@ To save a Spatial DataFrame to some permanent storage such as Hive tables and HD
 
 
 Use the following code to convert the Geometry column in a DataFrame back to a WKT string column:
-```Scala
+```scala
 var stringDf = sparkSession.sql(
   """
     |SELECT ST_AsText(countyshape)
@@ -265,7 +265,7 @@ var stringDf = sparkSession.sql(
 
 Since v`1.3.0`, Sedona natively supports writing GeoParquet files. A GeoParquet file can be saved as follows:
 
-```Scala
+```scala
 df.write.format("geoparquet").save(geoparquetoutputlocation + "/GeoParquet_File_Name.parquet")
 ```
 
@@ -275,7 +275,7 @@ df.write.format("geoparquet").save(geoparquetoutputlocation + "/GeoParquet_File_
 
 Use the SedonaSQL DataFrame-RDD Adapter to convert a DataFrame to a SpatialRDD. Please read the [Adapter Scaladoc](../../api/javadoc/sql/org/apache/sedona/sql/utils/index.html)
 
-```Scala
+```scala
 var spatialRDD = Adapter.toSpatialRdd(spatialDf, "usacounty")
 ```
 
@@ -288,7 +288,7 @@ var spatialRDD = Adapter.toSpatialRdd(spatialDf, "usacounty")
 
 Use the SedonaSQL DataFrame-RDD Adapter to convert a SpatialRDD back to a DataFrame. Please read the [Adapter Scaladoc](../../api/javadoc/sql/org/apache/sedona/sql/utils/index.html)
 
-```Scala
+```scala
 var spatialDf = Adapter.toDf(spatialRDD, sparkSession)
 ```
 
@@ -299,7 +299,7 @@ types. Note that string schemas and not all data types are supported&mdash;pleas
 [Adapter Scaladoc](../../api/javadoc/sql/org/apache/sedona/sql/utils/index.html) to confirm what is supported for your use
 case. At least one column for the user data must be provided.
 
-```Scala
+```scala
 val schema = StructType(Array(
   StructField("county", GeometryUDT, nullable = true),
   StructField("name", StringType, nullable = true),
@@ -313,13 +313,13 @@ val spatialDf = Adapter.toDf(spatialRDD, schema, sparkSession)
 
 A PairRDD is the result of a spatial join query or distance join query. The SedonaSQL DataFrame-RDD Adapter can convert the result to a DataFrame, but you need to provide the names of the other attributes.
 
-```Scala
+```scala
 var joinResultDf = Adapter.toDf(joinResultPairRDD, Seq("left_attribute1", "left_attribute2"), Seq("right_attribute1", "right_attribute2"), sparkSession)
 ```
 
 Or you can use the attribute names directly from the input RDDs:
 
-```Scala
+```scala
 import scala.collection.JavaConversions._
 var joinResultDf = Adapter.toDf(joinResultPairRDD, leftRdd.fieldNames, rightRdd.fieldNames, sparkSession)
 ```
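 
 A common follow-up (plain Spark API, not Sedona-specific) is to register the joined result as a temporary view so it can be queried with Spatial SQL like any other table:
 
 ```scala
 // Register the join result for further SQL analysis
 joinResultDf.createOrReplaceTempView("joinResult")
 ```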
@@ -331,7 +331,7 @@ types. Note that string schemas and not all data types are supported&mdash;pleas
 [Adapter Scaladoc](../../api/javadoc/sql/org/apache/sedona/sql/utils/index.html) to confirm what is supported for your use
 case. Columns for the left and right user data must be provided.
 
-```Scala
+```scala
 val schema = StructType(Array(
   StructField("leftGeometry", GeometryUDT, nullable = true),
   StructField("name", StringType, nullable = true),
@@ -360,7 +360,7 @@ Non-`String` arguments are assumed to be literals that are passed to the sedona
 
 A short example of using this API (uses the `array_min` and `array_max` Spark functions):
 
-```Scala
+```scala
 val values_df = spark.sql("SELECT array(0.0, 1.0, 2.0) AS values")
 val min_value = array_min("values")
 val max_value = array_max("values")
diff --git a/docs/tutorial/viz-r.md b/docs/tutorial/viz-r.md
index 3a7dc63b..4bce9e5c 100644
--- a/docs/tutorial/viz-r.md
+++ b/docs/tutorial/viz-r.md
@@ -6,7 +6,7 @@ to Sedona visualization routines. For example, the following is
 essentially the R equivalent of [this example in
 Scala](https://github.com/apache/sedona/blob/f6b1c5e24bdb67d2c8d701a9b2af1fb5658fdc4d/viz/src/main/scala/org/apache/sedona/viz/showcase/ScalaExample.scala#L142-L160).
 
-``` r
+```r
 library(sparklyr)
 library(apache.sedona)